Local assistants running purely on the host device will eventually be a thing, and it's probably better to leave these sorts of features for that point in time. Current LLMs reinforce the "being in a bubble" effect, and that's not a good thing.
AI-generated reviews, AI-generated summaries of those reviews, AI models gaming their phrasing for favorable summaries... Like teachers using LLMs to grade papers while students use LLMs to write the papers, one has to wonder what the need is for the middleman at all, eventually.
Not to mention, LLMs waste huge amounts of energy; they're not efficient. Leaning into them left and right is already becoming a big problem, much of it for that very reason.
*insert dumb memes and ASCII art*
A good system sounds real good.
No thanks.
All of the AI summaries out there are subpar and often full of errors. We need fewer of them, not more.