While this is pretty hilarious, LLMs don't actually "know" anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps "words" to other "words" so that a computer can model language. I.e., all an LLM knows is that when it sees "I love", what probably comes next is "my mom|my dad|etc." Because of this behavior, and the fact that we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are, essentially by chance, mostly okay at "answering" a question. But really they are just picking the next most likely word over and over based on their training, which usually ends up reasonably accurate.
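To make the "next most likely word" idea concrete, here's a minimal toy sketch using bigram counts over a made-up corpus. (This is purely illustrative: the corpus and function names are invented, and real LLMs use neural networks over subword tokens and long contexts, not simple word-pair counts. But the core loop — "given what came before, pick a likely next token" — is the same idea.)

```python
from collections import Counter, defaultdict

# Tiny made-up "training data" standing in for internet-scale text.
corpus = "i love my mom . i love my dad . i love my dog . i love pizza .".split()

# Count bigrams: for each word, how often each possible next word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

# "love" is followed by "my" 3 times out of 4, so that's the prediction.
print(most_likely_next("love"))  # -> my
```

Repeating that prediction step over and over, feeding each output back in as context, is (very loosely) how an LLM generates a whole "answer" one token at a time.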
An LLM simply has memorized facts. If that is smart, then sure, no human can compete.
Now ask an LLM to build a house. Oh shit, no legs and it can't walk. A human can walk without even thinking about it.
In the future, though, there will be robots that can build houses using AI models to learn from. But not for a long time.
3D-printed concrete houses are already a thing; there's no need for human-like machines to build stuff. They can be purpose-built to perform whatever portion of the house-building task they need to do. There's absolutely no barrier today to having a hive of machines built for specific purposes build houses, besides the fact that no one has yet stitched the necessary components together.
It's not at all out of the question that an AI could be trained on a dataset of engineering diagrams, house layouts, materials, and construction methods, with subordinate AIs trained on specific aspects of housing systems like insulation, roofing, plumbing, framing, electrical, etc., which are then used to drive the actual machines building the house. The principal human requirement at that point would be engineers to check the math and sign off on a design for safety purposes.
If you trained it on all of that it wouldn't be a good builder. Actual builders would tell you it's bad and you would ignore them.
LLMs do not give you accurate results. They can simply string words together into coherent sentences, and that's the extent of their capacity. They just agree with whatever the prompter is pushing, and that makes simple people think they're smart.
AI will not be building you a house, unless you count a 3D-printed house, and we both know that's overly pedantic. If that counted, a music box from 1780 would be an AI.
Hallucination comes off as confidence. Very human-like behavior, tbh.
At least half of US adults think that they themselves are smarter than they actually are, so this tracks.
Maybe if adults didn't use LLMs so much, this wouldn't be the case.
Wow. Reading these comments so many people here really don't understand how LLMs work or what's actually going on at the frontier of the field.
I feel like there's going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully unprepared based on what they think they saw.
What that overwhelming, uncritical, capitalist propaganda do...
LLMs are smart, they are just not intelligent
Why are you even surprised at this point, when it comes to Americans?