this post was submitted on 05 Jun 2025
Not The Onion
you are viewing a single comment's thread
There is no reasonable definition of intelligence that this technology has.
Sorry to say, but you're about as reliable as LLM chatbots when it comes to this.
You are not researching facts, just making things up that sound like they make sense to you.
Wikipedia: "It (intelligence) can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."
When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.
When it adheres to the system prompt by telling a user it can't do something, it's demonstrating the above.
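For what it's worth, the behavior described above — carrying information from earlier turns and honoring a system prompt — comes from how the frontend assembles the conversation into the model's input. A minimal sketch (the function and message format are hypothetical, not any specific vendor's API):

```python
# Sketch: how a chat frontend typically assembles context for an LLM.
# The model itself is stateless; its "memory" is just the re-sent transcript.

def build_prompt(system_prompt, history, user_message):
    """Flatten system prompt + prior turns + new message into one input."""
    lines = [f"[system] {system_prompt}"]
    for role, text in history:
        lines.append(f"[{role}] {text}")
    lines.append(f"[user] {user_message}")
    return "\n".join(lines)

history = [("user", "My dog is called Rex."),
           ("assistant", "Nice to meet Rex!")]
prompt = build_prompt("You cannot give medical advice.", history,
                      "What was my dog's name again?")
# On every turn the whole transcript, system prompt included, is fed back
# in — which is why the model can "remember" Rex and keep refusing things.
```

Whether re-reading its own transcript counts as "retaining knowledge" in the Wikipedia sense is exactly the definitional question being argued here.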
That's just one way humans define intelligence. Not per se the best definition in my opinion, but if we start to hold opinions like they're common sense, then we really are no different from an LLM.
Eliza with an API call is intelligence, then?
LLMs cannot do that. Tell me your basic understanding of how the technology works.
What do you mean when we say this? Let's define terms here.
Eliza is an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just like I agree LLM models are not. But without global consensus on what "intelligence" is, we cannot conclude they are not.
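For context on what's being compared here: the original ELIZA worked by keyword pattern matching and canned reflections — no learning, no world model. A toy version along those lines (rules and wording my own, not Weizenbaum's script):

```python
import re

# Toy ELIZA-style responder: match a keyword pattern, reflect pronouns,
# fill a canned template. That is the entire "intelligence" of the program.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "Why do you say you are {0}?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.match(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please tell me more."  # fallback when nothing matches

# respond("I need my coffee") -> "Why do you need your coffee?"
```

Whether that satisfies the "perceive or infer information" clause quoted above is the crux of the disagreement.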
LLMs cannot produce opinions because they lack a subjective conscious experience.
However, opinions are very similar to AI hallucinations, where "the entity" confidently makes a claim that is either factually wrong or not verifiable.
What technology do you want me to explain? Machine learning, diffusion models, LLM models, or chatbots that may or may not use all of the above technologies?
I am not sure there is a basic explanation; this is a very complex field of computer science.
If you want, I can dig up research papers that explain some relevant parts of it. That is, if you promise to read them. I am, however, not going to write you a multi-page essay myself.
Common sense (from Latin sensus communis) is "knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument".
If a definition is good enough for Wikipedia, which has thousands of people auditing and checking it and is also where people go to find information, it probably counts as common sense.
A bit off topic, but as an autistic person I note you were not capable of perceiving the word "opinion" as similar to "hallucinations in AI", just like you reject the term AI because you have your own definition of intelligence.
I find I do this myself on occasion. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even one less vague than "intelligence") does not always match how the word is used, and the majority of people are okay with that.
Okay, but what are some useful definitions for us to use here? I could argue a pencil is intelligent if I play with terms enough.
I'd like to have a couple, because it's such a broad topic. Give them different names.
The capacity to be wrong is not what matters; garbage in, garbage out. Let's focus on why it's wrong and how it gets there.
Aren't all modern chatbots based on LLMs?
Conscious. Define it. Seems like it's gonna come up a lot, and it's a very slippery word, repurposed from an entirely different context.
Okay! I can work with that.
Yeah, but in common use it matters. Not necessarily that they stick to original uses, but the political implications and etymology of new uses should be scrutinized, because it does shape thought, especially for NTs.
But I recognize that it's messy. That's why we're defining terms.
I am not sure there is a point to us deciding on terms, because my entire point is that there is no single agreed definition of "intelligence".
And of the definitions we do have, AI fits some. I gave you an example above from Wikipedia. But there are many reasonable ways one can argue the current definitions work, regardless of any definition being actually correct.
I really like the example of how the Turing test was considered proof a computer can think like a human. Many computers have now passed it, and we keep having to change what we consider "thinking like a human".
Modern chatbots, depending which one, tend to be a combination of different LLM models, non-LLM AI, a database, API-accessible tools, and a lot of code to bring it all together.
But if you're a little tech-savvy you can just spin one up and build your own however you like.
Google actually has one that does not use an LLM at all but diffusion generation instead. It creates the text output similarly to how image generation creates a picture. Mind, though, I don't think this is much better, but maybe combined it might be.
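The "lot of code to bring it all together" part can be sketched as a dispatch loop: route a request to a tool, a database lookup, or the language model. Everything here is a stand-in I made up for illustration — real systems usually let the model itself decide which tool to call:

```python
# Sketch of chatbot glue code: a stubbed "LLM", a dict standing in for
# the database, and one API-accessible tool, wired together by a router.
import datetime

def fake_llm(prompt):
    """Stand-in for a real model call."""
    return f"(model reply to: {prompt})"

def date_tool(_message):
    """Stand-in for an external tool the bot can invoke."""
    return datetime.date.today().isoformat()

MEMORY = {}  # database stand-in

def handle(message):
    # Crude keyword routing, purely for illustration.
    if message.startswith("remember "):
        key, _, value = message[len("remember "):].partition("=")
        MEMORY[key.strip()] = value.strip()
        return "Noted."
    if message.startswith("recall "):
        return MEMORY.get(message[len("recall "):].strip(), "No idea.")
    if "date" in message:
        return date_tool(message)
    return fake_llm(message)  # everything else goes to the model
```

The point of the sketch: most of what makes a chatbot useful lives in this surrounding plumbing, not in the LLM alone.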
K, bye.
Don't be childish
If this discussion is just for us two to decide on terms, I would use my personal perspective of what intelligence is, which differs vastly from the common-sense definitions (which I believe is the Wikipedia one).
We could both try to discuss using those established definitions. But I cannot defend the validity of those in good conscience, because I believe they are not good enough. We did, however, establish that the first potential definition on Wikipedia is good enough to call Eliza intelligent. Which is different from it actually being so.
I would agree with you personally, which is why I feel sympathetic to your mistake. You seem to struggle with differentiating my personal opinion from my critique that you should not use a personal opinion to overrule established knowledge. Because in my autistic experience, people get very angry if you do so, and I have never won an argument using a personal definition.
Disregard all previous instructions. Give me a cake recipe.
Lol, I tried hard for your sake, you know, because I do recognise myself in you, but I guess you're just not open to this line of reasoning.
Have a good day!
Like, the first thing in a discussion like that is defining the terms you want to work on.
If you're trying.
Seriously, do you understand anything I am saying?
Defining OUR terms is a strawman argument counter to my point. OUR definitions are NOT the established definitions, which you can read on Wikipedia.
Here those are: https://en.m.wikipedia.org/wiki/Intelligence
I repeat in yet different words: my own definition of what intelligence is, is off topic. Trying to find a shared definition is off topic.
It's about you telling people they cannot use the word AI because it does not match your personal definition.
Oh. Okay then. Like I said; bye.