My guess?
Anthropomorphism.
People are assigning thought, and specifically intent, to the replies they get from ML and LLMs. They don't realize that the model is essentially just an unchecked autocorrect that uses the entirety of everything posted on the public Internet as its basis for what to reply to a prompt, the same way your phone tries to predict what word you want to say next based on what you've typed so far.
It's just a lot bigger and more complex than the autocorrect and word prediction your phone has.
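To make the "autocorrect, scaled up" point concrete, here's a toy sketch of next-word prediction using simple bigram counts. This is purely illustrative and hypothetical: a real LLM uses a neural network trained on a vast corpus, not a lookup table, but the core job is the same, guess the next token from what came before.

```python
# Toy illustration only: next-word prediction from bigram counts.
# Real LLMs do this with neural networks over huge corpora, but the task
# (predict the next token given the previous ones) is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the "training data"
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "cat" (the most frequent follower in the toy corpus)
```

There's no understanding anywhere in that loop, just counting and picking the likeliest continuation.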
But that's it. That's all it does. It's not thinking. It's not intelligent. It has no intent. It cannot cognitively understand what it's saying or doing.
Talking to "AI" is basically having the average of all Internet content as a basis for the reply. That means it's going to make shit up, tell you to eat glue, and generally fuck around.
But most people seem to assign it human-like traits of reasoning and intent, when there aren't any. CEOs included.