Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say the main LLM they use seems to respond sarcastically.
- Sadness: 11% say the main model they use seems to express sadness, while 24% say that model expresses hope.
Is it though?
I'm the expert in this situation, and I'm getting tired of explaining to junior engineers and laymen that this is a media hype train.
I worked on ML projects before they got rebranded as AI. I get to sit in the room when these discussions happen with architects and actual leaders. This is hype. Anyone who tells you otherwise is lying or selling you something.