I'd say it's simply because most people on the internet (the dataset LLMs are trained on) state things with absolute confidence, whether or not they actually know what they're talking about. So AIs talk confidently because most people do. It could also be down to how they're configured, e.g., prompted or fine-tuned to sound helpful and assertive.
Again, they don't know whether they know the answer; they just produce whatever is statistically most probable given your message and their prompt.
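A toy sketch of that point: with greedy decoding, a model just emits the highest-probability continuation, and there's no step where it checks whether that continuation is true. The distribution below is made up for illustration; real models score tens of thousands of tokens, not four strings.

```python
# Hypothetical next-token distribution a model might assign
# after a question like "The capital of France is ...".
next_token_probs = {
    "Paris": 0.62,          # confident answer, common in training data
    "Lyon": 0.21,
    "Berlin": 0.15,
    "I'm not sure": 0.02,   # hedging text is comparatively rare online
}

# Greedy decoding: always pick the most probable option.
# Nothing here represents "knowing" the answer or its correctness.
best = max(next_token_probs, key=next_token_probs.get)
print(best)  # → Paris
```

Note that "I'm not sure" loses not because the model evaluated its own knowledge, but simply because hedged phrasing is statistically less likely than a confident answer.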
Even when we go per capita, the US stays a shithole; it's not like they were trying to actively misinform people.