Today I Learned
What did you learn today? Share it with us!
We learn something new every day. This is a community dedicated to informing each other and helping to spread knowledge.
The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:
Rules
Rule 1- All posts must begin with TIL. Linking to a source of info is optional, but highly recommended as it helps to spark discussion.
**Posts must be about an actual fact that you have learned, but it doesn't matter if you learned it today. See Rule 6 for all exceptions.**
Rule 2- Your post subject cannot be illegal or NSFW material.
Your post subject cannot be illegal or NSFW material. You will be warned first, banned second.
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding non-TIL posts.
Provided it is about the community itself, you may post non-TIL posts using the [META] tag on your post title.
Rule 7- You can't harass or disturb other members.
If you vocally harass or discriminate against any individual member, you will be removed.
Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.
For further explanation, clarification and feedback about this rule, you may follow this link.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- Most bots aren't allowed to participate here.
Unless included in our Whitelist for Bots, your bot will not be allowed to participate in this community. To have your bot whitelisted, please contact the moderators for a short review.
Partnered Communities
You can view our partnered communities list by following this link. To partner with our community and be included, you are free to message the moderators or comment on a pinned post.
Community Moderation
For inquiries about becoming a moderator of this community, you may comment on the current pinned post, or simply shoot a message to the current moderators.
They're probably talking about this. It's been too long since I read it so I won't be discussing it, but I'll share a paragraph so folks don't have to click the link to see the gist. https://economicsfromthetopdown.com/2022/04/08/the-dunning-kruger-effect-is-autocorrelation/
No, no, no, I am an econometrician, this Blair Fix person is an 'enthusiast of economics' who actually doesn't know how statistics or data modelling works.
Their whole blog post boils down to them not liking the format the graph is presented in.
I can assure you this is a common way to visualize this kind of data set.
When this Blair person presents their own 'test' later in the post, they are literally making shit up, they did not perform any test, they just generated random noise and then went 'see it kinda looks the same!'
Were they serious about this ... analysis approach, they would have compared their random noise to the actual Dunning–Kruger data set and then done actual statistical tests to see if the DK set was statistically significantly different from a battery of, say, 1000 runs of their statistical noise generation, and to what extent it was.
They did not do this, at all.
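To make concrete the kind of comparison being described, here is a rough Python sketch. Everything in it is illustrative: the four "observed" values are my eyeballed quartile percentiles (quoted further down this thread), not the published data, and the choice of test statistic (the slope of perceived on actual score) is my own stand-in, not anything from the original paper or the blog post.

```python
import random
import statistics

# Eyeballed approximations of the original DK figure (my numbers,
# not the published data): actual vs. perceived percentile, Q1-Q4.
actual    = [10, 35, 60, 85]
perceived = [55, 60, 70, 70]

# Test statistic: slope of perceived on actual. Under DK-style
# miscalibration the slope sits well below 1; under "predictions are
# pure noise" it should hover around 0.
def slope(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

observed = slope(actual, perceived)

# Null distribution: 1000 runs where each quartile's perceived score
# is uniform random noise on [0, 100], mimicking the blog's "test".
random.seed(0)
null_slopes = [slope(actual, [random.uniform(0, 100) for _ in actual])
               for _ in range(1000)]

# How often does pure noise produce a slope at least as large as the
# observed one?
p = sum(s >= observed for s in null_slopes) / len(null_slopes)
print(f"observed slope: {observed:.2f}, p vs. noise: {p:.3f}")
```

With only four quartile means the test has little power either way; the point is the procedure (compare the real data against a battery of noise runs), which is exactly the step the blog skipped.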
They then cite papers from no-name colleges no one has ever heard of that basically just argue that a histogram is 'the right way to present this', even though that completely destroys any visual way of comparing where one's actual ability level is vs. where one estimates it to be. That just flattens it to 'look at this pseudo-normal distribution of how many people are wrong by how much', again with no reference to actual competency level as a factor in the degree of overestimation.
You've fallen for a random shit poster who shit posts on a blog instead of tiktok or instagram or reddit or WSJ/WaPo Op-Eds.
You have been bamboozled not by lies, not by damned lies, but by an idiot attempting to do statistics.
.........
If you wanted to maybe better visually portray the DK data, you could keep the original graph and add another graph, a bar graph, that shows the % difference between actual and perceived competency for each quartile.
And that would look like this:
(I am doing the digital equivalent of a napkin drawing here, from a phone, this is broadly accurate, but not precise.)
The lowest competency quartile believes they score at about 55th percentile when they actually score at about 10th percentile, so they overestimate themselves by about 450%.
2nd quartile; actual score is about 35 ptile, estimated score is 60 ptile, so they overestimate themselves by about 70%.
3rd quartile; actual score is about 60 ptile, estimated score is about 70 ptile, so they overestimate themselves by about 17%.
4th quartile; actual score is about 85 ptile, estimated score is about 70 ptile, so they overestimate themselves by about negative 20%.
So, there you go, you have a bar chart with 4 bars.
1st is 45 units tall,
2nd is 7,
3rd is 1.7,
4th is -2, going under the x axis.
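The napkin arithmetic above can be reproduced in a few lines of Python. The percentile values are the same eyeballed approximations quoted above, so the outputs are rough too:

```python
# Percent overestimation per quartile, from the eyeballed actual vs.
# perceived percentiles quoted above (approximate, not published data).
actual    = [10, 35, 60, 85]
perceived = [55, 60, 70, 70]

# 100 * (perceived - actual) / actual, rounded to whole percent.
bars = [round(100 * (p - a) / a) for a, p in zip(actual, perceived)]
print(bars)  # → [450, 71, 17, -18]
```

Divide by 10 and you get roughly the bar heights listed above (45, 7, 1.7, about -2 "units"); the small discrepancies are just rounding on napkin numbers.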
Vertical height represents the magnitude of overestimation of a quartile's actual competency.
That is to say, the dumbest 25% of people think they are 4.5x more competent than they actually are, in terms of comparing themselves to all people broadly, whereas the smartest 25% of people actually think they are 0.8x as competent as they actually are.
This effect at the top quartile is roughly what is otherwise known as 'impostor syndrome', another thing that is well studied and definitely real.
But the main thing that should be visually striking from this kind of presentation is that dumb people, that bottom quartile, are literally an order of magnitude beyond everyone else in overestimating their abilities; they are so wildly off that the rest of the graph is basically just noise around the x axis in comparison. They are, in fact, so stupid that they have no idea how stupid they are.
For a real world example case of this, go visit the Oval Office.
I thought I made it clear I didn't necessarily agree with it, but I guess I wasn't. I'm not saying this article is correct, just that it's likely what they were thinking of when they said it was "disproved" since it made the rounds a few years ago.
Sure, fine, but this is exactly, precisely how misinformation spreads.
People who lack the knowledge to evaluate a complex and technical claim present what appears to a layman to be a plausible idea as being roughly equivalent to the ideas that actual experts hold: another brand name worth considering in the free market of ideas, more or less just another neutral option, with no strong feelings either way.
This is (unintentionally) subversive, because it elevates a ludicrous notion to a degree of plausibility that it absolutely does not deserve.
I do not mean to attack you as a person or say that you should feel bad or anything like that, I am simply here to be the counterforce, to try to explain how and why this is very silly.
Part of doing that effectively is crafting an engaging narrative.
Making punny jokes and being a bit vitriolic is engaging for other readers; again, not meant to attack or demean you as a person, but meant to mock this specific notion/idea/"theory".
After all, at the end of the day, we could stand to be a little more capable of intellectual humility, eh?
There's absolutely no problem with being wrong sometimes; understanding when, why, and how one can be or is wrong is how people learn, which should be celebrated, imo.
I don't understand. The additional experiment data is fairly convincing, but the random-data example doesn't seem to disprove the effect in itself. With random data you are going to get a predicted score of 50 for every group, which is what is shown, but this seems to still indicate that, if this is really what people predicted, low-skill people are overestimating their ability. Obviously random data would exhibit the effect; why should it not?
Edit: I think I get it. The random data doesn't show that low performers don't overestimate and high performers don't underestimate on average; that pattern is the natural result if everyone has no idea how they performed. Thus my question above is exactly what they are trying to say: if everyone predicts randomly (everyone is equally bad at predicting), the effect arises. So there might be no relationship between performance prediction and performance.
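That reading of the blog's point can be checked with a minimal simulation (my own sketch, not the blog's code): give everyone a prediction that carries zero information about their performance, bin people into quartiles by actual score, and the DK-looking pattern appears anyway.

```python
import random
import statistics

random.seed(1)
n = 10_000
# Each person's actual percentile, and a prediction that is pure
# noise, completely unrelated to performance.
actual = [random.uniform(0, 100) for _ in range(n)]
predicted = [random.uniform(0, 100) for _ in range(n)]

# Sort by actual score, split into quartiles, and average each
# quartile's actual and predicted scores.
pairs = sorted(zip(actual, predicted))
quartile_means = []
for q in range(4):
    chunk = pairs[q * n // 4:(q + 1) * n // 4]
    a = statistics.mean(x for x, _ in chunk)
    p = statistics.mean(y for _, y in chunk)
    quartile_means.append((a, p))
    print(f"Q{q + 1}: actual ~ {a:5.1f}, predicted ~ {p:5.1f}")
```

Every quartile's mean prediction lands near 50, so the bottom quartile "overestimates" by roughly 37 points and the top quartile "underestimates" by about the same, purely as an artifact of binning by actual score.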
Like I said after my semicolon (;), that it's a statistical truism doesn't mean it doesn't exist. https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect#Statistical_and_better-than-average_effect One can prove that this effect will always exist for statistical reasons? That's good to know, that not-so-competent people are more prone to overestimating. I'm not making an argument, just saying this is likely the thing they're thinking of.