David Potts on the Dunning-Kruger Effect

It’s a little known fact that some of PoT’s most avid and engaged readers lurk behind the scenes, being too bashful to log onto the site and call attention to themselves by writing for public consumption. What they do instead is read what the rest of us extroverts write, and send expert commentary to my email inbox. I implore some of these people to say their piece on the site itself, but they couldn’t, possibly. They’re too private for the unsavory paparazzi lifestyle associated with blogging.

About a month ago, I posted an entry here inspired–if you want to call it that–by a BHL post on graduate school. Part of the post consisted of a rant of mine partly concerning this comment by Jason Brennan, directed at a commenter named Val.

Val, I bet you just think you’re smart because of the Dunning-Kruger effect.

Clinical psych is easy as pie. It’s what people with bad GRE or MCAT scores do.

My rant focused on Brennan’s conflation of psychiatry and clinical psychology in the second sentence (along with the belligerent stupidity of the claim made about clinical psychology), but a few weeks ago, a friend of mine–David Potts–sent me an interesting email about the Dunning-Kruger effect mentioned in the first sentence. David happens to have doctorates in philosophy and cognitive psychology, both from the University of Illinois at Chicago; he currently teaches philosophy at the City College of San Francisco. In any case, when David talks, I tend to listen.

After justifiably taking issue with my handwaving (and totally uninformed) quasi-criticisms of Jonathan Haidt in the just-mentioned post, David had this to say about the Dunning-Kruger effect (excerpted below, and reproduced with David’s permission). I’ll try to get my hands on the papers to which David refers, and link to them when I get the chance. I’ve edited the comment very slightly for clarity. I think I’m sufficiently competent to do that, but who knows?

First, about the Dunning-Kruger effect. I had never heard of it, which got my attention because I don’t like there to be things of this kind I’ve never heard of. So I got their paper and a follow-up paper and read them. But I was not much impressed by what I read. How is Dunning-Kruger different from the well-established better-than-average effect? For one thing, [Dunning-Kruger] show — interestingly — that the better-than-average effect is not a constant increment of real performance. That is, it’s not the case that, at all levels of competence, people think they’re, say, 20% better than they really are. Rather, everybody thinks they’re literally above average, no matter how incompetent they are. This is different from, say, knowledge miscalibration. Knowledge miscalibration really is a matter of overestimating one’s chances of being right in one’s beliefs by 20% or so. (That is, people who estimate their chances of being right about some belief at 80% actually turn out to be right on average 60% of the time; estimates of 90% correspond to actually being right 70% of the time, etc.) But in the cases that Kruger and Dunning investigate, nearly everybody thinks they’re in the vicinity of the 66th percentile of performance, no matter what their real performance. So that’s interesting.
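To make the contrast concrete, here is a toy numerical sketch (my addition, with invented numbers chosen only to illustrate the shape of each pattern, not drawn from the actual studies) of the two kinds of miscalibration David distinguishes: a constant-increment overestimate versus the near-flat "everyone thinks they're around the 66th percentile" pattern.

```python
# Toy illustration (invented numbers) of two miscalibration patterns.

# Pattern 1: knowledge miscalibration as a constant increment:
# stated confidence overshoots actual accuracy by roughly a fixed 20 points.
stated_confidence = [60, 70, 80, 90]
actual_accuracy = [c - 20 for c in stated_confidence]  # [40, 50, 60, 70]

# Pattern 2: the pattern Kruger and Dunning report: self-estimated
# percentile is nearly flat around the 66th, whatever the true rank.
true_percentile = [10, 35, 60, 90]
estimated_percentile = [60, 63, 68, 72]  # gently sloping, all near 66

# In pattern 1 the error is constant at every ability level; in pattern 2
# it shrinks (and flips sign) as real ability rises: the worst performers
# overestimate hugely, while the best slightly underestimate.
errors_1 = [s - a for s, a in zip(stated_confidence, actual_accuracy)]
errors_2 = [e - t for e, t in zip(estimated_percentile, true_percentile)]
print(errors_1)  # [20, 20, 20, 20]
print(errors_2)  # [50, 28, 8, -18]
```

The point of the sketch is just that the second error pattern is not a uniform inflation of real performance; it is what you would expect if nearly everyone, competent or not, defaults to rating themselves a bit above average.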

But that is not the way Dunning and Kruger themselves interpret the importance of their findings. What they take themselves to have shown is that incompetent people have a greater discrepancy between their self-estimates and their actual performance because, being incompetent, they are simply unable to judge good performance. If your grasp of English grammar is poor, you will lack the ability to tell whether your performance on a grammar test is good or bad. You won’t know how good you are — or how good anyone else is for that matter — because of your lack of competence in the domain. Lacking any real knowledge of how good you are, you just assume you’re pretty good. On this basis, they predict that incompetent people will very greatly overestimate their own competence in any domain where the skill required to perform is the same as the skill required to evaluate the performance. (Thus, they do not suppose that, for example, incompetent violin players will fail to recognize their incompetence.)

The trouble I have with this is that it is not well supported by the data. What their data really show, it seems to me, is that in the domains they investigate, nobody is very well able to recognize their own competence level. The plot of people’s estimates of their own abilities (both comparative and absolute) against measured ability does slope gently upwards, but very gently, usually a 15–25% increase despite an 80% increase in real (comparative) ability level. The highly competent do seem to be reasonably well able to predict their own raw test scores, but they do not seem to recognize their own relative level of competence particularly well. They consistently rate their own relative performances below actuality. For example, in one experiment people did a series of logic problems based on the Wason 4-card task. Participants who were actually in the 90th percentile of performance thought they would be in about the 75th percentile. In another study, of performance on a grammar test, people who performed at the 89th percentile judged that they would be in the 70th. Then they got to look at other participants’ test papers and evaluate them (according to their own understanding). This raised their self-estimates, but only to the 80th percentile.

It is true that poor performers do not recognize how badly they are doing in absolute terms. But the discrepancy is not nearly as great as the discrepancy with regard to comparative performance. In the logic study, after doing the problem set and giving their estimates of their own performance, people were taught the correct way to do the problems. This caused the poor performers to revise their estimates of their own raw scores to essentially correct estimates. But they still thought their percentile rankings compared to others were more than double what they really were. (They did revise these estimates down substantially, but not enough.)

I think Dunning and Kruger have latched onto a logical argument for the unrecognizability of one’s own incompetence in certain domains, and that they are letting that insight, rather than their measurements, drive their research. No doubt if the knowledge of a domain necessary to perform well is also essential to evaluating performance in that domain — one’s own or anyone else’s — then poor performers will be poor judges. This almost has to be right. But the effect seems small insofar as it is attributable to the logical point Dunning and Kruger focus on. The bulk of their findings seems attributable not to metacognitive blindness but to social blindness to relative performance on tasks where fast, unambiguous feedback is in short supply. In domains where fast, abundant, clear feedback is lacking (driving ability, leadership potential, job prospects, English grammar, logic), nobody really knows very well how they compare with others. So they rate themselves average, or rather — since people don’t want to think they’re merely average — a little above average. And this goes for the competent (who accordingly rate themselves lower than they should) as well as the incompetent.

My low opinion of the Dunning-Kruger effect seems to be shared by others. I have on my shelf six psychology books, published after Kruger and Dunning’s paper became common coin, that thoroughly review the heuristics-and-biases literature; I’ve read four of them cover to cover. Only two of them make any mention of this paper at all. One cites it, together with two other, unrelated papers, merely as finding support for the better-than-average effect, and the other cites it as showing that even the very worst performers nevertheless tend to rate themselves as above average. In other words, none of these books makes any mention of the Dunning-Kruger effect as such.

But if the Dunning-Kruger effect isn’t of much value as psychology, it’s great for insulting people! Which is no doubt why it is well known on the Internet.

I didn’t know any of that, and thought it would better serve PoT’s readers to have it on the site than moldering in my inbox.

P.S. I’ve been having trouble with the paragraph spacing function in this post, as I sometimes do, so apologies for that. I don’t know how to fix it; when I do, it seems fixed, and then the problem spontaneously recurs. (I guess I’m an incompetent editor after all.)
Postscript, December 20, 2015: More on the Dunning-Kruger effect (ht: Slate Star Codex).
