KM 5433 Blog/Joe Colannino

A blog discussing knowledge management and library science issues.

Thursday, April 22, 2010

Expert Political Judgment (Philip E. Tetlock)/Book Review: J. Colannino

"Expert political judgment" -- it sounds like an oxymoron, but only because it is.  Philip E. Tetlock's groundbreaking research shows that experts are no better than the rest of us when it comes to political prognostication.  But then again, you probably had a sneaking hunch that that was so.  You need rely on hunches no more. Tetlock is Professor of Leadership at the Haas Management of Organizations Group, U.C. Berkeley.  A Yale graduate with his Ph.D. in Psychology, Expert Political Judgment is the result of his 20 year statistical study of nearly 300 impeccably credentialed political pundits responding to more than 80,000 questions in total.  The results are sobering.  In most cases political pundits did no better than dart throwing chimps in prediciting political futures.  Of course, Tetlock did not actually hire dart throwing chimps -- he simulated their responses with the statistical average.  If the computer was programmed to use more sophisticated statistical forecasting techniques (e.g., autoregressive distributed lag models), it beat the experts even more resoundingly. 

Were the experts better at anything?  Well, they were pretty good at making excuses.  Here are a few:

1. I made the right mistake.
2. I'm not right yet, but you'll see.
3. I was almost right.
4. Your scoring system is flawed.
5. Your questions aren't real world.
6. I never said that.
7. Things happen.

Of course, experts applied their excuses only when they got it wrong... er... I mean almost right... that is, about to be right, or right if you looked at it in the right way, or what would have been right if the question were asked properly, or right if you applied the right scoring system, or... well... that was a dumb question anyway, or...

Not only did experts get it wrong, but they were so wedded to their opinions that they failed to update their forecasts even in the face of mounting evidence to the contrary.  And then a curious thing happened -- after they got it wrong and exhausted all their excuses, they forgot they were wrong in the first place.  When Tetlock asked follow-up questions at later dates, experts routinely misremembered their predictions.  When their models failed, experts merely updated them post hoc, preserving the comforting illusion that their expert judgment and simplified model of social behavior remained intact.  Compare this with another very complex system -- predicting the weather.  Here there is a very big difference in the predictive abilities of experts and laypersons.  Meteorologists do not use oversimplified models like "red sky in the morning, sailor's warning."  They use complex modeling, statistical forecasting, computer simulations, and the like.  When they are wrong, weathermen do not say, well, it almost rained; or, it just hasn't rained yet; or, it didn't rain, but predicting rain was the right mistake to make; or, there's something wrong with the rain gauge; or, I didn't say it was going to rain; or, what kind of a question is that?

Political experts, unlike weathermen, live in an infinite variety of counterfactual worlds; or as Tetlock writes, "Counterfactual history becomes a convenient graveyard for burying embarrassing conditional forecasts."  That is: sure, given x, y, and z, the Soviet Union collapsed; but if z had not occurred, it would have remained intact.  Really?  Considering the experts got it wrong in the first place, how could they possibly know the outcome in a hypothetical counterfactual world?  At best, this is intellectual dishonesty.  At worst, it is fraud.

But some experts did better than others.  In particular, those who were less dogmatic and frequently updated their predictions in response to countervailing evidence (Tetlock's "foxes") did much better than the opposing camp (termed "hedgehogs").  The problem is that hedgehogs climb the ladder faster and hold positions of greater prominence.  My Machiavellian take?  You might as well make dogmatic pronouncements, because the hedgehogs you work for aren't any better at predicting the future than you are -- they're just more sure of themselves.  So work on your self-confidence.  It is apparently the only thing anyone pays attention to.
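A toy way to see the fox/hedgehog difference -- my gloss, not anything from the book: treat a forecast as a probability and revise it with Bayes' rule as contrary evidence arrives.  The likelihood numbers below are invented for illustration; the "hedgehog" simply refuses to read the evidence as very contrary.

```python
# Toy illustration (my gloss, not from the book): a "fox" updates a forecast
# probability with Bayes' rule as contrary evidence arrives, while a stylized
# "hedgehog" barely budges because it discounts how contrary the evidence is.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability that the prediction holds after one observation."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

fox = hedgehog = 0.90          # both start out confident the prediction holds
for _ in range(5):             # five pieces of countervailing evidence
    fox = bayes_update(fox, likelihood_if_true=0.3, likelihood_if_false=0.7)
    hedgehog = bayes_update(hedgehog, likelihood_if_true=0.48, likelihood_if_false=0.52)

print(f"Fox after 5 contrary observations:      {fox:.2f}")
print(f"Hedgehog after 5 contrary observations: {hedgehog:.2f}")
```

The fox's confidence falls quickly toward the evidence; the hedgehog's barely moves -- roughly the belief-updating failure Tetlock documents.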
