Wednesday, November 30, 2005

Manna from Heaven, if you're starved for epistemology

Updated Jan. 10 (at end)
------------

OK, so some of us have bizarre dietary needs...

In a comment recently I'd lamented:
We need an online course in practical epistemology.*

And lo, within the week one shows up on my radar screen, and you'll never guess who teaches* it: The CIA.
I've just skimmed the surface but here's the flavor - from Chapter 1:
Thinking analytically is a skill like carpentry or driving a car. It can be taught, it can be learned, and it can improve with practice. But like many other skills, such as riding a bike, it is not learned by sitting in a classroom and being told how to do it. ...
...
The disadvantage of a mind-set is that it can color and control our perception to the extent that an experienced specialist may be among the last to see what is really happening when events take a new and unexpected turn...[since those] who know the most about a subject have the most to unlearn...

And it came to light in comments on another beyond-outrageously-good-and-topical book, Expert Political Judgment: How Good Is It? How Can We Know?.
Psychologist Philip Tetlock did the research to find out, and what he found was fascinating and not altogether expected.
[Tetlock] picked two hundred and eighty-four people who made their living "commenting or offering advice on political and economic trends," and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert.
Among the observations:
people who make prediction their business - people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables - are no better than the rest of us. When they’re wrong, they're rarely held accountable, and they rarely admit it, either. ... the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be.
When television pundits make predictions, the more ingenious their forecasts the greater their cachet.
...both their status as experts and their appeal as performers require them to predict futures that are not obvious to the viewer.
[as a result, those who watch them are not well informed]
We are not natural falsificationists: we would rather find more reasons for believing what we already believe than look for reasons that we might be wrong.
why some people make better forecasters than other people...has to do not with what [they] believe but with the way they think:
...Low scorers look like hedgehogs: thinkers who "know one big thing," aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.
"we as a society would be better off if participants in policy debates stated their beliefs in testable forms"-that is, as probabilities-"monitored their forecasting performance, and honored their reputational bets."
... we're suffering from our primitive attraction to deterministic, overconfident hedgehogs.
... the only thing the electronic media like better than a hedgehog is two hedgehogs who don't agree...
...[Most "public intellectual" hedgehogs] are dealing in "solidarity" goods, not "credence" goods. Their analyses and predictions are tailored to make their ideological brethren feel good - more white swans for the white-swan camp.


Bonus link: Daniel Conover proposes
The Intelligence Briefing model of journalism (Jay Rosen's response: "what's hard to convey to people is how different the transaction between 'journalist' and 'public' (readers, users) could actually be."*)
(Here's how it is now)

------------
Jan 10 update:
In the comments to this post I'd said:
Also - the review ended with the reviewer suggesting a take-home message of "think for yourself" - but as I recall, it provided no empirical evidence that this would actually work better. I'd love to see someone do the research to find what method works best for ordinary people - and I suspect the optimum would involve a lot of outsourcing.

And lo, we have corroboration from Tetlock himself, via Carl Bialik in the Wall Street Journal Jan. 6:
The New Yorker's review of [Tetlock's] book surveyed the grim state of expert political predictions and concluded by advising readers, "Think for yourself." Prof. Tetlock isn't sure he agrees with that advice. He pointed out an exercise he conducted in the course of his research, in which he gave Berkeley undergraduates brief reports from Facts on File about political hot spots, then asked them to make forecasts. Their predictions -- based on far less background knowledge than his pundits called upon -- were the worst he encountered, even less accurate than the worst hedgehogs. "Unassisted human intuition is a bomb here," Prof. Tetlock told me.

4 comments:

Russ Steele said...

Anna:

Thanks for the reference. Very interesting reading. Many points were certainly true in the electronics intelligence business. We were sometimes captured by the rut of our experience. It was my job to find new paths to insight, to connect the dots, to kick butts when required. One fun job.

Anna said...

I'm still looking forward to reading the CIA 'book'.

Re the "pundit track record" research, a couple more things -

What I didn't mention but should have is that Tetlock said it wasn't what people thought that made them hedgehogs vs. foxes, it was how they thought. Hedgehogs come in both "left" and "right" varieties; the only pattern (mentioned, that I recall) was that they were more extreme politically than the foxes.

and second - what would the results have been if the subject matter had been science rather than politics/economics? Maybe the same result, but maybe not.

Also - the review ended with the reviewer suggesting a take-home message of "think for yourself" - but as I recall, it provided no empirical evidence that this would actually work better. I'd love to see someone do the research to find what method works best for ordinary people - and I suspect the optimum would involve a lot of outsourcing.

Sisyphus said...

Anna,

You are probably already aware of Kent Bye's Echo Chamber Project.

I'm sure he'd be interested in your comments on Migrating Open Source Intelligence Insights Into Participatory Journalism

Anna said...

Thanks Sisyphus for the Bye links. Not sure how well the proposed system would work in practice though, it seems like it could be 'gamed' by culture-warrior participants. Without an objective measure of 'ultimate' accuracy, the less rigorous the methodology, the greater the likelihood of the practitioners' skewing their results.
(at least that was my (definitely fallible) impression when I followed the links a while back.)