This is a wonderful and important book. Philip Tetlock is a political psychologist who has a knack for innovative research projects (e.g., his earlier work on how people cope with trade-offs in politics). In this book, he addresses a question that would scare away more timid souls: How well do experts predict political and economic events?
Most of Tetlock's findings are based on questions posed in 1988 and 1992, when he asked experts to make predictions both within their fields of expertise and more generally. Experts were asked to predict whether the value of selected variables would go down, remain the same, or go up. They were asked to predict a wide range of possible developments: short-term and long-term electoral success of political parties, levels of political freedom, political stability, marginal tax rates, central bank interest rates, central government expenditure, central government deficits, education spending, health care spending, defense spending, use of military force, participation in international peacekeeping, and acquisition of nuclear weapons. Tetlock also studied predictions about more specific events: the transition from communism (including the rate of privatization of state-owned industries and unemployment rates in postcommunist countries); the (first) Persian Gulf War (whether a war would break out, how many casualties there would be); likely human-caused disasters in the next five, ten, or twenty-five years (e.g., mass starvation, massacres, epidemics); developments in the European Union; the growth of the Internet and dot-com firms; and global warming.
Tetlock begins by examining what he calls the "radical skeptic" view, which is championed both by those of us who build on complexity theory to argue that complex systems such as the political order or the economy are in principle unpredictable, and by those who appeal to psychological and epistemic considerations to hold that humans are not up to such predictions. Tetlock is not himself a radical skeptic: his aim is to find out how experts make predictions so that they can do it better. That it cannot be done at all, or not by humans, is a "challenge" he wishes to put aside, not a conclusion to be embraced. The problem is that Tetlock finds it very difficult to reject the radical skeptic's hypothesis.

Tetlock distinguishes two criteria of a good prediction: discrimination (how precise the prediction is) and calibration (how accurate it is). The good news for those who would reject the skeptical hypothesis is that political and economic experts do better on both measures than undergraduates at predicting future events in their field of expertise. Unfortunately, that is about all the good news. Experts do not do significantly better than what Tetlock calls "dilettantes": people who regularly read The Economist or The New York Times. On the discrimination measure (how precise the predictions are), the experts and dilettantes would beat a chimp who made predictions by throwing a dart at a board divided into "variable will go up," "variable will go down," and "variable will stay the same." Unfortunately, the chimp beats the dilettantes and experts on the calibration score. Still, experts are better on the discrimination dimension: They make more precise, if less accurate, predictions than the chimp would. How good are they? The better half of the expert group accounts for a meager 18% of the variance, the weaker half for about 14%; on average, expert prediction accounts for about 16% of the variance. Even more embarrassing for the experts is that almost any mathematical model, even a very simple-minded one extrapolating the future from the past, beats them on both dimensions. In every domain of study, crude models beat experts. Based on these findings, Tetlock is forced to concede the crux of the skeptical hypothesis: Expert prediction and guesswork are essentially the same.
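To make the two criteria concrete: scoring of probability forecasts standardly separates calibration and discrimination via the Murphy decomposition of the Brier score, where calibration (reliability) measures how closely stated probabilities match observed frequencies and discrimination (resolution) measures how well forecasts separate occasions when the event happens from occasions when it does not. The following is a minimal Python sketch of that decomposition, not Tetlock's actual scoring code; the function name and the toy data are my own. It also shows why the dart-throwing chimp can be well calibrated while discriminating nothing.

```python
from collections import defaultdict

def calibration_and_discrimination(forecasts, outcomes):
    """Murphy decomposition terms of the Brier score.

    forecasts: probabilities assigned to an event (e.g., "variable goes up")
    outcomes:  1 if the event occurred, 0 otherwise

    calibration   (lower is better): do events forecast at probability p
                  happen about p of the time?
    discrimination (higher is better): do the forecasts separate occasions
                  when the event happens from occasions when it does not?
    """
    n = len(forecasts)
    base_rate = sum(outcomes) / n

    # Group occasions by the probability the forecaster assigned.
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)

    # Reliability: weighted squared gap between stated probability
    # and the observed frequency within each probability bin.
    calibration = sum(
        len(obs) * (f - sum(obs) / len(obs)) ** 2
        for f, obs in bins.items()
    ) / n

    # Resolution: weighted squared gap between each bin's observed
    # frequency and the overall base rate.
    discrimination = sum(
        len(obs) * (sum(obs) / len(obs) - base_rate) ** 2
        for obs in bins.values()
    ) / n

    return calibration, discrimination

# A forecaster who always says 1/3 (the chimp on a three-outcome board)
# is perfectly calibrated against a 1/3 base rate but has zero
# discrimination: it never separates events from non-events.
chimp = [1 / 3] * 9
happened = [1, 0, 0, 1, 0, 0, 1, 0, 0]
print(calibration_and_discrimination(chimp, happened))  # (0.0, 0.0)
```

Grouping occasions by the exact probability stated mirrors the usual binning in calibration analysis; with many distinct probability values one would instead bin into ranges (e.g., deciles). A forecaster who assigns high probabilities mostly when the event occurs scores well on discrimination even if the stated probabilities are systematically too high or too low, which is the sense in which experts can beat the chimp on precision while losing to it on accuracy.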
As I said, Tetlock is not himself a skeptic. He hopes that we can improve public policymaking, so he focuses on differences within the expert group, looking at which sorts of experts tend to do better. Remember, this is variance within a group that tends to be pretty awful, but still, there is variance. Tetlock advances two important findings. First, Isaiah Berlin famously distinguished two intellectual styles: the hedgehog and the fox. Hedgehogs see one thing: They are captivated by a single theory, a single, clear view of the world. Foxes, as Berlin conceived of them, see many truths: They are sensitive to indications that they might be mistaken and suspicious of any claim that there is one great truth. Tetlock shows that foxes are the better predictors: Within the expert group, foxlike predictors clearly outperform hedgehogs. Second, the other main predictor of expert accuracy that Tetlock discovers is fame: how well known an expert is and how often the media consult him. Unfortunately, the correlation is negative: The better known an expert is, the worse his predictions. The experts whom more people listen to and read are systematically the worst predictors.
I have focused on some of Tetlock's fascinating findings (there are other intriguing analyses, such as his study of counterfactual historical judgments). Tetlock also spends a great deal of time exploring counterarguments by hedgehogs that their cognitive style really does make for better predictions, once we get clearer about what counts as a "better" prediction. Throughout, Tetlock impresses the reader with his intellectual honesty, never failing to do justice to alternative hypotheses. I do not wish to suggest that there are no worries at all about the data or his analysis: It can be very difficult to track down in the appendix how many experts were asked which questions, and looking for the raw data can be frustrating. These, though, are mere quibbles. This is a great book.