Forecasting: Who keeps the score?

My colleague Hites Ahir has a review and summary of Superforecasting:

Superforecasting: The Art and Science of Prediction. By Philip Tetlock and Dan Gardner. Crown; 352 pages; $28.

Here are four forecasts that have been made in the technology field. First: “There is no reason anyone would want a computer in their home,” made in 1977 by Ken Olsen, president of Digital Equipment Corporation. Second: “There’s no chance that the iPhone is going to get any significant market share. No chance,” made in 2007 by Steve Ballmer, CEO of Microsoft. Third: “In five years I don’t think there’ll be a reason to have a tablet anymore,” made in 2013 by Thorsten Heins, CEO of BlackBerry. Fourth: “Yes, the iPad Pro is a replacement for a notebook or a desktop for many, many people. They will start using it and conclude they no longer need to use anything else, other than their phones,” made a few weeks ago by Tim Cook, CEO of Apple. In the first three cases, it is safe to say that we know the outcome. In the last case, we will have to wait and see.

Can ordinary people make good forecasts? Who keeps the score of all the forecasts that are made? Why does keeping the score matter? What is needed in the forecasting field? Can we do better at forecasting? These are some of the questions discussed in a fascinating new book, Superforecasting: The Art and Science of Prediction by Philip E. Tetlock and Dan Gardner. Tetlock is a professor at the University of Pennsylvania; Gardner is a journalist, author, and lecturer.

The new book by Tetlock and Gardner describes the results of a massive forecasting tournament, the Good Judgment Project, sponsored by the Intelligence Advanced Research Projects Activity (IARPA). The idea behind the project was to see who could invent the best methods of making the kinds of forecasts that intelligence analysts make every day. Participants were asked to make forecasts on a range of topics, such as: Will OPEC agree to cut its oil output at or before its November 2014 meeting? Will the president of Tunisia flee to a cushy exile in the next month? Will the gold price exceed $1,850 on September 30, 2011? Will the euro fall below $1.20 in the next twelve months? The project recruited thousands of volunteers from a wide range of backgrounds, from a retired computer programmer and a social service worker to a homemaker. Below are some of the interesting parts of the book.

Can ordinary people make good forecasts? Here is one example from the forecasting tournament: “With his gray beard, thinning hair, and glasses, Doug Lorch doesn’t look like a threat to anyone. He looks like a computer programmer, which he was, for IBM. He is retired now. (…) Doug likes to drive his little red convertible Miata around the sunny streets, enjoying the California breeze, but that can only occupy so many hours in the day. Doug has no special expertise in international affairs, but he has a healthy curiosity about what’s happening. He reads the New York Times. He can find Kazakhstan on a map. So he volunteered for the Good Judgment Project. Once a day, for an hour or so, his dining room table became his forecasting center, where he opened his laptop, read the news, and tried to anticipate the fate of the world. (…) In year 1 alone, Doug Lorch made roughly one thousand separate forecasts. Doug’s accuracy was as impressive as his volume (…) putting him in fifth spot among the 2,800 competitors in the Good Judgment Project. (…) In year 2, Doug joined a superforecaster team and did even better, (…) making him the best forecaster of the 2,800 GJP volunteers. (…) This is a man with no applicable experience or education, and no access to classified information. The only payment he received was the $250 Amazon gift certificate that all volunteers got at the end of each season. Doug Lorch was (…) so good at it that there wasn’t a lot of room for an experienced intelligence analyst with a salary, a security clearance, and a desk in CIA headquarters to do better. Someone might ask why the United States spends billions of dollars every year on geopolitical forecasting when it could give Doug a gift certificate and let him do it.”

Who keeps the score of all the forecasts that are made? “More often forecasts are made and then … nothing. Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn. The reason? Mostly it’s a demand-side problem: The consumers of forecasting—governments, business, and the public—don’t demand evidence of accuracy. So there is no measurement. Which means no revision. And without revision, there can be no improvement.”
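
The Good Judgment Project did keep score. It used the Brier score, the measure Tetlock describes in the book: the squared gap between a probabilistic forecast and what actually happened. Here is a minimal sketch in Python of how that bookkeeping works; the forecasts and outcomes below are invented for illustration.

```python
def brier_score(p_yes, occurred):
    """Brier score for a yes/no question, in the 0-to-2 form used in
    Tetlock's tournaments: squared error summed over both outcomes.
    0.0 is perfect foresight; 2.0 is total certainty in the wrong
    answer; hedging at 50/50 always scores 0.5."""
    p_no = 1.0 - p_yes
    o_yes, o_no = (1.0, 0.0) if occurred else (0.0, 1.0)
    return (p_yes - o_yes) ** 2 + (p_no - o_no) ** 2

# Invented forecasts: (probability assigned to "yes", what happened).
forecasts = [
    (0.80, True),   # a confident hit
    (0.30, False),  # a cautious hit
    (0.95, False),  # a confident miss, which the score punishes hardest
]

scores = [brier_score(p, outcome) for p, outcome in forecasts]
print([round(s, 2) for s in scores])        # [0.08, 0.18, 1.81]
print(round(sum(scores) / len(scores), 2))  # mean: 0.69
```

Keeping score this way is what makes revision, and hence improvement, possible: a forecaster who consistently beats the 0.5 of a coin-flip hedger is demonstrating skill rather than luck.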

Why does keeping the score matter? “With scores and leaderboards, forecasting tournaments may look like games but the stakes are real and substantial. In business, good forecasting can be the difference between prosperity and bankruptcy; in government, the difference between policies that give communities a boost and those that inflict unintended consequences and waste tax dollars; in national security, the difference between peace and war. If the US intelligence community had not told Congress it was certain that Saddam Hussein had weapons of mass destruction, a disastrous invasion might have been averted.”

What is needed in the forecasting field? “For centuries, [the absence of rigorous testing] hobbled progress in medicine. When physicians finally accepted that their experience and perceptions were not reliable means of determining whether a treatment works, they turned to scientific testing—and medicine finally started to make rapid advances. The same revolution needs to happen in forecasting.”

Can we do better at forecasting? “(…) it turns out that forecasting is not a ‘you have it or you don’t’ talent. It is a skill that can be cultivated.” Here are some of the book’s tips for aspiring superforecasters: “Break seemingly intractable problems into tractable sub-problems (…) Strike the right balance between inside and outside views (…) Strike the right balance between under- and overreacting to evidence (…) Don’t treat commandments as commandments.”
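
Two of those tips lend themselves to a numerical reading. Taking the outside view first means anchoring on a base rate; reacting proportionately to new evidence means updating that anchor with Bayes’ rule, the textbook formalization of measured belief revision that the book discusses. A small sketch, with all the probabilities invented for illustration:

```python
def bayes_update(prior, likelihood_if_yes, likelihood_if_no):
    """Posterior probability of 'yes' after one piece of evidence."""
    numer = prior * likelihood_if_yes
    denom = numer + (1.0 - prior) * likelihood_if_no
    return numer / denom

# Outside view: suppose leaders in comparable situations have fled into
# exile about 20% of the time (an invented base rate).
p = 0.20

# Inside view: a report that the army has withdrawn its support. Suppose
# such reports precede 70% of departures but also appear in 20% of the
# cases where the leader stays (invented likelihoods).
p = bayes_update(p, likelihood_if_yes=0.70, likelihood_if_no=0.20)
print(round(p, 2))  # 0.47
```

Underreacting would mean ignoring the report and staying at 20%; overreacting would mean leaping to near-certainty on a single headline. The arithmetic keeps the revision proportional to how diagnostic the evidence actually is.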
