On "Prediction Accuracy"


Journalists often write articles compiling forecasts that have not come to pass, attributing them to futurists. Real futurists always respond, “Futurists don’t make predictions!”

That is true in theory: the goal of futurism is not to make predictions. In any case, the mockery-inducing forecasts are usually by non-futurists.

Still, the truth is that real futurists, people who get called futurists, and people who could be called futurists make forecasts that sound a great deal like predictions all the time.

You can attack these forecasts cheaply and sloppily, as the media often does, but they can also be approached as genuinely useful tools in diagnosing the quality of someone’s foresight thinking. Forecast accuracy can help illuminate three things:

  • Subject knowledge: If the person is making a forecast about a topic, this is fair game, even if many futurists concentrate on process, not content. Accuracy can help reveal whether they know enough about a topic to work effectively in the area — and whether they understand the limitations of their knowledge.
  • Perceptions of change: A basic futurist skill is having a feel for change: how fast or slow change tends to go, and the plausible bounds of that speed, in different arenas and systems. Incorrect forecasts are often due to a failure in this critical area.
  • Systems thinking: Forecast failures often reveal inadequate systems thinking, another basic futures competency. The person may not have understood the drivers or actors in the system, or might have failed to anticipate a discontinuity.

So forecast accuracy should be used with care in evaluating the quality of foresight, but it can be a meaningful yardstick.

Comments

On prediction accuracy...also for futurists

Thank you for this note, Josh! You fully echo my thoughts.

As far as process is concerned (your paragraph on subject knowledge), it seems to me that verifying accuracy is even more important. Indeed, process also involves methodology, and it should allow us to produce the best possible probabilistic "exploratory foresight" (to use Glenn and Gordon's typology) for the best possible actions. Even when a range of opposing or diverse scenarios is involved, those scenarios are meant to define the borders of plausibility. Within that framework we can estimate the validity of the scenarios, identify where we have been right or wrong, and see where our methodology on the one hand, and our knowledge on the other, must be improved, changed, or even completely revised.

Even normative foresight could be assessed, using as a benchmark what we achieved compared with what we would have liked to achieve, and lessons can be learned here too.
