AI Wealth Truth (23): Why Experts' Forecasts Can Be Worse Than Random
Hedgehogs vs foxes: the most confident experts are often the most wrong. People who say "I don't know" can be more accurate
I. Turn on the TV, open a newspaper, scroll social media. You see experts forecasting everywhere. "The stock market will rise 15% next year." "Housing prices will bottom in 2025." "Bitcoin will hit $100,000." But how accurate are these forecasts?
II. In 2005, psychologist Philip Tetlock published the results of a 20-year research project. He collected more than 28,000 forecasts on political and economic events from 284 experts. Then he tracked their accuracy. The conclusion was shocking: experts were only slightly better than random guessing. In some domains, they were even worse than random.
III. How is that possible? They are experts.
IV. Tetlock found a key distinction: hedgehog-style experts vs fox-style experts. The metaphor comes from a line attributed to the ancient Greek poet Archilochus, popularized by Isaiah Berlin's 1953 essay: "The fox knows many things, but the hedgehog knows one big thing."
V. Hedgehog experts: They have a powerful single framework. They use it to explain everything. They are extremely confident in their predictions. They show up in media often and speak in absolute terms. They sound authoritative, but their forecasting accuracy is the worst.
VI. Fox experts: They do not rely on one grand theory. They analyze from multiple angles. They stay humble about predictions. They often say "it depends" or "I'm not sure". They sound less authoritative, but their forecasting accuracy is higher.
VII. Why do hedgehog experts perform worse?
VIII. Reason 1: overconfidence. Hedgehogs trust their theory too much. They ignore evidence that contradicts it. They over-interpret evidence that supports it. Confirmation bias traps them inside their own theory bubble.
IX. Reason 2: no updating. When their forecasts are wrong, hedgehogs do not admit it. They say "it is not time yet" or "an unexpected event changed things". They do not update their model. Foxes revise their views when new information arrives.
X. Reason 3: incentives reward the wrong behavior. Media rewards confident predictions, not "I don't know". Hedgehogs look better on camera. Audiences remember the confident calls that hit and forget the error record. The media ecosystem selects for the worst forecasters.
XI. An even more interesting finding: fame and accuracy are negatively correlated. The more famous an expert is, the less accurate they tend to be. Because fame requires confidence and storytelling, not accuracy. The experts you see on TV are often the ones least worth listening to.
XII. Another finding: extreme forecasts are more likely to be wrong. "A dramatic shift is coming" is more often wrong than "things will stay mostly stable". Because most of the time, there is no dramatic shift. But extreme forecasts have higher news value and attract more attention. You hear more extreme forecasts, and extreme forecasts are more wrong.
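The base-rate logic behind this can be made concrete with a toy calculation. All numbers here are hypothetical, chosen only to illustrate the point, not taken from Tetlock's data:

```python
# Toy illustration (hypothetical numbers): if dramatic shifts occur in
# only 10% of years, the boring forecast wins on raw accuracy alone.
base_rate_shift = 0.10  # assumed frequency of "dramatic shift" years

# A forecaster who always predicts "mostly stable" is right whenever
# no shift happens; one who always predicts "dramatic shift" is right
# only in shift years.
acc_stable_forecaster = 1 - base_rate_shift   # 0.90
acc_extreme_forecaster = base_rate_shift      # 0.10

print(f"'mostly stable' forecaster accuracy: {acc_stable_forecaster:.0%}")
print(f"'dramatic shift' forecaster accuracy: {acc_extreme_forecaster:.0%}")
```

The asymmetry in attention runs the other way: the 10%-accurate forecaster gets the headlines.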
XIII. Will the AI era change this?
XIV. AI can process more data and detect patterns humans miss. But AI forecasting has limits. AI is trained on historical data. It is good at predicting cases where "the past repeats". But the events that matter most are often unprecedented. AI cannot predict black swans either.
XV. There is a bigger problem: AI gives people a false sense of precision. AI outputs forecasts with many decimals. "GDP growth will be 3.27%." This creates an illusion of accuracy. In reality, any economic forecast beyond one decimal place is mostly noise. AI's precision is pseudo-precision.
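One way to see the pseudo-precision: compare the digits a forecast reports against the error such forecasts historically carry. The figures below are invented for illustration:

```python
forecast_a = 3.27    # hypothetical AI point forecast, two decimals
forecast_b = 3.31    # a "different" two-decimal forecast
typical_error = 1.0  # assumed typical forecast error, in percentage points

# The gap between the two forecasts is a small fraction of the error
# either one actually carries, so the second decimal is pure noise.
gap = abs(forecast_a - forecast_b)
print(f"gap = {gap:.2f}pp vs typical error = {typical_error:.1f}pp")
```

If the error bar is twenty-five times wider than the gap, the extra decimals convey confidence, not information.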
XVI. There is another risk: AI may amplify the hedgehog tendency. If an AI is designed to "give a certain answer", it becomes a hedgehog. It will not say "I don't know". It will fabricate a plausible-sounding answer. AI confidence has no necessary link to AI accuracy.
XVII. So how should you treat expert forecasts?
XVIII. 1. Ignore specific numbers. "The market will rise 15% next year" is not useful. Ranges are more meaningful. "Maybe between -10% and +30%" is more honest than "15%".
XIX. 2. Look at the forecaster's cognitive style. If an expert speaks in absolutes and is deeply convinced by one theory, be careful. They may be a hedgehog. If an expert acknowledges uncertainty and mentions multiple possibilities, they may be more worth listening to.
XX. 3. Look at long-term track records, not recent hits. A single win or loss can be luck. Only long-term records separate skill from luck. The problem is: most people do not track experts over the long run.
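Tracking a long-term record is mechanically simple. Tetlock's project scored probability forecasts with the Brier score: the mean squared gap between stated probabilities and what actually happened, where 0 is perfect and lower is better. The two forecasters below are invented, but they show how confident-and-wrong loses to hedged-and-roughly-right:

```python
# Brier score: mean squared error between probability forecasts and
# binary outcomes. 0 = perfect; always saying 0.5 scores 0.25.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 0]                  # what actually happened
hedgehog = [0.95, 0.90, 0.05, 0.10, 0.95]   # confident, often wrong
fox      = [0.70, 0.30, 0.20, 0.60, 0.40]   # hedged, roughly right

print(f"hedgehog Brier score: {brier(hedgehog, outcomes):.3f}")
print(f"fox Brier score:      {brier(fox, outcomes):.3f}")
```

Note that the fox never sounds certain, yet scores far better; the metric punishes misplaced confidence quadratically.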
XXI. 4. Beware hindsight. Experts can always explain the past with perfect logic. But explaining the past does not mean predicting the future. Post-hoc explanation is cheap. Forward-looking prediction is valuable.
XXII. 5. Accept unpredictability. The future is fundamentally uncertain. Anyone who claims to precisely predict the future is either lying to you or lying to themselves. The most honest forecast is: "I don't know."
XXIII. Nobel laureate Daniel Kahneman once said: "If someone tells you the thing that needs forecasting is forecastable, they are fooling you." The value of experts is not prediction. It is helping you understand frameworks, identify risks, and prepare. Predicting specific outcomes is an impossible task.
XXIV. Next time you see an expert forecast, ask yourself: How confident are they? Confidence is a danger signal. Does their theory explain everything? A theory that explains everything often explains nothing. Do they admit what they do not know? Admitting ignorance is a sign of wisdom. The most accurate forecasters are often the most humble. In the AI era, the forecasting market is booming. But forecasting quality has not improved. The only forecast you can make with certainty is: most forecasts are wrong.