The tables have turned on AI sceptics
Epistemic conservatism no longer favours long timelines
Could we have human-level AI within the next few decades? For a long time, many people have dismissed this idea as armchair speculation. In their view, we shouldn’t ground our beliefs about transformative technologies in vague hunches and fragile multi-step arguments. We need more solid evidence, like clear empirical trends. We need to be epistemically conservative.
I have some conservative instincts myself, but I’m not sure they favour long AI timelines anymore. That might have been the case ten or even five years ago, but things have changed.
Bio Anchors
It’s no accident that the AI timelines debate long lacked empirical grounding. While climate change has a natural metric – temperature – AI progress doesn’t. As a result, forecasts have often relied on intuition.
But in recent years, some researchers have tried to put timeline forecasting on a firmer empirical footing. One attempt that received plenty of attention was Ajeya Cotra’s Bio Anchors report (2020), which plotted compute projections against estimates of the human brain’s compute usage. The model produced multiple forecasts of when it would become feasible to train transformative AI, with a median date around 2052.
Bio Anchors was an impressive research effort, based on real empirical trends. But even so, it was far more theory-laden than climate-style projections. The analogy between brain compute and AI training compute is far from obvious. In addition, the model’s forecasts varied by several decades depending on parameter choices, such as whether AI training was compared to learning over a human lifetime, the evolutionary process that produced human intelligence, or other biological processes. It wasn’t solid evidence by the standards of epistemic conservatism.
Capability benchmarks
A more direct approach is to estimate AI performance on suites of relevant tasks. For instance, METR tracks the length of tasks AI can complete, as measured by the time they’d take a human expert. According to their most recent estimates, this time horizon is now doubling every three months. If this trend continues, AI could be doing tasks that take humans a month within a few years.
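To make the extrapolation concrete, here is a minimal sketch of the arithmetic. The starting horizon and the definition of a "working month" are my own illustrative assumptions, not METR's figures; only the three-month doubling time comes from the claim above.

```python
import math

# Illustrative assumptions (not METR's numbers):
current_horizon_hours = 2.0   # assumed current task horizon, in expert-hours
doubling_time_months = 3.0    # doubling time cited in the text
target_hours = 167.0          # one working month (~4.3 weeks x ~39 hours)

# How many doublings separate today's horizon from a month-long task?
doublings = math.log2(target_hours / current_horizon_hours)
months_needed = doublings * doubling_time_months

print(f"{doublings:.1f} doublings, ~{months_needed / 12:.1f} years")
```

Under these assumptions, month-long tasks arrive in well under a few years; even starting from a much shorter horizon only adds a handful of three-month doublings.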
Most people would agree that METR’s method involves fewer contestable assumptions than Bio Anchors. Instead of looking at inputs and biological comparisons, METR focuses on outputs: what AI systems can actually do. This is much closer to what the epistemic conservative wants.
METR’s work has generally been well received, but it also has its limitations. To keep evaluation tractable, the tasks they study are unusually well-defined. It’s not clear that the results generalise to the messier, more open-ended tasks of real-world jobs. Even some of METR’s own researchers acknowledge that this is an important issue.
Revenue growth
But I think there are even less theoretically loaded reasons to think AI timelines won’t be very long. People are paying more and more money for AI. Plausible extrapolations of this revenue growth provide arguably the most direct empirical case that AI will become a large share of the economy within the next decade.
Some sceptics suggest that this growth merely reflects hype – that AI isn’t as valuable as the numbers suggest. But that is unconvincing. Claims about hype and bubbles carry more force when directed against valuations and investments than against actual consumption. It’s not particularly conservative to assume that people who use AI on a day-to-day basis are mistaken about its value.
While we cannot rule out that growth will taper off, the current momentum seems remarkably strong. And it converges with other evidence, like the rapid benchmark improvements. Very long timelines would require revenue growth to slow dramatically, and I think the burden of proof lies with those who predict such a slowdown.
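To see why sustained growth compounds so quickly, here is a toy extrapolation. Every number in it is an illustrative assumption of mine – the starting revenue, the growth rate, the GDP figure, and the 5% threshold for a "large share" – not reported data.

```python
# Toy compound-growth sketch; all figures are hypothetical assumptions.
start_revenue_bn = 20.0       # assumed current annual AI revenue ($bn)
annual_growth = 2.0           # assumed 2x revenue growth per year, held constant
world_gdp_bn = 110_000.0      # rough world GDP (~$110tn), held constant
threshold = 0.05              # "large share of the economy" = 5% of GDP

revenue = start_revenue_bn
for year in range(1, 13):
    revenue *= annual_growth
    if revenue / world_gdp_bn > threshold:
        print(f"crosses {threshold:.0%} of world GDP in ~year {year}")
        break
```

With these (generous) assumptions the threshold falls inside a decade; the point is not the specific numbers but how few doublings separate today's revenue from macroeconomic significance.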
Expert surveys
But there’s another kind of evidence that’s important for epistemic conservatives: expert surveys. Some of them suggest that a transformative economic impact is still many decades away. In a recent survey by the Forecasting Research Institute, most respondents thought that AI would only increase economic growth fairly modestly. The economists surveyed projected just 3.5% annual growth by 2050, even assuming rapid AI progress to 2030 – AI outperforming top humans in research, coding, and leadership, producing award-winning creative work, and handling nearly all physical tasks. Likewise, they expected the labour force participation rate to be 55%, down only slightly from today’s 61%. And while the survey’s AI experts predicted a greater economic impact, they also believed that most people would still be working by 2050 even under this rapid scenario.
I’m generally a fan of expert surveys, but there are reasons to interpret these results with care. As I’ve previously discussed, respondents may not have fully internalised the rapid-progress scenario when answering questions about its economic impacts. Relatedly, I suspect they simply haven’t thought very much about the impact of AI on economic growth. It’s not clear they’re experts in the same sense as climate scientists asked about future warming. So I think epistemic conservatives should put less weight on FRI’s results than on extrapolations from benchmark and revenue trends.
*
This doesn’t mean that long timelines can be ruled out. Besides new technical obstacles, political intervention could delay AI progress. I take this possibility seriously, and plan to return to it. But the point isn’t that the trend couldn’t break – it’s that this is hardly a conservative position. The AI sceptics can no longer dismiss short or medium timelines as speculation.






