For years, warnings about artificial intelligence have grown increasingly dramatic. Some of the world’s most prominent researchers have suggested that advanced AI could pose an existential threat to humanity — possibly within decades, or even sooner.
Now, one of those leading experts has revised their timeline.
In a recent reassessment reported by The Guardian, a prominent AI safety researcher said that while the risk of catastrophic AI outcomes remains real, the most extreme scenarios may be further away than previously feared. The update has sparked debate across the tech world: Is this a sign that AI fears were overblown — or a reminder that even distant risks demand urgent preparation?
The answer, according to many experts, is more nuanced than the headlines suggest.

What Changed in the Expert’s Assessment
The revised timeline does not mean the expert believes AI is safe. Instead, it reflects new judgments about how quickly certain dangerous capabilities might emerge.
Key reasons cited for pushing the timeline back include:
- Slower-than-expected progress toward fully autonomous, self-improving systems
- Continued reliance of AI on human oversight and infrastructure
- Technical challenges in creating systems that can independently pursue long-term goals
- Improved understanding of where current models still fall short
In other words, AI is advancing rapidly — but not yet in the specific ways required for the most extreme scenarios.
What Experts Mean by “Destruction of Humanity”
The phrase is often misunderstood in public debate.
AI researchers are not typically imagining a single dramatic event. Instead, they describe cascading risks, such as:
- Loss of human control over critical systems
- AI-driven economic and political destabilization
- Autonomous weapons escalation
- Large-scale misinformation undermining democracy
- Gradual erosion of human agency
Existential risk doesn’t require a sudden apocalypse — it can emerge through systemic failure over time.
Why a Delayed Timeline Is Not Reassuring
Many AI safety experts stress that pushing back timelines should not reduce urgency.
A longer timeline can actually:
- Increase complacency among policymakers
- Encourage faster commercial deployment without safeguards
- Reduce funding for safety and alignment research
- Shift attention away from long-term planning
As one researcher put it, “Delayed risk is not eliminated risk.”

The Divisions Inside the AI Community
The AI field is far from unified on existential risk.
Those Who Emphasize Long-Term Catastrophic Risk
This group argues that even a small probability of human extinction justifies immediate global action, given the stakes.
Those Who Focus on Near-Term Harms
Others believe current dangers — bias, surveillance, labor disruption, misinformation — deserve more attention than speculative future scenarios.
Those Who Reject Existential Risk Framing
A minority of researchers argue that fears of human extinction distract from practical governance and exaggerate AI’s autonomy.
The timeline shift has intensified — not resolved — these disagreements.
Why AI Progress Is Hard to Predict
One reason timelines keep changing is that AI development is nonlinear.
Breakthroughs can:
- Appear suddenly after years of slow progress
- Come from unexpected research directions
- Be accelerated by economic or geopolitical pressure
At the same time, real-world deployment introduces friction: regulation, costs, energy limits, and social resistance all slow adoption.
This makes forecasting AI’s trajectory inherently uncertain.
The Role of Incentives and Competition
Even if catastrophic AI capabilities are decades away, current incentives push toward rapid scaling.
- Companies race to release increasingly powerful models
- Governments fear falling behind rivals
- Investment flows reward speed over caution
These dynamics mean that safety preparation must happen long before danger is obvious.
What Responsible AI Preparation Looks Like
Experts broadly agree on several steps that remain urgent regardless of timelines:
- Increased funding for AI safety and alignment research
- Mandatory testing and evaluation of advanced systems
- Limits on autonomous decision-making in high-risk domains
- International coordination on AI governance
- Transparency around model capabilities and failures
Delaying action until certainty emerges would be a mistake.
Why the Public Debate Often Misses the Point
Media narratives tend to swing between:
- “AI will destroy humanity soon”
- “AI fears are exaggerated and unserious”
Most experts reject both extremes.
The real concern is not a specific date — but the gap between AI capability growth and society’s ability to govern it.
Frequently Asked Questions
Does this mean AI is no longer an existential threat?
No. The expert still believes extreme risks are possible, just less imminent than previously thought.
Why do AI experts keep changing timelines?
Because AI development is complex, uncertain, and influenced by many technical and social factors.
Should governments slow AI development?
Opinions vary. Many experts support targeted pauses or restrictions on the most advanced systems while safety measures catch up.
Is focusing on extinction risk distracting from real harms?
Some argue yes; others argue that both long-term and near-term risks must be addressed simultaneously.
What happens if society ignores long-term AI risk?
Risks may become irreversible once systems are deeply embedded in infrastructure and economies.
Can AI safety actually be solved?
There is no guarantee — which is precisely why many experts argue preparation must begin early.

The Bottom Line
The decision by a leading AI expert to push back predictions of humanity’s destruction should not be mistaken for reassurance.
It is a reminder that AI risk is about trajectories, not deadlines.
The future of artificial intelligence will be shaped not by a single breakthrough, but by countless choices — technical, political, and ethical — made long before catastrophe is obvious.
The question is no longer when AI might become dangerous.
It’s whether humanity will use the time it has — however long that turns out to be — wisely.
Source: The Guardian