Artificial intelligence is advancing faster than any previous technology in human history—and society may not be ready for what comes next. That is the central warning issued by Dario Amodei, chief executive of AI safety-focused company Anthropic, who says humanity is approaching a moment that could fundamentally reshape civilisation.
In a detailed 38-page essay titled The Adolescence of Technology, Amodei argues that powerful AI systems are no longer a distant possibility. Instead, they may emerge within the next one to two years, bringing both extraordinary benefits and unprecedented risks. According to him, the challenge is not whether AI will become powerful—but whether humans have the wisdom, institutions, and safeguards to control it.
A Defining Moment for Humanity
Amodei describes the current phase of AI development as a “civilisational rite of passage.” He believes humanity is about to be handed immense intellectual power, comparable to the combined capabilities of millions of elite experts operating at machine speed.
To explain the scale of this shift, he offers a striking analogy: a “country of geniuses in a data centre.” In this imagined scenario, AI systems would outthink and outwork the most capable human minds across science, governance, economics, and warfare—simultaneously.
From a national security perspective, Amodei argues such a development would likely be classified as the most serious strategic threat in generations. He stresses that this is not science fiction, but a realistic near-term possibility that demands urgent attention.
High-Risk Domains: Biology and Security
Among the many risks outlined in the essay, Amodei identifies biological misuse as the most dangerous. Advances in AI could drastically lower the barriers to creating or modifying harmful biological agents, placing devastating capabilities into the wrong hands.
Historically, biological weapons required large teams, specialised equipment, and advanced expertise. With AI assistance, those constraints could weaken. Amodei warns that individuals with malicious intent could gain capabilities once limited to top-level research scientists, creating serious global security vulnerabilities.
This concern extends beyond terrorism. AI-driven biological research could outpace the world’s ability to detect, prevent, or respond to engineered threats, leaving governments struggling to keep up.
The Geopolitical Race for AI Power
Amodei also highlights the geopolitical implications of advanced AI. Nations that achieve AI dominance could use it to expand military power, economic influence, and domestic surveillance. In authoritarian states, this could entrench control and suppress freedoms at an unprecedented scale.
While careful not to single out any country unfairly, Amodei notes that AI development within tightly controlled political systems raises distinct risks. He has consistently argued that exporting advanced AI hardware to authoritarian regimes could undermine global stability, comparing it to handing over strategic weapons.
In his view, unchecked AI competition between nations could destabilise the international order unless clear norms and safeguards are established.
Accountability for AI Companies
Importantly, Amodei does not place responsibility solely on governments. He emphasises that AI companies themselves hold enormous power and must be subject to scrutiny. These firms control massive computational resources, develop frontier models, and influence millions of users daily.
Because of this concentration of influence, Amodei believes AI companies should adopt strong internal governance, transparency, and safety mechanisms. He supports voluntary commitments to restrict dangerous uses of AI, particularly in sensitive areas such as biology, cyber operations, and mass surveillance.
Without responsible leadership from within the industry, he warns, regulation alone will not be sufficient.
Economic Disruption and Job Losses
Beyond security risks, Amodei addresses the economic impact of AI—particularly the threat of large-scale job displacement. He has previously warned that up to half of entry-level white-collar jobs could be automated within the next five years.
Such disruption, he argues, could widen inequality and concentrate wealth unless proactive steps are taken. Amodei urges companies to invest in retraining, redeployment, and long-term support for workers displaced by AI-driven productivity gains.
He also suggests that in a future of extreme abundance, new economic models—possibly including income guarantees or long-term employee support—may become necessary. Anthropic, he notes, is actively exploring such approaches internally.
A Call for Measured, Global Action
Despite the seriousness of his warnings, Amodei remains cautiously hopeful. He believes the risks posed by AI can be managed through a combination of responsible corporate behaviour, thoughtful government policy, and international cooperation.
He cautions, however, that regulation must be precise and evidence-based. Poorly designed rules could stifle innovation or create unintended consequences, making the situation worse rather than better.
“This is a serious civilisational challenge,” Amodei argues, “but if we act decisively and carefully, our chances are good.”
As AI continues its rapid ascent, Amodei’s message is clear: the technology itself is not the enemy. The real test lies in whether humanity can match AI’s power with maturity, foresight, and collective responsibility.