
THE AI BAN WE NEED

TIME Magazine | November 24, 2025

No one knows how to control AIs that are vastly more competent than any human, yet we are getting closer and closer to developing them; at the current pace, many experts expect superintelligence within the next five years. This is why leading AI scientists warn that developing superintelligence, AI that outperforms humans across all cognitive tasks, could result in humanity's extinction.

By Andrea Miotti

Satellite images of a growing, then stabilizing ozone hole show how an existential threat to humanity was stopped when the world quickly came together.

Tech companies are pouring billions of dollars into reaching superintelligence as fast as possible. Once we develop machines significantly more competent than us across all domains, we will most likely be at the mercy of the superintelligent machines themselves, as currently no country, no company, and no person knows how to control them. In theory, a superintelligent AI would pursue its own goals, and if those goals were incompatible with sustaining human life, we would be annihilated.

To make matters worse, AI developers do not understand how current powerful AI systems actually work. Unlike bridges or power plants, which are designed to precise human specifications, today's AI systems are "grown" from vast datasets, through processes their own creators cannot interpret. Even Anthropic CEO Dario Amodei admits that we only "understand 3% of how they work." Despite this danger, superintelligence remains the goal of leading AI companies: OpenAI, Anthropic, Google DeepMind, Meta, xAI, DeepSeek. And given the skyrocketing valuation of these companies, they are not about to stop by themselves.
