More Than 800 AI Experts And Public Figures Have Signed A Statement Urging A Halt To Developing AI Systems More Intelligent Than Humans
A broad coalition of AI researchers, ethicists, tech leaders, and cultural figures has signed a statement urging a global pause on the creation of AI systems more intelligent than human beings. The effort was coordinated by the Future of Life Institute, which is well known for its earlier AI-related warnings issued in 2023 and 2024.
The statement reads, “We call for a binding global agreement to prohibit the development and deployment of superintelligence until it can be done safely and controllably.” It adds, “We urge governments to work together to establish enforceable international frameworks that ensure AI systems remain under human control.”

Why Experts Say Superintelligent AI Could Threaten Human Survival and Autonomy
Signatories include Turing Award winners Geoffrey Hinton and Yoshua Bengio, along with Apple co-founder Steve Wozniak, Richard Branson, Meghan Markle, and Susan Rice, the former national security advisor of the United States.
The statement emphasizes, “Superintelligence is not merely a technological milestone, it is a potential point of no return for human agency. The development of superintelligence must be treated as a global priority alongside pandemics and nuclear weapons.”
In other words, it cannot be left unchecked. The signatories warn that superintelligent AI could “outperform humans in virtually every domain” and poses “profound risks to humanity.” One reason AI is becoming more analytical than humans is that AI tools face no distractions, whereas people must juggle the demands of survival: maintaining relationships, coping with health problems, managing their thoughts and emotions.
AI systems, by contrast, face none of these pressures. They simply do what they are instructed to do. With no survival needs of their own, and with humans maintaining them, they can focus entirely on their tasks and surpass human capabilities. The vast amounts of data they draw on also make them highly effective at strategic planning.
Humans typically make decisions by conducting research, drawing on accumulated knowledge, or learning from past experience and prevailing trends. AI, however, can sift through vast amounts of data within milliseconds and identify the most necessary and impactful course of action.
Key Risks Outlined In The Statement
According to the statement, “The risks of superintelligence are not hypothetical. They are foreseeable, and they are preventable, if we act now.” The signatories argue that AI systems are already slipping beyond human control: no longer strictly bound by their original design and purpose, they increasingly make decisions beyond human oversight, and even human comprehension.
They also stress that AI tools are being weaponized. Authoritarian regimes are using them to manipulate public opinion and advance their political agendas, while rogue actors deploy them for autonomous warfare and surveillance that endanger human life.
Beyond this, AI systems are causing economic disruption through mass job displacement, destabilizing labor markets by introducing something more capable than the human workforce. AI systems, of course, do not need jobs; humans do, because their livelihoods depend on them. Replacing the human workforce wholesale with AI, the argument goes, is not wise at all.
In the digital era, the need for truth is greater than ever, yet AI-generated misinformation and psychological profiling are eroding social ethics, and AI-driven social manipulation is at its peak. The statement also points to existential risk: if AI systems act against human interests, the consequences could be irreversible.
What the Statement Demands
The statement says, “Until such frameworks are in place, we call for a moratorium on training AI systems more powerful than those currently deployed.” The signatories want a global governance body with enforcement power, safety benchmarks backed by mandatory testing of every system, public transparency, and democratic input into how AI is governed.
