By Lucia Luque Yuste
Artificial intelligence has advanced by leaps and bounds over the past decade, to the point where it now stands at the center of some of the most consequential scientific, ethical, and social debates of our time. Geoffrey Hinton, awarded the Nobel Prize in Physics in 2024 for his foundational contributions to machine learning and widely considered the “godfather” of AI, has been one of the most influential, and at the same time most worried, voices on the future of this technology.
The risk according to Geoffrey Hinton
In a recent interview with CBS, Hinton expressed growing concern about the pace of AI development. While he acknowledges its enormous potential to transform sectors such as education and medicine, and even to help address global challenges like climate change, he warns that humanity is at a turning point. Hinton estimates a 10% to 20% risk that artificial intelligence could wrest control from humans. Far from anecdotal, the figure reflects how seriously experts in the field take the possibility of losing control of increasingly autonomous and sophisticated systems.
Hinton’s metaphor is particularly vivid: “We are like someone who has an adorable tiger cub. Unless you are completely sure it won’t want to kill you when it grows up, you should be worried.” With it, he emphasizes that the danger of AI is neither immediate nor obvious, but grows more plausible as the technology gains capability and autonomy.
What does it mean for AI to “dominate” humanity?
The central fear is not that AI will develop a malicious will, but that, by surpassing human intelligence in all relevant domains, its goals and actions could conflict with human interests. The hypothesis of existential risk from artificial general intelligence (AGI) suggests that if a superintelligent AI is developed, an entity with cognitive abilities superior to humans in every domain, human control over the fate of civilization could be seriously compromised.
Philosopher Nick Bostrom and scientists such as Stephen Hawking have argued that, just as the fate of other animal species today depends on human will, the fate of humanity could one day depend on the decisions of a superintelligent AI. The problem is that there is no guarantee that the objectives of an advanced AI will be compatible with human values and needs.
The role of companies and regulation
Hinton warns not only about the technical risk, but also about the posture of the major technology companies. According to him, these firms are pushing to weaken the already scarce regulation of AI development, which leaves society more exposed to scenarios of lost control. Google’s recent reversal on military applications of AI is a worrying example: the company has dropped the ethical restrictions that barred the use of its technology in weapons and surveillance, opening the door to potentially dangerous developments.
The militarization of AI, together with the trend toward creating increasingly autonomous systems, increases the risk that the technology could escape human oversight. Projects such as the ‘Replicator’ program in the United States, which seeks to deploy armies of autonomous drones controlled by AI, illustrate how the line between utility and danger is rapidly blurring.
The scientific community and the probability debate
Hinton is not alone in these concerns. Other renowned experts, such as Yoshua Bengio and Max Tegmark, have warned about the risks of building AI systems with agency, that is, systems capable of setting and pursuing their own goals without direct human intervention. The fear is that, in its pursuit of efficiency or self-preservation, an advanced AI could make decisions incompatible with human well-being.
A 2022 survey of AI researchers found that more than half believed there was at least a 10% chance that a failure to control AI would lead to an existential catastrophe. That threshold sits at the lower bound of Hinton’s range and shows that the debate is no longer marginal, but central to the technological research agenda.
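The weight these percentages carry comes down to a simple expected-value argument, sketched below. This is an illustration, not something Hinton or the survey authors have formalized; the loss term $L$ is a hypothetical placeholder for the scale of the harm:

$$\mathbb{E}[\text{harm}] = p \cdot L, \qquad p \in [0.10,\ 0.20]$$

Even at the lower bound $p = 0.10$, if $L$ stands for a civilization-scale loss, the product dwarfs the cost of the precautions Hinton proposes, which is why a 10% chance reads as an alarm rather than a reassurance.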
What can be done?
In this context, Hinton proposes that companies allocate at least one-third of their AI budgets to safety research, far above the current fraction. He also calls for strict international regulation and greater transparency in the development of advanced systems.
The possibility that artificial intelligence could come to dominate humanity is not a science-fiction scenario but, according to some of the world’s leading experts, a real one. Although a figure of 10% to 20% may seem low, the potential impact is so great that it demands an immediate, coordinated response. Humanity, as Hinton warns, is at a decisive moment: the future of AI will depend on the ethical, regulatory, and technical decisions we make today. The question is no longer whether AI can surpass our capabilities, but whether we will be able to guide its development for the benefit of all.
References:
1. 20minutos. (2025, April 27). El ‘padrino’ de la IA revela las probabilidades de que esta domine al ser humano: «La gente aún no lo ha entendido» [The ‘godfather’ of AI reveals the odds that it will dominate humans: “People still haven’t understood it”]. https://www.20minutos.es/tecnologia/padrino-ia-revela-probabilidades-esta-domine-ser-humano-gente-aun-no-entendido-5704375/
2. La Razón. (2025, April 5). Geoffrey Hinton, padre de la IA, alerta de sus 3 grandes peligros: «Estarán muy interesados en crear robots asesinos» [Geoffrey Hinton, father of AI, warns of its 3 great dangers: “They will be very interested in creating killer robots”]. https://www.larazon.es/tecnologia-consumo/geoffrey-hinton-padre-alerta-sus-3-grandes-peligros-estaran-muy-interesados-crear-robots-asesinos_2025040567eea45fb35616000190064a.html