
Geoffrey Hinton’s Stark Warning on AI and Human Extinction


Geoffrey Hinton, a renowned computer scientist and Nobel Prize winner, has issued a grave warning concerning the future of artificial intelligence (AI) and its implications for humanity. The esteemed ‘godfather’ of AI stated that there is a ‘10% to 20%’ likelihood that the technology could lead to human extinction within the next three decades.

In a recent interview with BBC Radio 4’s Today program, Hinton was asked whether his assessment of the chances of an AI catastrophe had changed. He replied, ‘Not really, 10% to 20%.’ This marks a slight increase from his previous estimate of a 10% chance of catastrophic outcomes.

Hinton elaborated on his fears, noting, ‘If anything, the situation appears more concerning. We’ve never had to manage entities that are more intelligent than ourselves before.’ He pointed out how rarely a more intelligent being has been controlled by a less intelligent one.

In his analogy, Hinton compared our current state as humans to that of toddlers in the presence of advanced AI systems, stating, ‘I like to think of it as imagining yourself and a three-year-old. We’ll be three-year-olds.’ His comments underscore the vulnerability of humans in relation to the rapidly advancing capabilities of AI.

After resigning from Google in 2023, Hinton voiced his concerns more openly about the reckless pace of AI development and its potential misuse by individuals with malicious intent. He believes that ‘bad actors’ could exploit AI technologies to cause great harm and chaos.

Reflecting on the swift evolution of AI technology, Hinton remarked, ‘I didn’t think we would reach this stage so soon. I anticipated it would take longer.’ With experts predicting that AI could surpass human intelligence within the next 20 years, he added, ‘It is a very scary thought.’

He strongly advocated for government oversight in AI development, cautioning that relying solely on profit-driven large corporations would not guarantee safe AI innovations. He concluded, ‘The only way to compel these major companies to invest in safety research is through government regulation.’