Demis Hassabis, chief executive of Google DeepMind, the company’s UK-based AI lab, has warned about the potential threats posed by the development of artificial intelligence (AI), likening the risks to those of climate change. His concerns center on the possibility of AI systems becoming superintelligent and causing serious harm, for example by making it easier to create bioweapons.
“We must take the risks of AI as seriously as other major global challenges, like climate change,” he told the paper, also citing the possibility that AI could make it easy to create bioweapons. “It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.”
Hassabis has also proposed the establishment of an independent body, modeled on the United Nations’ Intergovernmental Panel on Climate Change (IPCC), to oversee AI.
Against this backdrop, Google, Microsoft, OpenAI, and Anthropic jointly announced a $10 million AI Safety Fund aimed at advancing research into tools for evaluating and testing the most capable AI models. Hassabis praised the move in a post on X, formerly known as Twitter, writing, “We’re at a pivotal moment in the history of AI.”
Some of these warnings have been met with skepticism, however, given the track record of the companies voicing them. Google, for instance, drew criticism for dismissing AI ethicist Timnit Gebru in late 2020 and AI researcher Margaret Mitchell in early 2021; the two had co-authored a paper outlining a range of AI risks, including the technology’s environmental impact, its effects on marginalized communities, biases in training data, the difficulty of auditing vast datasets, and AI’s potential to deceive people.
A notable tension in the field is that experts like Hassabis warn about superintelligent AI systems going rogue while actively building the very technology they caution against. That coexistence of caution and acceleration underscores the ongoing debate about AI safety and ethics.
The debate over AI’s potential threats to humanity thus remains complex, mixing genuine concern from researchers like Demis Hassabis, industry efforts to promote AI safety, and the practical difficulty of balancing technological advancement with ethical considerations.