This Researcher Says AI’s Biggest Risk Isn’t ‘Consciousness’ – It’s The Corporations That Control Them

Artificial intelligence (AI) is rapidly evolving and has the potential to revolutionize industries from healthcare to transportation. However, concerns have also been raised about its ethical implications, including the possibility that it could be used to harm individuals or groups. This has fueled a growing debate about how to balance the benefits of AI against its risks.

In this context, two prominent AI experts have expressed contrasting views on the dangers of AI and the importance of addressing its negative impacts.

Geoffrey Hinton, a 75-year-old computer scientist and a pioneer in artificial intelligence, has resigned from Google, warning of the possibility that AI could overtake human intelligence and become capable of destroying humanity.

However, Meredith Whittaker, a prominent AI researcher and president of the Signal Foundation, believes that Hinton's warnings are a distraction from more pressing threats, and that workers must push back against technology's harms from within the companies that build it. She also pointed out that Hinton did not show solidarity or take action when others were organizing to check the most dangerous impulses of the corporations that control AI technology.

Whittaker was forced out of Google in 2019 in part for organizing employees against the company’s deal with the Pentagon.

In addition, Whittaker believes that women, particularly Black women and women of colour, are punished more frequently for speaking out about AI's harms. She also argues that Hinton's focus on hypothetical existential risk downplays the harms of AI that are happening now, particularly to those who have been historically minoritized.
