Elon Musk, Steve Wozniak And 1,122 Other People Have Signed An Open Letter Calling For A Halt To AI Development


An open letter signed by 1,124 people, including billionaire Elon Musk, Apple co-founder Steve Wozniak, and former presidential candidate Andrew Yang, calls for a six-month halt to AI experiments.

The letter warns of the “profound risks to society and humanity” that could arise if we don’t take a break. In 2015, the same futurist collective, backed by Musk, supported the development of AI to benefit society while expressing concern about its potential dangers.

“Contemporary AI systems are now becoming human-competitive at general tasks,” reads the open letter, posted on the website of Future of Life Institute, a non-profit. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

The latest letter points to OpenAI’s GPT-4 as a warning sign, citing its increased accuracy, human-like qualities, ability to analyze and respond to images, and passing of a simulated bar exam.

“At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models,” a recent post from OpenAI states.

“We agree. That point is now,” the futurists write. “This does not mean a pause on AI development in general, merely a stepping back.”

Musk’s signature may seem at odds with his own artificial intelligence efforts.

“I’m a little worried about the AI stuff,” Musk said on stage earlier this month, surrounded by Tesla executives, Reuters reports.

But researchers who have spoken to CBS News have said the same, albeit a little more directly.

“I think that we should be really terrified of this whole thing,” Timnit Gebru, an AI researcher, told CBS Sunday Morning earlier this year.


Some are worried that people will use ChatGPT to flood social media with phony articles that sound professional, or bury Congress with “grassroots” letters that sound authentic.

“We should understand the harms before we proliferate something everywhere, and mitigate those risks before we put something like this out there,” Gebru told Sunday Morning.

The letter frames the proposed pause as a chance to make AI development “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” while working alongside lawmakers to create AI governance systems.

“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here,” the letter reads. “Let’s enjoy a long AI summer, not rush unprepared into a fall.”
