In a recent survey of some 2700 artificial intelligence researchers who have published work at six leading AI conferences, concerns have surfaced about the potential development of superhuman AI and its implications for humanity. The survey, the largest of its kind to date, reveals that a non-trivial share of researchers acknowledge a risk of human extinction or other severe consequences from advanced AI.
Katja Grace from the Machine Intelligence Research Institute emphasizes the significance of the survey’s findings, stating, “It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity.” Roughly 58 percent of respondents put the chance of human extinction or similarly dire outcomes from AI at 5 percent or higher. The exact percentage, however, matters less than the broad recognition that the risk is not minuscule.
Émile Torres at Case Western Reserve University urges caution, noting that AI experts have an unreliable track record of forecasting developments in their own field. Even so, earlier versions of the survey, run in 2016 and 2022, forecast AI milestones reasonably accurately. Compared with the 2022 responses, participants in the latest survey moved their predictions for many AI milestones earlier, a shift that coincided with the widespread deployment of AI chatbot services such as ChatGPT.
Within the next decade, respondents give AI systems a 50 percent or better chance of succeeding at a range of tasks, from writing songs indistinguishable from human compositions to coding an entire payment processing site. Torres, however, warns of the unpredictability of breakthroughs, stating, “A lot of these breakthroughs are pretty unpredictable. And it’s entirely possible that the field of AI goes through another winter,” referring to historical periods of collapsed funding and interest.
While respondents estimate a 50 percent chance that AI outperforms humans on all tasks by 2047, and that all human jobs become fully automatable by 2116, Torres emphasizes that these expectations may well fall short. More immediate concerns among researchers include AI-powered deepfakes, manipulation of public opinion, engineered weapons, authoritarian control of populations, and deepening economic inequality.
Torres underscores the immediate risks of AI contributing to disinformation on critical issues like climate change and democratic governance, urging vigilance as the 2024 election approaches.