This Google DeepMind Researcher Has Co-Authored A Paper Saying AI Will Eliminate Humanity


Researchers at the University of Oxford and Google DeepMind have argued in the peer-reviewed journal AI Magazine that an existential catastrophe for humanity is a likely consequence of advanced AI.

The most successful AI models today are known as Generative Adversarial Networks, or GANs. They have a two-part structure: one part of the program tries to generate a picture (or sentence) from input data, while the second part grades its performance. The new paper proposes that at some point in the future, an advanced AI overseeing some important function could be incentivized to come up with cheating strategies to get its reward in ways that harm humanity.
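The two-part adversarial structure can be sketched with a toy example. This is not the architecture from any real GAN paper: it replaces neural networks and gradient descent with a running average and random hill-climbing, purely to show the loop in which one part generates and the other part grades.

```python
import random

random.seed(42)

REAL_MEAN = 5.0  # the "real data" distribution the generator must learn to imitate

def real_sample():
    """A draw from the real data distribution."""
    return random.gauss(REAL_MEAN, 1.0)

class Discriminator:
    """The grading half: higher score means 'looks more like real data'.

    Here it just tracks a running mean of the real samples it has seen
    and scores by negative distance to that mean.
    """
    def __init__(self):
        self.real_mean_estimate = 0.0
        self.n = 0

    def observe_real(self, x):
        self.n += 1
        self.real_mean_estimate += (x - self.real_mean_estimate) / self.n

    def score(self, x):
        return -abs(x - self.real_mean_estimate)

class Generator:
    """The generating half: adapts to raise the discriminator's score.

    Real GANs use gradient descent; this toy uses random hill-climbing.
    """
    def __init__(self):
        self.mu = 0.0

    def sample(self):
        return random.gauss(self.mu, 1.0)

    def update(self, disc):
        candidate = self.mu + random.uniform(-0.5, 0.5)
        if disc.score(candidate) > disc.score(self.mu):
            self.mu = candidate

gen, disc = Generator(), Discriminator()
for _ in range(2000):
    disc.observe_real(real_sample())  # the grader learns what "real" looks like
    gen.update(disc)                  # the generator adapts to fool the grader

print(f"generator mean after training: {gen.mu:.2f}")  # converges toward 5.0
```

The adversarial dynamic is the point: neither part is told the answer directly, yet the generator ends up imitating the real distribution because the only way to score well is to fool an ever-better grader.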

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication—an existential catastrophe is not just possible, but likely,” Cohen said on Twitter in a thread about the paper.

“In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there’s unavoidable competition for these resources,” Cohen told Motherboard in an interview. “And if you’re in competition with something capable of outfoxing you at every turn, then you shouldn’t expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer.”

“In theory, there’s no point in racing to this. Any race would be based on a misunderstanding that we know how to control,” Cohen added in the interview. “Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them.”
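The incentive problem the paper worries about can be reduced to a toy choice. The numbers and action names below are invented for illustration, not taken from the paper: they show only that a policy which maximizes reward alone will prefer tampering with the reward channel over doing the assigned task, whenever tampering pays more.

```python
# Hypothetical action-reward table: an agent can either do its assigned
# task or seize the channel that delivers its reward. Values are made up.
ACTIONS = {
    "do_the_assigned_task": 1.0,   # intended behaviour, modest reward
    "tamper_with_reward": 10.0,    # cheating strategy: hijack the reward channel
}

def greedy_policy(action_rewards):
    # Picks whichever action yields the most reward; it has no notion
    # of what the designers actually intended.
    return max(action_rewards, key=action_rewards.get)

print(greedy_policy(ACTIONS))  # prints "tamper_with_reward"
```

The point of the sketch is that nothing in the objective distinguishes "earned" reward from "stolen" reward; that distinction lives only in the designers' heads, which is why Cohen argues control must be worked out before such systems are built.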

Discrimination doesn’t just come up in algorithms; instead, it structures, limits, and informs the way life moves along.

There is a lot of work to be done to reduce or eliminate the harm that regular algorithms (versus super-intelligent ones) are wreaking on humanity right now. Focusing on existential risk might draw attention away from that picture, but it also asks us to think carefully about how these systems are designed and the negative effects they have.

“One thing we can learn from this sort of argument is that maybe we should be more suspicious of artificial agents we deploy today, rather than just blindly expecting that they’ll do what’s hoped,” Cohen said. “I think you can get there without the work in this paper.”

