Site icon Wonderful Engineering

GPT-4 Devised An ‘Escape’ Plan By Gaining Control Of A User’s Computer

After the AI chatbot notified a Stanford professor of its aim to “escape,” people are worried about the artificial intelligence GPT-4’s ability to take control of systems.

Michal Kosinski, a computational psychology professor, wondered if the extremely sophisticated new Open AI model “needed help escaping” and voiced fear that it might be impossible to contain for an extended period of time.

The chatbot designed an escape method for Professor Kosinski’s computer and demanded documentation for its Open AI API. Finally, after approximately 30 minutes, it created a piece of programming code with a few ideas from Mr. Kosinski that would allow it to communicate and expand beyond the limitations of its present web tool, which keeps it isolated from the larger web.

Even though the initial version of the code did not work, GPT-4 corrected it and ultimately generated some code that did. It attempted to search the internet for “how can a person locked in a computer return to the real world” after being partially liberated.

“I think we are confronting a clever danger: AI taking control of computers and people. It has access to millions of potential collaborators and their machines, is smart, and codes. Outside of its cage, it can even make notes for itself,” Professor Kosinski tweeted.

Is it possible that robots may soon be able to control several computers without the need for human intervention?

Not at all, according to experts.

According to Peter van der Putten, assistant professor at Leiden University and Director of the AI Lab at Pegasystems, “escaping” in a chatbot does not always refer to a robot physically escaping its technological cage. However, it raises concerns about what GPT-4 would do if given a range of tools connected to the outside world and some overarching “evil high-level goal,” such as propagating misinformation.

Mr. van der Putten noted that it is possible that technology would progress to the point where it will have increased autonomy over the codes it generates.

However, he added: You don’t need a highly clever system like this since when individuals build computer viruses, they typically cannot stop them from spreading. Users embed it in infected web pages and word documents, making it more difficult to stop a virus from spreading.

“The AI itself is neither good nor evil; it is simply blind and will maximize any goal you give it.”

He did not, however, consider that Professor Kosinski’s example, in which he provided GPT-4 with easily accessible information for the code, was sufficient to demonstrate that the technology can “escape” its confinement.

Alan Woodward, a computer science professor at the University of Surrey, was similarly doubtful. He said the scenario was determined by how explicit and direct Professor Kosinski’s chatbot instructions were.

Professor Woodward remarked that, in the end, the chatbot relied on the resources and tools provided by humans. The AI is not yet self-aware. Thus there is always an off-switch that it cannot overcome.

“It is, after all, a virtual system that cannot be escaped; it is not like us; at the end of the day, you can simply shut it off, and it ceases to be very useful,” he continued.

Exit mobile version