In a new Wall Street Journal piece, Princeton neuroscientist Michael Graziano argues that artificial intelligence-powered chatbots, because they lack consciousness, are doomed to be dangerous sociopaths that could pose a serious threat to humanity.
ChatGPT, the current technology craze, is an artificial intelligence chatbot with remarkable conversational abilities. It is built on a vast network of artificial neurons that loosely mimics the human brain and is trained on enormous amounts of text drawn from the internet.
Artificial intelligence is becoming so powerful and fast with the emergence of AI tools such as ChatGPT that it may endanger humans.
“We’re building machines that are smarter than us and giving them control over our world. How can we build AI so that it’s aligned with human needs, not in conflict with us?” Graziano said.
“Consciousness is part of the tool kit that evolution gave us to make us an empathetic, prosocial species,” Graziano writes. “Without it, we would necessarily be sociopaths because we’d lack the tools for prosocial behavior.”
No one will be killed by ChatGPT, but giving artificial intelligence much more autonomy could have serious consequences in the long run.
According to Graziano, consciousness and an understanding that others hold different worldviews are what allow people to grow into sympathetic, prosocial beings. Without them, we would necessarily be sociopaths, since we would lack the means to engage in prosocial conduct.
However, the truth is that there is currently no reliable way to determine what is happening inside a machine or computer program such as ChatGPT, or whether an AI is sentient at all.
“If we want to know whether a computer is conscious, then we need to test whether the computer understands how conscious minds interact,” Graziano argues. “In other words, we need a reverse Turing test: Let’s see if the computer can tell whether it’s talking to a human or another computer.”
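The reverse Turing test Graziano describes can be pictured as a simple experimental harness. The sketch below is purely illustrative and not from Graziano's piece: all names, questions, and heuristics are hypothetical assumptions. A machine "judge" reads short transcripts from respondents and must label each one human or computer.

```python
import random

# Hypothetical probe questions about subjective experience (illustrative only).
QUESTIONS = (
    "What does embarrassment feel like?",
    "Why might someone lie to spare a friend's feelings?",
)

def reverse_turing_test(judge, respondents, rounds=20, seed=0):
    """Return the judge's accuracy at labelling respondents 'human' or 'computer'.

    judge: function mapping a list of answers to a label.
    respondents: list of (label, answer_fn) pairs the judge must classify.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        label, answer_fn = rng.choice(respondents)
        transcript = [answer_fn(q) for q in QUESTIONS]
        correct += (judge(transcript) == label)
    return correct / rounds

# Toy respondents: the "human" gives first-person, affect-laden answers,
# the "computer" gives detached, definitional ones.
human = ("human", lambda q: "Honestly, it makes my face go hot; I want to disappear.")
computer = ("computer", lambda q: "Embarrassment is defined as a self-conscious emotion.")

# Toy judge: a crude heuristic keyed to first-person emotional language.
def naive_judge(transcript):
    text = " ".join(transcript).lower()
    return "human" if ("i " in text or "my " in text) else "computer"

print(reverse_turing_test(naive_judge, [human, computer]))
```

In this toy setup, the judge succeeds only because the respondents are caricatures; the point of Graziano's proposal is that a genuinely conscious-seeming machine would have to model how conscious minds talk, which no keyword heuristic captures.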
He believes that if we fail to solve such difficult problems, the consequences could be dire.
“A sociopathic machine that can make consequential decisions would be powerfully dangerous,” he wrote. “For now, chatbots are still limited in their abilities; they’re essentially toys. But if we don’t think more deeply about machine consciousness, in a year or five years, we may face a crisis.”
And if computers are going to outthink humans in any case, our best hope of aligning them with human values may be to give them more human-like social cognition.