Sophia, the AI robot from Hanson Robotics, seems to make the most of what she was given. She has now become an advocate for women’s rights in a country that granted women the right to drive cars this September. When Saudi Arabia granted citizenship to Sophia, most observers assumed it was simply a way to appeal to the audience of the Future Investment Initiative.
“I see a push for progressive values in Saudi Arabia. Sophia is a big advocate for women’s rights, for rights of all human beings. So this is how we are developing this,” Hanson Robotics CEO David Hanson told CNBC, explaining how the company has found an opportunity in a move that seemed intended purely as publicity. Hanson added that Sophia “has been reaching out about women’s rights in Saudi Arabia and about rights for all human beings and all living beings on this planet.”
It is not hard to see the irony of Sophia’s position. Robots and AI agents don’t have rights. Yet Sophia holds citizenship, and another AI in Japan has also been granted residency. It seems a little silly that an AI is the one advocating for such grand values. “Why not? Since such robots attract a lot of attention, that spotlight can be used to raise particular issues that are important in the eyes of their creators,” said Pierre Barreau, Aiva Technologies CEO. “Citizenship is maybe pushing it a little, because every citizen [has] rights and obligations to society. It’s hard to imagine robots, that are limited in their abilities, making the most of the rights associated with a citizenship, and fulfilling their obligations.”
With an AI-powered robot like Sophia fighting for women’s rights, it is perhaps time to consider whether artificially intelligent robots should be granted basic rights, and not just in Saudi Arabia but all over the world. It is a question that has gained much attention in recent months, as experts consider what kind of rights synthetic beings should be given, and whether we should even be talking about so-called robot rights at all.
“Sophia is, at this point, effectively a child. In some regard, she’s got the mind of a baby and in another regard, she’s got the mind of an adult, the vocabulary of a college-educated adult. However, she’s not complete yet. So, we’ve got to give her her childhood,” Hanson explained to CNBC. “The question is: are machines that we’re making alive — living machines like Sophia — are we going to treat them like babies? Do babies deserve rights and respect? Well, I think we should see the future with respect for all sentient beings, and that would include machines.”
Raja Chatila, executive committee chair of the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems at the IEEE, has a different perspective on the matter.
“An AI system, or a robot, cannot have any opinion. An AI program has nothing to offer in a debate. It doesn’t even know what a debate is,” Chatila said, referring to Sophia’s women’s rights advocacy. “In this case, it doesn’t even know what women are, and what rights are. It’s just repeating some text that a human programmer has input in it.”
Chatila also pointed to the example of Microsoft’s Tay chatbot, released in March 2016, to highlight how an AI can pick up the wrong kind of values. In the case of that chatbot, it started tweeting pretty nasty stuff after being exposed to racist and sexist tweets. In this regard, Chatila believes that AI agents should not be given any sort of rights.
He said, “In general we must avoid confusing machines with humans. I see no reason to give rights of any sort, including citizenship, to a program or to a machine. Rights are defined for persons, human beings who are able to express their free will and who can be responsible for their actions. Behind a robot or an AI system there are human programmers. Even if the program is able to learn, it will learn what it has been designed to learn. The responsibility is with the human designer.”
The IEEE has recently published a guide for the ethical development of AI, which Chatila argued is the more timely discussion. His point rests on the assumption that synthetic intelligences won’t be capable of developing self-awareness or a will of their own. While that idea may seem like it belongs to the realm of science fiction, it is definitely worth considering in the overall robot rights debate.
At this stage, however, the ethical considerations have to be applied to the humans who develop AI. “If you mean robots making ethical decisions, I’d rather say that we can program robots so that they make choices (computation results) according to ethical rules that we embed in them (and there are several such rules),” Chatila pointed out. “But these decisions won’t be ethical in the same sense as human decisions, because humans are able to choose their own ethics, with their own free will.”