A UN body has called for a moratorium on the deployment of AI technologies that “pose a serious risk to human rights,” singling out facial recognition systems used in public places.
On Wednesday, Michelle Bachelet, the United Nations High Commissioner for Human Rights, urged a halt to the sale and use of artificial intelligence (AI) systems that endanger human rights until adequate safeguards are in place to ensure the technology is not exploited.
“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact,” Bachelet said in a press release.
Governmental “social scoring” systems that judge people based on their conduct, as well as certain AI-based tools that sort people into groups by race or gender, should be banned, the report says.
“AI-based technologies can be a force for good, but they can also have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.
The report expresses deep concern about some countries implementing AI applications without properly examining the technology’s myriad risks. According to the UN office, some dangerous mistakes have already been made, with people refused social security payments or arrested because of faulty facial recognition.
“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”
AI systems frequently collect, share, merge, and analyze data in opaque ways. As a result, the data they rely on can be tampered with, outdated, or even discriminatory.
“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real,” Bachelet said. “This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks.”
While no countries were named in the report, China has been among the most prominent adopters of facial recognition technology, particularly for surveillance in Xinjiang. The report’s main authors said that naming specific countries was not part of their mandate and could even be counterproductive.
“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that address particular communities,” said Hicks.
The report also expresses skepticism towards technologies that attempt to discern people’s emotional and mental states by analyzing their facial expressions or body movements, noting that such technology is prone to bias and misinterpretation and lacks a scientific foundation.
“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report says.
European regulators have already taken steps to limit the most dangerous AI applications. The proposed restrictions unveiled by European Union authorities this year would outlaw some AI applications, such as real-time facial scanning, and strictly regulate others that could endanger people’s safety or rights.
The U.S. government has expressed similar concerns, though it has yet to develop a specific plan to address them. Microsoft and other U.S. tech firms have also endorsed efforts to limit the riskiest uses of AI.
“If you think about the ways that AI could be used in a discriminatory fashion or to further strengthen discriminatory tendencies, it is pretty scary,” said U.S. Commerce Secretary Gina Raimondo at a conference in June. “We have to make sure we don’t let that happen.”