United Nations High Commissioner for Human Rights, Michelle Bachelet, on Wednesday called for a moratorium on the sale and use of artificial intelligence systems until sufficient safeguards against potential abuse are implemented.
Bachelet also called for AI applications that cannot be used in compliance with international human rights law to be banned.
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times,” Bachelet said.
“But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”
The UN Human Rights Office on Wednesday published a report that analyses how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.
“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” the High Commissioner said.
Bachelet’s remarks came at a Council of Europe hearing on the Pegasus scandal—in which the Israeli firm NSO Group’s spyware was used to target activists, journalists, and politicians worldwide, sparking calls for a global moratorium on the sale and transfer of surveillance technology.
According to OHCHR:
The report looks at how states and businesses alike have often rushed to incorporate AI applications, failing to carry out due diligence. There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition. The report details how AI systems rely on large data sets, with information about individuals collected, shared, merged, and analyzed in multiple and often opaque ways. The data used to inform and guide AI systems can be faulty, discriminatory, out-of-date, or irrelevant. Long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.
“The complexity of the data environment, algorithms, and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report states.
“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” Bachelet stressed.