GENEVA — Police departments across the United States have been drawn to digital technology for surveillance and crime prediction on the theory that it will make law enforcement more accurate, efficient and effective. The alarming reality, United Nations human rights experts warned on Thursday, is that these tools risk reinforcing racial bias and abuse.
The United Nations Committee on the Elimination of Racial Discrimination, an influential 18-member panel, conceded that artificial intelligence in decision-making “can contribute to greater effectiveness in some areas” but found that the increasing use of facial recognition and other algorithm-driven technologies for law enforcement and immigration control risks deepening racism and xenophobia and could lead to human rights violations.
It warned that using these technologies can even be counterproductive, as communities exposed to discriminatory law enforcement lose trust in the police and become less cooperative.
“Big data and A.I. tools may reproduce and reinforce already existing biases and lead to even more discriminatory practices,” Dr. Verene Shepherd, who led the panel’s discussions on drafting its findings and recommendations, said in a statement.
“Machines can be wrong,” she added in a phone interview. “They have been proven to be wrong, so we are deeply concerned about the discriminatory outcome of algorithmic profiling in law enforcement.”
The panel’s findings and recommendations are the result of two years’ research, but they took on greater urgency with the eruption of global Black Lives Matter protests and fears of deepening racial discrimination in the fallout from the Covid-19 pandemic.
The panel drew attention to the danger that the algorithms behind these technologies can be trained on biased data, including, for example, historical arrest records for a neighborhood that reflect racially biased policing practices. “Such data will deepen the risk of over-policing in the same neighborhood, which in turn may lead to more arrests, creating a dangerous feedback loop,” Dr. Shepherd said.
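The feedback loop Dr. Shepherd describes can be made concrete with a short simulation. The sketch below is purely illustrative: the crime rate, arrest counts and patrol figures are invented for the example and do not come from the panel’s report. Two neighborhoods have the same underlying crime rate, but patrols are dispatched each year to whichever neighborhood has the larger arrest record.

    import random

    # Toy model of the feedback loop: two neighborhoods with the SAME
    # true crime rate, but neighborhood 0 starts with more recorded
    # arrests because it was historically policed more heavily.
    TRUE_CRIME_RATE = 0.1    # identical in both neighborhoods (assumed)
    arrests = [60, 40]       # biased historical record, not actual crime
    PATROLS_PER_YEAR = 100   # illustrative figure

    random.seed(42)
    for year in range(10):
        # "Hot spot" targeting: send all patrols to the neighborhood
        # with the most recorded arrests -- the seemingly data-driven choice.
        target = 0 if arrests[0] >= arrests[1] else 1
        # Each patrol makes an arrest at the true crime rate, so new
        # arrests track where patrols go, not any real difference in crime.
        new = sum(random.random() < TRUE_CRIME_RATE
                  for _ in range(PATROLS_PER_YEAR))
        arrests[target] += new
        share = arrests[0] / sum(arrests)
        print(f"year {year}: neighborhood 0 holds {share:.0%} of all arrests")

Because arrests accumulate only where patrols are sent, the neighborhood that starts with the larger record absorbs every new arrest, and its share of the data keeps growing even though its residents commit no more crime than their neighbors.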
The panel’s warnings add to deepening alarm among human rights bodies over the largely unregulated use of artificial intelligence across a widening spectrum of government, from social welfare delivery to “digital borders” controlling immigration.
Governments need an abrupt change of direction to avoid “stumbling zombielike into a digital welfare dystopia,” Philip G. Alston, a human rights expert reporting on poverty, told the United Nations General Assembly last year, in a report calling for the regulation of digital technologies, including artificial intelligence, to ensure compliance with human rights. The private companies that play an increasingly dominant role in social welfare delivery, he noted, “operate in a virtually human-rights-free zone.”
Last month, the U.N. expert monitoring contemporary forms of racism flagged concerns that “governments and nonstate actors are developing and deploying emerging digital technologies in ways that are uniquely experimental, dangerous, and discriminatory in the border and immigration enforcement context.”
The European Border and Coast Guard Agency, also known as Frontex, has tested unpiloted military-grade drones in the Mediterranean and Aegean for the surveillance and interdiction of vessels carrying migrants and refugees trying to reach Europe, the expert, E. Tendayi Achiume, reported.
The U.N. antiracism panel, which is charged with monitoring and holding states to account for their compliance with the international convention on eliminating racial discrimination, said states must legislate measures combating racial bias and create independent mechanisms for handling complaints. It emphasized the need for transparency in the design and application of algorithms used in profiling.
“This includes public disclosure of the use of such systems and explanations of how the systems work, what data sets are being used and what measures preventing human rights harms are in place,” the group said.
The panel’s recommendations are aimed at a global audience of 182 states that have signed the convention, but most of the complaints it received over the past two years came from the United States, Dr. Shepherd said, and its findings amplify concerns voiced by American digital rights activists.
American police departments have fiercely resisted sharing details of the number or type of technologies they employ, and little regulation requires them to account for which technologies they use or how they use them, said Rashida Richardson, a visiting scholar at Rutgers Law School and director of research policy at New York University’s A.I. Now Institute.
A rare exception is the Public Oversight of Surveillance Technology (POST) Act, adopted by the New York City Council in June after years of debate and amid protests demanding criminal justice and policing reform. That law requires the New York Police Department to publish an annual account of the technologies it uses.
“We don’t know how many police departments use them for the same reason we also don’t know if most police departments even have policies in place to ensure constitutional compliance of these technologies,” Ms. Richardson added in a phone interview. “So we don’t know if they work, and we only find out about them after there’s some harm or risk identified.”
“The only thing that can improve this black box of predictive policing is the proliferation of transparency laws,” Ms. Richardson said. “I just hope that more recognition of the problem by government or quasi-government bodies can bring more urgency to the need for reform.”
"can" - Google News
November 27, 2020 at 09:13AM
https://ift.tt/2J4cuYd
U.N. Panel: Digital Technology in Policing Can Reinforce Racial Bias - The New York Times
"can" - Google News
https://ift.tt/2NE2i6G
https://ift.tt/3d3vX4n
Bagikan Berita Ini
0 Response to "U.N. Panel: Digital Technology in Policing Can Reinforce Racial Bias - The New York Times"
Post a Comment