
Watchdog raises alarm over disparity in AI tools for law enforcement


An artificial intelligence tool used to identify people in investigations by law enforcement, airport security and public housing surveillance is disproportionately harming people of color and women, according to a new report from a government watchdog group.

Facial recognition technology that civil rights advocates and some lawmakers have criticized for violating privacy and being inaccurate is increasingly being used by federal agencies with weak oversight, the U.S. Commission on Civil Rights has found.

“The unregulated use of facial recognition technology poses serious civil rights risks, especially for marginalized groups that have historically borne the brunt of discriminatory practices,” said Chair Rochelle Garza. “As we work to develop AI policy, we must ensure that facial recognition technology is rigorously tested for fairness, and any differences identified between demographic groups are promptly addressed or suspended until those differences are resolved.”

Law enforcement agencies are increasingly using rapidly evolving facial recognition tools, but there are no federal laws regulating their use.

At least 18 federal agencies use facial recognition technology, according to the Government Accountability Office. In addition to the federal rollout, the Justice Department has awarded $4.2 million since 2007 to local law enforcement agencies across the country for programs that were used at least in part for facial recognition tools, public records show.

FBI’s Extensive Database Deploys Facial Recognition Software

The 184-page report released this month details how federal agencies have quietly deployed facial recognition technology across the United States and the potential civil rights violations that use poses. The commission examined the Justice Department, the Homeland Security Department and the Housing and Urban Development Department specifically.

“While there is considerable debate about the benefits and risks of federal use of FRT, many agencies already use the technology,” the report said, adding that it could have serious consequences, including false arrests, unwarranted surveillance and discrimination.

Facial recognition uses biometric software to map a person’s facial features from a photo. The system then tries to match the face to a database of images to identify the person. The degree of accuracy depends on several factors, including the quality of the algorithm and the images used. The commission said that even with the most efficient algorithms, tests have shown that false matches are more likely to occur for certain groups, including older people, women and people of color.
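In code terms, the matching step the commission describes can be sketched as a nearest-neighbor search over face "embeddings," the numeric feature vectors a biometric algorithm extracts from a photo. The sketch below is purely illustrative: the vectors, identities and threshold are invented, not drawn from any agency's system. Note how the threshold embodies the trade-off the commission flags, since a score that clears it for one image may produce a false match for another.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, threshold=0.8):
    """Return the gallery identity most similar to the probe embedding,
    plus its score; identity is None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id, best_score

# Toy database of enrolled face embeddings (invented values).
gallery = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.28]  # embedding extracted from a new photo
print(best_match(probe, gallery))
```

Real systems use embeddings with hundreds of dimensions produced by neural networks, but the structure is the same: low-quality images shift the probe vector, which is one way false matches arise.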

The U.S. Marshals Service has used facial recognition tools in investigations involving fugitives, missing children, serious crimes and security missions, the commission's report said, citing the Justice Department. The Marshals have had a contract with facial recognition software company Clearview AI for several years. In February 2022, some members of Congress urged the agency not to use Clearview AI products and other facial recognition systems because of potential civil rights violations, including privacy risks.

The FBI has been using facial recognition technology since at least 2011. The Justice Department told commissioners that the FBI can run facial recognition software on a wide range of images, including booking photos, driver’s licenses, public social media accounts, public websites, cellphones, images from security camera footage and photos stored by other law enforcement agencies.

The U.S. Government Accountability Office has been investigating the FBI’s use of facial recognition technology since 2016. In its report eight years ago, the office said the FBI “should do better to ensure privacy and accuracy.”

The Justice Department, which oversees the FBI and U.S. Marshals, announced an interim policy in December 2023 stating that facial recognition technology should only be used to provide leads in an investigation, the report said. The commission added that there is not enough data on the department's use of FRT to confirm whether the policy is being followed.

The FBI declined to comment on the report when contacted by USA TODAY. The Justice Department and U.S. Marshals Service did not respond to requests for comment.

AI tool used in border control and immigration investigations

The commission found that the Department of Homeland Security, which oversees immigration enforcement and airport security, had deployed facial recognition tools at several agencies.

U.S. Immigration and Customs Enforcement (ICE) has been conducting searches using facial recognition technology since 2008, when it signed a contract with biometric defense firm L-1 Identity Solutions, according to the report.

The agreement allowed ICE to access the Rhode Island Department of Motor Vehicles’ facial recognition database to find undocumented immigrants accused or convicted of crimes, the commission said, citing a 2022 study by the Georgetown Law Center on Privacy & Technology.

Facial recognition technology is also being used at airports, seaports and pedestrian crossings along the southwest and northern borders to verify people's identities. The report noted that in 2023, civil rights groups reported that the U.S. Customs and Border Protection mobile app had trouble identifying Black asylum seekers who were trying to make appointments. CBP said this year that the app had an accuracy rate of more than 99% for people of all ethnicities, according to the commission's report.

Department of Homeland Security spokeswoman Dana Gallagher told USA TODAY that the department values the commission's insights, adding that DHS has been a pioneer in rigorous bias testing.

The department opened a 24,000-square-foot lab in 2014 to test biometric systems, according to the report. Gallagher said the Maryland Test Facility, which the commission visited and documented, served as a "model for testing facial recognition systems in real-world settings."

“DHS is committed to protecting the privacy, civil rights, and civil liberties of all individuals with whom we interact as we fulfill our mission to keep the homeland and travelers safe,” Gallagher said.

Public housing agencies implement facial recognition tools

Some surveillance cameras in public housing buildings contain facial recognition technology that has led to evictions for minor offenses, a concern lawmakers have raised since at least 2019, the commission said.

The U.S. Department of Housing and Urban Development did not develop any of the technology itself, the report said, but instead provided grants to public housing agencies, which used the money to purchase cameras equipped with the technology, a practice that "put FRT in the hands of beneficiaries without any regulation or oversight."

Public housing tenants are disproportionately women and people of color, meaning using the technology could violate Title VI, the commission warned. In April 2023, HUD announced that Emergency Safety and Security grants could not be used to purchase the technology, but the report noted that this did not prevent recipients who already had the tool from using it.

The commission cited a May 2023 Washington Post investigation that found the cameras were being used to punish residents and to catch them in minor infractions, such as smoking in the wrong place or removing a laundry cart, in order to force evictions. Attorneys defending evicted tenants have also reported an increase in cases citing surveillance footage as evidence to evict people, the Post reported.

The Department of Housing and Urban Development did not respond to USA TODAY’s request for comment.

Civil rights group hopes the report will prompt policy changes

Tierra Bradford, senior program manager for criminal justice reform at the Leadership Conference on Civil and Human Rights, told USA TODAY she was pleased with the report and hopes it will lead to further action.

“I think they raise a lot of issues that we in the justice system have had for some time,” Bradford said.

She added that the U.S. criminal justice system has a history of disproportionately targeting marginalized communities, and facial recognition tools appear to be another manifestation of that problem.

“There should be a moratorium on technologies that have been shown to be biased and have disparate impacts on communities.”

National Debate on Facial Recognition Tools

The commission’s report is the result of years of debate on the use of facial recognition tools in the public and private sectors.

The Detroit Police Department announced in June that it would revise its policy on using the technology to solve crimes as part of a federal settlement with a Black man who was wrongfully arrested for theft in 2020 using facial recognition software.

The Federal Trade Commission banned Rite Aid from using AI facial recognition technology last year after finding that it subjected customers, especially people of color and women, to unreasonable searches. The FTC said the system based its alerts on low-quality images, resulting in thousands of false matches and customers being searched or kicked out of stores for crimes they didn’t commit.

In Texas, a man who was wrongly arrested and jailed for nearly two weeks filed a lawsuit in January blaming facial recognition software for misidentifying him as a robbery suspect. Working from low-quality surveillance footage of the crime, the AI software used by a Sunglass Hut in Houston falsely identified Harvey Murphy Jr. as the suspect, leading to an arrest warrant being issued, according to the lawsuit.

Nationally, members of the Civil Rights Commission said they hope the report will inform lawmakers’ use of the rapidly evolving technology. The agency is pushing for a testing protocol that agencies can use to see how effective, fair and accurate their software is. It also recommends that Congress provide a “statutory mechanism for legal recourse” for people harmed by FRT.

“I hope this bipartisan report will help inform public policy that addresses the myriad issues surrounding artificial intelligence (AI) in general, but specifically, facial recognition technology,” said Commissioner Stephen Gilchrist. “Our country has a moral and legal obligation to protect the civil rights and civil liberties of all Americans.”