Chatbots in U.S. justice system raise bias, privacy concerns

by Avi Asher-Schapiro and David Sherfinski | @AASchapiro | Thomson Reuters Foundation
Tuesday, 10 May 2022 15:00 GMT

Pro-abortion demonstrators protest outside the U.S. Supreme Court after the leak of a draft majority opinion written by Justice Samuel Alito preparing for a majority of the court to overturn the landmark Roe v. Wade abortion rights decision later this year, in Washington, U.S. May 3, 2022. REUTERS/Michael A. McCoy


Advocates say automated chatbots can open up access to justice systems, while critics warn the technology can be unreliable and lead to bias against some users

  • U.S. Department of Justice explores chatbots

  • Some courts experiment with automated bots

  • Civil liberties groups warn of privacy, bias risks

By Avi Asher-Schapiro and David Sherfinski

LOS ANGELES/WASHINGTON, May 10 (Thomson Reuters Foundation) - When the U.S. state of New Jersey lifted a COVID-19 ban on foreclosures last year, court officials hatched a plan to handle the expected influx of cases: train a chatbot to respond to queries.

The program - nicknamed JIA - is one of a number of bots being rolled out by U.S. justice systems, with advocates saying they improve access to services while critics warn automation opens the door to errors, bias, and privacy violations.

"The benefit of the chatbot is you teach it once and it knows the answer," said Jack McCarthy, chief information officer of the New Jersey court system.

"(With) a help desk or staff, you tell one person and now you've got to train every other staff member."

The trend towards such chatbots could accelerate in the near future - the U.S. Department of Justice (DOJ) last month closed a public call asking for examples of "successful implementation" of the technology in criminal justice settings.

"It raises a flag that the DOJ is going to move towards funding more automation," said Ben Winters, a lawyer with the rights group the Electronic Privacy Information Center (EPIC), which submitted a cautionary comment to the DOJ.

It urged the government to study the "very limited utility of chatbots, the potential dangers of over-reliance, and collateral consequences of widespread adoption."

The National Institute of Justice (NIJ), the DOJ's research arm, said it is simply gathering data in an effort to respond to developments in the criminal justice space and create "informative content" on emerging tech issues.

A 2021 NIJ report identified four kinds of criminal justice chatbots: those used by police, court systems, jails and prisons, and victim services.

So far, most function as glorified menus that do not use artificial intelligence (AI).

But the report predicts that much more advanced chatbots, including those that measure emotions and mimic empathy, are likely to be introduced into the criminal justice system.

JIA, for its part, was trained using machine learning on court documents and can handle 20,000 variants of questions and answers, from queries about expunging criminal records to child custody rules.

Its developers are trying to build more tailored services, allowing people to request personal case information such as their court date.

But it is not involved in making decisions or arbitration - "a thick line" that the court system does not intend to cross, said Sivakumar Appavoo, a program manager working on AI and robotic automation.
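
New Jersey has not published JIA's internals, but the general pattern behind such FAQ-style bots is straightforward: match a free-form question against a bank of vetted question-and-answer pairs and return the closest approved answer. The sketch below is a minimal, hypothetical illustration of that approach using TF-IDF similarity; the sample questions, answers, and threshold are invented and do not reflect the actual JIA system.

```python
# A minimal, hypothetical sketch of FAQ-style question matching,
# as used by court chatbots like JIA. Not the actual JIA code;
# the Q&A pairs and threshold below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A bank of pre-approved question/answer pairs the bot is "taught" once.
faq = [
    ("How do I expunge my criminal record?",
     "File a petition for expungement with the Superior Court."),
    ("When is my next court date?",
     "Court dates are listed on your case summary in the online portal."),
    ("What are the rules for child custody?",
     "Custody is decided based on the best interests of the child."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_query: str, threshold: float = 0.3) -> str:
    """Return the approved answer for the closest known question,
    or hand off to a human when no match is confident enough."""
    query_vector = vectorizer.transform([user_query])
    scores = cosine_similarity(query_vector, question_vectors)[0]
    best = int(scores.argmax())
    if scores[best] < threshold:
        return "I'm not sure - let me connect you with court staff."
    return faq[best][1]

print(answer("How can I wipe my criminal record?"))
```

The key design point, echoed by McCarthy's "teach it once" remark, is that the bot only ever returns pre-approved answers; when no match clears the confidence threshold, it hands off to staff rather than improvising.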

HIGH STAKES

Snorri Ogata, the chief information officer of Los Angeles courts, said his staff tried to build a JIA-style chatbot, trained using years of data from live agents handling questions about jury selection.

But the system struggled to give accurate answers and was often confused by queries, he said. So the court settled on a series of simpler menus that do not allow open-ended questions.

"In justice and in courts, the stakes are higher, and we were stressed about directing people incorrectly," he said.

Last year, the Identity Theft Resource Center - a nonprofit that helps victims of identity theft - tried to train a chatbot to respond to victims outside working hours, when staff were not available.

But the system - supported by DOJ funding - was unable to provide consistently accurate information, or respond with appropriate nuance, said Mona Terry, the chief victims officer.

In particular, it could not adapt to new identity theft schemes that cropped up during the COVID-19 pandemic, which produced new jargon and inquiries the system had not been trained for.

"There's so much subtlety and emotion that goes into it - I'm not sure a chatbot could take that over," Terry said.

Emily Bender, a professor at the University of Washington who studies ethical issues in automated language models, said carefully built interfaces to help citizens interact with government documents can be empowering.

But trying to build chatbots that mimic human interaction in a criminal justice context carries significant risks, she said.

"We have to keep in mind that anyone interacting with the justice system is in a vulnerable position," Bender told the Thomson Reuters Foundation.

Chatbots should not be relied upon to give time-sensitive advice to those at risk, she said, while systems also need to have strong privacy protections and offer people a way to opt out so they can avoid unwanted data collection.

The DOJ did not immediately respond to a request for comment on these criticisms of the technology.

The 2021 government chatbot report noted "numerous benefits to implementing chatbots," including efficiency and increased access to services, while also laying out risks stemming from biased datasets, incorrect responses, and privacy implications.

'JUST DON'T BUILD THE DAMN THING'

EPIC, the digital rights group, urged the government to nudge the emerging market to produce bots that are transparent about their algorithms and respect user privacy.

It has called on the DOJ to step up regulation in the space, from requiring bot licenses to mandating regular audits and impact assessments that hold creators accountable.

Albert Fox Cahn, the founder of the Surveillance Technology Oversight Project, said it is unclear why the DOJ should be encouraging automation at all.

"We don't want AI serving as gatekeepers for access to the justice system," he said.

But increasingly advanced tools are already being deployed elsewhere.

Andrew Wilkins, the co-founder of British startup Futr, said the firm has already built bots for police to handle crime reports, from domestic abuse to COVID-19 rules violations.

"There was a hesitancy about 'what if it gets (the answer) wrong'," he said, but those concerns were overcome by making sure humans were closely overseeing the bots' interactions and looped in to answer escalating inquiries.

The company is rolling out analysis tools to try to detect the emotional tone of its chatbots' conversations, and is developing services that work not only on police websites but also on WhatsApp and Facebook, he said.

"It's a way to democratize access to services," he said.

But for Fox Cahn, such tools are too risky to be relied on.

"For me, it's pretty simple: just don't build the damn thing," he said.

This article was updated on May 10, 2022, to clarify the details of the DOJ comment request about criticisms of chatbot tech.

Related stories:

AI bias: How do algorithms perpetuate discrimination?

'Unfair surveillance'? Online exam software sparks global student revolt

Global exam grading algorithm under fire for suspected bias

(Reporting by Avi Asher-Schapiro @AASchapiro and David Sherfinski. Editing by Sonia Elks. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)

