WASHINGTON/LOS ANGELES--For people like Heather Bollin, a 43-year-old woman in Texas engaged to a man who is currently incarcerated, constant surveillance is a fact of life: the three daily phone calls they have together are subject to monitoring by prison officials.
"We are never able to communicate without being under surveillance," she told the Thomson Reuters Foundation in a phone interview, asking that the prison her fiance is in remain anonymous because she fears retaliation.
Prisons in the United States could get more high-tech help keeping tabs on what inmates are saying, after a key House of Representatives panel pressed for a report to study the use of artificial intelligence (AI) to analyze prisoners' phone calls. But prisoners' advocates and inmates' families say relying on AI to interpret communications opens up the system to mistakes, misunderstandings and racial bias.
The call for the Department of Justice (DOJ) to further explore the technology, to help prevent violent crime and suicide, accompanies an $81 billion-plus spending bill to fund the DOJ and other federal agencies in 2022 that the Appropriations Committee passed last month. The technology can automatically transcribe inmates' phone calls, analyzing their tone of voice and flagging certain words or phrases, including slang, that officials pre-program into the system.
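At its simplest, the flagging step described above amounts to scanning each call transcript for a watch list of pre-programmed terms. The sketch below illustrates that general idea in Python; the phrase list and the sample call are invented placeholders, not terms drawn from any real system's watch list.

```python
# Illustrative sketch only: the watch list and sample transcript below are
# invented placeholders, not terms from any deployed monitoring system.
import re

# Officials would pre-program the terms to watch for, including slang.
FLAGGED_PHRASES = ["shank", "don't want to live"]

def flag_transcript(transcript: str, phrases=FLAGGED_PHRASES) -> list[dict]:
    """Return each flagged phrase found in a call transcript, with context."""
    hits = []
    lowered = transcript.lower()
    for phrase in phrases:
        for match in re.finditer(re.escape(phrase.lower()), lowered):
            start = max(0, match.start() - 30)
            end = min(len(transcript), match.end() + 30)
            hits.append({"phrase": phrase, "context": transcript[start:end]})
    return hits

if __name__ == "__main__":
    call = "I just don't want to live like this anymore."
    for hit in flag_transcript(call):
        print(hit)
```

A literal string match like this is also where critics see the risk of misconstrued slang or idiom, since the system flags the words themselves, not the speaker's intent.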
A House Democratic aide said in an emailed statement they were encouraging the DOJ "to engage with stakeholders in the course of examining the feasibility of utilizing such a system."
Several state and local facilities across the country have already started using the tech, including in Alabama, Georgia and New York. The House panel wants the DOJ to look into potentially leveraging the technology for federal use and to identify gaps or shortcomings in the information it produces.
"It's very unsettling - what if I say something wrong on a call?" said Bollin, who worries about accidentally getting her fiance in trouble. "It could be misconstrued by this technology, and then he could be punished?"
Privacy groups say the technology could amplify racial bias in the justice system and unfairly subject prisoners to unaccountable artificial intelligence. "This Congress should be outlawing racist policing tech - it shouldn't be funding it," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP), an advocacy group based in New York. "People who have been caught up in the criminal justice system are always turned into the subjects of experimentation for new technology systems."
Proponents dispute such criticisms, saying the tech is a vital time-saving tool for law enforcement and does not target specific groups. Bill Partridge, chief of police in Oxford, Alabama, said local forces have managed to solve cold case homicides after prisoners were flagged on the phone talking about "actually committing the murder."
Partridge's department is one of a handful of agencies in the state that have used software from LEO Technologies, a California-based company, which applies Amazon Web Services (AWS) natural language processing and transcription tools to process and flag inmate calls for analysis in near real time.
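AWS does not sell a prison-specific product; the services involved are its generic speech-to-text and language-analysis building blocks. The sketch below shows one plausible way such services could be chained with the boto3 SDK, using Amazon Transcribe for transcription and Amazon Comprehend for sentiment as a stand-in for the "tone" signal; it is an assumption-laden illustration, not LEO Technologies' actual Verus pipeline, and the S3 URI, job name, and keyword list are placeholders.

```python
# Rough sketch of chaining AWS's off-the-shelf services (Amazon Transcribe and
# Amazon Comprehend). This is NOT LEO Technologies' Verus implementation; the
# audio location, job name, and keyword list are hypothetical placeholders.
import json
import time
import urllib.request

import boto3

transcribe = boto3.client("transcribe")
comprehend = boto3.client("comprehend")

KEYWORDS = ["example slang term"]  # placeholder watch list

def analyze_call(audio_s3_uri: str, job_name: str) -> dict:
    # Start an asynchronous transcription job on the recorded call audio.
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": audio_s3_uri},
        MediaFormat="wav",
        LanguageCode="en-US",
    )
    # Poll until the job finishes (a production system would react to events).
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        status = job["TranscriptionJob"]["TranscriptionJobStatus"]
        if status in ("COMPLETED", "FAILED"):
            break
        time.sleep(5)
    if status == "FAILED":
        raise RuntimeError("transcription failed")

    # Fetch the transcript JSON that Transcribe produced.
    uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    with urllib.request.urlopen(uri) as resp:
        text = json.load(resp)["results"]["transcripts"][0]["transcript"]

    # Sentiment analysis stands in for the tone-of-voice signal; Comprehend
    # caps input size, so only the first few thousand characters are sent.
    sentiment = comprehend.detect_sentiment(Text=text[:4500], LanguageCode="en")
    flagged = [kw for kw in KEYWORDS if kw.lower() in text.lower()]
    return {"transcript": text, "sentiment": sentiment["Sentiment"], "flags": flagged}
```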
The police chief said the technology, called Verus, is particularly helpful in preventing suicides. "I think if the federal government starts using it, they're going to prevent a lot of inmate deaths," he said.
Scott Kernan, CEO of LEO Technologies and a former Secretary of the California Department of Corrections and Rehabilitation, said the technology is "saving lives both inside and outside of the correctional environments we monitor."
"Because we listen to all communications, we do not target a race, gender or protected group," Kernan said.
Specific public data on the number of calls Verus has flagged was not readily available.