AI systems may not be fair to criminal defendants

Artificial intelligence is playing a growing role in daily life as automation takes over many tasks and algorithms influence countless decisions every day. The criminal justice system is no exception: courts increasingly rely on algorithmic tools to assess defendants. People in Texas who are charged with a crime are innocent until proven guilty, but AI technology like risk-assessment tools can produce determinations that work against them.

Investigative reporters at ProPublica published a report about one risk-assessment tool, called COMPAS, that was developed by Equivant. The reporters examined how the COMPAS algorithm performed and found the system unreliable at the very task it was designed for: predicting which defendants would commit violent crimes in the future. According to the research, only 20 percent of the people the system predicted would commit violent crimes actually went on to do so. Moreover, the algorithm was nearly twice as likely to falsely flag black defendants as future criminals as it was white defendants.

Additionally, the makers of algorithms like COMPAS do not disclose their methodology. Equivant is not required to reveal how the algorithm works because it is proprietary technology, leaving defendants facing risk assessments that cannot be meaningfully examined or challenged. A Stanford University computer science professor who advocates using AI in criminal cases says it can be effective in reducing the number of violent crimes. He acknowledges that the decisions cannot be perfectly fair but argues that the algorithms can help courts make better ones.

People in Texas who are charged with crimes might want to speak with a criminal defense attorney as soon as possible. Regardless of the seriousness of the offense, a conviction can result in significant penalties, so constructing a strategy to counter the allegations early is advisable.