Last year, Judge Juan Manuel Padilla in Cartagena, Colombia, sought assistance from ChatGPT in a legal case concerning insurance coverage for an autistic child’s medical treatment. Although the verdict went in the family’s favor, the inclusion of ChatGPT in the court ruling sparked a debate. The Guardian highlights experts’ strong opposition to integrating AI into legal proceedings, while Judge Padilla defended its use, claiming that it increases efficiency. Experts propose that judges undergo digital literacy training to understand AI’s limits and risks and how those factors should shape its use in the justice system. In an interesting twist, even ChatGPT itself advised against relying on it in legal cases and urged caution in quoting its responses. That caveat underscores the importance of critically evaluating AI systems even when they seem to offer quick and convenient solutions in complex domains like the legal system.

This week’s readings collectively investigate this issue, with an overarching theme running through all the articles: the need for a significant shift in how AI systems are developed, used, and understood. The central argument of our readings underscores the struggles that even AI developers face in explaining how their own creations work. Experts and journalists advocate a more proactive design approach, one that accounts for biases and involves users in development through Explainable AI (XAI). Chloe Xiang notes that AI systems can perpetuate biases present in their training data, leading to disproportionate misidentifications and biased decisions.

Thinking back to how Judge Padilla brought AI into Colombia’s legal system, one can picture a concerning possibility: a future in which mass incarceration rates might double. This dystopian idea stems from the fact that the data fed into AI systems is often already biased, particularly against disenfranchised communities and people of color. Xiang gives the example of facial recognition, exposing accuracy disparities between light-skinned males and dark-skinned females that deepen pre-existing racial biases. Additionally, predictive AI systems trained on medical images have shown racial bias, producing less accurate diagnoses for Black and female patients.

Despite a laundry list of concerns, experts agree that AI systems can be beneficial with careful examination and development. The AI Decolonial Manyfesto points to AI’s potential to contribute positively to shaping our socio-technical futures. Achieving this requires a careful approach: AI acts as a mirror of our society, and it is our responsibility to ensure that the data we provide it reflects our values. Ultimately, this alignment is key to generating results that can move us forward as a society. Nevertheless, I am still worried about the possibility of mass incarceration rates increasing because of biased data in AI systems. How do biases in AI training data affect historically marginalized communities, and can white-box models or Explainable AI (XAI) help address this issue?
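To make that last question a bit more concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The data, feature names, and coefficients are entirely synthetic and of my own invention; the point is only to illustrate why white-box models matter: when training data encodes a proxy for a protected attribute, an interpretable model lets us see its reliance on that proxy directly, whereas a black-box system would hide it.

```python
# Hypothetical sketch: a white-box model (logistic regression) trained on
# synthetic, deliberately biased data. Because the model is interpretable,
# its reliance on a proxy for a protected attribute shows up as a large
# coefficient that an auditor can read off directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Made-up features: "prior_contacts" (a legitimate signal) and
# "neighborhood_code" (a stand-in proxy for a protected attribute).
prior_contacts = rng.poisson(2, n)
neighborhood_code = rng.integers(0, 2, n)

# Synthetic "historical" outcomes that are themselves biased: the label
# depends partly on the proxy, mimicking skewed training data.
logits = 0.8 * prior_contacts + 1.5 * neighborhood_code - 3.0
labels = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([prior_contacts, neighborhood_code])
model = LogisticRegression().fit(X, labels)

# The learned weights are inspectable, so the dependence on the proxy
# is visible rather than buried inside an opaque model.
for name, coef in zip(["prior_contacts", "neighborhood_code"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Of course, spotting the proxy’s large coefficient is only a first step; deciding whether and how to act on it is a question of values and policy, not just of model transparency.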

