The Success of AI

In “Scientists Increasingly Can’t Explain How AI Works,” Chloe Xiang explores Artificial Intelligence (AI) models and scientists’ inability to explain how those models arrive at their outputs. These processes are deemed opaque because the model itself is a black box. As Xiang explains, it yields an output that scientists generally accept because it tends to be what “they are looking for.” However, what happens when these models fail? How do scientists explain AI’s inability to produce unbiased outcomes?

With these questions in mind, I considered Stanley Kubrick’s film 2001: A Space Odyssey, in which the AI computer HAL plays a pivotal role in the mission bound for Jupiter. Along the journey, HAL provides its feedback and maintains the life support of the three hibernating crewmen onboard. Despite its intellectual abilities, HAL malfunctions. In an interview, Kubrick explains, “In the specific case of HAL, he had an acute emotional crisis because he could not accept evidence of his own fallibility” (Mulkerin, 2022). HAL’s malfunction exposes several limitations that come at the expense of the crewmen and the success of the mission. Thus, HAL can be likened to a typical AI model, one that scientists uphold as a large-scale advancement while ignoring the basis of its fallibility. These models are indeed flawed, and that flaw comes at the expense of truth and people’s lives.

In considering AI’s “black box” model, I was reminded of Cathy O’Neil’s “Bomb Parts,” a chapter from Weapons of Math Destruction, which I read for a Digital and New Media Theory course. In “Bomb Parts,” O’Neil examines a class of model whose system is embedded with opacity, racism, blindness, and complexity; she gives it the name Weapon of Math Destruction (WMD). O’Neil states, “…a key component of every model, whether formal or informal, is its definition of success…we must not only ask who designed the model but also what that person or company is trying to accomplish” (O’Neil 21). This is crucially important, as the discussion surrounding the quality of AI turns on its measures of truth and reliability. What are scientists using AI to convey? Whose truth is AI producing?

While pondering the idea of truth and AI, I discovered Safiya Noble’s “Google Has a Striking History of Bias Against Black Girls,” in which she discusses algorithmic bias against Black women and girls on digital platforms like Google. Noble, like many others, has conducted a Google search for Black girls and found disturbing results, typically sexualized and stereotypical tropes of Black people. Again, these systems are failing and possess several blind spots. Can we trust AI and its algorithms? Do we treat it like HAL? As stated before, scientists tend to hold AI to a high standard, but what if we did treat AI like HAL and considered its fallibility? Before we consider AI’s flaws, we should also reflect on our own as a society, because in many ways our desires can mimic HAL’s. In the end, it is our ideals and ideologies that are being transferred into these models.

In essence, it is worth considering AI’s technological advancements, but it is equally important to ensure that the individuals behind these models are training them toward unbiased outcomes. It is important that we raise questions, especially concerning the “black box” model or, in O’Neil’s case, the WMD. While it is unfortunate that we must continuously correct AI, it is necessary. It is necessary for the success of AI.

The Links to the Readings and Articles:

https://www.looper.com/163074/hal-in-2001-a-space-odyssey-explained/

https://time.com/5209144/google-search-engine-algorithm-bias-racism/
