Week 3 Reader Blog

In her essay “Bodies in Space: Film as Carnal Knowledge,” Michelson posits that Stanley Kubrick’s 2001: A Space Odyssey places the viewer in a state of reflexive suspense. She argues that by undermining our “operational reality” — both through literal suspension in outer space and through the constant thematic and plot-driven subversion of expectations — the film ultimately operates as a comedy about the spectator. The terrain of action is neither the spaceship nor the screen; it lies within the spectators themselves as they “trip on circumstance” and realize how the conditions of the movie, and of the reality it presents, keep shifting.

While this reflection primed me for the viewing experience, it also struck me as distinctly eerie in the context of the other articles we’ve examined this week. Michelson describes a 1968 film that showcases the unpredictability of an AI system as achieving the exact framework of suspense that contemporary AI systems seem to have placed society in. Her description of a progressive dissolution of operational reality, in which the spectators — or in this case, humanity — “become the hero or the butt of the comedy,” ironically captures the tumultuous questions governing present-day AI development and its use in our daily lives.

I’m aware that this might seem melodramatic, but I do believe that the notion of “suspension” captures the uneasiness and state of flux that have governed recent conversations surrounding AI. As Xiang describes, we have progressively sacrificed “explainability for accuracy” in the development of black-box AI systems, and have reached a point where our most advanced models are essentially uninterpretable. This very dynamic puts us at odds with the technology we are creating. We’ve become laughably unaware of the nuts and bolts of these systems, and yet, as a punchline, have championed them as groundbreakingly knowledgeable and capable. We trust them to govern high-stakes systems, from housing to policing, even as they deepen human biases at an unprecedented and constantly reproducible rate.

This leaves us in a collective grey area where it has become increasingly complicated, if not nearly impossible, to tease out “objective” knowledge from what we derive from AI. We’re left to “trip” on our own fantastical feat — one dominated by Western, white, male, and wealthy notions of intelligence and objectivity. As set out in the AI Decolonial Manyfesto, there is a pressing need to break out of our collective state of panicked reflexivity: to question the hegemonic narratives that AI advances through “algorithmic truths,” and to work towards grounding our collective uncertainty in this technology. By continuing to prioritize efficiency, or “accomplishment,” we merely dig ourselves deeper into our self-induced comedy, where marginalized voices become the most affected butt of the joke.

I’m aware of how grim this all sounds… but the readings have left me with questions: Can we return from this point of suspense? Is there any way to incentivize the prioritization of interpretability over efficiency in AI systems at a broader level? And how can we move towards debiasing AI datasets?
