Week 8 Reader Blog

Mean Images

In the reading, Steyerl criticizes AI image production and “mean images”: visuals that, derived from vast data pools, represent not individual realities but statistical averages that disconnect the image from any concrete truth. As she writes, “Visuals created by ML tools are statistical renderings, rather than images of actually existing objects. They shift the focus from photographic indexicality to stochastic discrimination. They no longer refer to facticity, let alone truth, but to probability” (Steyerl). In this sense, the mean images generated from AI and datasets are fabricated and unreliable. Steyerl also notes that “As data visualizations, they do not require any indexical reference to their objects. They are not dependent on the actual impact of photons on a sensor…They represent the norm by signaling the mean. They replace likenesses with likelinesses” (Steyerl). By averaging, these images diminish individual and racial uniqueness in favor of a generalized, normalized view.
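
To make the idea of a statistical rendering concrete, here is a minimal Python sketch of my own (not Steyerl’s, and using random placeholder pixels rather than any real dataset): a pixel-wise mean over a pool of face images produces a picture that resembles no single input, only the norm of the pool.

```python
import numpy as np

# Illustrative assumption: a stack of 1,000 fake 64x64 grayscale "faces"
# stands in for a training pool; the values are random placeholders.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(1000, 64, 64), dtype=np.uint8)

# Averaging over the dataset axis yields the "mean image": the
# statistical norm of the pool, with every individual likeness erased.
mean_face = faces.mean(axis=0)

# No single input looks like mean_face; it signals the mean,
# "replacing likenesses with likelinesses."
print(mean_face.shape)                      # (64, 64)
print(mean_face.min(), mean_face.max())     # values pulled toward the average
```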

The text also addresses the exploitation inherent in the AI industry, particularly the poor working conditions of the microworkers who label data and train AI systems. This reminds me of the news that workers sued Meta, claiming that viewing brutal videos caused them psychological trauma, and of a similar lawsuit brought against Facebook. Steyerl also recounts her personal experience of finding her own image in AI training datasets, used without consent, which recalls our class discussion about how little control we have over our online images, as well as the documentary film “Another Body.” Furthermore, Steyerl discusses how AI perpetuates bias and how control over data and technology consolidates existing power structures. She advocates for a shift toward more ethical and equitable technological practices that respect individual rights and reduce exploitation.

This video explains how Hito Steyerl addresses the way digital images are created, shared, and archived. Although it is a relatively early video, released in 2016, it illustrates Steyerl’s ideas about how we interact with digital images.

Who Is Making Sure the A.I. Machines Aren’t Racist?

The article delves into the racial and gender biases embedded in AI systems, focusing on incidents and studies that demonstrate how AI perpetuates societal biases, largely because of the homogeneity of the AI workforce and of the data used in training. “[AI] is being built in a way that replicates the biases of the almost entirely male, predominantly white workforce making it” (Metz). Because the datasets fed into AI contain biased content, the products built on them are inevitably biased as well; the people choosing the training data were mostly white men, and they did not recognize that their data was skewed.

Metz cites several incidents as examples, such as facial recognition technologies that misidentify women and people of color at significantly higher rates than white men. For instance, in the tools from Microsoft and IBM designed to analyze faces and identify characteristics, “when the services read photos of lighter-skinned men, they misidentified sex about 1 percent of the time. But the darker the skin in the photo, the larger the error rate. It rose particularly high with images of women with dark skin” (Metz). Similar issues appeared in Amazon’s face service: “Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-skinned faces” (Metz). Because the training data consisted mostly of white men, the group that also holds the most power in the industry, the resulting systems produce racially and gender-biased output.
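
The methodology behind these numbers can be sketched in a few lines: an audit of this kind reports the error rate per demographic group instead of one overall accuracy, which is what exposes the gap. The Python below is my own hedged illustration with invented placeholder records, not the study’s actual code or data.

```python
from collections import defaultdict

# Invented placeholder records: (predicted_sex, true_sex, group).
# A real audit would use a labeled benchmark of face photos.
records = [
    ("male", "male", "lighter-skinned male"),
    ("male", "female", "darker-skinned female"),
    ("female", "female", "darker-skinned female"),
    ("male", "male", "lighter-skinned male"),
    ("female", "male", "darker-skinned male"),
]

# Tally misclassifications separately for each demographic group.
errors, totals = defaultdict(int), defaultdict(int)
for predicted, actual, group in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# A single aggregate accuracy would hide the disparity;
# the per-group rates make it visible.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error over {totals[group]} samples")
```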

Read alongside “Mean Images,” both pieces criticize biased images and visual representations that deepen gender and racial biases in AI systems. I was surprised by the overwhelming male dominance in AI practice, and I wonder how we can begin to fix these problems.

