Week 8 Reader Blog

Who Is Making Sure the A.I. Machines Aren’t Racist?

In 2018, Google hired Timnit Gebru to help improve its AI products so that they would not engage in problems like discrimination. Gebru had co-authored a paper showing that facial recognition systems are biased against women and people of color, resulting in machine discrimination. Then, in 2020, Gebru was fired from Google's Ethical AI team.

Google evidently pushed her out of the company over her concerns about its biased hiring methods and the flaws in its technology. What are the connections between this and the white male workforce that dominates technological fields like AI?

Another topic the article mentions is photos of Black people being mistagged as gorillas, which has happened many times. Although technology companies fear repeating this same mistake, what companies like Google did was simply prevent any photos from being categorized as gorillas at all. Instead of solving the problem, they censor or erase it so they can pretend it does not exist.
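If I sketch out what that kind of "fix" looks like in code, it is basically just hiding the label rather than retraining the model. The blocked-label list and function below are my own made-up illustration, not Google's actual code:

```python
# Hypothetical sketch of label suppression; BLOCKED_LABELS and the
# classifier interface are invented for illustration, not Google's code.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

def tag_photo(image, classifier):
    """Return the classifier's labels for an image, minus any blocked ones."""
    labels = classifier(image)  # e.g. ["animal", "gorilla", "outdoors"]
    # The model itself is unchanged; the offending output is simply hidden.
    return [label for label in labels if label.lower() not in BLOCKED_LABELS]
```

The underlying classifier still makes the same mistakes; the filter only stops us from seeing them.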

Deborah Raji, also in the article, shared her experience of realizing that the faces used to train facial recognition software were mostly those of white men.
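Her point can be made concrete: auditing a face dataset, at its simplest, is just counting who is in it. Here is a rough Python sketch, assuming each image already comes with demographic annotations (the field names are hypothetical):

```python
from collections import Counter

def audit_faces(annotations):
    """Tally the demographic makeup of a face dataset.

    `annotations` is assumed to be a list of dicts such as
    {"image": "img_001.jpg", "gender": "female", "skin": "darker"};
    the field names are hypothetical.
    """
    counts = Counter((a["gender"], a["skin"]) for a in annotations)
    total = sum(counts.values())
    for (gender, skin), n in counts.most_common():
        print(f"{gender}, {skin} skin: {n} images ({100 * n / total:.1f}%)")
```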

Well, what about feeding AI systems more pictures of people of all races, and hiring more employees of color, to make sure AI learns about race thoroughly? AI systems are, after all, coded to be the way they are by a workforce that is mostly white, heterosexual, and male. (A sketch of what rebalancing the data could look like follows below.)
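In code terms, that proposal amounts to rebalancing the training data, for example by oversampling underrepresented groups until every group is equally represented. A minimal sketch, assuming the images have already been grouped by demographic label (which is itself the hard, contested part):

```python
import random

def oversample_balance(groups):
    """Oversample so every group has as many images as the largest group.

    `groups` maps a demographic label to a list of image paths, e.g.
    {"darker-skinned women": [...], "lighter-skinned men": [...]}.
    """
    target = max(len(images) for images in groups.values())
    balanced = []
    for images in groups.values():
        balanced.extend(images)
        # Repeat-sample smaller groups up to the target size.
        balanced.extend(random.choices(images, k=target - len(images)))
    return balanced
```

Of course, oversampling only copies existing images; it cannot add diversity that was never collected in the first place.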

A related article:

https://www.npr.org/2023/11/28/1215529902/unmasking-ai-facial-recognition-technology-joy-buolamwini

Mean Images

The “likenesses” in pictures created by AI might be based on mean values rather than actual likenesses, which creates “mean images”. Other senses of “mean” also apply here, like the meaning behind these images, or “demeaning”/“poor” for their inaccuracy. I also believe this generation method is responsible for some of the racial biases appearing in AI-generated images, because the datasets being averaged might themselves be biased. Weird details that contradict reality in these pictures can also be attributed to it: if a person has strange-looking hands in an AI-generated picture, it might be the mean of many different hand poses the AI studied online, fused into one.
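The “mean” can even be taken literally: averaging many aligned face photos pixel by pixel produces a blurry composite that looks like everyone and no one. Here is a minimal numpy sketch of that idea; real generators like diffusion models are far more sophisticated, so this is only an analogy for Steyerl's metaphor:

```python
import numpy as np
from PIL import Image

def mean_image(paths, size=(256, 256)):
    """Average aligned face photos, pixel by pixel, into one composite."""
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
        for p in paths
    ])
    # The composite inherits whatever biases the input set has:
    # if most inputs are white male faces, so is the "average" face.
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))
```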

Hito Steyerl mentions that her name and face appeared in an online research database of faces that researchers were encouraged to study. When I searched for MS-Celeb-1M, web results state that “This dataset has been retracted and should not be used.” This reminds me of our discussion in class about how people do not possess the rights to their own images. Once your images are online, you have no idea for what purpose they will be used or by whom. If we relate this back to the documentary screened last week, Another Body (2023), we realize that nothing can be permanently deleted in the digital age, especially once it has appeared online. “Despite the recent termination of the msceleb.org website, the dataset still exists in several repositories on GitHub, the hard drives of countless researchers, and will likely continue to be used in research projects around the world”, as Jordan Pearson wrote for VICE. (https://www.vice.com/en/article/a3x4mp/microsoft-deleted-a-facial-recognition-database-but-its-not-dead)

In the latter half of the article, Steyerl mentions the human labor behind AI models, big companies' exploitation of “microworkers”, and the harm certain tasks do to these underpaid workers, which is not at all new to us.

——————————————————————————————————————————————————-

The second article touched on a lot of complicated technical issues that I have little understanding of. I guess my question would be: it seems that different companies employ similar methods to train their AI products, yet every AI image generator has its own signature style. What contributes to that?

