DEI (Diversity, Equity, and Inclusion) and A.I. are both highly discussed topics right now. In recent years, just as many corporations and institutions have adopted new DEI initiatives to foster inclusive communities, media programs have explored often-ignored storylines and listened to writers and creators who have long been pushed aside. A.I., meanwhile, is a rapidly changing field that people are constantly looking to apply in new places. As both topics gain mainstream attention (A.I. has been around for decades, and ideas of diversity, equity, and inclusion have existed in some form throughout much of history), they intersect in various ways.

One of the ways these topics intersect is shown in a piece by Matt O'Brien for the AP, "Google says its AI image-generator would sometimes 'overcompensate' for diversity." The article discusses Google's Gemini generating images of people of color in historical settings where they certainly would not have been. For example, the New York Times wrote about a request to Gemini for an image of a 1943 German soldier, which yielded the following result:

[Image: a two-by-two grid of generated images showing uniformed Asian and Black figures.]

The above image was shared on Twitter/X and included in the New York Times article. The A.I.-generated images show people of color in Nazi-era uniforms.

The AP article describes the 'overcompensation' of the A.I. As we've discussed in class and seen in our readings, A.I.s often take on the biases of their creators and data sets; as a result, an A.I.'s 'default' for a person is often a white person. Gemini intended to change this by intentionally adding diversity to what it generates.
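To make that bias-in, bias-out dynamic concrete, here is a toy sketch in Python. The numbers are completely made up (this is not any real training corpus); it just shows how a model that mirrors a skewed training distribution ends up with a skewed 'default':

```python
from collections import Counter
import random

# Toy illustration with made-up numbers, not any real training corpus:
# if the training data over-represents one group, a model that simply
# mirrors the data distribution will "default" to that group.
training_labels = ["white"] * 90 + ["Black"] * 5 + ["Asian"] * 5
counts = Counter(training_labels)

def sample_person() -> str:
    """Sample a 'person' the way a distribution-matching model would."""
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate 1,000 "people" and tally them: roughly 90% come back "white",
# which is the default-to-whiteness pattern the readings describe.
samples = Counter(sample_person() for _ in range(1000))
print({group: f"{n / 1000:.0%}" for group, n in samples.items()})
```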

I'm sure that the thought process behind this was nothing malicious. Diversity strengthens creation, while a lack of diversity (emphasizing only one identity) creates false impressions of the world, excludes entire populations, and can reinforce biases.

Back to DEI: many shows and movies have made great efforts to recognize that diversity can strengthen their content, even as bias has often limited it. In my opinion, it's great that they want to do this! However, few shows actually implement DEI and its values, despite appearing to. From Grey's Anatomy to Chicago Fire, I've been disappointed by shallow efforts to include characters with never-before-explored stories and backgrounds. That inclusion had amazing potential to create new plotlines and share new perspectives, but I was let down because these characters almost always get minimal screen time and serve to further other characters' stories rather than exist in their own.

This is performative: a character's story is teased, then written away before anything can develop. It doesn't give these characters the stories or respect they deserve, and it ignores the systemic roots of the problem: why straight, white characters have almost always been the leads while characters from minority groups are so often the butt of the joke, or just a sidekick. Rather than trying to truly rectify this by actually giving these characters the stories they deserve, many programs introduce brief or shallow storylines and act like the issue is fixed.

I believe Gemini is facing the same issue. It's great that the intention was to fix some of the A.I.'s blatant biases. It's not great that the solution was essentially "add more people of color!" That is the equivalent of adding those brief, hollow storylines and acting like the whole show is fixed. It ignores the root of the problem: A.I.s reflect the biases of their creators and moderators and learn from curated training sets. Simply programming an A.I. to 'compensate' doesn't change what it has actually learned, and doesn't rectify the deeper issues of bias and racism in A.I. As the generated images above show, this tactic can cause harm even when it leads to some good changes.
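Google hasn't published how Gemini's adjustment actually worked, so the sketch below is purely hypothetical (the function name and term list are mine). It illustrates the kind of blunt, post-hoc prompt rewriting being criticized here: a demographic descriptor gets injected no matter what the prompt asks for.

```python
import random

# Hypothetical sketch only: Google has not published Gemini's actual
# mechanism, and these names and terms are invented for illustration.
# A context-blind rule injects a demographic descriptor into every prompt.
DIVERSITY_TERMS = ["Black", "Asian", "Hispanic", "Indigenous", "white"]

def rewrite_prompt(user_prompt: str) -> str:
    """Prepend a randomly chosen descriptor before image generation.

    The flaw: the rule ignores context, so a historically specific
    request gets the same treatment as a generic one.
    """
    return f"{random.choice(DIVERSITY_TERMS)} {user_prompt}"

print(rewrite_prompt("portrait of a doctor"))               # arguably helpful
print(rewrite_prompt("portrait of a 1943 German soldier"))  # ahistorical result
```

Nothing in a patch like that touches the model's weights or training data; the underlying 'default' it learned is left completely intact.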

As Taylor Swift once said, “Band-aids don’t fix bullet holes.” Good intentions don't fix years of inaction. As we read earlier, Google was aware of issues of racism and bias in A.I. but did not respond proactively. This attempt at tackling the issue isn't truly proactive, either: it seems they are well aware of why their programs are often biased against women and people of color, yet unwilling to fix the problem at its roots.

