*I want to preface this blog post by acknowledging the length of the video above. It is extensive, but I highly recommend watching it in its entirety, as the information provided is very insightful and applies directly to our class discussion of A.I. and race.

There is no question that AI has the potential to advance society and improve people’s lives (arguably speaking), and it is already doing so today. However, we can’t ignore the fact that AI models are flawed: they carry biases, specifically racial bias. These models are being trained on data sets that disadvantage certain groups, especially people of color. For the sake of our future with AI, we must look deeper into these models and raise the following questions: Who is creating the models? What are their creators trying to accomplish with AI? Who is benefiting from these models? Are these systems being built on principles of equity?

In a panel discussion, Nicole Turner Lee, a Senior Fellow in the Brookings Governance Studies program and director of the Center for Technology Innovation, explores the intersection of AI and race. She introduces the AI Equity Lab at Brookings, a platform dedicated to “viewing socio-technical design contexts and outcomes of evolving and emerging technologies in a manner that promotes increased interdisciplinary and diverse cooperation and collaboration. It’s also about unveiling other hidden figures and experts in AI” (Lee & Cummings, 2024). To learn more about the AI Equity Lab, I provide a link at the bottom of this page. In her opening presentation, Lee also gives an overview of AI models and their potential harms for communities of color. Like many others, she questions the ethics of how these systems are created, and she explains the need for an inclusive community of researchers and scientists to address such biases against people of color.

After her presentation, Lee is joined by Renee Cummings (Professor of Practice in Data Science and Senior Fellow in the Brookings Governance Studies program), Mutale Nkonde (Visiting Policy Fellow at the Oxford Internet Institute and CEO of AI for the People), and Dr. Fay Cobb Payton (Information Technology and Analytics Professor at NC State University and Visiting Scholar and Advisor of Inclusive Innovation at Rutgers University). These women discuss the importance of building more inclusive AI models, the harms of AI for vulnerable communities, behavioral shifts in the field of AI, creating space for people of color in AI, and finding ways for individuals across disciplines to collaborate so that AI is more balanced and diversely built to include all communities.

Throughout their panel discussion, I enjoyed hearing their various perspectives on equity in AI, and I was particularly intrigued by Dr. Payton’s response on the design of AI models (timestamp 44:13–49:50). She explained that more attention should be given to how individuals are trained to design AI, and that this training should involve all disciplines, including engineers, social scientists, computer scientists, and more. I agree with Dr. Payton, and I appreciate her making this statement because I had never considered AI’s impact on so many different fields. It is not a single-discipline matter; as we can see, it affects individuals across several fields. I think an inclusive board with members from across disciplines can allow for better design of AI, since it lets a greater range of perspectives be shared and considered.

I was also interested in how Dr. Payton discussed the importance of recognizing big data in AI technologies while acknowledging the importance of small data as well. It is the small data that can help us understand the big-data results and outputs of AI models. We should ask: What is the source of the data? What is missing that could be provided? This relates to our class discussion of how AI models resemble a black box that carries a form of opacity; Dr. Payton would argue that we should look deeper into these systems and trace their sources to better understand their results. Dr. Payton’s point can also be related to our film screening for this week, They Cloned Tyrone. Considering her response, I posed the following questions after watching the film: What were the reasons for cloning individuals like Tyrone? Who was the source? Why target communities like the Glen? All of these questions are important to consider as Fontaine, Slick Charles, and Yo-Yo were trying to “uncover” what was happening in their community. It was the small details (i.e., the fried chicken, grape juice, and hair relaxers) that greatly contributed to their findings and their understanding of the experiments happening in the Glen.

Overall, I truly appreciate this panel discussion, as it left me more hopeful about the future of AI as it relates to equity. When searching for sources to include for this week, I kept finding dated videos and interviews on the topic of AI and race. While those sources are still credible, I wanted to find recently published information in order to understand where we stand today on this subject. Thankfully, I was able to find this video, which was published last December. It is reassuring to know that measures are being taken by individuals like Cummings, Nkonde, Payton, and Lee, who are fostering conversations around equity in AI and ensuring its models are inclusive, especially for people of color.

Here is the link to learn more about the AI Equity Lab, recently launched this year! I also cited this article in my response above.
