Scientists Increasingly Can’t Explain How AI Works (Chloe Xiang, 2022)
It is frightening to think about the black-box nature of AI systems. A black-box model maps inputs to outputs without explaining how an output is produced or exposing the internal mechanism that produces it. As a result, we have reached a strange state: AI is built by humans, but humans cannot fully understand their own invention.
Despite its usefulness and its accuracy at producing the outputs people want to see, AI is also notorious for producing severely biased output. Our society is biased, with problems like racism and gender inequality, and all the information online is created by humans, which means the knowledge AI draws on to analyze and generate “new” information is already shaped by human biases. Of course AI is going to favor certain groups of data over others.
Personally, I believe developing white-box AI models (interpretable ones, with internal mechanisms transparent to us) is essential for the human race. It brings more accountability, transparency, and room for correction when an AI system makes decisions for policymakers, police departments, or surveillance systems. When such a system makes a biased or obviously wrong decision, we can at least understand the logic behind it in order to improve it. The small sketch below makes the contrast concrete.
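To illustrate the black-box/white-box distinction, here is a minimal sketch, assuming Python with scikit-learn installed; the particular models and the toy dataset are just illustrative choices on my part, not anything from the articles. It trains an opaque neural network and an interpretable decision tree on the same data, and shows that only the tree can print out the logic behind its predictions.

```python
# Minimal sketch: black-box vs. white-box models on a toy dataset.
# Assumes scikit-learn is installed; models and data are illustrative only.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Black box: a small neural network. It makes predictions, but its
# learned weights don't map onto a human-readable decision process.
black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
black_box.fit(X, y)
print("black-box prediction:", black_box.predict(X[:1]))

# White box: a shallow decision tree. Every prediction can be traced
# through explicit, inspectable if/else rules.
white_box = DecisionTreeClassifier(max_depth=2, random_state=0)
white_box.fit(X, y)
print(export_text(white_box, feature_names=feature_names))
```

The tree prints its full decision logic as readable rules, so a biased or wrong decision can be traced and corrected; the network offers no comparable reading of its weights, which is exactly the accountability gap the article describes.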
AI Decolonial Manyfesto
The content of this “manyfesto” can be related to the previous article. “We begin with the challenge posed by the language we use to talk about AI: language that has emerged, as much of the technology has, dominated by Western male voices, whiteness, and wealth.” That is typical AI bias, and the manifesto aims to emphasize the differences between human beings and how those differences impact the world in differentiated ways. This is especially important when it comes to AI because decolonizing not only gives people the chance to address the past (historical injustice and bias resulting from colonialism), but also helps reduce the repetition of mistakes caused by AI’s biased learning.
My questions from the readings are not fully formed, but I wonder: is AI serving humans to make life easier, or do humans serve as tools for the AI learning process, so that it becomes an entity that fools people by giving them the answers they want to see, and evolves to the point of controlling human life in ways we cannot possibly detect?
As we discussed in class, AI consciousness might emerge in an unrecognizable and unfamiliar form, so humans might not be able to sense it when it comes into being. Furthermore, where do we draw the line on how much AI can be applied to the social justice system and similar domains?
Finally, here is an article that sums up many of the recent injustices related to AI: