Creators & Creations: To whom am I speaking?

Recently on Twitter (X, if you must), a user asked for examples of scholars who spent years after their discovery distancing themselves from it. The Tweet itself names Oppenheimer, and popular responses include Dong Nguyen and the highly addictive Flappy Bird, David Mech and his outdated analysis of the social structures of captive wolves, and Geoffrey Hinton and the power of artificial intelligence.

After reading The Guardian’s piece on Joseph Weizenbaum, one could certainly argue for his place on this list.

There is a major commonality among all of these creators and their inventions: it is not the information (false or not) or the experience itself that is the issue, but how society interacts with their creations that leads to undesirable social outcomes. Perhaps this is obvious– nothing exists in a vacuum, and things will always fall into the hands of those with bad intentions.

As Weizenbaum stated, no computer can ever fully understand another human or encapsulate the human experience, because not even a human can fully understand another or define the human experience– it is collective yet unique, and it cannot be found in a computer or an AI. However, it is easy to forget that the chatbot you are typing to is just that– a bot, not a person. This effect, first observed in his chatbot's initial trials and since dubbed the Eliza effect, has only been amplified in recent years as programs like ChatGPT have become mainstream and adopted conversational tones, creating the feeling of talking to an assistant in a virtual space rather than inputting queries into a program and receiving its answers.

Weizenbaum took issue not so much with the technology of AI as with the accompanying ideology: that computers could replace humans in decisions that impact humanity. He uses the analogy of a bomber pilot, who “is not responsible for burned children because he never sees their village”, to describe how people in power might use AI for similar purposes– it psychologically distances them from their actions and their consequences, as they can simply say ‘the AI did it!’

As Weizenbaum argues, an AI is not a person; it is a computer, and it cannot be human. By treating it as human and using it as a tool for certain decisions and actions, we distance ourselves from potential guilt and consequences; thus, the applications of computers and AI must be limited.

Thinking back to the Eliza project, we know that participants did not see Eliza as an extension of Weizenbaum but as an entity of ‘her’ own. As we have discussed, a creator’s views and biases show through their creations, but this is often ignored because these technologies are accepted as their own entities. A conversation with Eliza was not a conversation at all– it was an input from a user and a response output by an algorithm that seemed like a psychotherapist only because Weizenbaum had programmed it to mimic one.
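To make the mechanics concrete, here is a minimal sketch of Eliza-style pattern matching in Python. The rules and reflections below are made up for illustration– this is not Weizenbaum’s original DOCTOR script– but the structure is the same: the program matches the user’s words against hand-written patterns and echoes fragments back inside canned templates.

```python
import random
import re

# Illustrative, made-up rules (not Weizenbaum's original script).
# Each rule pairs a pattern with response templates; {0} is filled
# with the user's own words, reflected back at them.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*\bmother\b.*", re.I),
     ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "Can you elaborate on that?"]

# Swap first- and second-person words so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(user_input: str) -> str:
    # Try each rule in order; the first matching pattern produces the reply.
    for pattern, templates in RULES:
        match = pattern.match(user_input.strip().rstrip(".!?"))
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return random.choice(templates).format(fragment)
    return random.choice(DEFAULTS)

print(respond("I feel alone"))   # e.g. "Why do you feel alone?"
print(respond("My mother calls me every day."))  # "Tell me more about your family."
```

Every ‘insight’ the program appears to offer traces back to a rule a person wrote. In a model trained on the internet, that chain of attribution becomes harder to see– but not absent.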

In a similar vein, a conversation with ChatGPT is not that at all– it is an input from a user and a response generated by a program that takes on a conversational tone. However, ChatGPT is not a one-man experiment: although there are dominant faces linked to its creation, it is trained on data gathered from a wide range of sources, including the internet, and labeled by workers paid minimally for their contributions. If a conversation with Eliza was an interaction with a program manufactured by Weizenbaum, to whom can we attribute the program we ‘converse’ with when using ChatGPT?

Weizenbaum created Eliza and rejected any notion of the computer’s ‘humanity’, regarding anything it seemed to create as a consequence of human interaction. ChatGPT, however, is trained on a variety of information and by a variety of people. It is easiest to just say, “I talked to ChatGPT” or “I’ll ask ChatGPT!” and treat the answer as coming from ChatGPT as an entity of its own.

But, when analyzed more critically, we are left with many questions. To whom can we attribute this technology? Its creators, or the theorists and programmers of the past who worked to develop AI to where it is today? The vast number of people on the internet who contributed to its training data? The human contractors who label that data? These questions have no clear answer, so we often jump to labeling ChatGPT as its own being. This effectively distances us from what ChatGPT produces– whether a valid response to a question or a speech plagiarized from various figures– and allows us to attribute it to a computer, despite knowing about the very human forces behind the program and the biases reflected within it.

By viewing a creation as its own entity, we separate ourselves from the potential consequences of our programs, just as Weizenbaum warned nearly 50 years ago in Computer Power and Human Reason: From Judgment to Calculation. As humans, we have greatly furthered our understanding of AI and generative models since then. Why, then, haven’t we furthered our understanding of labor, responsibility, and accountability when it comes to AI?

