In “Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI,” Ben Tarnoff explores the life of Joseph Weizenbaum and how he came to develop deep concerns about the future of Artificial Intelligence (AI) and humanity. In 1966, Weizenbaum created what is widely considered the first chatbot, which he named Eliza. Eliza played the role of a psychotherapist, responding to messages typed by a user. It is interesting that Weizenbaum cast Eliza as a sort of therapist, a role typically associated with women, who are stereotyped as caregivers and projected as more emotionally aware and attuned to others. Knowingly or not, Weizenbaum gave Eliza these stereotypically feminine qualities, and the public was immediately drawn to them. He could have named the chatbot anything, so why Eliza? Why give Eliza a role so easily linked to that of a woman? As I consider these questions, I am curious whether his childhood influenced the creation of Eliza, given his early interest in mathematics, his rough relationship with family members (especially his father), and his struggle to find his place in society.
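To make concrete just how shallow Eliza’s “understanding” was, here is a minimal sketch of the keyword-and-template pattern matching Eliza relied on. The specific patterns, templates, and word swaps below are my own illustrative examples, not Weizenbaum’s original DOCTOR script:

```python
import re

# Invented, Eliza-style rules: a keyword pattern paired with a canned
# response template. Not Weizenbaum's actual script.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Why does your {0} concern you?"),
]
DEFAULT_REPLY = "Please go on."

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my' -> 'your', etc.)."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    """Match a keyword and fill in a canned template; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return DEFAULT_REPLY

print(eliza_reply("I am worried about my future"))
# -> Why do you say you are worried about your future?
```

Nothing in this sketch models meaning: a handful of regular expressions produces the illusion of a listener. That illusion is precisely what unsettled Weizenbaum when users confided in the program anyway.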
As Eliza’s popularity grew, it became extremely difficult for individuals to stop treating the chatbot as a real person. Weizenbaum recalled one such moment: “…one day, he said, his secretary requested some time with Eliza. After a few moments, she asked Weizenbaum to leave the room” (Tarnoff 2023). This experience is one example of Weizenbaum’s growing concern about the relationship between humans and technology, a concern that began to shape his work as a professor at the Massachusetts Institute of Technology (MIT). At MIT, Weizenbaum and other computer scientists received over two million dollars in Pentagon grants to make computers more accessible and responsive to individual users (Tarnoff 2023). The effort was called Project MAC, an acronym for “machine-aided cognition” among other expansions. The project successfully developed “time-sharing,” which allowed multiple users to interact with a computer at once, sending input and receiving immediate output. The Pentagon also funded other programs at MIT, through which Weizenbaum and his fellow computer scientists built technological systems designed to support the Vietnam War. Notably, Weizenbaum had misgivings about his own role in these developments and questioned the values behind the policies they served. In response, his colleagues rationalized: “If we don’t do it, they told him, somebody else will. Or: scientists don’t make policy, leave that to the politicians” (Tarnoff 2023). These statements echo my own concerns and frustrations with AI and technology today. Like Weizenbaum, I question the ethics of the “scientists” who are creating these technologies. This connects to our earlier class discussions about how the race to advance AI seems to push ethics aside, which I still find difficult to accept.
In expressing his concerns about AI and Eliza, Weizenbaum was often met with opposing viewpoints. Tarnoff describes how figures like John McCarthy and Marvin Minsky generally believed that the human brain was a “meat machine” whose workings could be reproduced by a human-made machine. Weizenbaum, however, argued that “…no computer could ever fully understand a human being…and no human being could ever fully understand another human being” (Tarnoff 2023). I agree with Weizenbaum: humans have subjective experiences that no other human will ever fully know. How could those same experiences ever be replicated by a machine? I am curious what Weizenbaum would think of AI and technology today. Would his concerns remain the same?
Similarly, in “ChatGPT Is Powered by Human Contractors Getting Paid $15 Per Hour,” Lucas Ropek extends these concerns about AI. He explores the working conditions of the contractors employed to assist AI models like ChatGPT through data-labeling tasks. These workers are responsible for “…parsing data samples to help automated systems better identify particular items within the dataset” (Ropek 2023). Despite playing such a key role, these workers are paid only $15 per hour, and some are paid as little as $2 per hour. Have we become so consumed with the possibilities of AI that we ignore how those same possibilities put people’s lives at risk? Again, the ethics of these practices deserve to be addressed. How can we ensure that policies are established to protect individuals from the harms of AI? I think it is important to understand the divide between human and machine, which relates to Weizenbaum’s main argument that human and machine are separate entities. However, as AI continues to grow, that boundary seems to be eroding, which is again very concerning.
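For readers unfamiliar with the term, “data labeling” simply means attaching human judgments to raw examples so a model can learn from them. A minimal, hypothetical sketch of what such records might look like (the task, samples, and labels are my own invention, not from Ropek’s article):

```python
# Hypothetical sketch of the records a data-labeling contractor produces:
# each raw sample gets a human judgment attached, and thousands of these
# judgments become training data. Task and labels here are invented.
labeled_batch = [
    {"text": "The package arrived two weeks late and the box was crushed.",
     "label": "negative"},
    {"text": "Setup took five minutes and everything worked perfectly.",
     "label": "positive"},
]

for record in labeled_batch:
    # In reality, a person reads each sample and makes this call by hand,
    # one sample at a time, at the hourly rates Ropek reports.
    print(f"{record['label']:>8}: {record['text']}")
```

Seeing how simple each record is makes the scale of the labor clearer: the “intelligence” in these systems rests on enormous volumes of such one-at-a-time human judgments.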