Category: Reader

  • Kiss So Simple, but not for A.I.

    “How to Read an AI Image: The Datafication of a Kiss,” featured on Cybernetic Forests, and the second chapter of “Your Computer Is on Fire,” “Your AI Is a Human” by Sarah T. Roberts, together illuminate the intricate interplay between AI, human experience, and the ethical considerations embedded in our engagements with the digital realm.…

  • Concerned with AI?

    In “Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI,” Ben Tarnoff explores the life of Joseph Weizenbaum as he began to develop concerns about the future of Artificial Intelligence (AI) and humanity. In 1966, Weizenbaum invented the first chatbot, which he named Eliza. Eliza’s role was to…

  • Creators & Creations: To whom am I speaking with?

    Recently on Twitter (X, if you must), a user posed a question asking for examples of scholars who spent years after their discovery distancing themselves from it. The example given in the tweet itself is Oppenheimer, and popular responses include Dong Nguyen and the highly addictive Flappy Bird, David Mech and his outdated analysis…

  • Weizenbaum Now

    The Guardian article, titled “Weizenbaum’s Nightmares: How the Inventor of the First Chatbot Turned Against AI,” provided me with a historical journey that shed light on the origins of AI. Reflecting on AI today, I’m struck by the current boom in this field, a boom that seems to have emerged primarily in the last decade. However,…

  • Week 4 Reader Blog

    Introduction: In the 2020s, convolutional neural networks, graph neural networks, and K-means clustering stand out as important elements in the evolution of AI and machine learning. Engagement with complex machine learning models over the past decade has offered profound insight into the contrast between idealized notions of artificial intelligence and its practical realities.…

  • Fordism and Systematic Exploitation

    In NOEMA’s article, the authors raise the issue of labor exploitation in the training of AI and related fields of machine learning. The technology is not as neat, clean, or high-grade as it appears; rather, it is an inhuman, capitalistic machine fueled by millions of human bodies who are…

  • Week 3 Reader Blog

    Chloe Xiang, “Scientists Increasingly Can’t Explain How AI Works”: The article makes an interesting argument that most AI systems are black-box models, meaning they are “viewed only in terms of their inputs and outputs”. The problem is that “AI systems notoriously have issues because the data they are trained on are…

  • Week 3 Reader Blog

    Scientists Increasingly Can’t Explain How AI Works (Chloe Xiang, 2022): It is frightening to think of the black-box nature of AI systems. A black-box model is a system that turns inputs into outputs without explaining how the output is produced or revealing the internal mechanism behind it…

  • AI and Justice

    Last year, Judge Juan Manuel Padilla in Cartagena, Colombia, sought assistance from ChatGPT in a legal case involving insurance coverage for an autistic child’s medical treatment. Although the verdict was in the family’s favor, the inclusion of ChatGPT in the court ruling sparked a debate. The Guardian highlights experts’ strong opposition to AI integration into…