In “Propositions Concerning Digital Minds and Society,” Nick Bostrom and Carl Shulman propose a set of claims about how AI systems could come to exist as digital minds in society, carrying great benefits but also the potential for major harms. Their proposal is divided into ten sections: consciousness and metaphysics; respecting AI interests; security and stability; AI-empowered social organization; satisfying multiple values; mental malleability, persuasion, and lock-in; epistemology; status of existing AI systems; recommendations regarding current practices; and impact paths and modes of advocacy.
To start their proposal, Bostrom and Shulman introduce consciousness and metaphysics. What does it mean to be conscious? Could an AI possess something like human consciousness? These are just a few of the questions to consider when thinking about how an AI could have a digital mind. They discuss how running a mind on computer processors could create the “digital mind” of an AI, but they question how closely, in both quality and degree, such a model could mimic human consciousness. After reading this section, I was very interested in and curious about how we as humans define consciousness. Again, what does it mean to be conscious? In my psychology class, we discussed how consciousness is essentially being in an alert state. Based on this definition, can a digital mind carry the same alertness as it relates to consciousness, and what would it even mean for a digital mind to be alert?
Additionally, they discuss the morality of AI technologies in the “Respecting AI Interests” section, expressing several concerns about how AI should be created. They believe it should be designed to know its limits and to be “…treated so that they are likely to approve of their having been created” (Bostrom and Shulman, 3). This reminds me of Mary Shelley’s Frankenstein: here the creator must design a being that is satisfied with its own existence, the very opposite of what occurs in the novel, where we see the resulting harm play out throughout the story. Could the same happen to a digital mind? Bostrom and Shulman also state, “Suffering minds should not be created for the purposes of entertainment” (Bostrom and Shulman, 3). I was a bit disturbed by this statement because they are anthropomorphizing a computer system. Now this sort of digital mind has feelings, even suffering feelings?
They go on to discuss the security and stability of AI and how it should be regulated so that it does not act beyond its means and cause more harm than good (e.g., global destruction and cyber-attacks). Then they explain the harms and benefits of more coordinated and organized AI systems, which could help enforce agreements but could also enable criminal conspiracies.

In the next section, Bostrom and Shulman discuss how society could ultimately benefit from AI technology by reducing problems like suffering and conflict. They also suggest that AI could be used for persuasion in ways that positively impact humans (e.g., helping patients adhere to their treatment). In the epistemology section, they consider how AI could allow for the discovery of more truth and the formulation of more accurate estimates. Next, they examine the status of existing AI systems, weighing the criteria for consciousness and moral status. Some AI technologies have the intellectual capabilities (language or mathematics) but lack the “superficial aspects.” They explain how certain technologies like ChatGPT exceed the capabilities of certain animals, which I again found disturbing. I feel a bit uncomfortable whenever technology is compared to animals or humans in that sense, because you cannot compare a living being’s capabilities to a computer’s when the structure of each is completely different. Later in their propositions, Bostrom and Shulman give recommendations regarding current practices with AI systems, arguing that it is unethical to train these models on humans and animals because doing so can cause major harm. Lastly, they discuss ideas to consider when thinking about how AI technologies should be regulated and about their current and future impacts.
All in all, I think Bostrom and Shulman present a very interesting set of propositions. It is concerning to consider how AI technology could develop consciousness, morality, and the other capabilities mentioned in the reading. To me, there is a line between human and technology, and throughout their propositions that line gets crossed several times. The reading has left me with more questions, and I am curious to know what others think about their proposal.
Image Used: https://www.usatoday.com/story/news/nation-now/2017/11/02/baby-frankenstein-born-halloween-winter-park-florida-hospital/824388001/