Can AI developments in life-conserving areas offset the destructive direction AI is headed?
If AI is inevitably moving toward weaponization and destruction, can the good in AI development serve as a counterweight to a violent AI?
MIT has released a new course, open to the public, on AI and healthcare. Could educating more people about drug development (and potentially the role of big pharma) offset the doomsday direction AI is going?
Based on the three articles we read for this week, it’s hard to say whether weaponized AI will be regulated. The biggest challenge these articles pose for AI and societal well-being is that this may come down to a people-versus-government argument.
This article from the Carnegie Council outlines five points that are critical to AI governing structures. Although it includes a disclaimer that this is just a basic model and there is certainly more to add, it touches on essential points that open up a lot of space for discussion and guideline development. And even if we as commonfolk do get “transparency, accountability, validation, and audit safety protocols, and to address issues related to the preservation of human, social, and political rights in all digital goods,” there’s no guarantee that other countries will follow along rather than take this opportunity to hold a gun to a rival country.
Now, would the government get upset if UNESCO and the OECD imposed limitations or demanded public statements regarding its activities? And who is to determine whether the statements released by the government are reliable?
The opinion guest essay published in the New York Times, “Our Oppenheimer Moment: The Creation of A.I. Weapons,” discusses how some engineers are refusing to work on aggressive software and AI projects for the Department of Defense. However, there seems to be a price to everything. Even if a larger, better-established company declines to work for the government, wouldn’t the government just seek out another company that’s willing to do it?
This piece from The Hollywood Reporter highlights a key difference between weaponized AI and the atomic bomb: the number of people who could assist in weaponizing AI is much greater than the number who could build an atomic bomb. Overall, this is about more than copyright and threats to creativity. Many people in class have pointed out that AI is on its way to having more rights than the people of the United States, which is alarming but not surprising. Even if AI isn’t being used to destroy another nation, it could certainly displace people from positions of importance in societal governance.