AI: Rated R for Violence

Can AI developments in life-preserving areas offset the destructive direction in which AI is headed?

With AI seemingly headed, inevitably, toward weapons and destruction, can the good in AI development serve as a counterweight to violent AI?

MIT has released a new course, open to the public, on AI and healthcare. Could educating more people about drug development (and potentially the role of big pharma) offset the doomsday direction AI is headed in?

View Course Here

Based on the three articles we read for this week, it's hard to say whether weaponized AI will be regulated. The biggest challenge these articles raise for AI and societal well-being is that the question may come down to a people-versus-government argument.

This article from the Carnegie Council outlines five points critical to AI governance structures. Although it includes a disclaimer that this is just a basic model with more to add, it touches on essential points that open up plenty of space for discussion and guideline development. And even if we as commonfolk do get “transparency, accountability, validation, and audit safety protocols, and to address issues related to the preservation of human, social, and political rights in all digital goods,” there’s no guarantee that other countries will follow along rather than seize the opportunity to hold a gun to a rival country’s head.

Now, would the government get mad if UNESCO and the OECD imposed limitations or demanded public statements about its activities? And who’s to determine whether the statements the government releases are reliable?

The opinion guest essay published in The New York Times, “Our Oppenheimer Moment: The Creation of A.I. Weapons,” discusses how some engineers are refusing to work on aggressive software and AI projects under the Department of Defense. However, there seems to be a price for everything. So even if a larger, better-established company declines to work for the government, wouldn’t the government just seek out another company that’s willing to do it?

This piece from The Hollywood Reporter highlights a key difference between weaponized AI and the atomic bomb: the number of people who could assist in weaponizing AI is far greater than the number who could build an atomic bomb. Overall, this is about more than copyright and threats to creativity. Many people in class have pointed out that AI is on its way to having more rights than the people of the United States, which is alarming but not surprising. Even if AI isn’t being used to destroy another nation, it can most definitely displace people in importance within societal governance.


Comments

One response to “AI: Rated R for Violence”

  1. Andrea Hidalgo

    Lani,

    Thanks for sharing your thoughts on the challenges of regulating weaponized AI and the global concern these developments have caused. There is no hyperbole when it comes to military applications of AI, and this harsh truth prompted me to recall our class discussions on Israel’s AI-enabled mass targeting system, named “Habsora,” which translates to “the Gospel.” I felt compelled to reply to your post after reviewing recent reports on this.

    Just to recap, the AI target-creation machine, also known as Israel’s “mass-assassination factory” (a headline used in many articles referencing Habsora following a statement by a former Israeli intelligence officer), was implemented in the IDF’s aerial bombing campaigns on the Gaza Strip. It was essentially designed to accelerate the generation of targets from surveillance data. The algorithmic system has facilitated the bombing of densely populated civilian areas and caused a significant increase in civilian casualties, including children. An article in The Guardian cites a source who had formerly worked in the target division: “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.” Another source disclosed that the IDF system’s “emphasis is on quantity and not on quality” and that a human eye “will go over the targets before each attack, but it need not spend a lot of time on them.” Additionally, the system’s assumptions about who constitutes an imminent threat are concerning, especially in light of the high number of civilian casualties.

    What’s more, this mode of warfare is characterized by an emphasis on data-driven operations that purportedly offer technological alternatives to the challenge of achieving comprehensive situational awareness on the battlefield. Operating within this framework suggests that data is being viewed as ‘objective markers’ that emerge from external reality (an insightful observation I came across on a blog) rather than as constructs resulting from a complex process of collecting, categorizing, and interpreting machine-readable signals. This perspective leads to an over-reliance on data for executive decision-making, regardless of the origins or reliability of the data. Ultimately, these outcomes raise ethical and legal issues regarding warfare practices and compliance with international humanitarian law.

    References:
    https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets
    https://robotfutures.wordpress.com/2024/01/21/the-algorithmically-accelerated-killing-machine/
