https://apnews.com/article/ai-military-jet-air-force-test-307be2e5153f883aed032cb6676d08f2
How far are we into the future, really, when we talk about Skynet exploding us all with nuclear bombs? Surely, humans won’t be stupid enough to go that far with AI, right? According to the Pentagon, it’s coming, perhaps much sooner than we are comfortable with. The US Air Force plans a test later this spring in which an artificial intelligence-operated warplane will carry a civilian passenger aloft. Beyond automated warplanes, the Air Force eventually hopes to field 1,000 autonomously operated drones for future air warfare. With additional details yet to be revealed, one may ask: when will targeting systems be automated as well? It seems like the natural progression for machines that already fly autonomously and require no manpower to operate. Given the military’s preference for more firepower with less manpower, the temptation to automate every aspect of these lethal machines will be hard to resist.
Relying on machines to do the dirty work has benefits, of course. These advantages include safeguarding the lives of pilots, sparing individuals the psychological trauma associated with killing directly, and reducing military expenses related to training Air Force pilots, which can run as high as ten million dollars per pilot. However, it’s a double-edged sword. When we hand decision-making processes to AI (decisions that can dictate someone’s death), complex ethical dilemmas arise. First, we are delegating life-and-death decisions to machines and shielding ourselves from the moral responsibility of taking a life. This practice dangerously diminishes the value of human life, a path that humanity should unequivocally avoid. Second, the loss of human oversight and control over lethal weapons raises questions about accountability and blurs the lines of responsibility in warfare. How can a machine differentiate between an enemy soldier and a civilian in a warzone? Even now, we struggle to understand generative AI systems; how are we supposed to understand weaponized AI systems in control of far more complex machinery? The “black box” nature of AI models hinders transparency and accountability: it will be difficult to audit the decision-making processes of autonomous weapon systems and to hold specific actors accountable for violations of ethical and legal norms.
Beyond ethical issues, the development of AI-driven weapons systems raises concerns about arms control, arms-race dynamics, and destabilizing effects on international security. Much as the Cold War era saw a competition for nuclear supremacy, there are indications that we are witnessing a new global arms race. World leaders must remain vigilant about the challenges these systems pose and take proactive measures to avert further conflict.