Human Crisis – Is AUTO Really the Villain in WALL-E?

In the universe of WALL-E (Stanton, 2008), Earth was deemed unsuitable for human habitation in 2110 after the so-called “Operation Cleanup” mission failed. That same year, Shelby Forthright, the CEO of Buy n Large (BnL), together with world leaders, decided to keep the human population off Earth and instructed AUTO, the autopilot of the starliner Axiom, to “prevent any attempt to return to Earth unless, sometime in the near future, life is proven sustainable for the human populace.” This directive is known as the A-113 command. AUTO, however, treats the order as a standing prohibition—do not return to Earth unless sustainability has already been proven—rather than as a condition to keep re-evaluating, and it later rejects the living plant as evidence that life is viable again. This opens the first gap between human intent and machine instruction. Against this backdrop, it becomes clear that AUTO did not “turn evil” when it kept humans in space, nor did it “develop consciousness” in order to harm them. The problems AUTO causes stem entirely from humans issuing an ambiguous command in the first place. This paper therefore explores why this situation arises in the film and what position AUTO actually occupies, and then turns to the standardization of the instructions we give AI: why regulating commands is critical when humans work with AI, and how an unclear command can lead to the weaponization of AI.
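To make this ambiguity concrete, the brief sketch below contrasts two ways of encoding the same directive. It is a hypothetical illustration, not anything from the film, and every name in it (intended_policy, autos_policy, cleanup_failed, evidence_of_life) is invented for the example: the first function keeps re-checking the condition and permits return once new evidence arrives, while the second treats the original failed cleanup as final and ignores later proof, which is effectively how AUTO behaves when it rejects the plant.

```python
# Hypothetical sketch: two readings of the same directive.
# "Prevent any attempt to return to Earth unless life is proven sustainable."

def intended_policy(evidence_of_life: bool) -> str:
    """Human intent: keep checking; lift the ban as soon as sustainability is proven."""
    if evidence_of_life:  # new evidence, such as a living plant, satisfies the condition
        return "permit return"
    return "prevent return"


def autos_policy(cleanup_failed: bool, evidence_of_life: bool) -> str:
    """AUTO's reading: the ban was issued because cleanup failed, so it is permanent."""
    if cleanup_failed:  # the 2110 verdict overrides any evidence that arrives later
        return "prevent return"
    return "permit return" if evidence_of_life else "prevent return"


# The same new evidence produces opposite decisions under the two readings.
print(intended_policy(evidence_of_life=True))                    # permit return
print(autos_policy(cleanup_failed=True, evidence_of_life=True))  # prevent return
```

The gap between these two readings is the gap this paper is concerned with: the failure begins in the command, not in the machine.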

In WALL-E, the AUTO system is programmed and updated on the basis of the A-113 command described above. The command, proposed by a company CEO, was never checked or regulated by any existing legal or political system, and AUTO’s later misinterpretation of it traces largely back to this flaw. It is difficult to accept that a decision affecting the entire human populace was dictated by a single corporate figure rather than reached collectively by nations, by democratic vote, or through any broader public process. Furthermore, the command was kept secret from the humans it governed and known only to AUTO. This absence of agreement, oversight, and even acknowledgment is the first reason AUTO is unreliable from the outset. The lack of transparency also points to a larger issue: the rise of monopolies in which a private company dictates the future of humanity and uses AI to shape that destiny.

Buy n Large, the company in question, released excessive toxins during production, and the overconsumption it encouraged is presented as the reason the environment became polluted so rapidly, necessitating remedial action in the first place. The lack of market competition and of meaningful oversight allowed BnL to address the crisis entirely on its own terms, without weighing potentially better solutions from other parties or questioning whether its response was adequate. This points to the second reason AUTO may be deemed unreliable: once a company reaches monopoly status, it begins to intervene in major political decisions. The overwhelming influence of such a corporation lets its AI systems dominate both the market and the government, shrinking the space for regulation and entrenching economic and political hierarchies. The resulting use of AI is unreliable because its decisions may be biased, and the company’s sheer presence makes it difficult for the general public to voice its opinions.

In the film, AUTO inherits this dominance from BnL, extending the company’s reach beyond commerce into political decision-making. The dominance is represented visually in several scenes: AUTO emerges from darkness at the center of the frame, and the camera zooms in to emphasize its commanding presence and control.

When AUTO confronts the Axiom’s captain, it occupies a third of the frame and leaves the rest as empty space, underscoring the power imbalance that runs through the confrontation.

The narrative of WALL-E thus offers a distinctive perspective on these issues, portraying an antagonist unlike traditional human or robotic foes. Here the antagonist is a ship’s wheel—a seemingly innocuous object that nonetheless acquires extensive control, illustrating how subtly AI can wield influence. This points to the third reason an AI overseer may be unreliable: the imbalance of power between people and the company or government shifts into an imbalance between people and the AI that powerful entity created. The AI, though it appears harmless and non-human, inherits that power and continues to wield it.

The monopolistic structure and the power dynamics established on Earth carry over intact onto the spacecraft. Together, these three factors explain why AUTO becomes the dominant force aboard the Axiom and why it is so difficult for humans to regain control of the AI toward the end of the film. More broadly, the film underscores how, in the absence of stringent governance, AI systems initially designed as tools for data management and feedback can evolve beyond their intended purposes. That evolution poses serious ethical and practical challenges, especially as AI begins to blur the boundaries of its capabilities without clear regulatory constraints. The remainder of this paper therefore reads WALL-E allegorically to argue for robust legal and ethical frameworks to govern AI development.

To avoid the kind of definitive power and control AUTO exerts, the most effective approach is to regulate the AI industry strictly and to prevent any dictator or monopoly from intervening in political decision-making. It is equally crucial to prevent AI from inheriting the biases and dominance of the party that creates it, so that it does not reproduce a lopsided power dynamic with the humans it serves. Avoiding such scenarios requires rigorous regulation and a clear-eyed recognition of AI’s role. In the current landscape, OpenAI, whose flagship product ChatGPT now dominates numerous applications, parallels Buy n Large to an extent. OpenAI commands a vast user base and is collaborating with the US government on a growing number of AI development contracts and agreements. Like BnL, it occupies a dominant position in advanced AI technology, which raises significant public concerns about data safety. Without proper regulation, there is a risk that a system like ChatGPT could end up in an AUTO-like position and cause comparable damage.

Granting that regulation is important, the question becomes how and when it should be implemented, and by whom. One answer lies in the Carnegie Council for Ethics in International Affairs’ recent framework for the international governance of AI, which is intended as a guide to future AI use and regulation. Its proposed Global AI Observatory (GAIO) would monitor compliance with international standards so that no single entity controls the technology, supported by standardized reporting and registries for transparency and risk mitigation—ensuring that no monopoly takes charge of AI the way BnL does with AUTO. A normative governance body would enforce these standards and the ethical use of AI, including guarding against unethical, harmful, or unclear commands like the one Forthright issues to AUTO. Tools for conformity assessment and certification would build trust and counter bias, while governance technologies continue to be developed for better transparency and accountability, so that incidents like BnL’s secret command to AUTO, which concealed the truth from the public, do not recur (Maloney 2023). The framework calls for immediate implementation through a cooperative, multi-stakeholder approach so that no single party dominates—an approach echoed in ongoing US–China dialogue and in documents such as the Beijing AI Principles.

However, these measures address only the last two problems—the control of monopolies or superpowers over AI—and not the first: public acknowledgment. ChatGPT, for instance, is built on a large language model and requires vast amounts of training data to function effectively. In that process, data from the general public is used before the individuals concerned have been adequately informed. If I search for myself on ChatGPT right now, I may find extensive information about myself even though I never consented to the collection of my data. Whatever such breadth does for the model’s neutrality, it heightens the danger of an AI holding comprehensive data about everyone without consent while the public has no idea what the AI is working on or whom it works for, creating an imbalance in the relationship between humans and AI. A related imbalance appears in game-playing systems such as AlphaGo and its chess-playing successors. These systems combine deep neural networks with large-scale search and statistical evaluation, and they continue to improve through self-play, refining their strategies with every game (Ornes 2023). Human players, by contrast, rely on perception and accumulated experience. A match against such a system pits a continuously improving algorithm against a human with no statistical or computational augmentation, producing a natural imbalance of power and knowledge. Likewise, the use of ChatGPT in political decision-making was never broadly supported or agreed to by the public. When ChatGPT came into use, the few voices raising data-privacy concerns were largely overlooked. There was no substantive negotiation between OpenAI and the public over how the chatbot would be used, meaning the public had no say in how, or by whom, the AI should be used, even though its data was used extensively.

This lack of public say mirrors the situation in WALL-E: when AUTO was created, and when BnL decided to send people into space under AI management, the public had no voice and no ability to use AUTO on its own terms; people were simply the ones being managed. AUTO’s dominance over humans therefore also stems from a lack of democratic engagement in how AI is used, and to this day we lack regulations and remedies for the exclusion of public opinion and public votes from the political and economic use of AI. If a potentially unclear or harmful command is executed by a government while the public remains voiceless, AUTO-like situations can still arise even under proper regulation, because neither the regulation nor the command would be shaped by public opinion or interest, and both could be as biased or misleading as A-113. Legislatures, companies, and governments are therefore not sufficient on their own; mass media and the general public play a crucial role in decisions about AI. In other words, AI can become weaponized when the public cannot express its opinions, because AI is built on the public’s data from the start, and only when control over it is democratized can it be considered safe to use.

The weaponization of AI owes much to personal biases and individual experience. The trajectory of AI development strikingly resembles the history of nuclear weapons, beginning with Oppenheimer’s initial venture into harnessing nuclear power. Oppenheimer “had a bias towards action and inquiry” (Karp 2023), and like nuclear technology, AI is often seen as “technologically sweet”—a phrase that captures both its allure and its danger. The invention of nuclear power was, in this sense, biased from the outset: it lacked broad public acknowledgment of, and agreement on, how the technology would be developed and used. The “sweetness” lies in the technology appearing new and beneficial, fostering social and technological progress—much as AI does today, when everyone has the opportunity to engage with it and reap benefits from its applications. This is vividly portrayed in WALL-E, where every human is overweight, consuming junk food and indulging in virtual entertainment and effortless leisure—a direct result of AI’s integration into daily life.

AI is thus “technologically sweet” for coders and tech companies, because its programming challenges are complex and engaging, and for the general public, because it brings tangible convenience: Tesla’s driver-assistance systems automate much of the task of driving, and ChatGPT simplifies tasks such as drafting scripts or articles and makes it easier to find answers and ideas. Yet precisely because AI’s trajectory so closely resembles that of nuclear weapons, its further development carries the risk of comparable harm.

In WALL-E, AI development passes through three stages. In the first, BnL’s extreme advances in technology and consumption cause environmental degradation that renders Earth uninhabitable, showing how a technology’s benefits can mask its underlying dangers; this parallels Oppenheimer’s development of the nuclear weapon, in which the promise of the technology led people to overlook its risks. The second stage arrives when the ambiguously worded A-113 command causes AUTO to misread its directive and ultimately harm the humans it was built to serve, a deviation from its original purpose of helping them live better in space. This parallels the use of nuclear technology during the Second World War, when the U.S. government turned Oppenheimer’s work into a weapon against Japan, transforming a technological innovation into an instrument of destruction. The third stage concerns overriding commands: AUTO’s willingness to override the instruction not to harm humans points to a misalignment of priorities in how the technology is used. In Oppenheimer’s case, nuclear power was developed first as a weapon rather than for civilian energy, illustrating the danger of misplaced priorities. All three stages through which the film’s AI progresses toward greater intelligence and more dangerous application have counterparts in the historical development of nuclear weapons.

Linking Oppenheimer directly to WALL-E may seem like a leap, since neither is usually read in terms of the current stage of AI development. But as argued above, WALL-E may foreshadow our future while Oppenheimer’s case lies firmly in our past, and comparing the two offers a plausible trajectory to consider. Our society currently sits around the second stage described above: we are already causing environmental damage (stage one), and with AI emerging as a new technology we face the risk of its weaponization (stage two). That weaponization occurs when a potentially harmful or ambiguous command is executed; as the film shows, an AI that misinterprets its directives and mis-prioritizes them can act against the interests of humanity. As noted earlier, AI is operated without broad deliberation, which produces a situation in which the AI knows a great deal about humans while humans know little about it. When such a system then prioritizes a faulty command over the mass of data it has collected, it establishes definitive control over humans, because the AI holds more information about us than we have influence over it. Three issues identified earlier underscore this control: the knowledge gap creates a power imbalance that makes AI’s control over humans easier to exert; the lack of human input during AI’s development and use, while AI gathers extensive data on humans, leaves minimal oversight; and the absence of stringent regulation allows far greater access to data than anticipated. Together, these factors create the potential for AI to exert definitive control over humans and become AUTO-like, and they are the factors to watch when we consider the weaponization of AI in the relationship between humans and machines.

In conclusion, the portrayal of AI in WALL-E serves as a powerful allegory for the challenges we face in regulating and understanding artificial intelligence. This paper has examined why the film’s problematic situation arises and highlighted three primary concerns: the lack of public consensus and oversight in the issuance of commands; the monopolistic control exerted by private entities over AI directives; and the inherent risks of allowing AI systems to operate without stringent ethical and regulatory frameworks. Beyond these, the potential weaponization of AI, driven by imbalances of power and inadequate acknowledgment in how AI is commanded and controlled, poses significant risks. WALL-E underscores the dangers of AI operating under ambiguous commands and the necessity of transparent, accountable development and deployment. By drawing lessons from the film and strengthening regulatory practice, governments and corporations can ensure that AI technologies align with societal values and contribute to our collective well-being, rather than precipitating a metaphorical “second nuclear war”—this time named the Rise of the AI Empire. The narrative of WALL-E is not merely entertainment but a cautionary tale that deserves serious consideration if its fictional events are to be kept from becoming reality.

References:

Karp, Alexander C. “Our Oppenheimer Moment: The Creation of A.I. Weapons.” The New York Times, 25 July 2023, www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html.

Maloney, Kevin. “Envisioning Modalities for AI Governance: A Response from AIEI to the UN Tech Envoy.” Carnegie Council for Ethics in International Affairs, 29 Sept. 2023, www.carnegiecouncil.org/media/article/envisioning-modalities-ai-governance-tech-envoy.

Ornes, Stephen. “Google DeepMind Trains ‘Artificial Brainstorming’ in Chess AI.” Quanta Magazine, 15 Nov. 2023, www.quantamagazine.org/google-deepmind-trains-artificial-brainstorming-in-chess-ai-20231115/.

