Overreading the Future

Upon reading Nick Bostrom and Carl Shulman’s paper, “Propositions Concerning Digital Minds and Society,” I’ve become aware of our current “attempts” to prepare for a post-AGI society. It’s an unusual sensation to live in a time when such ideas, like granting rights to “beings” we humans created, are being discussed and contemplated. One can’t help but wonder: will some of the points raised in the essay become relevant and be implemented within the next decade, or are they more likely to be realized further into the future, perhaps fifty years from now?

While the essay presents some excellent ideas, others feel overdue by about a decade. We’re already witnessing problems in today’s world that result from the unregulated development of AI, affecting everything from employment to military affairs. Moreover, the essay takes on an eerie tone when it suggests attributing rights to AI, rights uncannily similar to human rights.

Nevertheless, Bostrom and Shulman present many intriguing and insightful ideas that may or may not hold relevance in the future. They adeptly outline and emphasize the potential havoc that AGI could unleash. For instance:

There are several ways in which mental modification or replacement could become easier in an era of advanced AI technology, with or without the subject’s consent:

 ◦ Humans might be easily persuadable by powerful AIs (or other humans wielding such AIs).

 ◦ Advanced neurological technologies might become available that make it possible to exert relatively fine-grained direct control of the human motivation system.

 ◦ Digital minds could be subject to electronic interventions that can directly reprogram their goals and reward systems.

 ◦ Exact copies of digital minds could enable experiments to identify psychological vulnerabilities and to perfect attacks which could then be applied to an entire copy clan.

As we embrace increasing technological integration in our lives and venture deeper into the digital realm, it’s undoubtedly appealing to enjoy the advantages of advanced technology. However, this progression also creates opportunities for security breaches and attacks. Granting AI the capability to access and manipulate our thoughts, capabilities that could then be deployed against us, forebodes a grim outcome for humanity. Safeguards alone cannot shield us against such malevolent exploits. We must also consider the role of human intervention: how individuals might wield these technologies as tools of terrorism, and how readily we could weaponize them.

The essay notably lacks concrete steps to mitigate the concerns above. While it advocates for action, it falls short of offering tangible solutions. These issues are admittedly difficult to grapple with, as effective strategies to prevent such outcomes remain elusive and speculative. I don’t fault the essay itself for this, but rather observe that the topic is by nature highly hypothetical, verging on an excess of imaginative conjecture.

In conclusion, while I maintain a degree of skepticism toward the paper as a whole, engaging with Bostrom and Shulman’s ideas has been a thought-provoking exercise in exploring what issues may arise in the future. However, I am doubtful that this paper will ever reach the desks of influential societal figures. And even if it does, it’s uncertain whether it would significantly affect their decisions or the trajectory of AI development. In the way the world operates, gaining advantages over adversaries and serving one’s own interests supersede considerations of ethics and rights. Even if one country decides to regulate AI, there’s no guarantee that other nations will follow suit. The need for universal cooperation in regulating AI adds a further layer of complexity to the challenge.

