AI Digital Divide: Tech Literacy

Chris Ume, co-founder of Metaphysic.ai and creator behind Deepfake Tom Cruise. (Source: Kris Van Exel)

In light of our Mission: Impossible screening this week, we did a deep dive on the Hollywood superstar Tom Cruise. Our discussions covered his prolonged involvement in the Church of Scientology and his extreme on-set stunts, and briefly touched on his connection to the discourse around deepfake AI. Since we are all aware of the serious issues AI tools like deepfakes can cause, I figured it’d be useful to explore this theme further.

As we head into the 2024 presidential election, U.S. security officials report preparing for a barrage of swift influence operations orchestrated by all kinds of adversaries. Deepfake AI will certainly play a role in these campaigns, and it’s safe to say there is already public awareness of this, considering the meme-ification of the 2016 election and the deepfake robocalls impersonating Biden.

However, public knowledge about misleading AI does not mean that everyone is informed, or that other parts of the world are aware of its impact. I also feel that AI-generated disinformation poses greater challenges for individuals grappling with the digital divide and for countries that lack advanced detection technologies. I was reminded of last year’s presidential election in Taiwan, where intelligence agencies warned about China’s attempts to sway young Taiwanese voters through several strategies, which no doubt included deepfake content farming. Concurrently, a deepfake video circulated of a U.S. congressman campaigning for Taiwan’s presidential candidate.

There’s not much at our disposal to avoid fake news and deceitful deepfakes online, except relying on social media companies’ detection algorithms and clear disclosures. And it’s no surprise that these tools are often used for disinformation during high-profile elections around the world. But beyond this all too familiar and gloomy reality, fears of AI-driven disinformation are felt more acutely in less technologically advanced countries than in the U.S. and parts of Europe. These disparities were demonstrated in a study conducted by Ipsos:

(Source: Politico)

The overarching issue here is evidently the digital divide between the global North and South—a pattern that has been historically observed in technological advancements. This is precisely why it’s imperative to avoid deepening this divide and ensure that the billions of people living in developing countries are included in conversations about digital literacy as AI continues to accelerate. Amara’s Law suggests that while society tends to overestimate the immediate effects of a new technology, we often underestimate its potential to fundamentally change our collective experiences in the years to come, and AI will likely follow the same pattern.

As the semester comes to a close, I’ve been putting my AI detection capabilities into practice since our feeds are riddled with deepfakes and AI content bots. I stumbled upon this cool site by MIT to put myself to the test.

