[Embedded media: 1999 Japanese commercial for a mobile internet service]

Can you truly opt out of AI data collection?

Last week I came across a guide published by Wired on how to opt out of having my personal data used to train generative AI programs. In the guide, the authors included a list of industry-dominating tech companies, all of which have elaborate opt-out processes, including Adobe, Amazon, Google Gemini, Grammarly, HubSpot, OpenAI (ChatGPT), Perplexity, Quora, Rev, Slack, Squarespace, Substack, Tumblr, and WordPress. The instructions, most of which were distributed by the tech companies themselves, generally fell into three types: emailing the company directly because there was no opt-out setting, finding the opt-out setting for providers that offered one, and being out of luck with companies that vaguely offered an option but didn't truly opt users out.

The guide was helpful because it explained why it's a problem that generative AI companies use our personal data to run their programs, and what options we have to claim some sort of data sovereignty. The authors were also upfront about the guide's limitations: they couldn't list everything, since many tech companies aren't clear about the data they've collected from the web, which the authors compared to a black box. This left me feeling frustrated, but not surprised, at the reality that we cannot protect our internet privacy when reckless AI developers are indifferent to the demands of their users. The guide also failed to mention many of the tech companies I use daily, such as Instagram, TikTok, Apple, and X.

It shocks me that these AI programs have exploded in growth and popularity at the expense of our privacy, yet there has not been a major outcry or push to regulate the tech companies more strictly. Recently, there has been some movement in the legal field, with laws introduced in Europe and the creation of a United Nations advisory body dedicated to AI governance. However, it feels far too late to take precautionary measures on an individual level when we're this far into the development of AI and hundreds of these predatory companies already exist. Moreover, these companies have already accessed much of our data, with or without our knowledge.

I'm also reflecting on how much convenience these AI technologies have brought to my life and others'. For instance, I use ChatGPT every single day for all sorts of tasks, such as math calculations, cooking recipes, and teaching myself to code. I've also started using another generative AI tool called Julius AI, since it lets me upload entire Excel documents of data and functions as a kind of "data analyst."

But despite enjoying the services these AI companies provide, that isn't reason enough to let them disregard people's privacy concerns and exploit our data to run their programs. They still have the opportunity to correct these issues, be transparent about their poor handling of users' personal information, and take better measures to credit original work if they're going to use it in their technologies. Ultimately, it doesn't make sense for Big Tech to keep collecting our data if they won't address the issues at hand and pause the accelerating development of their programs.

References:

Burgess, M., & Rogers, R. (2024, April 10). How to Stop Your Data From Being Used to Train AI. Wired. https://www.wired.com/story/how-to-stop-your-data-from-being-used-to-train-ai/

Meaker, M. (2024, March 6). Europe is breaking open the Empires of Big Tech. Wired. https://www.wired.com/story/europe-dma-breaking-open-big-tech/ 

Mukherjee, S. (2023, October 27). United Nations creates advisory body to address AI Governance. Reuters. https://www.reuters.com/technology/united-nations-creates-advisory-body-address-ai-governance-2023-10-26/ 
