Introduction
Over the past decade, convolutional neural networks, graph neural networks, and clustering methods such as K-means have become central elements in the evolution of AI and machine learning. Engaging with complex machine learning models over that decade has offered profound insight into the contrast between idealized notions of artificial intelligence and its practical realities. One framing, still enlightening today, highlights the intricate dynamics within the field: statisticians and data scientists, leveraging big data and machine learning models, essentially turn computers into tools, into machine laborers. Conversely, researchers and computer engineers, dedicated to refining computational models, become enablers of advanced coding, positioning themselves as laborers for the new machine learning models. Beneath this layer, data encoders (humans, not autoencoders) are tasked with the manual labor of labeling vast amounts of data for supervised automatic classification, often under challenging working conditions. This situation underscores a critical ethical dimension of AI that displaces the familiar concerns about privacy and machine rights, directing attention instead to the significant labor and time demanded of human encoders.
The Exploited Labor Behind Artificial Intelligence
In The Exploited Labor Behind Artificial Intelligence (Williams, 2022), published by NOEMA, Williams elucidates that the "superintelligent machine" commonly envisioned by the general public is significantly detached from the current realities of artificial intelligence (AI). The bulk of work in machine learning, as it stands, centers on basic tasks such as classification and prediction, with binary classification predominating. Despite discussions of AI possessing emotions, the reality is that we often employ the simplest logistic regression models for tasks that a human, with a mere ten hours of training, could outperform.
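To make "the simplest logistic regression" concrete, here is a minimal sketch of such a binary classifier in Python with scikit-learn. The bundled breast-cancer dataset and the 80/20 split are my own illustrative choices, not anything described in Williams's article.

```python
# A minimal sketch of the kind of "simple" binary classifier discussed above.
# Dataset and model settings here are illustrative assumptions, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small tabular dataset: each row is a numeric feature vector, each label is 0 or 1.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Logistic regression predicts a probability and thresholds it at 0.5 to pick a class.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```

The entire "intelligence" here is a weighted sum of input features passed through a threshold, which is exactly the gap between public imagination and everyday practice that Williams describes.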
Williams underscores that AI technology is still in its primitive stages: when an AI system recognizes or generates an image, it relies on algorithms that translate the visual data into simple numerical representations, since a computer can hardly work with an image until its dimensionality has been reduced. For instance, classmates in my machine learning class undertook a project using Graph Neural Networks to detect lung infections; the pipeline essentially converted image data into numerical values that could be analyzed statistically and summarized in bar graphs, simplifying the data to a great degree.
liuximeng2. (n.d.). liuximeng2/Chest_xRay [Source code repository]. GitHub. https://github.com/liuximeng2/Chest_xRay?tab=readme-ov-file
Therefore, the current practices in machine learning and AI are largely aimed at reducing the complexity of datasets to a form a machine can digest, a preliminary phase of model development that is far removed from attributing human-like emotions such as love to computers.
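As a rough sketch of that reduction step (my own illustration, not the actual pipeline from the Chest_xRay repository cited above), an image is typically flattened into a long vector of pixel intensities and then compressed, for instance with principal component analysis. The 64x64 image size and 16 retained components below are assumed values for illustration.

```python
# Sketch of how an image becomes "simple numerical representations":
# flatten the pixel grid into a vector, then compress it with PCA.
# Illustrative example only; not the pipeline used in the cited repository.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for a batch of 100 grayscale images of size 64x64 pixels.
images = rng.random((100, 64, 64))

# Step 1: flatten each 64x64 image into a 4096-dimensional vector of pixel values.
flattened = images.reshape(len(images), -1)   # shape: (100, 4096)

# Step 2: reduce those 4096 dimensions down to 16 numbers per image.
pca = PCA(n_components=16)
reduced = pca.fit_transform(flattened)        # shape: (100, 16)

print(flattened.shape, "->", reduced.shape)
```

Nothing in this process "understands" the image; it only rewrites the picture as a shorter list of numbers that a statistical model can consume.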
This phase is characterized by extensive data transformation, labeling, and decoding (dimension reduction), all of which demand significant human labor. The pressing issue, as the discussion above highlights, is not the ethics of computer intelligence but the ethical treatment of the labor force involved in data processing.
The New Dark Age
This concern is further compounded by the potential for bias in the decoding process, as James Bridle notes. Even "minor biases in data samples and labeling" can lead to inaccurate classifications, perpetuating biased outcomes in AI systems trained on such data. Ensuring unbiased data requires a considerable amount of repetitive work by many individuals to label data accurately. In medical AI research, for example, at least three professionals may be needed to classify each signal in order to keep the labeling error in machine-led supervised classification low. This demand for labor far exceeds that of traditional practice, in which a single doctor could diagnose from a limited number of images. The expanded workforce is often compensated inadequately and faces low pay, poor health conditions, and overwork. The situation is exacerbated by the fact that the combined salaries of these workers may not even equal that of a single doctor, even though they analyze a far larger volume of images and, as Bridle points out, contend with a greater potential for encoding bias arising from minor variations in the data. This situation points to a critical need to address the working conditions and ethical treatment of the workforce involved in AI data processing, shifting the focus from hypothetical future ethical dilemmas of AI to the immediate ethical concerns of labor exploitation in the field.
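To see why the labeling workload multiplies, consider a minimal sketch of how three professionals' labels might be merged into a single "ground truth" by majority vote. The example labels and the voting rule are my own assumptions for illustration, not a procedure described by Bridle.

```python
# Sketch of majority-vote label aggregation across three human annotators.
# The labels and the simple voting rule are illustrative assumptions.
from collections import Counter

# Each inner list holds three professionals' labels for one image (1 = infection, 0 = healthy).
annotations = [
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [1, 1, 1],
]

def majority_vote(labels):
    """Return the most common label; every image needs all three opinions first."""
    return Counter(labels).most_common(1)[0][0]

ground_truth = [majority_vote(labels) for labels in annotations]
print(ground_truth)  # [1, 0, 1, 1]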