A vulnerability analysis: Theorising the impact of artificial intelligence decision-making processes on individuals, society and human diversity from a social justice perspective

by Tanya Krupiy


3. Thinking about AI decision-making processes as an institution that brings about transformative effects

The employment of AI decision-making processes is part of a larger trend of digital technologies transforming society.170 There is a view that digital technologies are ushering in a Fourth Industrial Revolution.171 Vulnerability theory is a fruitful lens for better understanding AI decision-making processes and the values these processes enact. It sheds light on some of the institutional and societal changes the employment of AI decision-making processes introduces from the perspective of social justice. As a result, one gains insight into how the cumulative use of automated decision-making processes is likely to impact on individuals and society at large.

Section 3.1 will investigate how the employment of AI decision-making processes impacts on individuals by changing the relationships individuals have with each other and with institutions. Marlies van Eck studied how Dutch administrative bodies use systems172 that combine information from different government databases173 and autonomously determine174 whether an individual is entitled to payment.175 These systems do not use big data,176 lack self-learning capabilities177 and execute procedures without a discretionary decision-making element.178 She posits that the use of automated decision-making processes to assess whether individuals are entitled to receive a payment from the state changes the relationship citizens have with the administrative body.179 Furthermore, it alters the relationships between administrative bodies.180 This argument should be expanded and modified to fit the context of the AI decision-making process. It will be put forward that the AI decision-making process should be conceived of as an institution. This institution changes how the individual is embedded in a set of relationships with other individuals and with institutions. The sets of relationships the operation of AI decision-making processes gives rise to are complex and interlocking. The employment of AI decision-making processes may be seen as bringing about a social order181 and as modifying the societal architecture. Numerous transformations that AI decision-making processes bring about, and which are detrimental to human diversity, will be discussed. Section 3.2 will assess what values the AI decision-making process enacts and how it bears on human diversity. It will engage with the concerns Mireille Hildebrandt and Karen Yeung raised about how the collection and analysis of data from many sources will impact on humanity182 and social life.183

3.1. AI decision-making processes reconfigure human, institutional and social relationships

The present analysis is confined to the employment of AI decision-making processes to create a representation of individuals, to estimate with a margin of error their performance and to reach a conclusion concerning an individual’s entitlement to a positive decision. Since the attention is on selecting students for admission to a university, the enquiry excludes many applications of the process, such as the use of AI processes to diagnose diseases184 and to allocate public service resources.185 The analysis centres on how the use of AI decision-making processes impacts on the individuals who are most affected by being situated in unequal relationships. The findings can be extrapolated beyond the university admissions case study to the employment context and to other contexts where a holistic assessment of the applicant and the exercise of discretion facilitate the attainment of socially just outcomes. The vulnerability theory lens makes it possible to identify some of the ways in which the use of AI decision-making processes will affect the relationships of individuals with each other and with societal institutions. At the core of vulnerability theory is a shift of focus from the individual and the individual’s autonomy to how the state organises relationships and societal institutions.186 Under the vulnerability theory framework, the AI decision-making process should be thought of as an institution. Technology in the course of its operation constitutes relationships between citizens, devices and infrastructures.187 These relationships can be thought of as a network.188 One of the relationships the use of the AI decision-making process gives rise to is that between the applicant and the university. The AI decision-making process mediates this relationship by allocating candidates to positive and negative decision quadrants. It determines what combinations of characteristics entitle a candidate to be allocated a place at a university and channels how candidates may communicate their qualities to the university.

The employment of the AI decision-making process gives rise to a relationship of subjugation between the applicant and the university through a process of erasure. Mathematical processes model the world in black-and-white terms.189 The process of representing an individual as a cluster of quantifiable characteristics using a vector190 pushes individuals into categories.191 When the AI decision-making process evaluates the individual against a template of an optimum combination of characteristics, it cannot account for the individuality of people, the uniqueness of their experiences or the full scope of the contribution these qualities enable them to make to organisations and society. The AI decision-making process withholds resources from individuals who either have difficulty fitting the algorithmically constructed profile of a good candidate or whose aptitudes algorithmic processes cannot detect.
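To make this mechanism concrete, the following is a minimal sketch of such a template-matching process. The feature schema, template, weights and threshold are all hypothetical illustrations rather than a description of any deployed system.

```python
import numpy as np

# Hypothetical feature schema: anything about an applicant that does
# not fit one of these slots is invisible to the model.
FEATURES = ["exam_grade", "extracurricular_count", "essay_score"]

# Hypothetical template of the "optimum" candidate and per-feature weights.
TEMPLATE = np.array([95.0, 5.0, 0.9])
WEIGHTS = np.array([0.6, 0.1, 0.3])

def admit(applicant: dict, threshold: float = 0.8) -> bool:
    """Reduce the applicant to a vector, score its weighted closeness to
    the template and allocate the applicant to a decision quadrant."""
    vector = np.array([applicant.get(f, 0.0) for f in FEATURES])
    distance = np.sqrt(np.sum(WEIGHTS * (vector - TEMPLATE) ** 2))
    score = 1.0 / (1.0 + distance)  # squash closeness into (0, 1]
    return score >= threshold

# A caring responsibility, a history of activism or a singular life story
# cannot raise the score: the schema has no slot for them.
print(admit({"exam_grade": 94.0, "extracurricular_count": 4, "essay_score": 0.85}))
```

Whatever the applicant cannot express through the schema's slots simply cannot influence the outcome, which is the erasure described above.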

The proposed use of the AI decision-making process to score student essays as part of the university admissions process192 illustrates that the analysis of data using algorithmic processes provides an incomplete representation of the person.193 This representation fails to capture important information about individuals and thereby impedes the recognition of their aptitudes. This problem is deeper than technologies misreading or failing to read certain bodies.194 Everyone is affected. The lyrics of the Queen song ‘I want to break free’,195 performed by Freddie Mercury, are chosen as a case study because the song is a text in which the singer both expresses his identity and communicates with society. In this song Mercury talks about his desire to ‘break free’ from society’s lies, about being in love for the first time and about the need to ‘make it on my own.’196 The reader needs to know the context behind the text in order to understand the communication. The context pertains to society using categories to designate sexual identity197 and to society historically stigmatising homosexuality.198 Furthermore, the capacity for empathy is a prerequisite for understanding the entire communication in the text of the song, namely Mercury’s feelings of anguish and resentment. Mercury talks about his love for his partner and the need to ‘break free’199 as an allusion to the decision to begin a homosexual relationship notwithstanding society’s disapprobation of such conduct. The song is an expression of Mercury’s individuality because it encapsulates his feelings, opinions, life experiences and struggles.

How the programmer designs an AI decision-making process determines what words the system designates as relevant for the analysis, how it constructs links between the words200 and how it scores a particular combination of words. The AI decision-making process evaluates the similarity between words based on their distance in geometrical space.201 Because it is difficult to use a mathematical model to represent context,202 the AI decision-making process cannot link the text to the societal context underlying the communication.203 Consequently, it lacks the capacity to derive meaning from the text of Queen’s song. In the course of mapping the song as a set of linkages between disparate words204 the AI decision-making process excises from the text what the program does not allow it to detect. The AI decision-making process erases meaning, human expression, individuality and the narration of lived experiences. It precludes individuals from communicating the diversity of their experiences in essays and from explaining what contributions they can make to society if admitted to the university programme. For instance, the AI decision-making process will not detect that the song ‘I want to break free’205 demonstrates the writer’s creativity and capability to advocate for a more inclusive society. This concern may become attenuated if computer scientists find a way to imbue AI decision-making processes with human qualities, such as the capacity for empathy,206 abstract thought and the capacity to link emotions to the content of a communication. These qualities facilitate the ability of individuals to understand the meaning behind texts in which individuals communicate their personal experiences and needs. The real question is whether society wishes to have synthetic persons displace living decision-makers.
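The geometric notion of similarity can be illustrated with a toy sketch. The three-dimensional vectors below are invented for illustration; real systems learn embeddings with hundreds of dimensions from word co-occurrence statistics.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity as the cosine of the angle between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy embeddings; real models learn these from co-occurrence data.
embeddings = {
    "free":   np.array([0.9, 0.1, 0.3]),
    "escape": np.array([0.8, 0.2, 0.4]),
    "love":   np.array([0.1, 0.9, 0.2]),
}

print(cosine_similarity(embeddings["free"], embeddings["escape"]))  # high
print(cosine_similarity(embeddings["free"], embeddings["love"]))    # lower
# The geometry captures which words tend to co-occur, not what
# 'break free' meant to a man writing under social stigma.
```

Nothing in this arithmetic connects ‘free’ to the social history that gives the lyric its meaning; the context lives outside the vector space.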

Another way in which the use of the AI decision-making process arguably subjugates individuals is through pushing them to adopt particular patterns of behaviour.207 According to Mireille Hildebrandt, individuals will adjust their behaviour in anticipation of how algorithmic processes operate in order to improve their chances of receiving a favourable decision.208 Individuals will write essays for the purpose of university admissions in a manner that reflects how the AI decision-making process carries out analysis on the data and that increases their chances of receiving a high score. This inhibits self-expression and the ability of individuals to communicate holistically about how they can make a contribution to society if selected for the university programme. Of course, some university admissions officers create barriers for candidates by discriminating against them on the basis of their identities.209 However, universities can implement measures to reduce explicit and implicit bias210 in order to facilitate a holistic assessment of candidates during the decision-making process. The problem with employing fully autonomous AI decision-making processes is that there is no human being to read the essay and other application materials holistically, to detect how the decision-making process may disadvantage a candidate and to report a problem. Human oversight may not be a practicable solution: the use of AI decision-making processes is redundant when a human decision-maker reads and carefully considers the application materials.

The way in which the AI decision-making process creates a model of human diversity reinforces existing systems of classification.211 This fails to capture individual identities212 and institutes a relationship of subordination. Technological design can reinforce hierarchies based on race,213 gender214 and colonial history215 even when on its face the design purports to achieve fair outcomes. IBM proposes to assign individuals who have historically experienced discrimination to groups so as to allocate additional points to them during the decision-making process to ensure fairness.216 What this approach misses is the problematic nature of designating individuals into categories using a system of classification. Queer legal theory rejects the use of categories to designate sexual orientation217 as a means to end the subordination of homosexual individuals to heterosexual individuals.218 Queen’s song ‘I want to break free’219 illustrates that homosexual individuals are just like the rest of humanity in their experience of love and quest for relationships. The use of AI decision-making processes subjugates groups by perpetuating a hierarchical system of classification and by shifting attention away from how the construction of societal relationships disadvantages individuals.
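The mechanics of the scheme attributed to IBM can be sketched in a few lines; the group labels and point values below are hypothetical. The sketch makes the objection visible: the correction operates only by first sorting applicants into fixed categories.

```python
# A minimal sketch of score adjustment by group membership.
# Group labels and point values are hypothetical illustrations.
BONUS_POINTS = {
    "historically_disadvantaged_group_a": 5.0,
    "historically_disadvantaged_group_b": 3.0,
}

def adjusted_score(raw_score: float, group_labels: list) -> float:
    """Add compensatory points for each recognised group label.
    Note the precondition: the applicant must already have been
    classified into one of the scheme's categories to benefit."""
    return raw_score + sum(BONUS_POINTS.get(g, 0.0) for g in group_labels)

# An applicant whose disadvantage does not map onto a recognised
# category, or who declines to disclose it, receives no adjustment.
print(adjusted_score(70.0, ["historically_disadvantaged_group_a"]))  # 75.0
print(adjusted_score(70.0, []))                                      # 70.0
```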

If subordination is to be ended, then a vulnerability analysis is preferable. A related concern with achieving affirmative action policies using AI decision-making processes can be gleaned from an observation of Sandra Wachter. Wachter explains that there are reasons why individuals may wish not to disclose that they have characteristics protected by the prohibition of discrimination.220 This means that initiatives by companies such as IBM to confer additional points on individuals with protected characteristics are likely to be ineffective. The focus should be on the vulnerabilities individuals experience as a result of being human221 and on how the state can restructure social institutions222 to ensure that all individuals have equal access to resources crucial to their flourishing.223 This approach shifts the analysis to the root causes of inequality and to what steps the state should take to ensure that individuals are equally situated in relationships. Furthermore, the focus should be on developing frameworks that protect individuals from discrimination without individuals having to disclose that they have characteristics protected by non-discrimination statutes. Wachter, for instance, proposes that equality protection can be achieved without individuals disclosing their protected characteristics by prohibiting differential treatment on the grounds of there being an association between an individual and a group that has protected characteristics.224

The use of AI decision-making processes reconfigures relationships between individuals. Tobias Blanke maintains that geometric rationality underpins algorithmic decisions.225 Algorithmic processes measure social relations.226 The distance between data points representing individuals in a geometrical space designates the degree of similarity between individuals.227 Blanke’s proposition should be extended to argue that the employment of AI decision-making processes brings about a relationship between applicants. The decisions the computer scientist makes when gathering the data and building the AI decision-making infrastructure determine the contours of this relationship. The computer scientist’s choices determine what similarities exist between individuals,228 which resemblances become amplified and which similarities are attenuated. The nature of the relationship the computer scientist constructs between applicants depends on which personal attributes of the students the computer scientist includes in the decision-making process and what weight the computer scientist attaches to those attributes. The more weight the computer scientist places on the students’ socioeconomic status, the less the relationship between students is characterised by individual autonomy and the stronger the elements of interdependence and solidarity within it become.
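A short sketch, with invented applicants and weights, shows that the weighting decision, rather than anything intrinsic to the applicants, determines whom the model treats as similar:

```python
import numpy as np

def weighted_distance(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> float:
    """Euclidean distance after per-feature weighting: the weights decide
    which resemblances are amplified and which are attenuated."""
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

# Hypothetical applicants as [exam_grade, socioeconomic_index] vectors.
alice = np.array([90.0, 2.0])   # high grades, low-income background
bob   = np.array([90.0, 9.0])   # same grades, affluent background
carol = np.array([70.0, 2.0])   # lower grades, low-income background

grades_only   = np.array([1.0, 0.0])
context_heavy = np.array([0.01, 0.99])

# Under a grades-only weighting, Alice resembles Bob, not Carol ...
print(weighted_distance(alice, bob, grades_only),
      weighted_distance(alice, carol, grades_only))    # 0.0 vs 20.0
# ... while weighting socioeconomic context groups Alice with Carol.
print(weighted_distance(alice, bob, context_heavy),
      weighted_distance(alice, carol, context_heavy))  # ~7.0 vs 2.0
```

The same three people are partitioned in opposite ways depending on a single design choice made before any applicant is seen.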

To illustrate, if the computer scientist models the students by only looking at their examination grades, the individuals who have lower examination grades due to experiencing social barriers will be grouped together. Students, such as young mothers who lack sufficient support, have lower high school grades.229 The AI decision-making process will group many young mothers together because it focuses on detecting a similarity in the past performance on examinations. Since the AI decision-making process will designate these mothers as being poor candidates for university admissions, their unequal position within social institutions becomes solidified. These women will have diminished employment opportunities as a result of lacking a university degree. The use of the AI decision-making process weakens the relationship of solidarity, connection and interdependence between young mothers with insufficient support and students with no care commitments. In societies that recognise the interdependence of individuals and the dependence of individuals on societal structures,230 the government has a duty to intervene to redress unjust institutional relationships.231 Its duty is to strengthen the resilience of individuals by providing them with resources.232 The use of the AI decision-making process that focuses on choosing candidates with optimal predicted performance deepens unjust societal relationships because it constructs relationships characterised by individual autonomy. These relationships include those between candidates, and between the candidate and the university.

Consider now a case where the AI decision-making process incorporates information about factors that result in individuals being unequally situated in institutional relationships in order to account for social justice concerns. Navid Tanzeem and colleagues created artificial intelligence software which, as part of predicting students’ grades,233 depicts the correlation between the socioeconomic, psychological and educational background of the student.234 There are practical and technical difficulties with constructing an AI decision-making process that accurately captures the disadvantage individuals experience by virtue of being situated in unequal relationships. For instance, partially sighted students recount how perceptions that they differ from the general population,235 a lack of resources such as recorders,236 and teachers’ inadequate use of assistive technology devices create learning barriers.237 Lack of universal classroom and curriculum design238 as well as societal biases result in partially sighted students being situated in less equal relationships with other students, teachers, the school and other social institutions than their peers. If the student, in addition to being partially sighted, comes from a one-parent family or has a parent who is ill, then this student will also experience unequal social relationships created by inadequate state financial, social and healthcare support.
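As a hedged illustration of this style of model (the features, data and resulting coefficients below are invented and are not drawn from Tanzeem and colleagues’ paper), a linear model can fold socioeconomic and psychological indicators into a grade prediction, but only after each circumstance is collapsed into a single number:

```python
import numpy as np

# Invented training data: columns are [prior_grade, socioeconomic_index,
# wellbeing_score]; the target is the final grade. A real system would
# learn from thousands of records; five rows keep the sketch readable.
X = np.array([
    [72, 3, 6],
    [85, 8, 7],
    [64, 2, 4],
    [90, 9, 8],
    [70, 4, 5],
], dtype=float)
y = np.array([74.0, 88.0, 60.0, 93.0, 71.0])

# Ordinary least squares via numpy's least-squares solver (with intercept).
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Each coefficient is a fixed 'exchange rate' assigned to a life
# circumstance; a partially sighted student's layered barriers must be
# squeezed into whichever numeric slot roughly fits.
student = np.array([1.0, 75.0, 2.0, 5.0])
print(float(student @ coef))
```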

Kimberlé Crenshaw coined the concept of ‘intersectionality’ to explain that each person experiences discrimination and disadvantage differently.239 Institutional structures, such as sexism and racism, disadvantage individuals in an intersecting240 and multi-layered manner.241 Factors, such as poverty and lack of job skills, that are the result of class, gender and other forms of oppression compound the disadvantaged position of individuals.242 The cumulative disadvantage individuals experience is greater than the sum of the factors giving rise to the disadvantage.243 The concept of intersectionality244 suggests that to understand the disadvantaged position of an individual one needs to consider the fluid manner in which all the personal and institutional relationships in which an individual is situated interact.

A hurdle to employing AI decision-making processes is that it is difficult to translate how unequal relationships inhibit the ability of individuals to succeed into a model using quantifiable variables. It is unclear how one can quantify the disadvantage a white female teenager experiences due to being bullied at school for having a bisexual orientation and due to having caring responsibilities for a relative. Nor can one compare with precision the disadvantage this female teenager experiences with the barriers a male teenager of colour with a chronic health condition encounters. The inclusion of a set of factors in the geometrical representation of the individual and the assignment of mathematical weights to these factors fail to accurately represent the experiences of individuals.
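The difficulty can be made concrete by comparing an additive model of disadvantage with one containing interaction terms; all weights below are illustrative assumptions with no empirical basis. The additive model assumes the whole equals the sum of the parts, which is precisely what the concept of intersectionality denies:

```python
# Illustrative, invented weights; no empirical validity is claimed.
ADDITIVE_WEIGHTS = {"bullying": 2.0, "caring_duties": 1.5,
                    "racism": 2.5, "chronic_illness": 2.0}

def additive_disadvantage(factors: set) -> float:
    """A weighted sum: assumes disadvantages simply add up."""
    return sum(ADDITIVE_WEIGHTS.get(f, 0.0) for f in factors)

def interactive_disadvantage(factors: set) -> float:
    """Adds a crude interaction term per pair of co-occurring factors,
    gesturing at 'greater than the sum of the parts'. Even this richer
    model fixes in advance how factors combine."""
    pairs = len(factors) * (len(factors) - 1) // 2
    return additive_disadvantage(factors) + 0.75 * pairs  # arbitrary bonus

teen_a = {"bullying", "caring_duties"}
teen_b = {"racism", "chronic_illness"}
print(additive_disadvantage(teen_a), interactive_disadvantage(teen_a))
print(additive_disadvantage(teen_b), interactive_disadvantage(teen_b))
# Neither output meaningfully answers whether teen A is 'more' or 'less'
# disadvantaged than teen B: the comparison is an artefact of the weights.
```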

The testimonies of civil society organisations support the assertion that the use of mathematical formulas precludes the decision-making process from giving appropriate weight to the unjust societal relationships in which an individual is situated. Civil society organisations oppose a decision-making procedure where human decision-makers tick boxes and work like a computer.245 They worry that algorithmic decision-making processes cannot take relevant factors concerning an individual into account and give appropriate weight to such factors.246 While the process of quantifying unjust societal relationships strengthens the recognition that individuals are interdependent, it replaces socially embedded relationships with symbolic mathematical boundaries. This occurs because the AI decision-making process transforms individuals into a set of factors that designate social barriers. As the AI decision-making process allocates individuals into groups247 and decision quadrants, it erects symbolic boundaries. Societal relationships become partitioned and mechanised. This discussion points to the need to preserve human decision-making where the decision relates to evaluating the capability of the individual. Such decisions incorporate deliberation about how to intervene in order to remedy the unequal position of individuals in social and institutional relationships. Of course, if computer scientists succeed in replicating human capabilities fully in AI, as well as in providing the AI with knowledge of social processes, then the AI decision-making process will approximate human decision-making more closely. The closer the AI decision-making process comes to replicating human decision-making, the fewer concerns there will be with delegating decision-making to machines.

In the course of restructuring relationships between students, and between the students and the university, the operation of the AI decision-making process brings about transformations at the societal level that influence the ability of individuals to realise themselves as human beings. Florian Eyert, Florian Irgmaier and Lena Ulbricht posit that AI decision-making processes structure social processes and generate a particular type of social order.248 Abeba Birhane is of the view that AI decision-making processes ‘restructure the very fabric of the social world.’249 Social orders arise from individuals adopting particular beliefs, common interpretations of events and acceptable ways of behaving.250 Such conduct gives rise to patterns of interaction251 and produces a social system.252 The cumulative use of AI decision-making processes to determine admission to educational institutions, selection for employment and whether to extend a loan to an individual produces a particular type of social order.253 This social order amplifies the scientific tradition of elevating rationality above emotions, experience and the power of imagination.254 AI decision-making processes structure how individuals navigate their relationships with each other. There is a danger that the operation of AI decision-making processes will act as a divisive force. As a result, there will be a deepening of segregation.255 Individuals will adapt their behaviour in order to increase their chances of receiving a positive decision from the AI decision-making process.256 Parents may encourage or nudge their children to associate with other children who have the habits or lifestyles that make it more likely that they will meet the criteria of optimal performance as designated by the AI decision-making process. Since the AI decision-making process associates optimal performance with a set of particular characteristics shared by a group,257 parents and children may avoid individuals who lack such characteristics. Further support for this argument is found in the scholarship of Sandra Wachter, who draws attention to the fact that AI decision-making processes may profile people who have an association or ‘affinity’ with a group protected by antidiscrimination statutes by virtue of having shared interests.258 For instance, the individual can have interests or engage in activities that have a linkage to the characteristics of the protected group259 or to characteristics that function as an indicator of a protected characteristic.260 To illustrate, there may exist statistical evidence that members of an ethnic group like to consume Caribbean food and listen to jazz music.261 This discussion illustrates that the use of AI decision-making processes has wide-ranging secondary effects that legislators should take into account. Human decision-making is crucial for discerning the full array of human expression and creativity.
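To close with a concrete illustration of Wachter’s affinity profiling point: the interests and proxy weights below are invented for illustration. If interests correlate statistically with membership of a protected group, a model can act on them as a stand-in for the protected characteristic itself, even though the individual never disclosed it.

```python
# A minimal sketch of affinity profiling; the interests and weights
# are invented and carry no statistical claim about any real group.
PROXY_SIGNALS = {"caribbean_food": 0.4, "jazz": 0.3}

def affinity_score(interests: set) -> float:
    """Sum of invented proxy weights: a crude stand-in for how a model
    might score 'affinity' with a protected group from interest data."""
    return sum(PROXY_SIGNALS.get(i, 0.0) for i in interests)

user = {"jazz", "caribbean_food", "cycling"}
score = affinity_score(user)
# The user disclosed no protected characteristic, yet the model can now
# treat an inferred one as grounds for differential treatment.
print(score, score >= 0.5)
```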

Tanya Krupiy, A vulnerability analysis: Theorising the impact of artificial intelligence decision-making processes on individuals, society and human diversity from a social justice perspective, 38 Computer Law & Security Review (2020).

Read the full article here.
