A Vulnerability Analysis: Theorising the Impact of Artificial Intelligence Decision-Making Processes on Individuals, Society and Human Diversity from a Social Justice Perspective

by Tanya Krupiy

“Erica Curtis, a former admissions evaluator at Brown University in the United States, has noted that she evaluated each student’s application, consisting of standardised test scores, the transcript, the personal statement, and multiple supplemental essays, within a twelve-minute timeframe.1 Arguably, this is a very short period of time within which an admissions officer can holistically evaluate an applicant’s personality and academic qualities.2 The time constraints create the possibility that the admissions officer may fail to detect the applicants’ capabilities, or may overlook how societal barriers diminished their ability to realise their potential. Another concern with human decision-making is that the decision-maker may act arbitrarily in the course of exercising discretion,3 for instance by placing different weight on comparable attributes that cannot be measured.

What is more, an admissions officer could treat applicants on an unequal basis due to being influenced by conscious or unconscious biases.4 Advances in artificial intelligence (hereinafter AI) technology give rise to a discussion of whether organisations should use AI systems to select applicants for admission to university.5 Technology companies market AI systems that predict candidates’ performance and follow a set decision-making procedure as possessing the capacity to eliminate bias and to improve decision-making.6 The computer science community is now working on embedding values, such as fairness, into the AI decision-making procedure.7 Daniel Greene and colleagues view the focus on achieving fairness by incorporating values into the design of the system as short-sighted.8 Attention to how to embed fairness into the decision-making procedure of a technical system side-lines the discussion of how the employment of AI decision-making processes affects the achievement of social goals, such as social justice and ‘equitable human flourishing.’9 Virginia Eubanks’s work underscores the importance of investigating how the use of AI decision-making processes impacts individuals and society. Her interviews with affected individuals who applied to access state benefits in the state of Indiana in the United States10 demonstrate that the employment of AI decision-making processes can lead to the deepening of inequality,11 to social sorting12 and to social division.13 The enquiry is particularly pertinent given that not all sources report adverse outcomes. The British Universities and Colleges Admissions Service asserts that in its pilot project an algorithmic process selected the same pool of applicants for admission to universities as the admissions officers did; however, the organisation did not reveal the algorithm’s design and operating procedure.14

The present paper explores some of the longstanding, hitherto unresolved societal problems, as well as the new issues, that the employment of AI decision-making processes raises. It contributes to the existing literature by proposing that an AI decision-making process should be understood as an institution. The AI decision-making process reconfigures relationships between individuals, as well as between individuals and institutions. The paper examines some of the values and types of institutional arrangements that the employment of AI decision-making processes embeds into society. This issue is significant. The Council of Europe Committee of Ministers stated that when data-driven technologies operate ‘at scale’, their operation prioritises certain values over others.15 The Committee of Ministers’ assertion that data-driven technologies reconfigure the environment in which individuals process information16 should be extended to encompass the relationships individuals have with each other and with institutions. The article examines some of the types of social transformation that the use of AI decision-making processes across domains will accentuate. While the design of AI decision-making processes will shape whether their operation gives rise to solidarity or segregation, there is a potential for these systems to adversely affect individuals who have historically experienced discrimination, disadvantage, disempowerment and marginalisation. The provisions in international human rights treaties prohibiting discrimination provide a non-exhaustive list of individuals who experience discrimination, exclusion, oppression, disempowerment and disadvantage.17 The characteristics such individuals possess include sex, gender identity, sexual orientation, age, ethnicity, race, colour, descent, language, religion, political or other opinion, national or social origin, property, birth and disability, amongst others.18

The university admissions process serves as a case study for contextualising the discussion in the present paper. One of the reasons for using a case study to focus the discussion is that an evaluation of any technology needs to be context-specific. Jane Bailey and Valerie Steeves observe that technology is neither good nor bad.19 Everything depends on how developers design a technology, how the law regulates it, and what values the developers embed into it.20 One may add to this observation that how individuals use the technology, and for what purpose, matters too. Clearly, it is possible to use AI technology to advance societal objectives. Bruce D Haynes and Sebastian Benthall propose that computer scientists should develop AI systems that detect racial segregation in society.21 This information can then be used to detect similar treatment of individuals.22 Since individuals have disparate opportunities as a result of living in segregated areas within the same city,23 the use of AI technologies to remedy segregation would contribute to the attainment of social justice.

This article focuses on uncovering a number of adverse impacts that the use of an AI decision-making system is likely to have on both individuals and society, from the perspective of advancing social justice. It is confined to scrutinising the context in which educational institutions automate the selection of students by employing AI decision-making processes to judge applicants against merit criteria. Such criteria could include performance on examinations, extra-curricular activities, personal statements, samples of student work and so on. While the article uses examples from a number of countries, the findings can be extended to all universities that use a variety of criteria to judge the merit of individuals. The analysis does not include within its scope AI systems that allocate students to universities based on the students’ preferences for a study programme, without reference to merit criteria. An example of a university admissions process beyond the scope of this paper is that of the French state universities other than the grandes écoles.24 The algorithm allocates places at French state universities according to the student’s highest preference for a programme and according to whether the student lives within the district where the university is located; a random procedure is used to break ties.25 For reasons of space, it is beyond the scope of the present enquiry to consider the beneficial uses to which a variety of AI technologies may be put.”
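For readers who want to see the shape of the allocation procedure the excerpt sets aside, below is a minimal Python sketch of the kind of rule it describes: places go to students by highest programme preference, applicants from the university’s own district take priority, and remaining ties are broken at random. The function name, data structures and toy data are hypothetical illustrations, not the actual algorithm used by the French state universities.

```python
import random

def allocate_places(students, universities):
    """Allocate places by highest programme preference, giving priority
    to in-district applicants and breaking remaining ties at random.
    A hypothetical sketch, not the actual French admissions algorithm."""
    placement = {s["name"]: None for s in students}
    remaining = {name: u["capacity"] for name, u in universities.items()}
    unplaced = list(students)

    # Work through preference ranks: everyone's first choice, then the
    # second choice of those still unplaced, and so on.
    max_rank = max((len(s["preferences"]) for s in students), default=0)
    for rank in range(max_rank):
        applicants_by_uni = {}
        for s in unplaced:
            if rank < len(s["preferences"]):
                applicants_by_uni.setdefault(s["preferences"][rank], []).append(s)

        for uni, applicants in applicants_by_uni.items():
            # Shuffle first so the stable sort below keeps a random order
            # within each priority class (the random tie-break).
            random.shuffle(applicants)
            applicants.sort(key=lambda s: s["district"] != universities[uni]["district"])
            admitted = applicants[: remaining[uni]]
            remaining[uni] -= len(admitted)
            for s in admitted:
                placement[s["name"]] = uni
                unplaced.remove(s)

    return placement

# A toy run: one in-district and one out-of-district applicant compete
# for a single place at U1; the out-of-district applicant falls to U2.
universities = {
    "U1": {"district": "A", "capacity": 1},
    "U2": {"district": "B", "capacity": 2},
}
students = [
    {"name": "Alice", "district": "A", "preferences": ["U1", "U2"]},
    {"name": "Badri", "district": "B", "preferences": ["U1", "U2"]},
]
print(allocate_places(students, universities))
# e.g. {'Alice': 'U1', 'Badri': 'U2'}
```

Note how such a procedure never consults merit criteria at all, which is precisely why admissions processes of this kind fall outside the article’s scope.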


Tanya Krupiy, A vulnerability analysis: Theorising the impact of artificial intelligence decision-making processes on individuals, society and human diversity from a social justice perspective, 38 Computer Law & Security Review (2020).

Read the full article here.

 
