The Conceptual Analysis of Algorithmic Bias

Soyo_Kim 2023. 12. 3. 22:40

2023-2 Ethics and Technology

1. Introduction

Algorithms have become an integral part of our daily lives. YouTube, Amazon, and Instagram gather our preferences in real time, and the algorithms they employ cater to our tastes with tailored recommendations. Judges often grant bail to defendants by relying heavily on prediction algorithms that estimate the likelihood of recidivism (Brennan-Marquez 2017: 1254-1255). Many companies have begun to rely increasingly on AI interviews when hiring candidates. The applicability of machine learning-based algorithms is so extensive that any attempt to elude their impact seems futile. Accordingly, philosophers have a responsibility to address the ethical concerns raised by algorithms and to provide valuable insights for their resolution, rather than accepting or rejecting the significance of algorithms outright.

In this respect, one domain in which philosophers can contribute is the combination of issues about AI technology with traditional topics that are not only widely covered in ethics and political philosophy but also often neglected by algorithm developers and users. The study of mitigating algorithmic bias is the archetype of such an attempt, in that it challenges the traditional notion that algorithms are always more impartial than humans and that their decision-making process is never tainted by stereotype-based evaluations of particular social groups. This paper aims to provide a conceptual analysis of algorithmic bias, an essential prerequisite for a deeper exploration of its ethical concerns. In section 2.1, I will critically assess the view that algorithmic bias is reducible to mechanical flaws. Challenging the claim that the term “algorithmic bias” is a disguised expression arising from the misuse of our language, I will present a distinctive feature of algorithmic bias that cannot be captured by mechanical flaws. In section 2.2, I will scrutinize two definitions of algorithmic bias, one as a deviation from a standard and the other as a strategy for obtaining epistemic justification, both of which share the assumption that the concept possesses a value-neutral sense. I will argue that both definitions are flawed because they are either too broad or too narrow to capture the essence of algorithmic bias.

2. The Conceptual Analysis of Algorithmic Bias

2.1 An attempt to reduce algorithmic bias to mechanical flaws

It is taken for granted that every algorithm is, essentially, a type of mechanical device; algorithms are defined as computational instantiations of “a set of specific, step-by-step instructions for taking an input and converting into an output” (Danaher 2015). Nevertheless, algorithms are becoming more similar to humans than ever before. What bestows a magical aura upon algorithms is their ability to challenge our traditional notions of machinery in various and radical ways. AlphaGo, for instance, conquered the game of Go, which was previously considered something only humans could do. The innovation of AlphaGo is that it successfully mimics and materializes human intuition, imagination, and creativity. Unfortunately, AI also seems to be acquiring humans’ malicious tendencies. Tay, an open-domain conversational AI, baffled algorithm developers and users by uttering hate speech. Given that prejudice has been considered the exclusive property of human beings, the very notion of algorithmic bias entails a trend toward the humanization of machines.
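To make the quoted definition concrete, consider a minimal sketch of an algorithm in Danaher’s sense (the function below is my own toy illustration, not an example from his post): a fixed sequence of steps that takes an input and converts it into an output.

```python
# A toy illustration of an algorithm in Danaher's sense: specific,
# step-by-step instructions for converting an input into an output.
def arithmetic_mean(numbers: list[float]) -> float:
    total = 0.0
    for n in numbers:                 # step 1: accumulate the inputs
        total += n
    return total / len(numbers)       # step 2: divide by their count

print(arithmetic_mean([1.0, 2.0, 3.0]))  # -> 2.0
```

On this picture, an algorithm is exhaustively specified by its steps; nothing in the definition itself anticipates the human-like behavior just described.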

Such a conclusion, however, might be hyperbolic, depending on how we define algorithmic bias. Some critics would argue that the term “algorithmic bias” is misleading. For instance, Wittgenstein points out that the question “Could a machine think?” arises from our misuse of the word “think” (Wittgenstein 1969: §359-360). In fact, most people would regard Tay’s eccentric deviation as a mechanical flaw rather than intentional discrimination or a hate crime. Arguably, the reason we have considered prejudice an exclusively human product is partly that we tend to project moral and epistemic obligations onto this word: “we are often influenced by psychological and social factors that we ourselves tend to think should not affect our cognition” (Peters 2019: 393; my emphasis). Therefore, substituting the words “mechanical flaw” for “prejudice” or “bias” would help dispel the aura that algorithms are moral or cognitive agents.

However, it is worth noting that algorithmic bias is more than a simple mechanical error. When we say a car or a refrigerator is “flawed,” what we have in mind is that it cannot fulfill a specific purpose (transportation or food refrigeration). In Tay’s case, by contrast, it is not clear how algorithmic bias hinders the fulfillment of the given purpose. The point is that algorithmic bias emerged incidentally during Tay’s interactions with users, and this is separate from Tay’s primary goal of engaging in conversations. On top of that, algorithmic bias sometimes even contributes to achieving an objective. Consider an algorithm that produces the judgment, “Elderly people will be less proficient in using computers.” If the objective of this algorithm is to assess individual computer proficiency, such a judgment could hinder fulfilling it. If the objective is to detect regions with a high demand for computer education, however, such a judgment might be considered inevitable even if it is later revealed to be a bias (Johnson 2021: 9947).

The preceding discussion shows that we need to expound the nature of algorithmic bias as something distinct from both mechanical flaws and human prejudice. To accomplish this task, which involves providing a precise definition of algorithmic bias, we will now review its existing definitions.

2.2 An attempt to define algorithmic bias in a value-neutral sense

Despite numerous articles discussing the ethical implications of algorithmic bias, a clear consensus on its definition remains elusive. Notably, some approaches delineate algorithmic bias in natural and descriptive ways as follows:

The word ‘bias’ often has a negative connotation in the English language; bias is something to be avoided, or that is necessarily problematic. In contrast, we understand the term in an older, more neutral way: ‘bias’ simply refers to deviation from a standard. Thus, we can have […] moral bias in which a judgment deviates from a moral norm. (Danks and London 2017: 4692)

These points evoke an understanding of a natural kind, bias, under which problematic social, algorithmic, and cognitive biases emerge as species. This broader kind is normatively neutral and explanatorily robust. Against common usage, it includes biases that are epistemically reliable and morally unproblematic. […] On this general understanding, biases are necessary solutions to underdetermination, and thus, bias exists anywhere induction does. (Johnson 2021: 9951)

In both articles, the authors aim to define algorithmic bias in a value-neutral manner, removing the negative connotations typically associated with the word “bias.” For Danks and London, bias is simply a systematic deviation from some standard, and thus bias can be considered morally problematic only if it infringes normative rules. Johnson provides a more intriguing account of value-neutral bias. According to her, bias is a strategy for solving an underdetermination problem, whereby an epistemic subject extrapolates generalized guidelines from concrete examples and applies them to specific problems. In this view, bias emerges as a natural outcome of inductive reasoning.
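Johnson’s idea can be made vivid with a small sketch (my own construction; the data points and the preference for simpler hypotheses are illustrative assumptions, not drawn from her paper). Finitely many observations are consistent with indefinitely many hypotheses, so any system that generalizes must privilege some hypotheses over others, and that privileging is a bias in her value-neutral sense:

```python
# Underdetermination in miniature: two hypotheses fit the same finite
# data, but they disagree wildly on unseen inputs. (Illustrative
# numbers, not an example from Johnson's paper.)
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])   # observed inputs
ys = np.array([0.1, 1.9, 4.2, 5.8])   # observed outputs

line = np.polyfit(xs, ys, deg=1)      # hypothesis 1: a straight line
cubic = np.polyfit(xs, ys, deg=3)     # hypothesis 2: a cubic (exact fit)

x_new = 10.0                          # an input the data are silent on
print(np.polyval(line, x_new))        # about 19.5
print(np.polyval(cubic, x_new))       # about -103.4

# The evidence alone cannot decide between the two hypotheses; only an
# inductive bias (e.g., "prefer lower-degree polynomials") licenses a
# prediction. This is bias in Johnson's value-neutral sense.
```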

Unfortunately, Danks and London’s definition of algorithmic bias is too broad to capture the essence of the concept. As I mentioned above, one distinctive feature of algorithmic bias is that it can occur incidentally, irrespective of whether the algorithm fulfills its intended purpose. Danks and London’s definition encompasses deviation from prescribed objectives, i.e., what we commonly call mechanical flaws. Thus, according to their definition, we would have to say that a broken refrigerator is also biased. Espousing such an approach is highly likely to turn the definition into a Procrustean bed.

In contrast to Danks and London, Johnson’s definition is too narrow to encompass the significant biases that emerge from unjust power dynamics. In many cases, morally problematic bias is the result of malevolent propaganda and fabricated ideologies rather than inductive reasoning. Slavery, for instance, was historically justified by phrenology and eugenics, which claimed that there are significant racial differences between Black and white people. It is crucial to recognize, however, that these two pseudo-sciences were established by presupposing racial differences rather than objectively substantiating them. The same holds for algorithmic bias. Recall Tay’s case. It is not at all clear how Tay ended up learning hate speech by performing any inductive reasoning. The more reasonable interpretation is that Tay learned value-laden biases transferred from reality, like a young child learning language for the first time. Johnson underestimates this transferability of bias, which can occur without the active involvement of moral or epistemic agents.
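The transfer I have in mind can be sketched in a few lines of toy code (my own construction; the corpus and the completion rule are invented for illustration and stand in for nothing in Tay’s actual architecture). A system that merely reproduces the statistics of its input stream replays whatever evaluative skew that stream contains, without anything resembling reasoned generalization about the targeted group:

```python
# A toy sketch of bias transfer: the "model" only memorizes which
# completion it saw most often, yet the skew of its training stream
# reappears intact in its output. (Invented corpus for illustration.)
from collections import Counter, defaultdict

corpus = [                        # hypothetical user utterances
    "group_a is lazy",
    "group_a is lazy",
    "group_a is friendly",
    "group_b is friendly",
]

assoc = defaultdict(Counter)
for sentence in corpus:
    subject, _, adjective = sentence.split()
    assoc[subject][adjective] += 1    # count observed completions

for subject, counts in assoc.items():
    print(subject, "->", counts.most_common(1)[0][0])
# group_a -> lazy      (the skew of the inputs, not a reasoned verdict)
# group_b -> friendly
```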

3. Conclusion

This paper has examined two sorts of definitions of algorithmic bias: one that characterizes it as a type of mechanical flaw and another that delineates it within a value-neutral framework. It has shown that neither definition adequately explains algorithmic bias. While we have only discussed how algorithmic bias should not be defined, the insights gained along the way point toward a more sophisticated approach to characterizing its complexity: First, algorithmic bias possesses a unique characteristic that cannot be reduced to either mechanical flaws or human bias. Second, algorithmic bias can, in some instances, result from the transfer of pre-existing biases unrelated to the algorithm’s rational decision-making process.

References

Brennan-Marquez, K. (2017), “Plausible Cause: Explanatory Standards in the Age of Powerful Machines,” Vanderbilt Law Review 70: 1249-1301.

Danaher, J. (2015, July 20), “The Philosophical Importance of Algorithms,” Philosophical Disquisitions, https://ieet.org/index.php/IEET2/more/danaher20150809.

Danks, D. & London, A. J. (2017), “Algorithmic Bias in Autonomous Systems,” Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17): 4691-4697.

Johnson, G. M. (2021), “Algorithmic Bias: On the Implicit Biases of Social Technology,” Synthese 198: 9941-9961.

Peters, U. (2019), “Implicit Bias, Ideological Bias, and Epistemic Risks in Philosophy,” Mind & Language 34: 393-419.

Wittgenstein, L. (1969), Philosophical Investigations, 3rd Ed., trans. G. E. M. Anscombe, Oxford: Blackwell.