
Ethics and Technology: Discussions

Soyo_Kim 2023. 12. 17. 18:54

2023-2 Ethics and Technology

 

1. Sandler: “Introduction: Technology and Ethics” 

According to the "technology as a tool" view, technology can be defined as (i) something that is developed and used by people to accomplish their goals, and (ii) something that is value-neutral, meaning that its value hinges entirely on the goals or ends for which it is used (p. 1).
 
Sandler challenges this view, arguing that such a definition is defective because "technology shapes us and our social and ecological worlds as much as we shape technology" (p. 2). In other words, (iii) technology cannot be isolated from social and political contexts and is therefore not value-neutral. Thus, it seems Sandler upholds (i) ∧ ~(ii) ∧ (iii).
 
In my view, however, his challenge is perplexing. Without denying (iii), we can still espouse (ii) by arguing that the value of a technology depends not only on whether it fulfills its fixed goal, which was originally generated by social and political contexts, but also on whether that goal is itself valuable. In fact, (ii) amounts to saying that the value of technology is ultimately derived from the value of its objectives.

On top of that, I think (ii) is central to identifying the distinctive feature of technology, namely, that its purpose can only be established by others (i.e., from external sources). (i) is not a sufficient condition for defining technology because it cannot exclude historical cases of human objectification (e.g., employees and employers at the time of the Industrial Revolution, slaves and slave owners, etc.). Arguably, we have a moral intuition that employees and slaves should not be counted as technologies: human beings can set their own objectives from within, regardless of external enforcement.

Accordingly, I propose the value-neutrality thesis as follows: (1) the value of X depends on (1-a) whether X fulfills its objectives and (1-b) whether these objectives are valuable, and (2) the objectives of X cannot be self-determined.

 

2. John Danaher, “The Philosophical Importance of Algorithms”

The most surprising demonstration of the power of machine learning was AlphaGo's defeat of top Go masters in 2016, after which it held the foremost position in the game. As a fanboy of the game of Go, I was surprised that AlphaGo not only mastered all the traditional skills required to win (e.g., invasions, ladders, probing moves, and endgame play) but also developed several novel techniques concerning the flow of stones and fuseki (opening play), from which it can even teach humans how to play Go better.

Often called the most complicated board game in human history, Go comprises numerous strategies and techniques. At first glance, the ideal way to learn such a complicated game would be to teach its various skills step by step through specific situations. As far as I know, this is how the first version of AlphaGo was developed. Technically, the original AlphaGo that competed with the Go master Lee Sedol was first trained with supervised machine learning, combined with Monte Carlo Tree Search (MCTS). "Using supervised learning, we create a policy network to imitate expert moves. With this policy, we can play Go at the advanced amateur level. Then, we let the policy network to play games with itself. Using reinforcement learning, we apply the game results to refine the policy network further," as Jonathan Hui puts it.
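To make that two-stage pipeline concrete, here is a minimal toy sketch of the general idea: supervised imitation of expert moves followed by self-play refinement with a REINFORCE-style update. The linear "policy network," the random stand-ins for board positions and game records, and the learning rates are my own illustrative assumptions, and MCTS is omitted entirely; this is not AlphaGo's actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES, N_MOVES = 16, 9   # toy stand-ins for a board encoding and a move space


def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


def policy_gradient(W, s, a):
    """Gradient of log pi(a | s) for a linear softmax policy with weights W."""
    p = softmax(W @ s)
    grad = -np.outer(p, s)
    grad[a] += s
    return grad


# Stage 1: supervised imitation of "expert" moves
# (random data here stands in for real game records).
W = np.zeros((N_MOVES, N_FEATURES))
expert_states = rng.normal(size=(500, N_FEATURES))
expert_moves = rng.integers(0, N_MOVES, size=500)
for s, a in zip(expert_states, expert_moves):
    W += 0.05 * policy_gradient(W, s, a)       # raise the likelihood of the expert's move

# Stage 2: refine the same policy by self-play with a REINFORCE-style update.
for _ in range(200):
    trajectory = []
    for _ in range(10):                        # a toy "game" of ten positions
        s = rng.normal(size=N_FEATURES)        # stand-in for a board position
        a = rng.choice(N_MOVES, p=softmax(W @ s))
        trajectory.append((s, a))
    outcome = rng.choice([1.0, -1.0])          # stand-in for the win/loss result of the game
    for s, a in trajectory:
        W += 0.01 * outcome * policy_gradient(W, s, a)  # reinforce moves from wins, discourage losses
```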

Surprisingly, however, AlphaGo Zero, which was trained purely by reinforcement learning through self-play, without any human game records, is much stronger than the original while still making full use of the techniques that human masters and the original AlphaGo had used. The result might be interpreted as exposing the defects of conventional human expert moves. It might also indicate that reinforcement learning algorithms can perform other tasks much better than humans when a reward is precisely specified. However, because we cannot easily say that the conventional knowledge of human society is merely inefficient or fallacious, it might be inappropriate to specify a reward when the task involves moral judgments. How, then, can we establish a standard for choosing which type of machine learning better serves an established goal?

 

3. Mittelstadt et al., “Mapping the Debate”

One of the most subtle and novel problems raised by emerging algorithmic technology is its transformative effect on informational privacy: it is not confined to the traditional issue of exposing private information but also requires us to redefine the concept of privacy itself. As the authors note, the definition of ‘personal data’ in European data protection law is not sufficient to protect privacy effectively, since it does not cover anonymized and aggregated data. It is crucial to note that traditional methods of privacy protection rely on individuals consenting to the use of their pre-existing information. The central problem here, however, is that AI technology can yield unforeseen, insightful information and generate new ways of accessing personal information.

It is widely known that AI- (or big data-) based personalized services may raise serious concerns about privacy infringement. For instance, automated recommendation services that utilize individuals' search histories, postings, and purchase records raise significant concerns that major corporations or government agencies exploiting such services could infer information without consent. According to experts at the Information Society Development Institute, it is necessary to expand the scope of privacy protection to include not only individuals' unique information but also information estimated by artificial intelligence. This, however, appears to raise challenges concerning the ownership rights of data subjects. How far should information obtained through inference be counted as personal information?

 

4. Johnson, “Algorithmic bias” 

While reading Johnson’s article this week, a question arose for me regarding the case of the algorithm for classifying presidential candidates.

Johnson suggests that introducing counterexamples as training data might help mitigate truly implicit social biases in machines (pp. 9955-9956). However, it is a plain fact that in the entire history of U.S. elections, only one African American president has been elected, and a woman has never been elected president. If I understand this case correctly, her suggestion amounts to manipulating actual cases in order to ameliorate implicit social bias when we let machine learning algorithms train themselves. As she admits, however, “these social patterns are likely themselves the result of the biases we are attempting to ameliorate” (p. 9956). Thus, given that the objective of the classification algorithm is accurate real-world prediction, can arbitrarily modifying training data, even with righteous intent, be deemed permissible? This not only raises concerns about compromising accuracy but also risks licensing developers to alter training data according to their own agendas.
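For concreteness, here is a toy sketch of what "introducing counterexamples as training data" could look like in practice. The synthetic dataset, the flip-the-protected-attribute construction, and the logistic model are my own illustrative assumptions, not Johnson's actual procedure; the point is only to show how augmentation shifts the weight the model places on the protected attribute.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy "past candidates" data: column 0 is a protected attribute (e.g., gender, coded 0/1),
# column 1 is a merit-like score; the historical labels track the protected attribute.
n = 400
protected = rng.integers(0, 2, size=n)
merit = rng.normal(size=n)
X = np.column_stack([protected, merit]).astype(float)
y = ((protected == 0) & (merit > -0.5)).astype(int)   # a biased historical pattern

biased_model = LogisticRegression().fit(X, y)

# Counterexamples in the spirit of Johnson's suggestion (as I read it): duplicate each case
# with the protected attribute flipped but the label kept, so the model cannot lean on it.
X_counter = X.copy()
X_counter[:, 0] = 1 - X_counter[:, 0]
X_aug = np.vstack([X, X_counter])
y_aug = np.concatenate([y, y])

debiased_model = LogisticRegression().fit(X_aug, y_aug)

print("weight on the protected attribute before augmentation:", biased_model.coef_[0][0])
print("weight on the protected attribute after augmentation: ", debiased_model.coef_[0][0])
```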

On top of that, we can point out that these biases are futile for predicting the next U.S. president, not because they are morally wrong, but because the initial setting, which considers only the traits of previous presidents, is fallacious; the fact that a woman has never been elected president can itself be interpreted as a factor that amplifies voters' desire for a female president. Furthermore, the training data used by the algorithm does not reflect the policies of the upcoming presidential candidates at all. Both factors should be considered, and they would offset the side effects of the assumed biases. As the example of the assumption that light comes from above indicates, bias becomes problematic not only when it is morally wrong but also when it functions as an epistemological obstacle to achieving the intended goals. Considering this, the dilemma Johnson mentions could be dissolved, without losing accuracy, by investigating the misuse of bias.

 

5. Fazelpour and Danks, “Algorithmic Bias: Senses, Sources, Solutions”

While reading Fazelpour and Danks’ article this week, what caught my interest was the point about the immediate and long-term impacts of an algorithm and the emergence of Goodhart's law. Goodhart's law, which can produce unintended incentive effects, applies to any case where a metric can become a target of policy actions, potentially putting the cart before the horse. In addition to the case presented in the paper, in which the student success algorithm might give students an incentive to worsen their grades, there have been instances in South Korea where the "Rural Special Admission" policy, established to mitigate educational disparities, has been distorted by urban residents attempting to falsely claim rural backgrounds for admission.

On the one hand, Goodhart's law can lead to the corruption of university evaluation agencies. The World Bank, for instance, developed the "Business Environment Assessment" indicators while simultaneously providing advisory services to countries that lagged behind in the rankings on those indicators. This essentially allows countries to purchase improvements in their rankings, and the same dynamic can apply to university evaluations. The opacity of artificial intelligence algorithms, exemplified by the concept of the "black box," is likely to exacerbate the challenge of monitoring such corruption and tracking whether fair metrics are being produced. On the other hand, given that all ethical issues presuppose and prompt normative behavior, the efforts of students and universities toward metrics can also be evaluated as part of these normative actions. This is particularly crucial for improving the situation of minority groups that are susceptible to social biases. So, what are the ways to reconcile these two aspects?

 

6. Creel, “Transparency in Complex Computational Systems” 

I think one of the common and traditional issues stemming from opacity is information asymmetry. Citizens and consumers, in many cases, demand transparency when they have limited information about the policy-making process or product pricing and anticipate an unfair outcome from this opacity. When issues of information asymmetry are applied to human relations, it is not difficult to imagine how these problems can play out in concrete cases. For instance, a developer who possesses specialized knowledge about an algorithm will find it easier to gain structural and functional transparency than the algorithm's users; that is, developers can learn how the program instantiates a particular algorithm, and which algorithm it instantiates, with less effort than users can. Moreover, algorithm users such as companies or government agencies, which have easy access to the algorithm's input data, will find it easier to obtain run transparency than consumers or citizens, who can only accept the algorithm's outputs. In a nutshell, when algorithmic transparency is selectively shared among stakeholders, it leads to information asymmetry and thereby produces unfair outcomes.

However, the black-box problem is unique in that it pertains to the information asymmetry between humans and the algorithm itself. "Because the [medical] diagnostic [machine learning] systems do not give an explanation or reason for the diagnosis, doctors often deem them untrustworthy and avoid them" (p. 583). It is certainly natural to interpret these doctors' concerns as a matter of trust in the machines' accuracy. However, "translating" the decision-making process into a format that doctors can comprehend, through methods like LIME, seems to be more about providing psychological reassurance than about improving the machine's accuracy. In other words, the case involving LIME appears similar to that of the mathematicians who objected to the computer-assisted proof of the four-color theorem.
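For readers unfamiliar with the method, here is a minimal sketch of the local-surrogate idea behind LIME-style "translation": perturb the case at hand, query the opaque model, weight the perturbations by their closeness to the original case, and fit a simple linear model whose coefficients serve as the explanation. The black-box function, perturbation scale, and proximity kernel below are my own illustrative choices, not the LIME library's actual implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)


# A stand-in for an opaque diagnostic model: any function mapping patient features to a risk score.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * X[:, 2])))


x0 = np.array([0.8, -0.3, 1.1])        # the single case we want "translated" for the doctor

# Perturb the case, query the black box, and weight perturbations by closeness to x0.
samples = x0 + rng.normal(scale=0.3, size=(500, 3))
predictions = black_box(samples)
proximity = np.exp(-np.sum((samples - x0) ** 2, axis=1) / 0.5)

# Fit an interpretable linear surrogate locally; its coefficients are the "explanation."
surrogate = Ridge(alpha=1.0).fit(samples, predictions, sample_weight=proximity)
print("local feature weights around this case:", surrogate.coef_)
```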

 

7. Peters, “Explainable AI lacks regulative reasons” 

While reading Peters' paper this week, what caught my attention was his explanation of the "mindshaping view of self-ascriptions," wherein he connects the reliability of the judgment 'p' produced by a decision-maker to propositional attitude statements rather than to the truth-value of 'p.' As is well known, a statement such as 'A believes that p' is considered to possess a unique characteristic in that it does not comply with the law of substitutivity of identity. Peters assumes that these statements, unlike ordinary propositions, have distinct informative content beyond their truth-values or represented contents, such as "A will affirm that p when asked about it," "A will deny that not-p," and so forth. If, as Peters emphasizes, statements like these play a significant role in ensuring transparency and trustworthiness in the decision-making process, this resonates to some extent with coherentism, one of the theories covered in modern epistemology. In this regard, I think one of the indispensable factors in a reliable decision-making process is the observance of inference rules (mostly logical ones?). I was once perplexed when ChatGPT pointed out errors in a sentence and then, given exactly the same sentence, stated that there were no errors.

 

8. Brennan-Marquez, “Plausible Cause”

According to Brennan-Marquez, there are two situations where plausibility and probability come apart. One is when an event actually occurs, such as the correct diagnosis of a rare disease or the identification of the culprit in a complicated detective novel, even though its probability is exceedingly low (for example, in Edgar Allan Poe’s short story "The Murders in the Rue Morgue," the truth of the case arises from unbelievable coincidences). The other is when determining the actual cause is almost meaningless because the phenomenon occurs too frequently. Through these two cases, Brennan-Marquez argues that comprehending the actual cause involves a qualitative aspect that cannot be eliminated by a quantitative methodology that pursues statistical accuracy. My interest here lies in explaining what this qualitative aspect is. Generally, the explanatory power of a theory is characterized by (1) its internal consistency, (2) its logical simplicity, and (3) its applicability to reality. I am not sure whether Brennan-Marquez claims that each of these factors has a qualitative aspect. Also, is the hierarchy of such "qualitative aspects" justified by a judge’s intuition at all? Another question is whether the term "reality" refers to simple facts or encompasses both facts and legal norms. If it is the latter, does it imply that explanation is a composite of factual judgment and value judgment?

 

9. Amber Ross, “AI and the Expert”

I have doubts about Ross's attempt to apply Hardwig's criteria for deferring to expert judgment to the relationship between an AI (as expert) and a human (as layperson). First, Hardwig's criteria are basically applicable to the decision-making process of an individual expert. But one of the important features of experts is that they belong to an expert community; other experts always have the ability and responsibility to review and monitor an individual expert's decision-making process (for instance, through peer review of research papers). So we could say that an expert's decision-making process is still transparent in this manner (this is likely one of the key reasons why laypersons trust experts). In the case of AI, however, such review and monitoring procedures do not exist, especially for a black-box AI. Therefore, I think Ross's suggestion misses the collaborative nature of expertise. On top of that, expert decisions can sometimes seriously conflict with each other. If two similarly reliable AIs make different judgments on an ethically significant issue (for example, decisions about military operations), which AI's decision should be accepted? As previously mentioned, there appear to be no procedures in AI for regulating individual expert judgments and reconciling disputes when we regard an AI's opaque decision-making process as that of an expert.

 

10. Kirsten Martin, “Ethical Implications and Accountability of Algorithms” 

In her article, Martin claims that "the question is, who is responsible for the ethical implications rather than whether or not the algorithm provides moral guidance" (p. 843). I generally agree with Martin's two arguments (the firm is knowledgeable as to the design decisions, and it willingly enters into the decision context), which support the claim that firms should take ethical responsibility for the algorithms they develop. However, I would like to point out that the process of assigning responsibility may, in reality, be much more complex than she describes.

As Johnson pointed out, ethical issues like algorithmic bias can occur irrespective of developers' intentions, and problems like the proxy problem cannot be completely eliminated notwithstanding their efforts. Additionally, as Ross mentioned, there are urgent issues that require the use of black-box algorithms. In this situation, an ethically appropriate response for companies is to transparently provide algorithm users with information about the benefits and limitations of using the algorithm. For instance, if an AI performs medical surgeries with a success rate of 60%, whether this probability is sufficiently high may be judged differently by doctors, government agencies, and patients. In this case, the primary ethical responsibility of the company is nothing other than providing accurate information. Therefore, the issue here seems to be the transparent disclosure of the information that companies possess and the explicit consent of the stakeholders affected by the algorithm (although in some cases, such as algorithms used for recidivism prediction, obtaining consent may not be feasible). Similarly, in the case of car design, firms design cars based on consumer demand and government regulations, so the autonomy and moral responsibility of designers are often less than what Martin delineates.

 

11. Jenkins, Purves and Strawser, “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons”

Reading Purves, Jenkins, and Strawser's article, I found something vague in the comparison between AWS and the sociopathic soldier case. According to the article, the sociopathic soldier abides by the constraints of jus in bello yet fails to act for the right reasons (whereas the racist soldier acts for the wrong reasons). In sum, the authors hold that anyone who fights in war must act for the right reasons in killing enemy combatants, and that the actions performed by AWS are therefore morally problematic.

However, it is questionable whether the sociopathic soldier cannot act for the right reasons. Their claim can only be valid under the assumption that emotion plays a role in acting for the right reasons, and such an assumption loses its force in Kant's moral theory, which holds that we must strictly follow practical reason and exclude all natural inclinations (including emotion). The genuine reason the authors believe that AWS cannot act for the right reasons is not its lack of emotion but its lack of the ability to make moral judgments by competently applying general moral principles, even if it can codify such principles. The authors argue that reflective equilibrium, practical wisdom, phenomenal quality, etc., are essential for making moral judgments, but there seems to be no reasonable evidence that the sociopathic soldier, at least, lacks these abilities. Likewise, the authors' claim that AWS lacks these abilities, without a more specific explanation, risks reducing to the trivial assertion that there is a difference between humans and machines. For instance, the scenario in which AI acquires practical wisdom through methods like big data does not seem entirely implausible (in fact, AlphaGo broke the prejudice that human intuition cannot be replicated).

12. Danks and Roff, “Trust, but Verify”

While reading Roff and Danks' paper, I found the three methods they present to enhance soldiers' trust in AWS (autonomous weapons systems) intriguing but somewhat unclear. In particular, I did not clearly understand how the first method could be effectively achieved based on their explanation. According to Roff and Danks, establishing the second type of trust (the kind found in interpersonal relationships and dependencies) between AWS and soldiers is challenging because the two generally do not share a mental model. In other words, humans can hardly figure out what the AWS (the trustee) will do (especially if it keeps learning) or why it pursues a particular course of action (especially if it is too complicated to be comprehended).

Their first solution is to designate an AWS liaison, analogous to warfighters who team with non-human animals, and to use the transitive aspects of trust. However, animals are not given highly complex objectives of the kind an AWS receives (for example, "eliminate all enemies in this room"), and so the risk of accepting the autonomous judgments made by an AWS is much higher than with animals. This raises doubts about whether trust in an AWS (regardless of the competence of the AWS liaison) possesses transitivity in the way trust in animals does. In my view, trusting a trained animal seems much easier than trusting an AWS: animals are not only trustees but are also trained, while the relationship between an AWS and its liaison involves no such process.

 

13. Kirkpatrick, Hahn, & Haufler: “Trust and Human-Robot Interactions” 

I think there is a discrepancy, in one central respect, between the good will account of interpersonal trust and the main feature of trust presented in the paper. According to the good will account, the trustee's goodwill is an indispensable constituent of interpersonal trust, alongside the truster's risk and vulnerability. Following this line of thought, we can say that true interpersonal trust exists only where the trustee is a moral agent. However, AI (at least currently) cannot serve as a moral agent. Therefore, there is no true interpersonal trust between humans and AI at all.

However, the authors also point to an important feature of interpersonal trust by comparing it with reliability: "trusting can be betrayed, or at least let down, whereas disappointment is the proper response to unreliability." In a nutshell, we can be disappointed by a flawed coffee machine that turns out to be unreliable, but we do not feel betrayed by it at all; the only issue in such a case is a functional flaw. In this regard, interpersonal trust and its violation are tied to moral obligation. If so, however, how is betrayal possible from a trustee who possesses goodwill? The good will account of interpersonal trust cannot sufficiently explain how betrayal is possible. Nor can it explain the meaning of statements such as "I trust him 90%."

 

Reading List

Sandler, “Introduction: Technology and Ethics”

John Danaher, “The Philosophical Importance of Algorithms”

Mittelstadt et al., “Mapping the Debate”

Johnson, “Algorithmic bias”

Fazelpour and Danks, “Algorithmic Bias: Senses, Sources, Solutions”

Creel, “Transparency in Complex Computational Systems”

Peters, “Explainable AI lacks regulative reasons”

Brennan-Marquez, “Plausible Cause”

Amber Ross, “AI and the Expert”

Kirsten Martin, “Ethical Implications and Accountability of Algorithms”

Jenkins, Purves, and Strawser, “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons”

Danks and Roff, “Trust, but Verify”

Kirkpatrick, Hahn, and Haufler, “Trust and Human-Robot Interactions”