
Proposal: Can AI become an expert?

Soyo_Kim 2024. 7. 12. 01:36

2023-2 Ethics and Technology

Published version: 2024.05.10 - [Research/Publications] - Can AI become an Expert?

김현균. (2024). Can AI become an Expert?. 인공지능인문학연구, 16, 113-136. (georgia15.tistory.com)

1. The Main Questions

In the final paper, I will delve into the question of whether artificial intelligence can become an expert. Throughout history, the emergence of new technologies has consistently given rise to concerns about job displacement. One classic example is the Luddite movement, in which workers sought to destroy weaving machines out of fear for their livelihoods. Two centuries later, AI now plays the role the weaving machine once did, invading domains traditionally considered exclusive to human beings: it can replicate human capabilities, perform tasks previously undertaken by humans, and ultimately take over those roles with superior efficiency. Conversely, AI advancements also sometimes foster unwarranted optimism about progress. For instance, when a human judge renders a verdict contrary to public opinion, we often encounter arguments in the comments sections of news articles advocating the introduction of AI judges. This implies a belief in AI’s capacity to execute human tasks with greater fairness, accuracy, and rationality.

The question of whether AI can be considered an expert, however, goes beyond both concerns about job prospects and optimism about progress. To become an expert, AI should also be expected to embody the factors considered essential for experts, including trust, responsibility, and explainability. For instance, recognizing AI as a judge implies trusting it in its role as a judge, expecting it to take responsibility for fair judgments, and believing that it will provide appropriate explanations for its decisions in court. The absence of any of these factors could lead to morally problematic consequences and unjust outcomes for stakeholders.

2. The Structure of the Final Paper with a Tentative Conclusion

Therefore, I will address whether AI can adequately embody these elements and become an expert. First, I will narrow the scope of the discussion by examining the definition and nature of an expert. Along the way, I will argue that the term ‘expert’ does not merely refer to an individual with a specific profession but to a member of an expert community. In the expert community, each member not only possesses and advances knowledge in their specialization but is also subject to verification of their ownership of that knowledge. In addition, I will elucidate the nature of an expert through the expert-layperson relationship. That is, an expert has certain epistemic privileges over a layperson, exhibited in the following characteristics: (1) the layperson trusts the expert’s opinion even without understanding the underlying rationale (trust); (2) the expert bears direct or indirect responsibility for the outcomes if the layperson takes their advice (responsibility); (3) the expert has the ability and the obligation to explain their knowledge to the non-expert at a level suitable for the non-expert’s understanding (explainability).

Next, I will argue, in turn, that AI cannot become a trustee, a subject that takes responsibility for its advice, or a sufficient transmitter of knowledge.

(1) Amber Ross claims that using opaque AI without understanding its decision-making process can be justified because it is similar to relying on experts’ opinions. This, however, is misleading. One significant difference between the expert-layperson relationship and the opaque AI-human relationship is that trust in an expert is fundamentally justified by the regulatory functions inherent in the group to which the expert belongs (I call this trait an external regulatory function).

(2) It is controversial who must take responsibility when AI makes a fatal mistake. In most cases, the issue of AI and responsibility can be reduced to the matters of the trustworthiness (reliability) of AI and of consent accompanied by sufficient information: we could say that the user bears the risk where these two conditions are met, and that the developer takes responsibility for negative consequences on behalf of AI where they are not (similar to the case where the expert group takes responsibility on behalf of an individual expert). Taking responsibility, however, sometimes far exceeds the matter of blameworthiness. Whatever the reason, it makes experts more prudent (I call this trait an internal regulatory function). In this respect, AI cannot be a subject that takes responsibility, as it lacks such an internal regulatory function.

(3) Amber Ross holds that explainability is not even necessary for the expert-layperson relationship, in that “the laypeople cannot themselves ‘assess the merits of the evidence’ nor understand how the evidence supports the expert’s decision.” I will counter that explainability remains a crucial requirement for experts, since, epistemologically, the explanation of a state of affairs can take multiple forms.

A counterargument could be raised against the three criteria I have presented, namely that they are too stringent. In particular, some might argue that an AI like AlphaGo already serves as an expert. In response to such an objection, I would assert that AlphaGo possesses a certain form of knowledge-how rather than knowledge-that. Therefore, I maintain that the criteria I have presented remain valid.

3. The Tentative Bibliography

I will primarily cite the literature covered in the course (“AI and the expert,” “Trust and Human-Robot Interactions,” “Ethical Implications and Accountability of Algorithms,” and “Plausible Cause”).