Contact persons: Karen Joisten, Ettore Barbagallo (karen.joisten@rptu.de; ettore.barbagallo@rptu.de)

Internal Partners:

  1. RPTU Kaiserslautern


This micro-project started from the consideration that AI systems are not only improving at doing what they are expected to do but are also developing a characteristic that no other technological artifact displays, namely a resemblance to biological systems. This resemblance explains the tendency, present not only in the media and on social media but also in academic milieus, to use anthropomorphic language when talking about what AI systems do or are (“intelligence,” “agency,” “autonomy,” “life cycle,” “learning,” “knowing,” “discriminating,” etc.). Joisten and Barbagallo attempted to discern cases in which anthropomorphic language is inevitable and can scarcely be replaced by better linguistic alternatives, and cases in which philosophers, scientists, and engineers should work together to find more suitable language. The goal of the micro-project was not to suggest that anthropomorphic language use is incorrect in all cases and should always be replaced by more accurate usage. The project leaders instead adopted an ethical perspective, arguing that the real risk of unconsciously using anthropomorphic language when speaking of AI systems is not the humanization of AI but the mechanization of human life. The expression “mechanization of human life” refers here to a possible shift in the way human beings intellectually comprehend and emotionally perceive their humanity. Mechanization of human life, therefore, takes place when AI becomes the model and framework of human self-comprehension. An important takeaway of the micro-project is that the issue of anthropomorphic AI language and the mechanization of human life cannot be addressed and solved by any single discipline in isolation but only through an interdisciplinary effort in which philosophers, ethicists, computer scientists, engineers, social scientists, linguists, and jurists share their expertise.

Results Summary

The main focus of the micro-project was on the ethical consequences of language use when discussing AI systems and human-AI interaction. The research project was conducted in three phases.

1) In the first phase (May to June 2023), the project examined various AI guidelines, such as the Ethics Guidelines for Trustworthy AI (AI HLEG 2019) and others, with the aim of analyzing the language used to describe AI’s functionality and the interrelation between humans and machines. The project’s philosophical emphasis on problems of language use was based on the phenomenological observation that language shapes our cognitive and emotional relationships to ourselves, to the world, and to our technological artifacts, including AI.

2) In the second phase (July to August 2023), the study addressed more general issues that emerged from the analysis and comparison of the examined AI guidelines. Despite the evident efforts of the guidelines’ authors to use technical, scientific, neutral, and objective language, Joisten and Barbagallo identified several terms—such as “agency,” “learning,” “decision making,” and “autonomy”—that indicate a tendency toward anthropomorphic language. The questions posed by the project leaders were: When is it philosophically and ethically acceptable to employ humanizing language when speaking of AI’s functioning? And when is it more appropriate to replace the terms mentioned above with more suitable concepts?

3) The third phase of the project (October 2023 to February 2024) involved the Research Seminar “Ethics and AI,” which was directed at PhD students, postdoctoral scholars, and research fellows, and took place at the University of Kaiserslautern-Landau (Campus Kaiserslautern) in the winter term of 2023-2024. The seminar gave Joisten and Barbagallo the opportunity to present and discuss their findings with other researchers and colleagues. In the seminar, the project leaders aimed to show that the main ethical risk of using anthropomorphic language in relation to AI is not the humanization of AI, but rather the mechanization of human life.

Tangible Outcomes

  1. Research seminar “Ethics and AI” for PhD students, postdoctoral scholars, and research fellows at the University of Kaiserslautern-Landau (winter term 2023-2024): https://www.kis.uni-kl.de/campus/all/event.asp?gguid=0x36775C8D0E68413D87103D67948EF327&tguid=0x3E97C1E01A714B9F9C0BEE5AB4FFE5FC