Analyzing perceived task difficulty measured through EEG and reliance behavior in a human-AI decision-making experiment
For AI to be useful in high-stakes applications, humans need to be able to recognize when they can trust AI and when they cannot. In many cases, however, humans tend to overrely on AI, i.e., they adopt AI recommendations even when those recommendations are inadequate. Explanations of AI output are meant to help users calibrate their trust in AI to appropriate levels, but they often increase overreliance instead. In this microproject, we aim to investigate whether and how overreliance on AI and the effect of explanations depend on perceived task difficulty. We plan to run an AI-assisted decision-making experiment in which participants solve a series of decision tasks (the particular type of task is to be determined) with support from an AI model, both with and without explanations. Along with participants' reliance behavior, we will measure perceived task difficulty for each task and participant through EEG data.
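To illustrate how perceived task difficulty might be derived from the EEG recordings, the sketch below computes a per-trial theta/alpha band-power ratio, a commonly used proxy for cognitive workload. The proposal does not fix a concrete operationalization, so this is a minimal sketch under that assumption; all function names, parameters, and the synthetic demo data are illustrative, not part of the planned study design.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Mean Welch PSD of a single-channel epoch within a frequency band (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def workload_index(epoch, fs):
    """Theta/alpha power ratio as a rough per-trial difficulty proxy (assumption)."""
    theta = band_power(epoch, fs, (4.0, 8.0))
    alpha = band_power(epoch, fs, (8.0, 13.0))
    return theta / alpha

# Demo on synthetic data standing in for one 10 s trial epoch at 256 Hz.
fs = 256
rng = np.random.default_rng(0)
epoch = rng.standard_normal(10 * fs)
print(f"workload index: {workload_index(epoch, fs):.3f}")
```

In the actual experiment, `epoch` would be a preprocessed EEG segment time-locked to one decision task, and the resulting index could be compared against reliance behavior across the explanation conditions.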
The results of this study will contribute to the call topic by enhancing our understanding of how AI decision support affects human decision-making. We expect to gain insights into how AI can complement human decision-making competence while preserving human agency.
Output
A publication at a high-profile HCI conference such as CHI or IUI.
Project Partners
- fortiss GmbH, Tony Zhang
- Türkiye Bilimsel ve Teknolojik Araştırma Kurumu (TUBITAK), Sencer Melih Deniz
Primary Contact
Tony Zhang, fortiss GmbH