Social dilemmas are situations in which the interests of individuals conflict with those of the team, and in which the maximum benefit can be achieved only if enough individuals adopt prosocial behavior (i.e. prioritize the team's benefit at their own expense). In a human-agent team, the adoption of prosocial behavior is influenced by various features displayed by the artificial agent, such as transparency or small talk. One feature that remains unstudied is expository communication, i.e. communication intended to provide factual information without favoring any party.

We will implement a public goods game with information asymmetry (i.e. the agents in the game do not all have the same information about the environment) and conduct a user study in which we manipulate the amount of information that the artificial agent provides to the team, examining how varying levels of information increase or decrease human prosocial behavior.
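
To illustrate the incentive structure underlying such a game, the sketch below shows the payoff rule of a standard linear public goods game. The function name and parameter values (endowment, multiplier) are illustrative assumptions, not the actual design of the planned study.

    # Minimal sketch of one round of a linear public goods game (Python).
    # Parameter values are illustrative assumptions, not the study's design.
    def play_round(contributions, endowment=10.0, multiplier=1.6):
        """Each player keeps (endowment - contribution) and receives an
        equal share of the common pool, i.e. the sum of all contributions
        multiplied by `multiplier`."""
        share = multiplier * sum(contributions) / len(contributions)
        return [endowment - c + share for c in contributions]

    # With 1 < multiplier < number of players, contributing nothing is
    # individually optimal, while full contribution maximizes the team's
    # total payoff: this is the social dilemma the study builds on.
    print(play_round([10, 10, 0, 0]))
    # -> [8.0, 8.0, 18.0, 18.0]: free-riders out-earn prosocial contributors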

Output

Submission, by the end of August 2021, to one of the following venues: International Journal of Social Robotics, Behaviour & Information Technology, AAMAS, or CHI.

Release of the game developed for the study on the AI4EU platform, allowing other researchers to use and extend it

Educational component on the ethical aspects of AI, giving a concrete example of how an AI can “manipulate” a human

Project Partners:

  • Örebro University (ORU), Jennifer Renoux
  • Instituto Superior Técnico (IST), Ana Paiva

Primary Contact: Jennifer Renoux, Örebro University