HHAI23 Doctoral Consortium: a series of diverse presentations on Human-AI interaction and collaboration
During the #HHAI23 Doctoral Consortium, we attended a series of diverse presentations on Human-AI interaction and collaboration. This morning, Azade Farshad discussed Representation Learning for Semantic Scene Understanding, contributing to advances in the field of computer vision. #HumanAIInteraction #RepresentationLearning #SemanticSceneUnderstanding
Johanna Wolff discussed Behavior Support Agents and their role in assisting humans in achieving various goals. Her research focuses on creating a framework that enables richer interaction between the agent and the user, aiming for greater effectiveness, flexibility, and responsibility. Using techniques from non-monotonic reasoning, the study aims to develop a knowledge base for the agent that aligns with the user's mental model and can be revised based on user input. A key objective is incorporating explicit, traceable reasoning processes into a logical framework, enabling the agent to explain its outputs.
Finally, Regina Duarte presented on the relevance of Explainable Artificial Intelligence (XAI) models for Human-AI collaboration. She highlighted that current XAI models primarily focus on verifying input-output relationships of AI models, disregarding the importance of context. However, to foster effective collaboration and establish appropriate levels of trust between humans and AI, it is crucial to develop XAI models that promote justified trust. #ExplainableAI #HumanAICollaboration #Trust
Full papers are available here: https://lnkd.in/eWDmZjPR