• Stockholm, 16 November 2022
  • Location: Grand Hôtel (Södra Blasieholmshamnen 8, 103 27 Stockholm)
  • Rooms Uppsala (coffee breaks, lunch, poster exhibition) and New York (conference)


  • Virginia Dignum (Umeå University, Sweden)
  • Paul Lukowicz (DFKI)
  • Albrecht Schmidt (LMU Munich)
  • John Shawe-Taylor (UCL)

Event Contact


The European HumanE AI Network aims to leverage the synergies between the involved centers of excellence to develop the scientific foundations and technological breakthroughs needed to shape the AI revolution in a direction that is beneficial to humans both individually and societally, and that adheres to European ethical values and social, cultural, legal, and political norms. The core challenge is the development of robust, trustworthy AI systems capable of what could be described as “understanding” humans, adapting to complex real-world environments, and appropriately interacting in complex social settings. The aim is to facilitate AI systems that enhance human capabilities and empower individuals and society as a whole, while respecting human autonomy and self-determination.

This conference aims to present and highlight the research directions, methods and results of the network’s activities, with a specific focus on our micro-projects: our unique collaboration model that allows agile interaction between partners, interfacing with related activities outside the project, and easy engagement with researchers outside the consortium.

The conference will take place at the Grand Hôtel in Stockholm on 16 November 2022, as a twin event with the 3rd Conference on AI for Humanity and Society (AI4HS), organised by WASP-HS in the same location on 17 November 2022. All participants are invited to attend both conferences (please note that you need to register separately for the AI4HS conference).

All HumanE AI network members and collaboration partners are invited to submit a proposal for the HumanE AI conference. All micro projects (past and current) are strongly advised to submit.


Time Speaker Description
08:00-09:00 Participants are welcome to pin their posters on the poster boards (room Uppsala)
09:00-09:15 Paul Lukowicz Welcome and Introduction
09:15-10:00 Danica Kragic Jensfelt “We can use Robots: acting and interacting”
10:00-11:00 Poster session and coffee break (room Uppsala) Posters (see list below), presented either on paper or on a laptop
11:00-12:30 Paper session 1 “Humans and AI”
Jonne Maas, Luís Gustavo Ludescher and Juan M. Durán: The Role of an AI Designer: design choices and their epistemic and moral limitations
Inês Lobo, Inês Batina, Jennifer Renoux, Janin Koch and Rui Prada: A Human-AI Collaboration Study using the Geometry Friends Game
Inês Lobo, Diogo Rato, Rui Prada and Frank Dignum: Socially Aware Interactions: From Dialogue Trees to Natural Language Dialogue Systems
Sahan Bulathwela, María Pérez Ortiz, Erik Novak, Daniel Loureiro, Emine Yilmaz, Joao Vinagre, Alípio Jorge and John Shawe-Taylor: Towards Educational Recommenders with Computational Narratives
Maria Tsfasman, Kristian Fenech, Morita Tarvirdians, Andras Lorincz, Catholijn Jonker and Catharine Oertel: Towards creating a conversational memory for long-term meeting support: Predicting memorable moments in multi-party conversations through eye-gaze
12:30-13:30 Lunch (room Uppsala)
13:30-14:15 Panel with the HAI-Net Responsible AI Board and others: Trustworthy HAI
  • Dagmar Monett (Responsible AI Board)
  • Jennifer Cobbe (Responsible AI Board)
  • Ulises Cortes (WP5)
  • John Shawe-Taylor  (WP1)
  • Albrecht Schmidt or someone from Industry (WP7)
  • Moderator: Virginia Dignum
14:15-15:30 Paper session 2 “XAI/Fairness/Ethics”
James Crowley: Comprehension, Explanation and Learning: Core Research Challenges for Collaborative AI
Ali A. Khoshvishkaie, Petrus Mikkola, Pierre-Alexandre Murena, Mustafa Mert Çelikok, Frans A. Oliehoek and Samuel Kaski: AI-assistant to mitigate confirmation bias in cooperative Bayesian optimization
Elisabeth Stockinger, Anna Jonsson, Luís G. Ludescher, Jonne Maas and Virginia Dignum: A Value-Based Political Guidance Model
Fosca Giannotti and Dino Pedreschi: Reporting on the results of the ADG-ERC XAI project: Science and Technology for eXplanation of AI-based decision-making
János Kertész, Letizia Milli, Virginia Morini, Valentina Pansanella, Dino Pedreschi, Giulio Rossetti and Tiziano Squartini: Investigating polarization: cognitive and algorithmic biases and external effects on opinion formation
15:30-16:00 Break (room Uppsala)
Paper session 3 “ML/NLP/KR”
Bettina Fazzinga, Andrea Galassi and Paolo Torroni: A Privacy-Preserving Dialogue System Based on Argumentation
Francesco Spinnato, Riccardo Guidotti, Mirco Nanni, Daniele Maccagnola, Giulia Paciello and Antonio Bencini Farina: Explaining Crash Predictions on Multivariate Time Series Data
Francesco Pisani, Luciano Caroprese, Bruno Veloso, Matthias König, Giuseppe Manco, Holger Hoos and Joao Gama: A Graph-Based Drift-Aware Data Cloning Process
Nina Khairova, Fabrizio Lo Scudo, Bogdan Ivasiuk, Andrea Galassi, Carmela Comito, Giuseppe Manco, Raivis Skadins and Paolo Torroni: An Event-based Dataset around Russia’s Invasion of Ukraine News coverage
Lorenzo Valerio, Chiara Boldrini, Andrea Passarella, Janos Kertesz and Gerardo Iniguez: Social AI Gossiping
Alessandro Daniele, Emile van Krieken, Luciano Serafini and Frank van Harmelen: Refining neural network predictions using background knowledge


  1. Annalisa Bosco, Matteo Filippini, Davide Borra, Elsa A. Kirchner and Patrizia Fattori    Prediction of static and perturbed reach goals from movement kinematics.
  2. Sencer Melih Deniz, Hamraz Javaheri, Juan Felipe Vargas, Dogan Urgun, Fariza Sabit, Mahmut Tok, Mehmet Haklidir, Bo Zhou and Paul Lukowicz    Neural Mechanism in Human Brain Activity During Weight Lifting
  3. Jennifer Renoux, Neziha Akalin, Joana Campos, Filipa Correia, Lucas Morillo-Mendez, Fernando P. Santos and Ana Paiva    The impact of game outcomes and Agent-based Feedback on Prosociality in Social Dilemmas
  4. Lorenzo Bellomo, Virginia Morini, Giulio Rossetti, Dino Pedreschi and Paolo Ferragina    Source Selection Bias in the European Media Landscape
  5. Lorenzo Bertolini and Julie Weeds    Testing Large Language Models on Compositionality and Inference in the Absence of Biases
  6. Inês Lobo, Diogo Rato, Rui Prada, Giulia Andrighetto and Eugenia Polizzi    Using Dictator Game Data to Identify Patterns of Behaviour and Beliefs on Norms
  7. Jan Hajic, Zdenka Uresova, Eva Fučíková, Karolina Zaczynska, Peter Bourgonje and Georg Rehm    Multilingual Event-Type-Anchored Ontology for Natural Language Understanding
  8. Jan Hajic, Zdenka Uresova, Eva Fučíková, Thierry Declerck, Marco Rospocher, Francesco Corcoglioniti and Alessio Palmero Aprosio    Multilingual SynSemClass for the Semantic Web
  9. Helena Lindgren    Managing Breakdown Situations and Co-Creating We-Intention in Human-AI Collaboration for Improving Health
  10. Antti Oulasvirta, Julien Gori and Firooz Hossein    Optimal Alerting
  11. Andreas Theodorou, Juan Carlos Nieves and Virginia Dignum    AI and the lack of Sustainable Development
  12. Sahan Bulathwela, Shenal Pussegoda, Maria Perez-Ortiz, Davor Orlic, Emine Yilmaz, Yvonne Rogers and John Shawe-Taylor    X5LEARN: Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site Interface for Access to Openly Licensed Educational Materials
  13. Bruno Veloso, Luciano Caroprese, Matthias König, Giuseppe Manco, Holger Hoos and Joao Gama    Online Deep-AUTOML
  14. Sebastian Stefan Feger, Andrea Esposito, Giuseppe Desolda and Florian Müller    Making for Everyone: Requirements Research on Voice-Based Digital Modeling
  15. Carmela Comito, Andrea Galassi, Bogdan Ivasiuk, Nina Khairova, Fabrizio Lo Scudo, Giuseppe Manco, Raivis Skadins and Paolo Torroni    Comparative analysis of Ukrainian war news: automatic detection of opinions, sentiment, and propaganda
  16. Richard Benjamins, Javier Carro, Pedro A. de Alarcón, Luis Suárez, Luis Lamiable and Andrés Herguedas García    Internet of Things and Artificial Intelligence to Improve air quality in cities
  17. Pierpaolo Resce, Lukas Vorwerk, Zhiwei Han, Giuliano Cornacchia, Omid Isfahani Alamdari, Mirco Nanni and Luca Pappalardo    Connected Vehicle Simulation Framework for Parking Occupancy Prediction
  18. Lorenzo Bertolini, Valentina Elce, Giulio Bernardi and Julie Weeds    Towards Automatic Scoring of Dream Reports
  19. Dimitris Pappas, Ioannis Lyris, George Kountouris and Haris Papageorgiou    A Neurosymbolic Question Answering System Combining Structured and Unstructured Biomedical Knowledge
  20. Giuliano Cornacchia, Matteo Böhm, Giovanni Mauro, Mirco Nanni, Dino Pedreschi and Luca Pappalardo    How Routing Strategies Impact Urban Emissions
  21. Ana Nogueira, Andrea Pugnana, Salvatore Ruggieri, Dino Pedreschi and Joao Gama    Methods and Tools for Causal Discovery and Causal Inference
  22. Jesus Cerquides, Mehmet Oğuz Mülâyim and Jose Luis Fernandez-Marquez    Crowdnalysis: a Python library for consensus in citizen science crowdsourcing projects
  23. Victor Schetinger, Silvia Miksch, Thomas Eiter, Rafael Kiesel and Yuanting Liu    The Combinatorics of HumaneAI
  24. Kristian Fenech, Sean Bergeron, Ádám Fodor, Rachid Saboundji, Catharine Oertel and Andras Lorincz    Automatic estimation of the perceived personality of small groups

The EU-funded HumanE-AI-Net project brings together leading European research centres, universities and industrial enterprises into a network of centres of excellence. Leading global artificial intelligence (AI) laboratories will collaborate with key players in areas such as human-computer interaction, cognitive, social and complexity sciences. The project aims to draw researchers out of their narrowly focused fields and connect them with people exploring AI on a much wider scale. The challenge is to develop robust, trustworthy AI systems that can ‘understand’ humans, adapt to complex real-world environments and interact appropriately in complex social settings. HumanE-AI-Net will lay the foundations for a new science of AI that is based on European values and closer to Europeans.


  • Roel Dobbe (TU Delft)
  • Ana Valdivia (King's College London)

Event Contact

  • Maria Perez-Ortiz (University College London)


Time Speaker Description
Monday 13. June 2022 Workshops, Tutorials and other Events
Tuesday 14. June 2022 Workshops, Tutorials and other Events


HHAI-2022 workshops will provide a platform for discussing Hybrid Human-Artificial Intelligence in more informal settings and for a broad audience. We invite proposals for full-day and half-day events during the two days leading up to the main conference. Registration for the main conference is expected; arrangements for non-traditional conference attendees can be requested.

The goal of the workshops is to bring together academics, professionals and users of technology to better understand, from different perspectives, the socio-technical benefits, risks and limitations of artificial intelligence when it interacts with humans. We therefore encourage workshops presenting either broad concepts of human-artificial intelligence interaction or specific cases. We invite submissions for events that foster cross-disciplinary interaction, scientific discourse, and creative and critical reflection, rather than just being mini-conferences. To that end, we offer organizers flexibility in choosing the format that best suits the goals of their event. We especially welcome submissions from communities that are not usually featured prominently in artificial intelligence events and conferences.

Important Dates

January 31, 2022: Workshop proposals due
February 7, 2022: Workshop proposal acceptance notification
February 14, 2022: Deadline for announcing the Workshops Call for Papers/Contributions
April 1, 2022: Submission deadline for contributions to the workshops
April 29, 2022: Recommended deadline for paper acceptance notification
June 13-14, 2022: HHAI2022 Workshops


  • Stefan Schlobach (Vrije Universiteit Amsterdam)
  • Maria Perez-Ortiz (University College London)
  • Myrthe Tielman (TU Delft)
  • Ana Valdivia (King's College London)
  • Roel Dobbe (TU Delft)
  • Shenghui Wang (University of Twente)

Event Contact

  • Maria Perez-Ortiz (University College London)


Time Speaker Description
Monday 13. June 2022 TBC Workshops, Tutorials and other Events
Tuesday 14. June 2022 TBC Workshops, Tutorials and other Events
Wednesday-Friday 15-17. June 2022 TBC Main Research Program


Hybrid Human Artificial Intelligence (HHAI2022) is the first international conference focusing on the study of artificially intelligent systems that cooperate synergistically, proactively and purposefully with humans, amplifying instead of replacing human intelligence.

HHAI2022 is organised by the Dutch Hybrid Intelligence Center and the European HumaneAI Network as the first in what we intend to become a series of conferences on Hybrid Human Artificial Intelligence.

HHAI2022 will be an in-person event at the VU Amsterdam, The Netherlands, and will be organized as a single-track conference.

HHAI aims for AI systems that assist humans and vice versa, emphasizing the need for adaptive, collaborative, responsible, interactive and human-centered intelligent systems that leverage human strengths and compensate for human weaknesses, while taking into account social, ethical and legal considerations. This field of study is driven by current developments in AI, but also requires fundamentally new approaches and solutions. In addition, we need collaboration with areas such as HCI, cognitive and social sciences, philosophy & ethics, complex systems, and others. In this first international conference, we invite scholars from these fields to submit their best work on Hybrid Human-Artificial Intelligence, whether original, in progress, or visionary.

For more information, please visit our website at https://www.hhai-conference.org/

About the Course

The Advanced Course on AI (ACAI) is a specialized course in Artificial Intelligence sponsored by EurAI in odd-numbered years. The theme of the 2021 ACAI School is Human-Centered AI.

The notion of “Human Centric AI” increasingly dominates the public AI debate in Europe[1]. It postulates a “European brand” of AI, beneficial to humans at both the individual and societal level, characterized by a focus on supporting and empowering humans as well as incorporating, by design, adherence to appropriate ethical standards and values such as privacy protection, autonomy (human in control), and non-discrimination. Stated this way (which is how it mostly appears in the political debate), it may seem more like a broad, vague wish list than a tangible scientific or technological concept. Yet, at second glance, it turns out to be closely connected to some of the most fundamental challenges of AI[1].

Within ACAI 2021, researchers from the HumanE-AI-Net consortium will teach courses related to the state of the art in the above areas, focusing not just on narrow AI questions but emphasising issues related to the interface between AI and Human-Computer Interaction (HCI), Computational Social Science (and Complexity Science), as well as ethics and legal issues. We intend to provide the attendees with the basic knowledge needed to design, implement, operate and research the next generation of Human Centric AI systems, focused on enhancing human capabilities and optimally cooperating with humans at both the individual and the social level.

ACAI 2021 will have a varied format, including keynote presentations, labs/hands-on sessions, short tutorials on cutting edge topics and longer in-depth tutorials on main topics in AI.

Please check for updates!


Learning and Reasoning with Human in the Loop

Learning, reasoning, and planning are interactive processes involving close synergistic collaboration between AI system(s) and user(s) within a dynamic, possibly open-ended real-world environment. Key gaps in knowledge and technology that must be addressed toward this vision include combining symbolic and subsymbolic learning, explainability, translating a broad, vague notion of “fairness” into concrete algorithmic representations, continuous and incremental learning, compositionality of models, and ways to adequately quantify and communicate model uncertainty.

Multimodal Perception

Human interaction and human collaboration depend on the ability to understand the situation and reliably assign meanings to events and actions. People infer such meanings either directly from subtle cues in behavior, emotions, and nonverbal communications, or indirectly from context and background knowledge. This requires not only the ability to sense subtle behavioral, emotional, and social cues, but also the ability to automatically acquire and apply background knowledge to provide context. The acquisition must be automatic because such background knowledge is far too complex to be hand-coded. Research on artificial systems with such abilities requires a strong foundation for the perception of humans, human actions, and human environments. In HumanE AI Net, we will provide this foundation by building on recent advances in multimodal perception and modelling sensory, spatiotemporal, and conceptual phenomena.

Representations and Modeling

Perception is the association of external stimuli to an internal model. Perception and modelling are inseparable. The human ability to correctly perceive and interpret complex situations, even when given limited and/or noisy input, is inherently linked to a deep, differentiated understanding based on human experience. A new generation of complex modelling approaches is needed to address this key challenge of Human Centric AI, including hybrid representations that combine symbolic, compositional approaches with statistical and latent representations. Such hybrid representations will allow the benefits of data-driven learning to be combined with knowledge representations that are more compatible with the way humans view and reason about the world around them.

Human Computer Interaction (HCI)

Beyond considering the human in the loop, the goal of human-AI research is to study and develop methods for combined human-machine intelligence, where AI and humans work in cooperation and collaboration. This includes principled approaches to support the synergy of human and artificial intelligence, enabling humans to continue doing what they are good at while remaining in control when making decisions. It has been proposed that AI research and development should follow three objectives: (i) to technically reflect the depth characterized by human intelligence; (ii) to improve human capabilities rather than replace them; and (iii) to focus on AI’s impact on humans. There has also been a call for the HCI community to play an increasing role in realizing this vision, by providing their expertise in the following: human-machine integration/teaming, UI modelling and HCI design, transference of psychological theories, enhancement of existing methods, and development of HCI design standards.

Social AI

As increasingly complex sociotechnical systems emerge, consisting of many (explicitly or implicitly) interacting people and intelligent and autonomous systems, AI acquires an important societal dimension. A key observation is that a crowd of (interacting) intelligent individuals is not necessarily an intelligent crowd. The aggregated network and societal effects of AI, and their (positive or negative) impacts on society, are neither sufficiently discussed in public nor sufficiently addressed by AI research, despite the striking importance of understanding and predicting the aggregated outcomes of sociotechnical AI-based systems and related complex social processes, and of avoiding their harmful effects. Such effects are a source of a whole new set of explainability, accountability, and trustworthiness issues, even assuming that we can solve those problems for an individual machine-learning-based AI system.

Societal, Legal and Ethical Impact

Every AI system should operate within an ethical and social framework, in understandable, verifiable and justifiable ways. Such systems must in any case operate within the bounds of the rule of law, incorporating fundamental-rights protection into the AI infrastructure. Theory and methods are needed for the responsible design of AI systems, as well as to evaluate and measure the ‘maturity’ of systems in terms of compliance with legal, ethical and societal principles. This is not merely a matter of articulating legal and ethical requirements but involves robustness, social and interactivity design. Concerning the ethical and legal design of AI systems, we will clarify the difference between legal and ethical concerns, as well as their interaction, and ethical and legal scholars will work side by side to develop both legal protection by design and value-sensitive design approaches.

European Association for
Artificial Intelligence




The 2021 ACAI School will take place on 11-14 October 2021.

We are going to use several locations, all very close to each other, which allows us to comply with the maximum-occupancy restrictions:
• 3IT: Salzufer 6, 10587 Berlin (entrance Otto-Dibelius-Straße)
• Forum Digital Technologies (FDT) // CINIQ Center: Salzufer 6, 10587 Berlin (main venue; entrance Otto-Dibelius-Straße)
• Loft am Salzufer: Salzufer 13-14, 10587 Berlin
• Hörsaal HHI, Fraunhofer Institute for Telecommunications (HHI): Einsteinufer 37, 10587 Berlin (across the bridge)

There will be a possibility to participate in the School's activities online.

According to the current COVID-19 regulations in Germany, we are restricted to a maximum of 60 students attending in person. The format of the event is subject to the COVID-19 regulations in force at the time of the School.

The program will be updated regularly. (Download the program)

Monday, 11 October
09.00-09.30 Registration (venue: Loft)
09.30-10.00 Welcome and Introduction (venue: Loft)
10.00-12.00 Mythical Ethical Principles for AI and How to Operationalise Them (venue: Loft)
Deep Learning Methods for Multimodal Human Activity Recognition (venue: 3IT)
Social Artificial Intelligence (venue: FDT)
12.00-13.00 Keynote: Yvonne Rogers (venue: Loft)
13.00-14.00 Lunch
14.00-18.00 Why and How Should We Explain in AI? (venue: Loft)
Multimodal Perception and Interaction with Transformers (venue: 3IT)
Social Artificial Intelligence (venue: FDT)
18.00-20.00 Welcome Reception and Student Poster Mingle (venue: Loft)

Tuesday, 12 October
09.00-13.00 Ethics and AI: An Interdisciplinary Approach (venue: Hörsaal HHI)
Machine Learning With Neural Networks (venue: FDT)
Social Simulation for Policy Making (venue: 3IT)
13.00-14.00 Lunch
14.00-16.00 Learning Narrative Frameworks from Multimodal Inputs (venue: 3IT)
Interactive Robot Learning (venue: FDT)
Argumentation in AI (venue: Hörsaal HHI)
16.00-17.00 Keynote: Atlas of AI: Mapping the Wider Impacts of AI by Kate Crawford
17.00-18.00 EurAI Dissertation Award: Unsupervised machine translation by Mikel Artetxe

Wednesday, 13 October
09.00-13.00 Law for Computer Scientists (venue: 3IT)
Computational Argumentation and Cognitive AI (venue: FDT)
Operationalising AI Ethics: Conducting Socio-Technical Assessment (venue: Hörsaal HHI)
13.00-14.00 Lunch
14.00-18.00 Explainable Machine Learning for Trustworthy AI (venue: FDT)
Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing (venue: 3IT)
Operationalising AI Ethics: Conducting Socio-Technical Assessment (venue: Hörsaal HHI)

Thursday, 14 October
09.00-11.00 Children and the Planet - The Ethics and Metrics of "Successful" AI (venue: Loft)
Learning and Reasoning with Logic Tensor Networks (venue: FDT)
Writing Science Fiction as An Inspiration for AI Research and Ethics Dissemination (venue: 3IT)
11.00-13.00 Introduction to intelligent UIs (venue: 3IT)
11.00-14.00 Student mentorship meetings with lunch (venue: Loft)
14.00-16.00 HumaneAI-net Micro-Project Presentation (venue: Loft)
16.00-18.00 Challenges and Opportunities for Human-Centred AI: A dialogue between Yoshua Bengio and Ben Shneiderman, moderated by Virginia Dignum (venue: Loft)
18.00-20.00 ACAI 2021 Closing Reception/Welcome HumaneAI-net (venue: Loft)




Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing, Mehul Bhatt, Örebro University - CoDesign Lab EU; Jakob Suchan, University of Bremen
(see Tutorial Outline)

Ethics and AI: An Interdisciplinary Approach, Guido Boella, Università di Torino; Maurizio Mori, Università di Torino
(see Tutorial Outline)

Children and the Planet - The Ethics and Metrics of "Successful" AI, John Havens, IEEE; Gabrielle Aruta, Filo Sofi Arts
(see Tutorial Outline)

Mythical Ethical Principles for AI and How to Operationalise Them, Marija Slavkovik, University of Bergen
(see Tutorial Outline)

Operationalising AI Ethics: Conducting Socio-Technical Assessment, Andreas Theodorou, Umeå University & VeRAI AB; Virginia Dignum, Umeå University & VeRAI AB
(see Tutorial Outline)

Explainable Machine Learning for Trustworthy AI, Fosca Giannotti, CNR; Riccardo Guidotti, University of Pisa
(see Tutorial Outline)

Why and How Should We Explain in AI?, Stefan Buijsman, TU Delft
(see Tutorial Outline)

Interactive Robot Learning, Mohamed Chetouani, Sorbonne Université
(see Tutorial Outline)

Multimodal Perception and Interaction with Transformers, Francois Yvon, Univ Paris Saclay; James Crowley, INRIA and Grenoble Institut Polytechnique
(see Tutorial Outline)

Argumentation in AI (Argumentation 1), Bettina Fazzinga, ICAR-CNR
(see Tutorial Outline)

Computational Argumentation and Cognitive AI (Argumentation 2), Emma Dietz, Airbus Central R&T; Antonis Kakas, University of Cyprus; Loizos Michael, Open University of Cyprus
(see Tutorial Outline)

Social Simulation for Policy Making, Frank Dignum, Umeå University; Loïs Vanhée, Umeå University; Fabian Lorig, Malmö University
(see Tutorial Outline)

Social Artificial Intelligence, Dino Pedreschi, University of Pisa; Frank Dignum, Umeå University
(see Tutorial Outline)

Introduction to Intelligent User Interfaces (UIs), Albrecht Schmidt, LMU Munich; Sven Mayer, LMU Munich; Daniel Buschek, University of Bayreuth
(see Tutorial Outline)

Machine Learning With Neural Networks, James Crowley, INRIA and Grenoble Institut Polytechnique
(see Tutorial Outline)

Deep Learning Methods for Multimodal Human Activity Recognition, Paul Lukowicz, DFKI/TU Kaiserslautern

Learning and Reasoning with Logic Tensor Networks, Luciano Serafini, Fondazione Bruno Kessler

Learning Narrative Frameworks From Multi-Modal Inputs, Luc Steels, Universitat Pompeu Fabra Barcelona
(see Tutorial Outline)

Law for Computer Scientists, Mireille Hildebrandt, Vrije Universiteit Brussel; Arno De Bois, Vrije Universiteit Brussel
(see Tutorial Outline)

Writing Science Fiction as An Inspiration for AI Research and Ethics Dissemination, Carme Torras, UPC
(see Tutorial Outline)



Yoshua Bengio, MILA, Quebec

Yoshua Bengio is recognized worldwide as one of the leading experts in artificial intelligence, most known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” together with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO. In 2019, he was awarded the prestigious Killam Prize and in 2021, he became the second most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, and an Officer of the Order of Canada. Concerned about the social impact of AI and the objective that AI benefit all, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.

Kate Crawford

Kate Crawford, Professor, is a leading international scholar of the social and political implications of artificial intelligence. Her work focuses on understanding large-scale data systems in the wider contexts of history, politics, labor, and the environment. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at Microsoft Research New York, and an Honorary Professor at the University of Sydney. She is the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris, where she co-leads the international working group on the Foundations of Machine Learning. Over her twenty-year research career, she has also produced groundbreaking creative collaborations and visual investigations. Her project Anatomy of an AI System with Vladan Joler won the Beazley Design of the Year Award and is in the permanent collections of the Museum of Modern Art in New York and the V&A in London. Her collaboration with the artist Trevor Paglen produced Training Humans, the first major exhibition of the images used to train AI systems. Their investigative project, Excavating AI, won the Ayrton Prize from the British Society for the History of Science. Crawford's latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press), has been described as “a fascinating history of data” by the New Yorker and a “timely and urgent contribution” by Science, and was named one of the best books on technology in 2021 by the Financial Times.

Yvonne Rogers, UCLIC - UCL Interaction Centre

Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC and a deputy head of the Computer Science department at University College London. Her research interests are in the areas of interaction design, human-computer interaction and ubiquitous computing. A central theme of her work is concerned with designing interactive technologies that augment humans. The current focus of her research is on human-data interaction and human-centered AI. Central to her work is a critical stance towards how visions, theories and frameworks shape the fields of HCI, cognitive science and Ubicomp. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in the wild studies) and far-reaching research agendas (e.g., "Being Human: HCI in 2020"). She has also published two monographs "HCI Theory: Classical, Modern and Contemporary." and "Research in the Wild." with Paul Marshall. She is a fellow of the ACM, BCS and the ACM CHI Academy. 

Ben Shneiderman, University of Maryland

Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, NAI, and the Visualization Academy and a Member of the U.S. National Academy of Engineering. He has received six honorary doctorates in recognition of his pioneering contributions to human-computer interaction and information visualization. His widely-used contributions include the clickable highlighted web-links, high-precision touchscreen keyboards for mobile devices, and tagging for photos. Shneiderman’s information visualization innovations include dynamic query sliders for Spotfire, the development of treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis for electronic health records. Ben is the lead author of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). He co-authored Readings in Information Visualization: Using Vision to Think (1999) and Analyzing Social Media Networks with NodeXL (2nd edition, 2019). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, 2016) describes how research can produce higher impacts. His forthcoming book on Human-Centered AI will be published by Oxford University Press in January 2022.

European Association for
Artificial Intelligence


Mikel Artetxe at Facebook AI Research has been selected as the winner of the EurAI Doctoral Dissertation Award 2021.

In his PhD research, Mikel Artetxe has fundamentally transformed the field of machine translation by showing that unsupervised machine translation systems can be competitive with traditional, supervised methods. This is a game-changing finding which has already made a huge impact on the field. To solve the challenging problem of unsupervised machine translation, he first introduced an innovative strategy for aligning word embeddings from different languages, which are then used to induce bilingual dictionaries in a fully automated way. These bilingual dictionaries are subsequently used in combination with monolingual language models, as well as denoising and back-translation strategies, to produce a full machine translation system.

The EurAI Doctoral Dissertation Award will be officially presented at ACAI 2021 on Tuesday, October 12th, at 17.00 (CET). Mikel Artetxe will also give a talk:

Title: Unsupervised machine translation

Abstract: While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train machine translation systems in an unsupervised way, using monolingual corpora alone. Most existing approaches rely on either cross-lingual word embeddings or deep multilingual pre-training for initialization, and further improve this system through iterative back-translation. In this talk, I will give an overview of this area, focusing on our own work on cross-lingual word embedding mappings, and both unsupervised neural and statistical machine translation.



The number of places for on-site participation is limited. Registration is now closed.

                Early-bird registration (until 15 September)   Late registration (after 16 September)
(PhD) Student   250€                                           300€
Non-student     400€                                           450€

Members of EurAI member societies are eligible for a discount (30€).

Students attending on-site will have an opportunity to apply for scholarships.

By registering, you

  • commit to attend the ACAI2021 School and do the assignments (where applicable),
  • agree to receive further instructions,
  • confirm having acquired approval for participation in ACAI2021 School from your supervisor (where applicable).

Please note, the registration fee does not cover accommodation or travel costs.

Please check the information on entry restrictions, testing and quarantine regulations in Germany.


Virginia Dignum, Umeå University
ACAI 2021 General Chair


Paul Lukowicz, German Research Center for Artificial Intelligence
ACAI 2021 General Chair


Mohamed Chetouani, Sorbonne Université
ACAI 2021 Publications Chair


Davor Orlic, Knowledge 4 All Foundation
ACAI 2021 Publicity Chair


Tatyana Sarayeva, Umeå University
ACAI 2021 Organising Chair


Venue: Forum Digital Technologies // CINIQ Center, Salzufer 6 (main venue), 10587 Berlin

Travelling and staying in Berlin: ACAI 2021 school participants are responsible for their own accommodation and travel to Berlin.

Visa: The organizing committee can provide ACAI 2021 school participants with an invitation letter. For the invitation letter, we need proof of enrollment at your university and a recommendation letter from your supervisor describing why it is important for you to attend ACAI 2021. The ACAI 2021 school participant is responsible for the visa application.

COVID-19 guidance: Please check the information on entry restrictions, testing and quarantine regulations in Germany.
