Organizers

Event Contact

Watch the Recording

You can now watch the recording of the entire event at: https://youtu.be/pwOy6KKh_tk.
The total duration of the video is 2 hours 40 minutes.

Attend the Event

Register for an e-mail reminder: https://forms.gle/LamUhKpzN2N9FfPG7

Event Description

Recent developments have enabled humans and AI-based systems to cooperatively work towards joint goals in interactive and collaborative settings. They have not only showcased various application domains and use-cases for such interactive capabilities but also highlighted several issues and opportunities. Experts from Psychology, HCI, AI, and Computer Science will discuss some current progress, challenges, opportunities, and a vision for the future of such systems from a human-centered perspective.

Programme

Time Speaker Description
14:00–14:05 Kashyap Todi Welcome
14:05–14:30 Wendy Mackay Plenary Talk: Human–Computer Partnerships
14:30–14:55 Janet Rafner & Jacob Sherson Plenary Talk: Hybrid Intelligence
14:55–15:00 Break
15:00–15:10 Alessandro Saffiotti Short Talk: Human-AI in artistic co-creation
15:10–15:20 Janin Koch Short Talk: Visual Design Ideation with Machines
15:20–15:30 Silvia Miksch Short Talk: Guide Me in the Analysis: How can Visual Analytics enriched by guidance contribute to gaining insights and decision making
15:30–15:40 Mohamed Chetouani Short Talk: Social Learning Agents: Role of Human Behaviors
15:40–16:00 Panel Discussion
16:00 Event Close

Meet the Speakers and Organisers

Abstracts

Human–Computer Partnership (Wendy Mackay)

In this talk, Wendy Mackay will discuss moving beyond the traditional 'human-in-the-loop' perspective, which focuses on using human input to improve algorithms. She will share her vision for 'computer-in-the-loop', where intelligent algorithms serve to enhance human capabilities.

Hybrid Intelligence: First Rate Humans, Not Second Class Robots (Janet Rafner & Jacob Sherson) 

In light of the recent deep-learning-driven success of AI in both corporate and social life, there has been a growing fear of human displacement and a related call to develop IA (intelligence augmentation) rather than pure AI. In reality, most current AI applications have a significant human-in-the-loop (HITL) component and are therefore arguably more IA than AI already. From here, there are currently two trends in the field. In one trend, increasing machine autonomy is pursued, first by placing the human on the loop in order to verify the result of the machine computation, and then by hoping to take the human completely out of the loop, as in the pursuit of artificial general intelligence. Two main challenges of this approach are a) the value-alignment problem (how do we ensure that the machine satisfies human preferences when we often cannot even express or agree on these ourselves?) and b) the extensive human deskilling that often accompanies algorithmic advances. In our talk, we will discuss how these two challenges may potentially be overcome by the second trend: the pursuit of increasingly intertwined human-machine operation. We will present and give examples of an operational and ambitious framework, hybrid intelligence (HI), in which the two interact synergistically and continually learn from each other.

Human-AI Collaboration in Artistic Co-creation (Alessandro Saffiotti)

Live artistic performance, like music, dance or acting, provides an excellent domain to observe and analyze the mechanisms of human-human collaboration. In this short talk, I use this domain to study human-AI collaboration. I propose a model for collaborative artistic performance, in which an AI system mediates the interaction between a human performer and an artificial one. I will illustrate this model with case studies involving different combinations of human musicians, human dancers, robot dancers, and a virtual drummer.

Visual Design Ideation with Machines (Janin Koch)

In this short talk, Janin Koch will talk about 'MayAI', 'ImageSense', and her current postdoctoral research on how humans and machines can collaborate during visual design ideation, and how this collaboration enhances the creative process and results.

Guide Me in the Analysis: How can Visual Analytics enriched by guidance contribute to gaining insights and decision making (Silvia Miksch)

Visual Analytics is "the science of analytical reasoning facilitated by interactive visual interfaces." Guidance is a "computer-assisted process that aims to actively resolve a knowledge gap encountered by users during an interactive visual analytics session." I will illustrate how guidance-enriched Visual Analytics contribute to gaining insights and decision making.

Social Learning Agents: Role of Human Behaviors (Mohamed Chetouani)

There is an increasing number of situations in which humans and AI systems are acting, deciding, and/or learning. In this short talk, we discuss approaches and models able to capture the specific strategies humans use while they are teaching agents. We will see how social-learning-based approaches make it possible to take such strategies into account in the development of interactive machine learning techniques, in particular in social robotics.

Network

The Humane AI Net project, funded by the European Union Horizon 2020 program, aims to bring together the European AI community to develop the scientific foundations and technological breakthroughs needed to shape the AI revolution in a direction that is beneficial to humans both individually and societally, and that adheres to European ethical values and social, cultural, legal, and political norms. Key specific topics that the project addresses are:

  • AI systems that “understand” humans
  • AI systems that can interact in complex social settings
  • AI systems that enhance human capabilities
  • AI systems that empower both individuals and society as a whole, carefully balancing individual benefits and social impact of their functionality
  • AI systems that respect human autonomy and self-determination
  • Ethics and Legal Protection “by design” in complex dynamic AI systems
  • Ethics and Legal Protection “by design” in complex dynamic AI systems

Free textbook materials

Conference members have free online access to the eBook conference proceedings: enjoy the Human-Centered Artificial Intelligence Advanced Lectures.

About the Course

The Advanced Course on AI (ACAI) is a specialized course in Artificial Intelligence sponsored by EurAI in odd-numbered years. The theme of the 2021 ACAI School is Human-Centered AI.

The notion of “Human Centric AI” increasingly dominates the public AI debate in Europe[1]. It postulates a “European brand” of AI that is beneficial to humans on both the individual and social level and is characterized by a focus on supporting and empowering humans, as well as by incorporating “by design” adherence to appropriate ethical standards and values such as privacy protection, autonomy (human in control), and non-discrimination. Stated this way (which is how it mostly appears in the political debate), it may seem more like a broad, vague wish list than a tangible scientific and technological concept. Yet, at second glance, it turns out to be closely connected to some of the most fundamental challenges of AI[1].

Within ACAI 2021, researchers from the HumanE-AI-Net consortium will teach courses related to the state of the art in the above areas, focusing not just on narrow AI questions but emphasising issues related to the interface between AI and Human-Computer Interaction (HCI), Computational Social Science (and Complexity Science), as well as ethics and legal issues. We intend to provide the attendees with the basic knowledge needed to design, implement, operate, and research the next generation of Human Centric AI systems that focus on enhancing human capabilities and cooperating optimally with humans on both the individual and the social level.

ACAI 2021 will have a varied format, including keynote presentations, labs/hands-on sessions, short tutorials on cutting-edge topics, and longer in-depth tutorials on main topics in AI.

Please check for updates!


Topics

Learning and Reasoning with Human in the Loop

Learning, reasoning, and planning are interactive processes involving close synergistic collaboration between AI system(s) and user(s) within a dynamic, possibly open-ended real-world environment. Key gaps in knowledge and technology that must be addressed toward this vision include combining symbolic and subsymbolic learning, explainability, translating the broad, vague notion of “fairness” into concrete algorithmic representations, continuous and incremental learning, compositionality of models, and ways to adequately quantify and communicate model uncertainty.

Multimodal Perception

Human interaction and human collaboration depend on the ability to understand the situation and reliably assign meanings to events and actions. People infer such meanings either directly from subtle cues in behavior, emotions, and nonverbal communication or indirectly from context and background knowledge. This requires not only the ability to sense subtle behavioral, emotional, and social cues but also the ability to automatically acquire and apply background knowledge to provide context. The acquisition must be automatic because such background knowledge is far too complex to be hand-coded. Research on artificial systems with such abilities requires a strong foundation for the perception of humans, human actions, and human environments. In HumanE AI Net, we will provide this foundation by building on recent advances in multimodal perception and in modelling sensory, spatiotemporal, and conceptual phenomena.

Representations and Modeling

Perception is the association of external stimuli with an internal model; perception and modelling are inseparable. The human ability to correctly perceive and interpret complex situations, even given limited and/or noisy input, is inherently linked to a deep, differentiated understanding based on human experience. A new generation of complex modelling approaches is needed to address this key challenge of Human Centric AI, including hybrid representations that combine symbolic, compositional approaches with statistical and latent representations. Such hybrid representations will allow the benefits of data-driven learning to be combined with knowledge representations that are more compatible with the way humans view and reason about the world around them, as the sketch below illustrates.
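
To make the idea of hybrid representations concrete, here is a minimal, illustrative Python sketch in the spirit of the Logic Tensor Networks tutorial listed later in this document. It is an assumption of this text, not part of the HumanE AI Net deliverables: the predicates, probabilities, and toy numbers are made up. A symbolic rule is turned into a differentiable fuzzy-logic score over the probabilistic outputs of a (hypothetical) neural model, so that background knowledge can regularize data-driven learning.

import numpy as np

# Predicted probabilities from a hypothetical neural model for two predicates
# over a small batch of three objects (the values are made up for illustration).
p_penguin = np.array([0.9, 0.1, 0.8])
p_flies = np.array([0.7, 0.6, 0.1])

def lukasiewicz_implication(a, b):
    # Fuzzy truth value of "a implies b" in Lukasiewicz logic.
    return np.minimum(1.0, 1.0 - a + b)

# Symbolic background knowledge: penguin(x) -> not flies(x),
# evaluated as a degree of truth per object.
rule_truth = lukasiewicz_implication(p_penguin, 1.0 - p_flies)

# Rule violation becomes a penalty that could be added to the usual data-driven loss.
constraint_loss = float(np.mean(1.0 - rule_truth))
print("rule satisfaction per object:", rule_truth)
print("constraint loss:", constraint_loss)

In a full neuro-symbolic system the same score would be computed on tensors inside the training loop and back-propagated together with the supervised loss; the toy numbers above only show the mechanics.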

Human Computer Interaction (HCI)

Beyond considering the human in the loop, the goal of human-AI collaboration is to study and develop methods for combined human-machine intelligence, where AI and humans work in cooperation and collaboration. This includes principled approaches to support the synergy of human and artificial intelligence, enabling humans to continue doing what they are good at while remaining in control when making decisions. It has been proposed that AI research and development should follow three objectives: (i) to technically reflect the depth characterized by human intelligence; (ii) to improve human capabilities rather than replace them; and (iii) to focus on AI’s impact on humans. There has also been a call for the HCI community to play an increasing role in realizing this vision by providing their expertise in the following: human-machine integration/teaming, UI modelling and HCI design, transference of psychological theories, enhancement of existing methods, and development of HCI design standards.

Social AI

As increasingly complex sociotechnical systems emerge, consisting of many (explicitly or implicitly) interacting people and intelligent and autonomous systems, AI acquires an important societal dimension. A key observation is that a crowd of (interacting) intelligent individuals is not necessarily an intelligent crowd. Aggregated network and societal effects of AI and their (positive or negative) impacts on society are not sufficiently discussed in public and not sufficiently addressed by AI research, despite the striking importance of understanding and predicting the aggregated outcomes of sociotechnical AI-based systems and related complex social processes, and of learning how to avoid their harmful effects. Such effects are a source of a whole new set of explainability, accountability, and trustworthiness issues, even assuming that we can solve those problems for an individual machine-learning-based AI system.

Societal, Legal and Ethical Impact

Every AI system should operate within an ethical and social framework, in understandable, verifiable, and justifiable ways. Such systems must in any case operate within the bounds of the rule of law, incorporating fundamental-rights protection into the AI infrastructure. Theory and methods are needed for the responsible design of AI systems, as well as to evaluate and measure the ‘maturity’ of systems in terms of compliance with legal, ethical, and societal principles. This is not merely a matter of articulating legal and ethical requirements but also involves robustness and social and interactivity design. Concerning the ethical and legal design of AI systems, we will clarify the difference between legal and ethical concerns, as well as their interaction, and ethical and legal scholars will work side by side to develop both legal protection by design and value-sensitive design approaches.



The 2021 ACAI School will take place on 11-14 October 2021.

We are going to use several locations, all very close to each other. This allows us to comply with the maximum-occupancy restrictions:
• 3IT: Salzufer 6 (entrance: Otto-Dibelius-Straße)
• Forum Digital Technologies (FDT) // CINIQ Center: Salzufer 6 (main venue), 10587 Berlin (entrance: Otto-Dibelius-Straße)
• Loft am Salzufer: Salzufer 13-14, 10587 Berlin
• Hörsaal HHI, Fraunhofer Institute for Telecommunications (HHI): Einsteinufer 37, 10587 Berlin (across the bridge)

There will be a possibility to participate in the School's activities online.

According to the current COVID-19 regulations in Germany, we are restricted to a maximum of 60 students attending in person. The format of the event is subject to the COVID-19 regulations in force at the time of the School.

The program will be updated regularly. (Download the program)


Monday, 11 October
09.00-09.30 Registration (venue: Loft)
09.30-10.00 Welcome and Introduction (venue: Loft)
10.00-12.00 Mythical Ethical Principles for AI and How to Operationalise Them (venue: Loft)
Deep Learning Methods for Multimodal Human Activity Recognition (venue: 3IT)
Social Artificial Intelligence (venue: FDT)
12.00-13.00 Keynote: Yvonne Rogers (venue: Loft)
13.00-14.00 Lunch
14.00-18.00 Why and How Should We Explain in AI? (venue: Loft)
Multimodal Perception and Interaction with Transformers (venue: 3IT)
Social Artificial Intelligence (venue: FDT)
18.00-20.00 Welcome Reception and Student Poster Mingle (venue: Loft)

Tuesday, 12 October
09.00-13.00 Ethics and AI: An Interdisciplinary Approach (venue: Hörsaal HHI)
Machine Learning With Neural Networks (venue: FDT)
Social Simulation for Policy Making (venue: 3IT)
13.00-14.00 Lunch
14.00-16.00 Learning Narrative Frameworks from Multimodal Inputs (venue: 3IT)
Interactive Robot Learning (venue: FDT)
Argumentation in AI (venue: Hörsaal HHI)
16.00-17.00 Keynote: Atlas of AI: Mapping the Wider Impacts of AI by Kate Crawford
17.00-18.00 EurAI Dissertation Award

Unsupervised machine translation by Mikel Artetxe


Wednesday, 13 October
09.00-13.00 Law for Computer Scientists (venue: 3IT)
Computational Argumentation and Cognitive AI (venue: FDT)
Operationalising AI Ethics: Conducting Socio-Technical Assessment (venue: Hörsaal HHI)
13.00-14.00 Lunch
14.00-18.00 Explainable Machine Learning for Trustworthy AI (venue: FDT)
Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing (venue: 3IT)
Operationalising AI Ethics: Conducting Socio-Technical Assessment (venue: Hörsaal HHI)

Thursday, 14 October
09.00-11.00 Children and the Planet - The Ethics and Metrics of "Successful" AI (venue: Loft)
Learning and Reasoning with Logic Tensor Networks (venue: FDT)
Writing Science Fiction as An Inspiration for AI Research and Ethics Dissemination (venue: 3IT)
11.00-13.00 Introduction to intelligent UIs (venue: 3IT)
11.00-14.00 Student mentorship meetings with lunch (venue: Loft)
14.00-16.00 HumaneAI-net Micro-Project Presentation (venue: Loft)
16.00-18.00 Challenges and Opportunities for Human-Centred AI: A dialogue between Yoshua Bengio and Ben Shneiderman, moderated by Virginia Dignum (venue: Loft)
18.00-20.00 ACAI 2021 Closing Reception/Welcome HumaneAI-net (venue: Loft)


Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing, Mehul Bhatt, Örebro University - CoDesign Lab EU; Jakob Suchan, University of Bremen
(see Tutorial Outline)

Ethics and AI: An Interdisciplinary Approach, Guido Boella, Università di Torino; Maurizio Mori, Università di Torino
(see Tutorial Outline)

Children and the Planet - The Ethics and Metrics of "Successful" AI, John Havens, IEEE; Gabrielle Aruta, Filo Sofi Arts
(see Tutorial Outline)

Mythical Ethical Principles for AI and How to Operationalise Them, Marija Slavkovik, University of Bergen
(see Tutorial Outline)

Operationalising AI Ethics: Conducting Socio-Technical Assessment, Andreas Theodorou, Umeå University & VeRAI AB; Virginia Dignum, Umeå University & VeRAI AB
(see Tutorial Outline)

Explainable Machine Learning for Trustworthy AI, Fosca Giannotti, CNR; Riccardo Guidotti, University of Pisa
(see Tutorial Outline)

Why and How Should We Explain in AI?, Stefan Buijsman, TU Delft
(see Tutorial Outline)

Interactive Robot Learning, Mohamed Chetouani, Sorbonne Université
(see Tutorial Outline)

Multimodal Perception and Interaction with Transformers, Francois Yvon, Univ Paris Saclay; James Crowley, INRIA and Grenoble Institut Polytechnique
(see Tutorial Outline)

Argumentation in AI (Argumentation 1), Bettina Fazzinga, ICAR-CNR
(see Tutorial Outline)

Computational Argumentation and Cognitive AI (Argumentation 2), Emma Dietz, Airbus Central R&T; Antonis Kakas, University of Cyprus; Loizos Michael, Open University of Cyprus
(see Tutorial Outline)

Social Simulation for Policy Making, Frank Dignum, Umeå University; Loïs Vanhée, Umeå University; Fabian Lorig, Malmö University
(see Tutorial Outline)

Social Artificial Intelligence, Dino Pedreschi, University of Pisa; Frank Dignum, Umeå University
(see Tutorial Outline)

Introduction to Intelligent User Interfaces (UIs), Albrecht Schmidt, LMU Munich; Sven Mayer, LMU Munich; Daniel Buschek, University of Bayreuth
(see Tutorial Outline)

Machine Learning With Neural Networks, James Crowley, INRIA and Grenoble Institut Polytechnique
(see Tutorial Outline)

Deep Learning Methods for Multimodal Human Activity Recognition, Paul Lukowicz, DFKI/TU Kaiserslautern

Learning and Reasoning with Logic Tensor Networks, Luciano Serafini, Fondazione Bruno Kessler

Learning Narrative Frameworks From Multi-Modal Inputs, Luc Steels, Universitat Pompeu Fabra Barcelona
(see Tutorial Outline)

Law for Computer Scientists, Mireille Hildebrandt, Vrije Universiteit Brussel; Arno De Bois, Vrije Universiteit Brussel
(see Tutorial Outline)

Writing Science Fiction as An Inspiration for AI Research and Ethics Dissemination, Carme Torras, UPC
(see Tutorial Outline)



Yoshua Bengio, MILA, Quebec

Yoshua Bengio is recognized worldwide as one of the leading experts in artificial intelligence and is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” together with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO. In 2019, he was awarded the prestigious Killam Prize, and in 2021 he became the second most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and of Canada, and an Officer of the Order of Canada. Concerned about the social impact of AI and the objective that AI benefits all, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.

Kate Crawford

Kate Crawford, Professor, is a leading international scholar of the social and political implications of artificial intelligence. Her work focuses on understanding large-scale data systems in the wider contexts of history, politics, labor, and the environment. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at Microsoft Research New York, and an Honorary Professor at the University of Sydney. She is the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris, where she co-leads the international working group on the Foundations of Machine Learning. Over her twenty-year research career, she has also produced groundbreaking creative collaborations and visual investigations. Her project Anatomy of an AI System, with Vladan Joler, won the Beazley Design of the Year Award and is in the permanent collection of the Museum of Modern Art in New York and the V&A in London. Her collaboration with the artist Trevor Paglen produced Training Humans – the first major exhibition of the images used to train AI systems. Their investigative project, Excavating AI, won the Ayrton Prize from the British Society for the History of Science. Crawford's latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press), has been described as “a fascinating history of data” by the New Yorker and a “timely and urgent contribution” by Science, and was named one of the best books on technology in 2021 by the Financial Times.

Yvonne Rogers, UCLIC - UCL Interaction Centre

Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC, and a deputy head of the Computer Science department at University College London. Her research interests are in the areas of interaction design, human-computer interaction, and ubiquitous computing. A central theme of her work is designing interactive technologies that augment humans. The current focus of her research is on human-data interaction and human-centered AI. Central to her work is a critical stance towards how visions, theories, and frameworks shape the fields of HCI, cognitive science, and Ubicomp. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in-the-wild studies), and far-reaching research agendas (e.g., "Being Human: HCI in 2020"). She has also published two monographs, "HCI Theory: Classical, Modern and Contemporary" and, with Paul Marshall, "Research in the Wild". She is a fellow of the ACM, the BCS, and the ACM CHI Academy.

Ben Shneiderman, University of Maryland

Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, NAI, and the Visualization Academy and a Member of the U.S. National Academy of Engineering. He has received six honorary doctorates in recognition of his pioneering contributions to human-computer interaction and information visualization. His widely-used contributions include the clickable highlighted web-links, high-precision touchscreen keyboards for mobile devices, and tagging for photos. Shneiderman’s information visualization innovations include dynamic query sliders for Spotfire, the development of treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis for electronic health records. Ben is the lead author of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). He co-authored Readings in Information Visualization: Using Vision to Think (1999) and Analyzing Social Media Networks with NodeXL (2nd edition, 2019). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, 2016) describes how research can produce higher impacts. His forthcoming book on Human-Centered AI will be published by Oxford University Press in January 2022.



Mikel Artetxe at Facebook AI Research has been selected as the winner of the EurAI Doctoral Dissertation Award 2021.

In his PhD research, Mikel Artetxe has fundamentally transformed the field of machine translation by showing that unsupervised machine translation systems can be competitive with traditional, supervised methods. This is a game-changing finding which has already made a huge impact on the field. To solve the challenging problem of unsupervised machine translation, he first introduced an innovative strategy for aligning word embeddings from different languages, which are then used to induce bilingual dictionaries in a fully automated way. These bilingual dictionaries are subsequently used in combination with monolingual language models, as well as denoising and back-translation strategies, to arrive at a full machine translation system.

The EurAI Doctoral Dissertation Award will be officially presented at ACAI 2021 on Tuesday, October 12th, at 17.00 (CET). Mikel Artetxe will also give a talk:

Title: Unsupervised machine translation

Abstract: While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train machine translation systems in an unsupervised way, using monolingual corpora alone. Most existing approaches rely on either cross-lingual word embeddings or deep multilingual pre-training for initialization, and further improve this system through iterative back-translation. In this talk, I will give an overview of this area, focusing on our own work on cross-lingual word embedding mappings, and both unsupervised neural and statistical machine translation.
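
As a concrete illustration of the first step of this pipeline, the following Python sketch aligns two monolingual embedding spaces with an orthogonal Procrustes mapping and then induces a bilingual dictionary by nearest-neighbour search. It is an illustrative assumption of this text, not Artetxe's actual VecMap implementation: the function names, toy embeddings, and seed dictionary are made up. Real unsupervised systems additionally bootstrap the seed dictionary itself and refine the resulting translation model with denoising and iterative back-translation.

import numpy as np

def orthogonal_map(X, Y):
    # Orthogonal Procrustes: find the rotation W minimizing ||XW - Y||_F,
    # where the rows of X and Y are embeddings of seed translation pairs.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def induce_dictionary(src_emb, tgt_emb, W, k=1):
    # Map source embeddings into the target space and take the k nearest
    # target words by cosine similarity as translation candidates.
    mapped = src_emb @ W
    mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    return np.argsort(-(mapped @ tgt.T), axis=1)[:, :k]

# Toy demonstration: a random "target language" that is an exact rotation of the
# source space, recovered from a 100-pair seed dictionary.
rng = np.random.default_rng(0)
src = rng.normal(size=(500, 50))
src /= np.linalg.norm(src, axis=1, keepdims=True)
true_W, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # hidden ground-truth rotation
tgt = src @ true_W                                   # pretend target-language embeddings
W = orthogonal_map(src[:100], tgt[:100])             # align using the seed pairs only
pred = induce_dictionary(src, tgt, W, k=1)
print("toy dictionary accuracy:", float(np.mean(pred[:, 0] == np.arange(len(src)))))

With real fastText or word2vec vectors the mapping is only approximate, which is why self-learning iterations between the mapping and the induced dictionary are needed in practice.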

 



The number of places for on-site participation is limited. The registration is now closed.

Early-bird registration (15 September) / Late registration (after 16 September)

(PhD) Student: 250€ / 300€
Non-student: 400€ / 450€

Members of EurAI member societies are eligible for a discount (30€).

Students attending on-site will have an opportunity to apply for scholarships.

By registering, you

  • commit to attending the ACAI2021 School and doing the assignments (where applicable),
  • commit to receiving further instructions,
  • confirm having acquired approval for participation in ACAI2021 School from your supervisor (where applicable).

Please note, the registration fee does not cover accommodation or travel costs.

Please check the information on entry restrictions, testing and quarantine regulations in Germany.



Virginia Dignum, Umeå University
ACAI 2021 General Chair

 

Paul Lukowicz, German Research Center for Artificial Intelligence
ACAI 2021 General Chair

 

Mohamed Chetouani, Sorbonne Université
ACAI 2021 Publications Chair

 

Davor Orlic, Knowledge 4 All Foundation
ACAI 2021 Publicity Chair

 

Tatyana Sarayeva, Umeå University
ACAI 2021 Organising Chair



Venue: Forum Digital Technologies // CINIQ Center: Salzufer 6 (main venue), 10587 Berlin

Travelling and staying in Berlin: ACAI 2021 school participants are responsible for their own accommodation and travel to Berlin.

Visa: The organizing committee can provide ACAI 2021 school participants with an invitation letter. For the invitation letter, we need proof of enrollment at your university and a recommendation letter from your supervisor describing why it is important for you to attend ACAI 2021. The participant is responsible for the visa application.

COVID-19 guidance: Please check the information on entry restrictions, testing and quarantine regulations in Germany.

 



Organizers

This event is organized by Prof. Virginia Dignum and other consortium members in charge of the work package on AI Ethics and Responsible AI (WP5). The research in WP5 deals with various ethical issues, such as transparency and the question of whether biases are pre-programmed, unintentionally introduced by the algorithm, or the result of disproportionate data.

About the event

In this virtual event, we'll discuss the issue of defining AI for regulatory and policy purposes. There is an increasing realisation that researchers, regulators, and policymakers are struggling to identify what exactly they are addressing, with views ranging from 'magic' to the whole of computing, and from robotics to very narrow, specific statistical techniques, which renders any attempt at regulation or policy guidance quite useless.

The result of this event will be a research brief proposing a definitional framework to inform the current discussion around AI regulation. Our primary focus is on the current regulatory efforts at the European Parliament and the Commission, but we hope the brief will also be useful to a wider audience, including proposals that contribute to shaping education, auditing, and industry views on AI.

Register here

Participation is free of charge, but registration is required in order to organise the round-table discussions. A link to the Zoom meeting will be sent to all registered participants prior to the event.

Programme

17:00‑17:30 Welcome, fire-start presentations, and Q&A
Marko Grobelnik

There won't be any perfect definition of AI, but we urgently needed a 'good enough' one yesterday

Eva Kaili

EU approach to AI regulation

Catelijne Muller

TBA

Francesca Rossi

Can we really define AI?

Michael Wooldridge

When is an algorithm AI? And if we can't answer that, how can we regulate AI?

17:45‑18:45 Round table discussions
18:45‑19:00 Summary and conclusions

Meet the Speakers

Marko Grobelnik, Artificial Intelligence Laboratory, JSI

Marko Grobelnik is a researcher in the field of Artificial Intelligence. Marko co-leads the Artificial Intelligence Lab at the Jozef Stefan Institute, co-founded the UNESCO International Research Center on AI (IRCAI), and is the CEO of Quintelligence.com. He collaborates with major European academic institutions and with major companies and organisations such as Bloomberg, British Telecom, the European Commission, Microsoft Research, and the New York Times. Marko is a co-author of several books, a co-founder of several start-ups, and is or was involved in over 70 EU-funded research projects in various fields of Artificial Intelligence. Marko represents Slovenia in the OECD AI Committee (ONE AI), the Council of Europe Committee on AI (CAHAI), and the Global Partnership on AI (GPAI). In 2016, Marko became the Digital Champion of Slovenia at the European Commission.

Eva Kaili, Member of the European Parliament

Eva Kaili is a Member of the European Parliament and part of the Hellenic S&D Delegation since 2014. She is the Chair of the Future of Science and Technology Panel in the European Parliament (STOA) and of the Centre for Artificial Intelligence (C4AI), and a Member of the Committees on Industry, Research and Energy (ITRE), Economic and Monetary Affairs (ECON), Budgets (BUDG), and the Special Committee on Artificial Intelligence in a Digital Age (AIDA). Eva is a member of the delegation to the ACP-EU Joint Parliamentary Assembly (DACP), the delegation for relations with the Arab Peninsula (DARP), and the delegation for relations with the NATO Parliamentary Assembly (DNAT). In this capacity, she has been working intensively on promoting innovation as a driving force of the establishment of the European Digital Single Market. She has been the draftsperson of multiple pieces of legislation in the fields of blockchain technology, online platforms, big data, fintech, AI, and cybersecurity, as well as the ITRE draftsperson on the Juncker plan EFSI2 and, more recently, the InvestEU programme. She has also been the Chair of the Delegation to the NATO PA in the European Parliament, focusing on the defence and security of Europe. Prior to that, she was elected as a Member of the Hellenic Parliament (2007-2012) with the PanHellenic Socialist Movement (PASOK). She also worked as a journalist and newscaster prior to her political career. She holds a Bachelor's degree in Architecture and Civil Engineering and a postgraduate degree in European Politics.

Catelijne Muller, ALLAI

Catelijne Muller is President and co-founder of ALLAI, an independent organisation that promotes the responsible development, deployment, and use of AI. She is a former member of the EU High-Level Expert Group on AI, which advised the European Commission on economic, social, legal, and ethical strategies for AI. She is AI Rapporteur at the EESC and was Rapporteur of the EESC opinion on Artificial Intelligence and Society, the EESC opinion on the EU White Paper on AI, and the (upcoming) EESC opinion on the EU AI Regulation. From 2018 to 2020 she headed the EESC Temporary Study Group on AI, and she is a member of the EESC Digital Single Market Observatory. She is a member of the OECD Network of Experts on AI (ONE.AI). She advises the Council of Europe on the impact of AI on human rights, democracy, and the rule of law. Catelijne is a Master of Laws by training and worked as a Dutch qualified lawyer for over 14 years prior to committing her efforts to the topic of Responsible AI.

Michael Wooldridge, Oxford University

Michael Wooldridge (Oxford University) is a Professor of Computer Science and Head of Department of Computer Science at the University of Oxford, and a programme director for AI at the Alan Turing Institute. He has been an AI researcher for more than 30 years, and has published more than 400 scientific articles on the subject, including nine books. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of AI (AAAI), and the European Association for AI (EurAI). From 2014-16, he was President of the European Association for AI, and from 2015-17 he was President of the International Joint Conference on AI (IJCAI). 

Francesca Rossi (IBM)

Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is an AI scientist with over 30 years of experience in AI research, on which she has published more than 200 articles in top AI journals and conferences. She co-leads the IBM AI Ethics Board and actively participates in many global multi-stakeholder initiatives on AI ethics. She is a member of the board of directors of the Partnership on AI and the industry representative on the steering committee of the Global Partnership on AI. She is a fellow of both the worldwide association of AI (AAAI) and of the European one (EurAI), and she will be the next president of AAAI.

Organizers

Background

Every year, students develop numerous ideas to solve societal problems using Artificial Intelligence (AI). But the majority of these valuable ideas are never pursued further or turned into businesses. This event provides a stage for outstanding student projects and seeks to promote them and match them with leading professionals. AI experts from research, business, and the startup scene evaluate participants' ideas and highlight opportunities for further development. The most promising ideas will receive an award.

Don’t miss the chance to get an overview of exciting ideas and a great networking opportunity with high-potential students as well as experts from the AI ecosystem.

Do you have an idea you want to present? Share your idea and take part in the AI Prize! Please send an email to Sebastian Feger (sebastian.feger@um.ifi.lmu.de) with a short description and a link to a video that showcases your idea or prototype.

If you want to attend, please register for the reminder mail.

The prizes include:

  • Expert coaching – helping you bring your idea to the next level.
  • A team dinner event – celebrating your first step towards starting your business.
  • A smart speaker – communication with AI.

Register here

Programme

17:00‑17:15 Welcome by Albrecht Schmidt and Jan Alpmann

Intro to HumaneAI Net and today’s event

17:15‑17:25 Guest talk by Timon Ruban

Our journey of building an AI Startup

17:25‑17:30 Setting the stage by Albrecht Schmidt
17:30‑18:30 Starting the Pitches with Jury members:
  • Matthias Notz
  • Albrecht Schmidt
  • Timon Ruban
  • Bernd Blumoser
  • Gülce Cesur
Pitches including Q&A
18:30‑18:40 Short Break
18:40‑19:00 Panel discussion – From AI ideas to businesses
19:00‑19:15 Award ceremony & farewell by Albrecht Schmidt and Jan Alpmann
19:15‑19:30 Open Networking

Meet the Jury

  • Ludwig-Maximilians-Universität München: Prof. Dr. Albrecht Schmidt
  • CEO German Entrepreneurship: Matthias Notz
  • Innovation Head of AI Lab: Bernd Blumoser
  • VW Data Lab: Gülce Cesur
  • Co-Founder Luminovo: Timon Ruban

About

Collaborative microprojects are the main mechanism for implementing the research agenda. Note that collaborative microprojects, in which industry partners from both within and outside the consortium can participate, are also an important internship and personnel-exchange instrument.

Programme

14:00‑14:10 Welcome and setting the stage by Coordinator: Paul Lukowicz, German Research Center for Artificial Intelligence
14:00‑14:30 Session 1 - Check the videos here
  • Reasoning on Contextual Hierarchies via Answer Set Programming with Algebraic Measures
  • Educational Recommenders with Narratives
  • Linking language and semantic memory for building narratives
  • Neural-Symbolic Integration: explainability and reasoning in KENN
  • Online Deep-AUTOML
  • AI Integration Languages: a Case Study on Constrained Machine Learning
  • Feasibility analysis of hardware acceleration for AML
  • Multimodal Perception and Interaction with Transformers
14:40‑15:00 Session 2 - Check the videos here
  • Collection of datasets tailored for HumanE-AI multimodal perception and modelling
  • Causality and Explainability in Temporal Data
  • Prediction of static and perturbed reach goals from movement kinematics
  • Neural mechanism in human brain activity during weight lifting
  • Coping with the variability of human feedback during interactive learning through ensemble reinforcement learning
  • A tale of two consensuses - building consensus in collaborative and self-interested scenarios
  • Socially aware interactions
  • Combining symbolic and sub-symbolic approaches - Improving neural Question-Answering-Systems through Document Analysis for enhanced accuracy and efficiency in Human-AI interaction.
15:10‑15:40 Session 3 - Check the videos here
  • Exploring the impact of Agency on Human-Computer Partnerships
  • Evidence-based chatbot interaction aimed at reducing sedentary behavior
  • Multilingual Event-Type-Anchored Ontology for Natural Language Understanding (META-O-NLU)
  • Machine supervision of human activity: The example of rehabilitation exercises
  • Autobiographical Recall in Virtual Reality
  • DIASER: DIAlog task oriented annotations for enhanced modeling of uSER
  • Social interactions with robots
  • Learning Individual Users’ Strategies for Adaptive UIs
  • Normative behavior and extremism in Facebook groups
15:50‑16:10 Session 4: Legal Protection by Design Aspects: Mireille Hildebrandt - videos here
  • Venice
  • Algorithmic bias and media effects
  • Agent based modeling of the Human-AI ecosystem
  • Social AI gossiping
  • Using Social Norms to counteract misinformation in online communities
  • Pluralistic recommendation in News
  • Explainable vertigo diagnosis
  • Delegation of processing in techno-social systems
  • Network effects in mobility navigation systems
16:20‑16:40 Session 5 - Check the videos here
  • The knowledgeable and empathic behavior change coach
  • Asking the right Questions! How to Match Expertise and People for Innovation
  • Ethical chatbots
  • What idea of AI? Social and public perception of AI
  • Improving air quality in large cities using mobile phone and IoT data
  • Validating fairness property in post-processing vs in-processing systems
  • The role of designers regarding AI design: a case study
  • X-ai model for human readable data aimed at connected car crash detection
  • X5LEARN: Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site interface for access to openly licensed educational materials
16:40‑16:45 Discussion
16:45‑17:00 Closing

 

June 20–25, 2021, Dagstuhl Perspectives Workshop 21252

Human-Centered Artificial Intelligence

Organizers

Virginia Dignum (University of Umeå, SE)
Wendy E. Mackay (INRIA Saclay – Orsay, FR)
John Shawe-Taylor (University College London, GB)
Frank van Harmelen (VU University Amsterdam, NL)

For support, please contact

Annette Beyer for administrative matters

Michael Gerke for scientific matters

Documents

Dagstuhl Perspectives Workshop Schedule (Upload here)

Motivation

Society is undergoing a revolution in artificial intelligence (AI), with huge potential benefits, but also major risks for individuals and society.

Increasingly, trust in the development, deployment, and use of AI and autonomous systems concerns not only the technology’s inherent properties, but also the socio-technical systems of which they are part, that is, the people, organisations, and societal environments in which systems are developed, implemented, and used. Currently, major challenges include the lack of fundamental theory and models to analyse and ensure that systems are aligned with human values and ethical principles, accountable, open to inspection, and understandable to diverse stakeholders. Furthermore, there is no doubt that this technological shift will have revolutionary effects on human life and society.

The goal of this Dagstuhl Perspectives Workshop is to contribute to shaping that revolution and to provide the scientific and technological foundations for designing and deploying AI systems that work in partnership with human beings, enhancing human capabilities rather than replacing human intelligence. Fundamentally new solutions are needed for core research problems in AI and human-computer interaction (HCI), especially to help people understand actions recommended or performed by AI systems and to facilitate meaningful interaction between humans and AI systems.

Specific challenges include: learning complex world models; building effective and explainable machine learning systems; developing human-controllable intelligent systems; adapting AI systems to dynamic, open-ended real-world environments (in particular robots and autonomous systems); achieving in-depth understanding of humans and complex social contexts; and enabling self-reflection within AI systems.

Expected results (outcome) of the workshop

  • Define a coherent research agenda for this rapidly emerging discipline
  • Produce a clear narrative on content and urgency of the discipline to influence policy makers
  • Trigger scientific innovation across the whole spectrum from fundamental research to practical applications
  • Develop synergies across Europe on this emerging research theme and link with similar international initiatives (e.g., at Stanford and MIT).

About

Human memory drives the encoding, storing, and retrieval of our experiences. Artificial intelligence may help us understand challenges in memory research and could improve, but potentially also hinder, memory encoding and retrieval. In this workshop, experts from Psychology, HCI, and Computer Science will discuss challenges and opportunities at the intersection of AI and human memory from a human-centered perspective.

Video presentation

Video of the presentations on AI and Human Memory

Organizers

Albrecht Schmidt, Antti Oulasvirta, Robin Welsch, Kashyap Todi

Programme

14:00‑14:10 Welcome and setting the stage by Albrecht Schmidt
Intro to HumaneAI Net and today’s event
14:10‑14:45 Guest talk by Zoya Bylinskii

Research Scientist at Adobe Research

14:45‑14:55 Talk by James Crowley

The Role of Emotion in Concept Formation and Recall when Solving Problems

14:55‑15:05 Talk by Robin Welsch

Understanding autobiographical memory in Virtual Reality

15:05‑15:15 Talk by Catharine Oertel

Memory Aware Conversational AI to Aid Virtual Team-Meetings

15:15‑15:25 Talk by Aurelien Nioche

Improving Artificial Teachers by Considering How People Learn and Forget

15:25‑15:55 Panel discussion
15:55‑16:00 Closing

Meet the Speakers and Moderators

Zoya Bylinskii, Adobe Research

Albrecht Schmidt, LMU Munich

Antti Oulasvirta, Aalto University

Robin Welsch, LMU Munich

Kashyap Todi, Aalto University

James Crowley, Institut Polytechnique de Grenoble

Catharine Oertel, TU Delft

Aurelien Nioche, Aalto University


 

Facilitating a European brand of trustworthy, ethical AI that enhances Human capabilities and empowers citizens and society to effectively deal with the challenges of an interconnected globalized world


Video presentation

Video of the debate on Facilitating a European brand of trustworthy, ethical AI

About

In this moderated online panel, we will discuss the vision and plans of the Humane AI Net project. The project coordination team (Paul, Virginia, and John) and experts in law and human-centered computing (Mireille and Albrecht) will share their views on how AI in Europe can be advanced while maximizing the value for individuals and society.

The meeting will take place online using Zoom. The link for the Zoom meeting will be posted here 30 minutes prior to the meeting.

Programme

17:00‑17:15 Welcome and setting the stage

People in Humane AI

17:15‑18:15 Panel: HumaneAI-Net: A Vision for Human-Centered AI in Europe and Beyond

MODERATOR: Eva Wolfangel

  • Mireille Hildebrandt, Vrije Universiteit Brussel
  • Paul Lukowicz, DFKI, Germany, HumaneAI coordinator
  • John Shawe-Taylor, University College London, UNESCO Chair in Artificial Intelligence
  • Virginia Dignum, Umeå University
  • Albrecht Schmidt, LMU Munich
18:15‑18:30
  • Opportunities for Engagement with Humane AI by Paul Lukowicz, DFKI, Germany, HumaneAI coordinator
18:30‑19:00
  • Open Discussion

Meet the speakers

Eva Wolfangel

Paul Lukowicz, German Research Center for Artificial Intelligence

Mireille Hildebrandt, Vrije Universiteit Brussel

John Shawe-Taylor, University College London, UNESCO Chair in AI

Virginia Dignum, Umeå University

Albrecht Schmidt, LMU Munich

The German Entrepreneurship GmbH and the Ludwig-Maximilians-Universität München facilitated a workshop on the topic of "Innovation in AI & AI in Innovation" on 30 and 31 October 2019 in Munich. The participants were consortium members and guests from industry (health care, automobiles, aircraft, Industry 4.0), start-up companies (mainly AI-service start-ups), and the scientific community (universities, students, research labs, etc.).

The workshop sessions consisted of input lectures, keynotes, group work, discussions, and exchange formats such as speed dating. Experiences were shared and knowledge transfer was facilitated; active and intensive exchange between the participants was fostered, and common challenges and new ideas were identified.

On October 10 and 11, the HumaneAI partners met in Den Haag to create the Research Roadmap for the new science of Human-Centric Artificial Intelligence. Our team of researchers was determined to place a strong focus on scientific excellence. Most importantly, we want the project to provide unique and world-leading results across most areas of AI.

Europe currently has some of the best AI researchers, and we need to make sure that they are a core part of our project, as their input will be instrumental in shaping and steering this project in a highly dynamic environment, all while minimizing the additional burden on the researchers themselves.

To achieve global impact, the project must deliver world-leading and fundamentally new results across many domains; a strategy of merely matching the state of the art ("me too") will have no visibility in the global competitive landscape, which is why we need a roadmap.

At the same time, we need to make sure our results are integrated into the network of partners and transferred to industry as quickly as possible. This typically involves different groups from research and industry, so organizing this pipeline will be a key goal of this workshop.