Organizers

Event Contact

Programme

Time Speaker Description
June 27th 14:00-16:00 Hackathon Opening Session During this session you will hear keynote speakers focusing on the Hackathon challenges, as well as tooling tutorials.
June 27th 16:00-19:00 Hackathon development During the first day, you will start the development of your projects, supported by international mentors.
June 28th 09:00-19:00 Hackathon development During the second day, you will continue the development of your projects, supported by international mentors.
June 29th 09:00-15:30 Hackathon development Final phase of development, during which you will finish your projects and prepare your presentations.
June 29th 15:30-17:00 Hackathon Pitching! You will pitch your projects to an audience and a jury.
June 29th 17:00-18:00 Awards session During this session, we will present the different awards.

Background

The EU-IoT Hackathon focuses on “sustainable next generation IoT applications”. We invite you to bring your ideas and develop solutions that address IoT skills training, IoT sustainable business models, and novel IoT technical solutions within 6 challenge domains: IoT interfaces, far Edge, near Edge, infrastructure, and a specific challenge domain of the European Factory Platform (EFPF) focused on manufacturing. The hackathon will take place on 27-29 June 2022 in Munich (Germany) and is co-located with CONASENSE 2022.

The aim of the EU-IoT Hackathon is to disseminate new business ideas, experiments and prototypes as a first step towards supporting next-generation sustainable IoT solutions.

Teams will have the opportunity to develop their ideas within an international flagship environment, mentored by international experts from the Next Generation IoT (NGIoT) community and in contact with NGIoT community flagship events.
Awards

EU-IoT Challenges Award – UnternehmerTUM Makerspace Award – first prize. A one-year incubation membership at the UnternehmerTUM Makerspace in Munich for the best project overall, across all domains and project categories.
EU-IoT Challenges Award – IoT Week 2023 Ticket – second prize. One free full-programme registration to IoT Week 2023 for one member of a team.
EU-IoT Challenges Award – IoT starter kit – all winning teams (3). An IoT starter kit provided to each member of the 3 winning teams.
EFPF Challenges Award – 1st prize. 1 smartphone valued at 600 Euros.
EFPF Challenges Award – 2nd prize. Smart home kit valued at 300 Euros.
EFPF Challenges Award – 3rd prize. IoT starter kit valued at 200 Euros.
EFPF Challenges Award – 4th prize. IoT starter kit valued at 100 Euros.

Deadlines

Team registration and challenge selection: 15.05.2022
EU-IoT Hackathon preparation event (online): 31.05.2022
Hackathon: 27-29 June 2022, co-located with the CONASENSE 2022 symposium in Munich, Germany

If you are interested:

1. Check the hackathon page, awards and rules and register via DevPost: https://eu-iot-hackathon.devpost.com/
2. Register your project and team (1-6 persons): https://forms.gle/5BvJL8dj7Zk4S3EX9
3. Join the Hackathon via Slack, https://eu-iot-hackathon.slack.com/, where we will regularly share information about the different tools, as well as updates to the EFPF catalyser programme and the final hackathon event.
4. Contact us to get more information via eu-iot-hackathon@fortiss.org!

Committees:
- Organizers: fortiss GmbH (Mitula Donga, Rute C. Sofia); UnternehmerTUM Makerspace GmbH (Florian Küster)

Technical Committees:
- EU-IoT Committee: Rute Sofia (fortiss), Lamprini Kolovou (Martel); John Soldatos (Intracom); Mirko Presser (Aarhus University); Brendan Rowan (Bluspecs)
- EFPF Committee: Mitula Donga (fortiss), Alexandros Nizami (ITI-CERTH), Florian Jasche (Fraunhofer FIT), Ingo Martens (Hanse Aerospace); Carlos Coutinho (Caixa Mágica), Usman Wajid (Information Catalyst)

Organizers

Event Contact

Programme

Time Speaker Description
9:00 Wolfgang Köhler

Background

In this webinar, fortiss presents the potential and benefits of a data-driven digitization strategy for small and medium-sized enterprises (SMEs). Success stories and practical application examples also illustrate how networked information systems and artificial intelligence (AI) can create targeted added value for companies.

Organizers

Event Contact

Programme

Time Speaker Description
18:00 Welcome and Introduction
18:05 TBA Keynote
18:30 Albrecht Schmidt, Virginia Dignum, Arno De Bois, Marc Hilbert Panel on the European AI Act

Background

Regulation to foster or hinder the AI community? Join our discussion on the proposed European AI Act with Albrecht Schmidt, Virginia Dignum, Arno De Bois and Marc Hilbert on March 3, 18:00 CET (12pm EST, 9am PST).

Organizers

Event Contact

Programme

Time Speaker Description
18:05 Roberto Di Cosmo Keynote
18:30 Roberto Di Cosmo, Ana Trisovic, Sebastian Feger, Feng Wang Panel: OpenData Business Cases – What is the Value of Data?

Background

Panel on OpenData Business Cases – What is the Value of Data?

18:00 – Welcome, Introduction
Albrecht Schmidt

18:05 – Keynote: Roberto Di Cosmo, Founder and CEO of Software Heritage
Roberto Di Cosmo

18:30 – Panel: OpenData Business Cases – What is the Value of Data?
Roberto Di Cosmo, Ana Trisovic, Sebastian Feger, Feng Wang

19:30 – Wrap up and Closing
Albrecht Schmidt

Organizers

  • Roel Dobbe (TU Delft)
  • Ana Valdivia (King's College London)

Event Contact

  • Maria Perez-Ortiz (University College London)

Programme

Time Speaker Description
Monday 13. June 2022 Workshops, Tutorials and other Events
Tuesday 14. June 2022 Workshops, Tutorials and other Events

Background

HHAI-2022 workshops will provide a platform for discussing Hybrid Human-Artificial Intelligence in more informal settings and for a broad audience. We invite proposals for full-day and half-day events during the two days leading up to the main conference. Registration for the main conference is expected; arrangements for non-traditional conference attendees can be requested.

The goal of the workshops is to bring together academics, professionals and users of technology to better understand, from different perspectives, the socio-technical benefits, risks and limitations of artificial intelligence when it interacts with humans. We therefore encourage workshops presenting either broad concepts of human-artificial intelligence interaction or specific cases. We invite submissions for events that foster cross-disciplinary interaction, scientific discourse, and creative and critical reflection, rather than just being mini-conferences. To this end, we offer organizers flexibility in choosing the format that best suits the goals of their event. We especially welcome submissions from communities that are usually not featured prominently in artificial intelligence events and conferences.

Important Dates

January 31, 2022: Workshop proposals due
February 7, 2022: Workshop proposal acceptance notification
February 14, 2022: Deadline for announcing the Workshops Call for Papers/Contributions
April 1, 2022: Workshop application deadline for contributions to the workshop
April 29, 2022: Recommended deadline for paper acceptance notification
June 13-14, 2022: HHAI2022 Workshops

Organizers

  • Stefan Schlobach (Vrije Universiteit Amsterdam)
  • Maria Perez-Ortiz (University College London)
  • Myrthe Tielman (TU Delft)
  • Ana Valdivia (King's College London)
  • Roel Dobbe (TU Delft)
  • Shenghui Wang (University of Twente)

Event Contact

  • Maria Perez-Ortiz (University College London)

Programme

Time Speaker Description
Monday 13. June 2022 TBC Workshops, Tutorials and other Events
Tuesday 14. June 2022 TBC Workshops, Tutorials and other Events
Wednesday-Friday 15-17. June 2022 TBC Main Research Program

Background

Hybrid Human Artificial Intelligence (HHAI2022) is the first international conference focusing on the study of Artificial Intelligent systems that cooperate synergistically, proactively and purposefully with humans, amplifying instead of replacing human intelligence.

HHAI2022 is organised by the Dutch Hybrid Intelligence Center and the European HumaneAI Network, as the first conference in what we intend to become a series of conferences about Hybrid Human Artificial Intelligence.

HHAI2022 will be an in-person event at the VU Amsterdam, The Netherlands, and will be organized as a single-track conference.

HHAI aims for AI systems that assist humans and vice versa, emphasizing the need for adaptive, collaborative, responsible, interactive and human-centered intelligent systems that leverage human strengths and compensate for human weaknesses, while taking into account social, ethical and legal considerations. This field of study is driven by current developments in AI, but also requires fundamentally new approaches and solutions. In addition, we need collaboration with areas such as HCI, cognitive and social sciences, philosophy & ethics, complex systems, and others. For this first international conference, we invite scholars from these fields to submit their best original work on Hybrid Human-Artificial Intelligence, whether new, in progress, visionary or already existing.

For more information, please visit our website at https://www.hhai-conference.org/

Organizers

Event Contact

Programme

Time Speaker Description
14:00 - 14:10 Roberto Trasarti SoBigData++ project: an ecosystem for Ethical Social Mining - This talk introduces the SoBigData++ project, putting the participants in context by presenting the main objectives of the project and the consortium of experts working on the vertical contexts: Societal Debates and Online Misinformation, Sustainable Cities for Citizens, Demography, Economics & Finance 2.0, Migration Studies, Sports Data Science, Social Impact of Artificial Intelligence and Explainable Machine Learning. Part of this presentation will be a description of the ethical approach to data science which is a pillar of the SoBigData++ project.
14:10 - 14:25 Valerio Grossi SoBigData RI Services - An overview of the SoBigData RI services will be shown including the Exploratories (Vertical research contexts), the resource catalogue, the training area and SoBigData Lab.
14:25 - 14:55 Giulio Rossetti Hands-on JupyterHub service and SoBigData Libraries - This first hands-on session focuses on the libraries and methods developed within the SoBigData consortium. Code examples and case studies will be introduced by leveraging a customized JupyterHub notebook service hosted by SoBigData. Using such a freely accessible coding environment, we will discuss a subset of the functionalities available to SoBigData users to design and run their experiments.
14:55 - 15:10 Massimiliano Assante Hands-on computational engine & technologies - In this second hands-on session, the tutorial will focus on the computational engine provided by SoBigData. Real examples will be presented in order to highlight the functionalities to deploy an algorithm and run it on the cloud.
15:10 - 15:25 Giovanni Comandè Legality Attentive data Science: it is needed and it is possible!
15:25 - 15:35 Francesca Pratesi FAIR: an E-learning module for GDPR compliance and ethical aspects
15:35 - 16:00 Beatrice Rapisarda (moderator) An open discussion to give more details on specific aspects according to the requests of the audience (not already addressed during the tutorial or presentations).

Objectives

The objective of the tutorial is to show how the SoBigData RI can support data scientists in doing cutting-edge science and experiments. In this perspective, our target audience also includes people interested in big data analytics, computational social science, digital humanities, city planning, wellbeing, migration, sport and health, within the legal/ethical framework for responsible data science and artificial intelligence applications. With its tools and services, the SoBigData RI promotes the possibilities that new generations of researchers have for executing large-scale experiments on the cloud, making them accessible and transparent to a community. Moreover, specialized libraries developed in the SoBigData++ project will be freely accessible in order to enable cutting-edge science in a cross-field environment.

Format: The tutorial will be 3 hours containing:

  • 1 hour of presentations describing the European project SoBigData++, the RI Services, and the Responsible Data Science principles and tools;
  • 1 hour and 45 minutes of practical use of the RI with real examples of analysis in a dedicated Virtual research environment;
  • 20 minutes for an open discussion with the attendees on the various aspects presented.

About

This webinar is hosted by Carbon Re together with the UNESCO International Research Centre on Artificial Intelligence (IRCAI), in association with London Climate Action Week and with the generous support of HumaneAI-Net.

Register here

According to the Climate Change Committee, greenhouse gas emissions from manufacturing and construction were 66 MtCO2 in 2018 - 12% of the UK total. If there is a lesson from the pandemic, it is that we need to multiply our efforts to mitigate climate change if we are to avoid economic, social and political disaster. Yet we cannot achieve net-zero goals without decarbonizing manufacturing and construction. We need to tackle the hard problems today. This means industrial policies, R&D funding, business support and innovation that accelerate the zero-carbon transition needed to address these challenges.

Join us to hear from world leading experts in AI, business and policy discussing current applications and high-potential use cases.

Keynote speech by Professor John Shawe-Taylor

Professor John Shawe-Taylor is the UNESCO Chair in AI, and Director of the International Research Center on Artificial Intelligence under the auspices of UNESCO.

Followed by a panel discussion with

Sana Khareghani, Head of the Office for Artificial Intelligence. The Office for AI is a joint unit between the Department for Digital, Culture, Media and Sport (DCMS) and the Department for Business, Energy and Industrial Strategy (BEIS).

Jade Cohen, Co-Founder and CPO at Qualis Flow. Qualis Flow works with construction teams to enable them to track and manage their social and environmental impact, and take a data driven approach to improving that impact.

Mark Enzer OBE, CTO of Mott MacDonald and Head of the National Digital Twin Programme at the Centre for Digital Built Britain. The Centre for Digital Built Britain is a partnership between the BEIS and the University of Cambridge. It seeks to understand how the construction and infrastructure sectors can use a digital approach to better design, build, operate and integrate the built environment.

Professor Aidan O’Sullivan is Co-Founder and CTO at Carbon Re, Associate Professor in Energy and AI at the UCL Energy Institute and Programme Chair for AI and Climate Change at the International Research Center on Artificial Intelligence (IRCAI) under the auspices of UNESCO.

About the hosts

Carbon Re is an AI and Climate tech startup focused on decarbonising cement and foundation industries with deep reinforcement learning.

IRCAI is an AI center under the auspices of UNESCO looking to establish a global network of AI centres that are working in the area of sustainable development.

About our Sponsor

The EU-funded HumanE-AI-Net project aims to develop robust, trustworthy AI systems that can ‘understand’ humans, adapt to complex real-world environments and interact appropriately in complex social settings.

Background

This is a collective intelligence exercise towards shaping the research questions of Social AI, driven by societal challenges. It is implemented through a structured conversation among interdisciplinary scientists, looking at the relationship between AI and society from multiple perspectives.

For human-AI scientists and social scientists, the challenge is to achieve a better understanding of how AI technologies could support or affect emerging social challenges, and how to design human-centered AI ecosystems that help mitigate harms and foster beneficial outcomes oriented towards the social good.

Social Artificial Intelligence

As increasingly complex socio-technical systems emerge, made of people and intelligent machines, the social dimension of AI becomes evident. Examples range from urban mobility, with travellers helped by smart assistants to fulfill their agendas, to the public discourse and the markets, where diffusion of opinions as well as economic and financial decisions are shaped by personalized recommendation systems. In principle, AI could empower communities to face complex societal challenges. Or it can create further vulnerabilities and exacerbate problems, such as bias, inequalities, polarization, and depletion of social goods.

The point is that a crowd of (interacting) intelligent individuals is not necessarily an intelligent crowd. On the contrary, it can be stupid in many cases, due to network effects: the sum of many individually “optimal” choices is often not collectively beneficial, because individual choices interact and influence each other on top of common resources. Navigation systems suggest directions that make sense from an individual perspective, but may create a mess if too many drivers are directed onto the same route. Personalized recommendations on social media often make sense to the user, but may artificially amplify polarization, echo chambers, filter bubbles, and radicalization. Profiling and targeted advertising may further increase inequality and monopolies, with the risk of perpetuating and amplifying biases, discrimination and “tragedies of the commons”.

The network effects of AI and their impact on society are not sufficiently addressed by AI research, first of all because they require a step forward in the transdisciplinary integration of AI, data science, network science and complex systems with the social sciences. How can we understand and mitigate the harmful outcomes? How can we design “social AI” mechanisms that help towards agreed collective outcomes, such as sustainable mobility in cities, diversity and pluralism in the public debate, and a fair distribution of resources?

Registration

Please register here

Organizers

  • Dino Pedreschi (University of Pisa)
  • Chiara Boldrini (IIT-CNR)
  • Letizia Milli (University of Pisa)
  • Laura Sartori (University of Bologna)

In collaboration with:

  • SoBigData++, the European Research Infrastructure for Big Data and Social Mining
  • SAI, the CHIST-ERA project “Social eXplainable Artificial Intelligence”
  • XAI, the ERC Advanced Grant "Science and technology for the eXplanation of AI decision making"

Event Contact

  • Dino Pedreschi (University of Pisa)

Programme

Time Speakers
16:00 – 17:00 Setting-the-stage – plenary session

Fire-start addresses by AI scientists and social scientists:

17:00 – 18:00 Breakout – four parallel brainstorming rooms
  • Bias (video)
    • Mentors: Katharina Kinder-Kurlanda (Univ. Klagenfurt) and Salvatore Ruggieri (Univ. Pisa), Rapporteur: Anna Monreale (Univ. Pisa)
  • Inequality (video)
    • Mentors: Laura Sartori (Univ. Bologna) and Mark Coté (King’s College), Rapporteur: Luca Pappalardo (ISTI-CNR)
  • Polarization (video)
    • Mentors: Kalina Bontcheva (Univ. Sheffield) and János Kertész (Central European Univ. Vienna), Rapporteur: Chiara Boldrini (IIT-CNR)
  • Social good (video)
    • Mentors: Mohamed Chetouani (Sorbonne Univ.), Frank Dignum (Umea Univ.), Andrzej Nowak (Univ. Warsaw), Rapporteur: Michele Bezzi (SAP)
18:00 – 18:30 Restitution – plenary session (video)
Reports from the mentors and rapporteurs of the breakout sessions, and wrap-up.

Meet the speakers

Alex 'Sandy' Pentland

Professor Alex 'Sandy' Pentland directs MIT Connection Science, an MIT-wide initiative, and previously helped create and direct the MIT Media Lab and the Media Lab Asia in India. He is one of the most-cited  computational scientists in the world, and Forbes recently declared him one of the "7 most powerful data scientists in the world" along with Google founders and the Chief Technical Officer of the United States.  He is on the Board of the UN Foundations' Global Partnership for Sustainable Development Data, co-led the World Economic Forum discussion in Davos that led to the EU privacy regulation GDPR, and was central in forging the transparency and accountability mechanisms in the UN's Sustainable Development Goals.  He has received numerous awards and prizes such as the McKinsey Award from Harvard Business Review, the 40th Anniversary of the Internet from DARPA, and the Brandeis Award for work in privacy. He is a member of advisory boards for the UN Secretary General and the UN Foundation,  and the American Bar Association, and previously for Google, AT&T, and Nissan.  He is a member of the U.S. National Academy of Engineering and council member within the World Economic Forum.

Laura Sartori

Laura Sartori is an Associate Professor of Sociology at the Department of Political and Social Sciences at the University of Bologna. She holds a Ph.D. in Sociology and Social Research from the University of Trento (2002) and has since worked on several topics related to the social and political implications of technology, from ICTs to AI. Her current projects concern (1) inequalities and the public perception of Artificial Intelligence, and (2) money and complementary currencies.

Stuart Russell

Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is a recipient of the IJCAI Computers and Thought Award and from 2012 to 2014 held the Chaire Blaise Pascal in Paris. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with an emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.

Mona Sloane

Mona Sloane is a sociologist working on design and inequality, specifically in the context of AI design and policy. She is a Senior Research Scientist at the NYU Center for Responsible AI, an Adjunct Professor at NYU’s Tandon School of Engineering, a Fellow with NYU’s Institute for Public Knowledge (IPK) and The GovLab, and the Director of the *This Is Not A Drill* program on technology, inequality and the climate emergency at NYU’s Tisch School of the Arts. She is principal investigator on multiple research projects on AI and society, and holds an affiliation with the Tübingen AI Center at the University of Tübingen in Germany. Mona is also the convener of the IPK Co-Opting AI series and serves as editor of the technology section at Public Books. Follow her on Twitter @mona_sloane.

Dino Pedreschi

Dino Pedreschi is a professor of computer science at the University of Pisa, and a pioneering scientist in data science and artificial intelligence. He co-leads with Fosca Giannotti the Pisa KDD Lab - Knowledge Discovery and Data Mining Laboratory http://kdd.isti.cnr.it, a joint research initiative of the University of Pisa and the Italian National Research Council - CNR. His research focus is on big data analytics and mining, machine learning and AI, and their impact on society: human mobility and sustainable cities, social network analysis, complex social and economic systems, data ethics, discrimination-preventing and privacy-preserving data analytics, and explainable AI. He is currently shaping the research frontier of human-centered Artificial Intelligence, as a leading figure in the European network of research labs Humane-AI-Net (scientific director of the line “Social AI”). He is a founder of SoBigData.eu, the European H2020 Research Infrastructure “Big Data Analytics and Social Mining Ecosystem” www.sobigdata.eu. Dino is currently the Italian member of the Responsible AI working group of GPAI – the Global Partnership on AI, a member of the OECD Network of Experts in AI, and the coordinator of the working group “Big Data & AI for Policy” of the Italian Government’s “data-driven” Taskforce for the Covid-19 emergency. Twitter: @DinoPedreschi

Organizers

Event Contact

Watch the Recording

You can now watch the recording of the entire event at: https://youtu.be/pwOy6KKh_tk.
The total duration of the video is 2 hours 40 minutes.

Attend the Event

Register for an e-mail reminder: https://forms.gle/LamUhKpzN2N9FfPG7

Event Description

Recent developments have enabled humans and AI-based systems to cooperatively work towards joint goals in interactive and collaborative settings. They have not only showcased various application domains and use-cases for such interactive capabilities but also highlighted several issues and opportunities. Experts from Psychology, HCI, AI, and Computer Science will discuss some current progress, challenges, opportunities, and a vision for the future of such systems from a human-centered perspective.

Programme

Time Speaker Description
14:00–14:05 Kashyap Todi Welcome
14:05–14:30 Wendy Mackay Plenary Talk: Human–Computer Partnerships
14:30–14:55 Janet Rafner & Jacob Sherson Plenary Talk: Hybrid Intelligence
14:55–15:00 Break
15:00–15:10 Alessandro Saffiotti Short Talk: Human-AI in artistic co-creation
15:10–15:20 Janin Koch Short Talk: Visual Design Ideation with Machines
15:20–15:30 Silvia Miksch Short Talk: Guide Me in the Analysis: How can Visual Analytics enriched by guidance contribute to gaining insights and decision making
15:30–15:40 Mohamed Chetouani Short Talk: Social Learning Agents: Role of Human Behaviors
15:40–16:00 Panel Discussion
16:00 Event Close

Meet the Speakers and Organisers

Abstracts

Human–Computer Partnership (Wendy Mackay)

In this talk, Wendy Mackay will talk about moving beyond the traditional 'human-in-the-loop' perspective, which focuses on using human input to improve algorithms. She will share her vision for 'computer-in-the-loop', where intelligent algorithms serve to enhance human capabilities.

Hybrid Intelligence: First Rate Humans, Not Second Class Robots (Janet Rafner & Jacob Sherson) 

In light of the recent deep learning driven success of AI in both corporate and social life there has been a growing fear of human displacement and a related call to develop IA (intelligence augmentation) rather than pure AI. In reality, most current AI applications have a significant human-in-the-loop (HITL) component and are therefore arguably more IA than AI already. From here, there are currently two trends in the field. In one trend, increasing machine autonomy is pursued, first by placing the human-on-the-loop in order to verify the result of the machine computation and then by hoping to take the human completely out of the loop, as in the pursuit of artificial general intelligence. Two main challenges of this approach are a) the value-alignment problem (how do we ensure that the machine satisfies human preferences when we often cannot even express or agree on these ourselves) and b) the extensive human deskilling that often accompanies algorithmic advances. In our talk, we will discuss how these two challenges may potentially be overcome by the second trend: the pursuit of increasingly intertwined human-machine operation. We will present and give examples of an operational and ambitious framework, hybrid intelligence (HI), in which the two interact synergistically and continually learn from each other.

Human-AI Collaboration in Artistic Co-creation (Alessandro Saffiotti)

Live artistic performance, like music, dance or acting, provides an excellent domain to observe and analyze the mechanisms of human-human collaboration. In this short talk, I use this domain to study human-AI collaboration. I propose a model for collaborative artistic performance, in which an AI system mediates the interaction between a human performer and an artificial one. I will illustrate this model with case studies involving different combinations of human musicians, human dancers, robot dancers, and a virtual drummer.

Visual Design Ideation with Machines (Janin Koch)

In this short talk, Janin Koch will talk about 'MayAI', 'ImageSense', and her current postdoctoral research on how humans and machines can collaborate during visual design ideation, and how this collaboration enhances the creative process and results.

Guide Me in the Analysis: How can Visual Analytics enriched by guidance contribute to gaining insights and decision making (Silvia Miksch)

Visual Analytics is "the science of analytical reasoning facilitated by interactive visual interfaces." Guidance is a "computer-assisted process that aims to actively resolve a knowledge gap encountered by users during an interactive visual analytics session." I will illustrate how guidance-enriched Visual Analytics contributes to gaining insights and decision making.

Social Learning Agents: Role of Human Behaviors (Mohamed Chetouani)

There is an increasing number of situations in which humans and AI systems are acting, deciding and/or learning together. In this short talk, we discuss approaches and models able to capture the specific strategies of humans while they are teaching agents. We will see how social-learning-based approaches make it possible to take such strategies into account in the development of interactive machine learning techniques, in particular when it comes to social robotics.

Network

The Humane AI Net project funded by the European Union Horizon 2020 program aims to bring together the European AI community to develop the scientific foundations and technological breakthroughs needed to shape the AI revolution in a direction that is beneficial to humans both individually and societally, and that adheres to European ethical values and social, cultural, legal, and political norms. Key specific questions that the project addresses are:

  • AI systems that "understand" humans,
  • AI systems that can interact in complex social settings
  • AI systems that enhance human capabilities
  • AI systems that empower both individuals and society as a whole carefully balancing individual benefits and social impact of their functionality
  • AI systems that respect human autonomy and self-determination
  • Ethics and Legal Protection “by design” in complex dynamic AI systems

Free textbook materials

Check the free online access to the eBook conference proceedings for conference members and enjoy the Human-Centered Artificial Intelligence Advanced Lectures.

About the Course

The Advanced Course on AI (ACAI) is a specialized course in Artificial Intelligence sponsored by EurAI in odd-numbered years. The theme of the 2021 ACAI School is Human-Centered AI.

The notion of “Human Centric AI”  increasingly dominates the public AI debate in Europe[1].  It postulates a “European brand” of AI beneficial to humans on both individual and social level that is characterized by a focus on supporting and empowering humans as well as incorporating “by design” adherence to appropriate ethical standards and values such as privacy protection, autonomy (human in control), and non-discrimination. Stated this way (which is how it mostly appears in the political debate) it may seem more like a broad, vague wish list than a tangible scientific/technological concept. Yet, at a second glance, it turns out that it is closely connected to some of the most fundamental challenges of AI[1].

Within ACAI 2021, researchers from the HumanE-AI-Net consortium will teach courses related to the state of the art in the above areas focusing not just on narrow AI questions but emphasising issues related to the interface between AI and Human-Computer Interaction (HCI), Computational Social Science (and Complexity Science) as well as ethics and legal issues. We intend to provide the attendees with the basic knowledge needed to design, implement, operate and research the next generation of Human Centric AI systems that are focused on enhancing Human capabilities and optimally cooperating with humans on both the individual and the social level.

ACAI 2021 will have a varied format, including keynote presentations, labs/hands-on sessions, short tutorials on cutting edge topics and longer in-depth tutorials on main topics in AI.

Please check for updates!


Topics

Learning and Reasoning with Human in the Loop

Learning, reasoning, and planning are interactive processes involving close synergistic collaboration between AI system(s) and user(s) within a dynamic, possibly open-ended real-world environment. Key gaps in knowledge and technology that must be addressed toward this vision include combining symbolic-subsymbolic learning, explainability,  translating a broad, vague notion of “fairness” into concrete algorithmic representations, continuous and incremental learning, compositionality of models and ways to adequately quantify and communicate model uncertainty.

Multimodal Perception

Human interaction and human collaboration depend on the ability to understand the situation and reliably assign meanings to events and actions. People infer such meanings either directly from subtle cues in behavior, emotions, and nonverbal communications or indirectly from the context and background knowledge. This requires not only the ability to sense subtle behavioral, emotional and social cues, but also an ability to automatically acquire and apply background knowledge to provide context. The acquisition must be automatic because such background knowledge is far too complex to be hand-coded. Research on artificial systems with such abilities requires a strong foundation for the perception of humans, human actions, and human environments. In HumanE AI Net, we will provide this foundation by building on recent advances in multimodal perception and modelling sensory, spatiotemporal, and conceptual phenomena.

Representations and Modeling

Perception is the association of external stimuli to an internal model. Perception and modelling are inseparable. The human ability to correctly perceive and interpret complex situations, even when given limited and/or noisy input, is inherently linked to a deep, differentiated understanding based on human experience. A new generation of complex modelling approaches is needed to address this key challenge of Human Centric AI, including hybrid representations that combine symbolic, compositional approaches with statistical and latent representations. Such hybrid representations will allow the benefits of data-driven learning to be combined with knowledge representations that are more compatible with the way humans view and reason about the world around them.

Human Computer Interaction (HCI)

Beyond considering the human in the loop, the goal of human-AI is to study and develop methods for combined human-machine intelligence, where AI and humans work in cooperation and collaboration. This includes principled approaches to support the synergy of human and artificial intelligence, enabling humans to continue doing what they are good at but also be in control when making decisions. It has been proposed that AI research and development should follow three objectives: (i) to technically reflect the depth characterized by human intelligence; (ii) improve human capabilities rather than replace them; and (iii) focus on AI’s impact on humans. There has also been a call for the HCI community to play an increasing role in realizing this vision, by providing their expertise in the following: human-machine integration/teaming, UI modelling and HCI design, transference of psychological theories, enhancement of existing methods, and development of HCI design standards.

Social AI

As increasingly complex sociotechnical systems emerge, consisting of many (explicitly or implicitly) interacting people and intelligent and autonomous systems, AI acquires an important societal dimension. A key observation is that a crowd of (interacting) intelligent individuals is not necessarily an intelligent crowd. The aggregated network and societal effects of AI and their (positive or negative) impacts on society are not sufficiently discussed in public and not sufficiently addressed by AI research, despite the striking importance of understanding and predicting the aggregated outcomes of sociotechnical AI-based systems and related complex social processes, as well as how to avoid their harmful effects. Such effects are a source of a whole new set of explainability, accountability, and trustworthiness issues, even assuming that we can solve those problems for an individual machine-learning-based AI system.

Societal, Legal and Ethical Impact

Every AI system should operate within an ethical and social framework in understandable, verifiable and justifiable ways. Such systems must in any case operate within the bounds of the rule of law, incorporating fundamental-rights protection into the AI infrastructure. Theory and methods are needed for the responsible design of AI systems, as well as to evaluate and measure the ‘maturity’ of systems in terms of compliance with legal, ethical and societal principles. This is not merely a matter of articulating legal and ethical requirements but also involves robustness, social design and interactivity design. Concerning the ethical and legal design of AI systems, we will clarify the difference between legal and ethical concerns, as well as their interaction, and ethical and legal scholars will work side by side to develop both legal protection by design and value-sensitive design approaches.


The 2021 ACAI School will take place on 11-14 October 2021.

We are going to use different locations, all very close to each other. This allows us to comply with the maximum occupancy restrictions:
• 3IT: Salzufer 6 (Entrance: Otto-Dibelius-Straße)
• Forum Digital Technologies (FDT) // CINIQ Center: Salzufer 6 (main venue), 10587 Berlin (Entrance: Otto-Dibelius-Straße)
• Loft am Salzufer: Salzufer 13-14, 10587 Berlin
• Hörsaal HHI, Fraunhofer Institute for Telecommunications (HHI): Einsteinufer 37, 10587 Berlin (across the bridge)

There will be a possibility to participate in the School's activities online.

According to the current regulations in Germany associated with COVID-19, we are restricted to a maximum of 60 students attending in person. The format of the event is subject to the COVID-19 regulations in force at the time of the School.

The program will be updated regularly.


Monday, 11 October
09.00-09.30 Registration (venue: Loft)
09.30-10.00 Welcome and Introduction (venue: Loft)
10.00-12.00 Mythical Ethical Principles for AI and How to Operationalise Them (venue: Loft)
Deep Learning Methods for Multimodal Human Activity Recognition (venue: 3IT)
Social Artificial Intelligence (venue: FDT)
12.00-13.00 Keynote: Yvonne Rogers (venue: Loft)
13.00-14.00 Lunch
14.00-18.00 Why and How Should We Explain in AI? (venue: Loft)
Multimodal Perception and Interaction with Transformers (venue: 3IT)
Social Artificial Intelligence (venue: FDT)
18.00-20.00 Welcome Reception and Student Poster Mingle (venue: Loft)

Tuesday, 12 October
09.00-13.00 Ethics and AI: An Interdisciplinary Approach (venue: Hörsaal HHI)
Machine Learning With Neural Networks (venue: FDT)
Social Simulation for Policy Making (venue: 3IT)
13.00-14.00 Lunch
14.00-16.00 Learning Narrative Frameworks from Multimodal Inputs (venue: 3IT)
Interactive Robot Learning (venue: FDT)
Argumentation in AI (venue: Hörsaal HHI)
16.00-17.00 Keynote: Atlas of AI: Mapping the Wider Impacts of AI by Kate Crawford
17.00-18.00 EurAI Dissertation Award: Unsupervised machine translation by Mikel Artetxe


Wednesday, 13 October
09.00-13.00 Law for Computer Scientists (venue: 3IT)
Computational Argumentation and Cognitive AI (venue: FDT)
Operationalising AI Ethics: Conducting Socio-Technical Assessment (venue: Hörsaal HHI)
13.00-14.00 Lunch
14.00-18.00 Explainable Machine Learning for Trustworthy AI (venue: FDT)
Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing (venue: 3IT)
Operationalising AI Ethics: Conducting Socio-Technical Assessment (venue: Hörsaal HHI)

Thursday, 14 October
09.00-11.00 Children and the Planet - The Ethics and Metrics of "Successful" AI (venue: Loft)
Learning and Reasoning with Logic Tensor Networks (venue: FDT)
Writing Science Fiction as An Inspiration for AI Research and Ethics Dissemination (venue: 3IT)
11.00-13.00 Introduction to intelligent UIs (venue: 3IT)
11.00-14.00 Student mentorship meetings with lunch (venue: Loft)
14.00-16.00 HumaneAI-net Micro-Project Presentation (venue: Loft)
16.00-18.00 Challenges and Opportunities for Human-Centred AI: A dialogue between Yoshua Bengio and Ben Shneiderman, moderated by Virginia Dignum (venue: Loft)
18.00-20.00 ACAI 2021 Closing Reception/Welcome HumaneAI-net (venue: Loft)


Cognitive Vision: On Deep Semantics for Explainable Visuospatial Computing, Mehul Bhatt, Örebro University - CoDesign Lab EU; Jakob Suchan, University of Bremen

Ethics and AI: An Interdisciplinary Approach, Guido Boella, Università di Torino; Maurizio Mori, Università di Torino

Children and the Planet - The Ethics and Metrics of "Successful" AI, John Havens, IEEE; Gabrielle Aruta, Filo Sofi Arts

Mythical Ethical Principles for AI and How to Operationalise Them, Marija Slavkovik, University of Bergen

Operationalising AI Ethics: Conducting Socio-Technical Assessment, Andreas Theodorou, Umeå University & VeRAI AB; Virginia Dignum, Umeå University & VeRAI AB

Explainable Machine Learning for Trustworthy AI, Fosca Giannotti, CNR; Riccardo Guidotti, University of Pisa

Why and How Should We Explain in AI?, Stefan Buijsman, TU Delft

Interactive Robot Learning, Mohamed Chetouani, Sorbonne Université

Multimodal Perception and Interaction with Transformers, Francois Yvon, Univ Paris Saclay; James Crowley, INRIA and Grenoble Institut Polytechnique

Argumentation in AI (Argumentation 1), Bettina Fazzinga, ICAR-CNR

Computational Argumentation and Cognitive AI (Argumentation 2), Emma Dietz, Airbus Central R&T; Antonis Kakas, University of Cyprus; Loizos Michael, Open University of Cyprus

Social Simulation for Policy Making, Frank Dignum, Umeå University; Loïs Vanhée, Umeå University; Fabian Lorig, Malmö University

Social Artificial Intelligence, Dino Pedreschi, University of Pisa; Frank Dignum, Umeå University

Introduction to Intelligent User Interfaces (UIs), Albrecht Schmidt, LMU Munich; Sven Mayer, LMU Munich; Daniel Buschek, University of Bayreuth

Machine Learning With Neural Networks, James Crowley, INRIA and Grenoble Institut Polytechnique

Deep Learning Methods for Multimodal Human Activity Recognition, Paul Lukowicz, DFKI/TU Kaiserslautern

Learning and Reasoning with Logic Tensor Networks, Luciano Serafini, Fondazione Bruno Kessler

Learning Narrative Frameworks From Multi-Modal Inputs, Luc Steels, Universitat Pompeu Fabra Barcelona

Law for Computer Scientists, Mireille Hildebrandt, Vrije Universiteit Brussel; Arno De Bois, Vrije Universiteit Brussel

Writing Science Fiction as An Inspiration for AI Research and Ethics Dissemination, Carme Torras, UPC



Yoshua Bengio, MILA, Quebec

Yoshua Bengio is recognized worldwide as one of the leading experts in artificial intelligence, and is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO. In 2019, he was awarded the prestigious Killam Prize and in 2021 he became the second most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and Canada and an Officer of the Order of Canada. Concerned about the social impact of AI and the objective that AI benefits all, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.

Kate Crawford

Kate Crawford, Professor, is a leading international scholar of the social and political implications of artificial intelligence. Her work focuses on understanding large-scale data systems in the wider contexts of history, politics, labor, and the environment. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at Microsoft Research New York, and an Honorary Professor at the University of Sydney. She is the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris, where she co-leads the international working group on the Foundations of Machine Learning. Over her twenty-year research career, she has also produced groundbreaking creative collaborations and visual investigations. Her project Anatomy of an AI System with Vladan Joler won the Beazley Design of the Year Award, and is in the permanent collection of the Museum of Modern Art in New York and the V&A in London. Her collaboration with the artist Trevor Paglen produced Training Humans – the first major exhibition of the images used to train AI systems. Their investigative project, Excavating AI, won the Ayrton Prize from the British Society for the History of Science. Crawford's latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press), has been described as “a fascinating history of data” by the New Yorker, as a “timely and urgent contribution” by Science, and was named one of the best books on technology in 2021 by the Financial Times.

Yvonne Rogers, UCLIC - UCL Interaction Centre

Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC and a deputy head of the Computer Science department at University College London. Her research interests are in the areas of interaction design, human-computer interaction and ubiquitous computing. A central theme of her work is concerned with designing interactive technologies that augment humans. The current focus of her research is on human-data interaction and human-centered AI. Central to her work is a critical stance towards how visions, theories and frameworks shape the fields of HCI, cognitive science and Ubicomp. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in-the-wild studies) and far-reaching research agendas (e.g., "Being Human: HCI in 2020"). She has also published two monographs, "HCI Theory: Classical, Modern and Contemporary" and, with Paul Marshall, "Research in the Wild". She is a fellow of the ACM, BCS and the ACM CHI Academy.

Ben Shneiderman, University of Maryland

Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, NAI, and the Visualization Academy and a Member of the U.S. National Academy of Engineering. He has received six honorary doctorates in recognition of his pioneering contributions to human-computer interaction and information visualization. His widely-used contributions include the clickable highlighted web-links, high-precision touchscreen keyboards for mobile devices, and tagging for photos. Shneiderman’s information visualization innovations include dynamic query sliders for Spotfire, the development of treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis for electronic health records. Ben is the lead author of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). He co-authored Readings in Information Visualization: Using Vision to Think (1999) and Analyzing Social Media Networks with NodeXL (2nd edition, 2019). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, 2016) describes how research can produce higher impacts. His forthcoming book on Human-Centered AI will be published by Oxford University Press in January 2022.



Mikel Artetxe at Facebook AI Research has been selected as the winner of the EurAI Doctoral Dissertation Award 2021.

In his PhD research, Mikel Artetxe has fundamentally transformed the field of machine translation by showing that unsupervised machine translation systems can be competitive with traditional, supervised methods. This is a game-changing finding which has already made a huge impact on the field. To solve the challenging problem of unsupervised machine translation, he first introduced an innovative strategy for aligning word embeddings from different languages, which are then used to induce bilingual dictionaries in a fully automated way. These bilingual dictionaries are subsequently used in combination with monolingual language models, as well as denoising and back-translation strategies, to end up with a full machine translation system.

The EurAI Doctoral Dissertation Award will be officially presented at ACAI 2021 on Tuesday, October 12th, at 17.00 (CET). Mikel Artetxe will also give a talk:

Title: Unsupervised machine translation

Abstract: While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train machine translation systems in an unsupervised way, using monolingual corpora alone. Most existing approaches rely on either cross-lingual word embeddings or deep multilingual pre-training for initialization, and further improve this system through iterative back-translation. In this talk, I will give an overview of this area, focusing on our own work on cross-lingual word embedding mappings, and both unsupervised neural and statistical machine translation.
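As a rough illustration of the iterative back-translation idea described in the abstract above, the short Python sketch below shows the overall training loop. It is an assumption-laden sketch, not the dissertation's actual code: TranslationModel, translate() and train_step() are hypothetical placeholders standing in for a real neural or statistical translation model initialised from cross-lingual word embeddings.

# Schematic sketch of iterative back-translation for unsupervised MT.
# Illustration only: TranslationModel, translate() and train_step() are
# hypothetical stand-ins, not the actual systems described above.

class TranslationModel:
    """Placeholder for a translation model initialised from cross-lingual embeddings."""

    def translate(self, sentences):
        # A real model would produce target-language translations;
        # here we simply echo the input so the sketch runs end to end.
        return list(sentences)

    def train_step(self, sources, targets):
        # A real model would update its parameters on (source, target) pairs.
        pass


def iterative_back_translation(mono_src, mono_tgt, rounds=3):
    """Train src->tgt and tgt->src models from monolingual corpora only."""
    src2tgt, tgt2src = TranslationModel(), TranslationModel()
    for _ in range(rounds):
        # Back-translate monolingual target text to obtain synthetic
        # (source, target) pairs for training the src->tgt direction.
        synthetic_src = tgt2src.translate(mono_tgt)
        src2tgt.train_step(synthetic_src, mono_tgt)
        # Repeat in the opposite direction for the tgt->src model.
        synthetic_tgt = src2tgt.translate(mono_src)
        tgt2src.train_step(synthetic_tgt, mono_src)
    return src2tgt, tgt2src

# Example usage with toy monolingual corpora (for illustration only):
# src2tgt, tgt2src = iterative_back_translation(["hello world"], ["hallo welt"])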

 



The number of places for on-site participation is limited. The registration is now closed.

Category Early-bird registration (until 15 September) Late registration (after 16 September)
(PhD) Student 250€ 300€
Non-student 400€ 450€

Members of EurAI member societies are eligible for a discount (30€).

Students attending on-site will have an opportunity to apply for scholarships.

By registering, you

  • commit to attend the ACAI2021 School and do the assignments (where applicable),
  • commit to receiving further instructions,
  • confirm having acquired approval for participation in ACAI2021 School from your supervisor (where applicable).

Please note, the registration fee does not cover accommodation or travel costs.

Please check the information on entry restrictions, testing and quarantine regulations in Germany.



Virginia Dignum, Umeå University
ACAI 2021 General Chair

 

Paul Lukowicz, German Research Center for Artificial Intelligence
ACAI 2021 General Chair

 

Mohamed Chetouani, Sorbonne Université
ACAI 2021 Publications Chair

 

Davor Orlic, Knowledge 4 All Foundation
ACAI 2021 Publicity Chair

 

Tatyana Sarayeva, Umeå University
ACAI 2021 Organising Chair



Venue: Forum Digital Technologies // CINIQ Center: Salzufer 6 (main venue), 10587 Berlin

Travelling and staying in Berlin: The ACAI 2021 school participants are responsible for their own accommodation and trip to Berlin.

Visa: The organizing committee can provide ACAI 2021 school participants with an invitation letter. For the invitation letter, we need proof of enrollment at your university and a recommendation letter from your supervisor describing why it is important for you to attend ACAI 2021. The ACAI 2021 school participant is responsible for the visa application.

COVID-19 guidance: Please check the information on entry restrictions, testing and quarantine regulations in Germany.

 



Organizers

The organizers of this event are Prof. Virginia Dignum and other consortium members in charge of the work package on AI Ethics and Responsible AI. The research in WP5 will deal with various ethical issues such as transparency and whether biases are pre-programmed, unintentionally introduced by the algorithm, or the result of disproportionate data.

About the event

In this virtual event, we'll discuss the issue of defining AI for regulatory and policy purposes. There is an increasing realisation that researchers, regulators and policymakers are struggling to identify what exactly they are addressing, with views ranging from 'magic' to the whole of computing, and from robotics to very narrow specific statistical techniques, which renders any attempt at regulation or policy guidance quite useless.

The result of this event will be a research brief proposing a definitional framework to inform the current discussion around AI regulation. Our primary focus is the current regulatory efforts at the European Parliament and Commission, but we hope to be useful to a wider audience, including proposals that contribute to shaping education, auditing and industry views on AI.

Register here

Participation is free of charge, but registration is required in order to organise the round table discussions. A link to the Zoom meeting will be sent to all registered participants prior to the event.

Programme

17:00‑17:30 Welcome, fire-start presentations and Q&A
Marko Grobelnik

There won't be any perfect definition of AI, but we urgently needed a 'good enough' one yesterday

Eva Kaili

EU approach to AI regulation

Catelijne Muller

TBA

Francesca Rossi

Can we really define AI?

Michael Wooldridge

When is an algorithm AI? And if we can't answer that, how can we regulate AI?

17:45‑18:45 Round table discussions
18:45‑19:00 Summary and conclusions

Meet the Speakers

Marko Grobelnik, Artificial Intelligence Laboratory, JSI

Marko Grobelnik is a researcher in the field of Artificial Intelligence. Marko co-leads the Artificial Intelligence Lab at the Jozef Stefan Institute, co-founded the UNESCO International Research Center on AI (IRCAI), and is the CEO of Quintelligence.com. He collaborates with major European academic institutions and major industries such as Bloomberg, British Telecom, the European Commission, Microsoft Research and the New York Times. Marko is co-author of several books, co-founder of several start-ups, and is or was involved in over 70 EU-funded research projects in various fields of Artificial Intelligence. Marko represents Slovenia in the OECD AI Committee (ONE AI), in the Council of Europe Committee on AI (CAHAI), and in the Global Partnership on AI (GPAI). In 2016 Marko became Digital Champion of Slovenia at the European Commission.

Eva Kaili, Member of the European Parliament

Eva Kaili is a Member of the European Parliament, part of the Hellenic S&D Delegation since 2014. She is the Chair of the Future of Science and Technology Panel in the European Parliament (STOA) and the Centre for Artificial Intelligence (C4AI), Member of the Committees on Industry, Research and Energy (ITRE), Economic and Monetary Affairs (ECON), Budgets (BUDG), and the Special Committee on Artificial Intelligence in a Digital Age (AIDA). Eva is a member of the delegation to the ACP-EU Joint Parliamentary Assembly (DACP), the delegation for relations with the Arab Peninsula (DARP), and the delegation for relations with the NATO Parliamentary Assembly (DNAT). In her capacity, she has been working intensively on promoting innovation as a driving force of the establishment of the European Digital Single Market. She has been the draftsperson of multiple pieces of legislation in the fields of blockchain technology, online platforms, big data, fintech, AI and cybersecurity, as well as the ITRE draftsperson on Juncker plan EFSI2 and more recently the InvestEU program. She has also been the Chair of the Delegation to the NATO PA in the European Parliament, focusing on Defence and Security of Europe. Prior to that, she has been elected as a Member of the Hellenic Parliament 2007-2012, with the PanHellenic Socialist Movement (PASOK). She also worked as a journalist and newscaster prior to her political career. She holds a Bachelor degree in Architecture and Civil Engineering, and Postgraduate degree in European Politics.

Catelijne Muller, ALLAI

Catelijne Muller is President and co-founder of ALLAI, an independent organisation that promotes responsible development, deployment and use of AI. She is a former member of the EU High-Level Expert Group on AI, which advised the European Commission on economic, social, legal and ethical strategies for AI. She is AI-Rapporteur at the EESC and was Rapporteur of the EESC opinion on Artificial Intelligence and Society, the EESC opinion on the EU White Paper on AI and the EESC opinion on the EU AI Regulation (upcoming). From 2018 to 2020 she headed the EESC Temporary Study Group on AI and she is a member of the EESC Digital Single Market Observatory. She is a member of the OECD Network of Experts on AI (ONE.AI). She advises the Council of Europe on the impact of AI on human rights, democracy and the rule of law. Catelijne is a Master of Laws by training and worked as a Dutch qualified lawyer for over 14 years prior to committing her efforts to the topic of Responsible AI.

Michael Wooldridge, Oxford University

Michael Wooldridge (Oxford University) is a Professor of Computer Science and Head of Department of Computer Science at the University of Oxford, and a programme director for AI at the Alan Turing Institute. He has been an AI researcher for more than 30 years, and has published more than 400 scientific articles on the subject, including nine books. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of AI (AAAI), and the European Association for AI (EurAI). From 2014-16, he was President of the European Association for AI, and from 2015-17 he was President of the International Joint Conference on AI (IJCAI). 

Francesca Rossi (IBM)

Francesca Rossi is an IBM fellow and the IBM AI Ethics Global Leader. She is an AI scientist with over 30 years of experience in AI research, on which she has published more than 200 articles in top AI journals and conferences. She co-leads the IBM AI ethics board and actively participates in many global multi-stakeholder initiatives on AI ethics. She is a member of the board of directors of the Partnership on AI and the industry representative in the steering committee of the Global Partnership on AI. She is a fellow of both the worldwide association of AI (AAAI) and of the European one (EurAI), and she will be the next president of AAAI.