Beyond ChatGPT:

How Can Europe Get Ahead in Generative AI?

Our goal is to facilitate a European brand of trustworthy, ethical AI that enhances human capabilities and empowers citizens and society.

  When? Thursday, May 25th (14:00 - 16:00)
  Where? Brussels, Belgium @ the European Parliament | Room A5G1

Paul-Henri Spaak Building, Rue Wiertz 60, 1047 Brussels, Belgium

  What? In-person event (by invitation) + Public Online Event (by registration)
  How to Register? Fill out this form to get the streaming link (Deadline: Tuesday, May 23rd)

Learn about the Meeting Outcomes

Agenda

 

The event is moderated by Ms. Lenneke Hoedemaker.

Please note that the agenda is subject to minor changes.

 

14:00 Welcome and setting the stage: State of Play in European Regulation on Artificial Intelligence by Irena Joveva MEP, Committee on Culture and Education.
14:10 Welcome from ICT 48 AI Project Coordinators
  • Humane AI represented by Paul Lukowicz, DFKI.
  • ELISE represented by Cees Snoek, University of Amsterdam, Netherlands.
  • TAILOR represented by Fredrik Heintz, Linköping University, Sweden.
  • AI4Media represented by Ioannis Kompatsiaris, CERTH.
14:25 Scientific Foundations of Large Language Models (LLMs) by Hermann Ney, RWTH Aachen, Germany.
14:40 Panel on Industrial and Research Potential of AI in Europe

MODERATOR: Lenneke Hoedemaker, Moderator and Presenter.

  • Virginia Dignum, Professor in Responsible AI, Umeå University, Sweden, and Scientific Director of WASP-HS.
  • Ieva Martinkenaite, Senior Vice President and Head of Research and Innovation, Telenor Group.
  • Francesca Rossi, IBM Fellow and AI Ethics Global Leader and AAAI President.
15:10 Message from the Co-Sponsoring Organisations
  • CLAIRE and EurAI are represented by Holger Hoos, RWTH Aachen University, Germany.
  • IRCAI-UNESCO and OECD views are represented by Marko Grobelnik, International Research Centre on Artificial Intelligence (IRCAI) under the auspices of UNESCO, Jozef Stefan Institute, Slovenia.
15:20 Panel on the Societal Impact and AI Policies in Europe

MODERATOR: Lenneke Hoedemaker, Moderator and Presenter.

  • Catelijne Muller, President and co-founder of ALLAI.
  • Brando Benifei, Member of the European Parliament.
  • Clara Neppel, Senior Director of the IEEE European office.
  • Dino Pedreschi, Professor at the University of Pisa, Italy and GPAI member.
15:55 - 16:00 Closing remarks by Cécile Huet, Deputy Head of the Unit “Robotics and Artificial Intelligence Innovation and Excellence” at the European Commission.
16:30 – 18:00 Social Reception: Networking and Informal Scientific Discussions (Attendance is in-person only by invitation. The location will be shared by email.)

What is the purpose of the HumaneAI European Parliament event?

We would like to suggest a half-day event on May 25th at the European Parliament, Paul-Henri Spaak Building, Rue Wiertz 60, 1047 Brussels, Belgium, titled Beyond ChatGPT: How Can Europe Get Ahead in Generative AI?, organized by a broad consortium from science and civil society, including the HumanE-AI-Net European Network of Centres of Excellence in Artificial Intelligence.

HumanE-AI-Net is a research network of leading European universities, AI institutes, and corporations, funded by Future and Emerging Technologies (FET) and dedicated to empowering people through the scientific and technological development of AI, in accordance with European ethical, social, and cultural values.

Other European partners and communities that support this event and will be involved in its organization are the International Research Centre on Artificial Intelligence (IRCAI) under the auspices of UNESCO, the Confederation of Laboratories for Artificial Intelligence in Europe (CLAIRE), and other ICT-48 networks such as TAILOR, AI4Media, and VISION, and language projects like ELG and ELE.

The Beyond ChatGPT event aims to bring together AI experts, policymakers, and other stakeholders to demystify and critically examine key concepts and concerns, and to provide an opportunity for a well-grounded discussion of what needs to be done to ensure that European economies and societies benefit from the development and deployment of AI technologies such as LLMs.

Which questions will be addressed?

  1. To what extent does Europe have the capability and capacity to compete with US-based industries on LLMs and similarly impactful AI technologies?
  2. What can and must be done to ensure European competitiveness in this area?
  3. How can we best harness the opportunities afforded by the latest AI technologies for the benefit of European economies and societies? What role does the proposed AI Act play in this context?
  4. Which of the widely debated risks are real, and how should these be addressed? Is there a need for a moratorium or similar restrictions on research and innovation in key areas of AI?

What is the rationale?

Artificial intelligence has been an intense focus of public debate in recent years, and – following recent progress in so-called large language models (LLMs), such as ChatGPT – is now an area of increasingly vigorous economic activity and societal concern.

Europe carries the responsibility of shaping the AI revolution. The choices we face today are related to fundamental ethical issues about the impact of AI on society, in particular, how it affects labor, social interactions, healthcare, privacy, fairness, and security. The ability to make the right choices requires new solutions to fundamental scientific questions in AI and human-computer interaction (HCI).

What is the vision?

This vision closely follows the ambitions articulated by the EC in its Communication on AI: a European brand of AI that, by design, is trustworthy, adheres to European ethical, political, and social norms, and focuses on the benefit to European citizens as individuals, European society, and the European economy. At the heart of our vision is the understanding that these ambitions can be achieved neither by legislation or political directives alone nor by traditional research in established disciplinary “silos”. Instead, achieving them requires fundamentally new solutions to core research problems at the interface of AI, human-computer interaction (HCI), and social science, combining theory, real-world use cases, and innovation-oriented research.

What are we trying to achieve?

The HumaneAI community aims to develop the scientific foundations and technological breakthroughs needed to shape the ongoing artificial intelligence (AI) revolution to fit the above vision. Key challenges include: learning complex world models; building effective and fully explainable machine learning systems; adapting AI systems to dynamic, open-ended real-world environments; achieving an in-depth understanding of humans and complex social contexts; and enabling self-reflection within AI systems.

What will be the impact?

The HumanE AI community has mobilized a research landscape far beyond the direct project funding and brought together a unique innovation ecosystem. This has the potential for significant disruption across its socio-economic impact areas, including Industry 4.0, health & well-being, mobility, education, policy, and finance. We aim to spearhead the efforts required to help Europe achieve a step-change in AI uptake across the economy.

Why are we the best to do it?

The project consortium, comprising 53 institutions across 20 European countries, holds that artificial intelligence is made by us humans: European researchers and citizens who care deeply about the future of AI in Europe and its use for the benefit of all Europeans.

HumaneAI across Europe

Location: Live Digital Event
Host: STI Forum – 7th Multi-stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals
Date: 4 May 2022, 9:00 AM – 10:30 AM (ET), 02:00 PM London (GMT+1:00), 10:00 PM Tokyo (GMT+9:00)

Hosted by IRCAI and Permanent Mission of Slovenia to the UN, co-sponsored by Permanent Mission of Japan to the UN and Permanent Mission of South Africa to the UN

Download PDF Invitation

Description

The Permanent Mission of Slovenia to the UN and the International Research Centre on Artificial Intelligence under the auspices of UNESCO (IRCAI) are organizing a side event launching a Global Network of Excellence Centres in artificial intelligence (AI) for sustainable development goals (SDGs).

AGENDA

9:00 – 9:10 ET Opening and Introduction (3 minutes each)

  • Ambassador Boštjan Malovrh, Permanent Representative of Slovenia to the UN
  • Ambassador Tetsuya Kimura, Permanent Representative of Japan to the UN
  • Tshilidzi Marwala, Vice Chancellor and Principal, University of Johannesburg
  • Marielza Oliveira, Director for Partnerships and Operational Programme Monitoring, Communications and Information Sector, UNESCO

9:10 – 9:15 ET Keynote

  • Maria Fasli, UNESCO Chair in Analytics and Data Science, Executive Dean, Faculty of Science and Health at University of Essex

9:15 – 9:20 ET Introduction into the Network

  • John Shawe-Taylor, Director IRCAI

9:20 – 10:30 ET Flash talks presenting the history, aim, objectives, composition, activities, programmes, and technology focus of the Network (3 minutes each)

Network portfolio and history

  • Samuel Kaski, Aalto University, ELISE Network Coordinator
  • Paul Lukowicz, DFKI, HumaneAI Network Coordinator

Network as a catalyst for research and innovation

  • Matthew Smith, Senior Program Specialist, IDRC
  • Nelson González, Head Global Impact Computing, AWS

Network solutions and evidence of impact in real-life

  • Kathleen Siminyu, Machine Learning Fellow, Mozilla Foundation
  • Nuria Oliver,  ELLIS Unit Alicante Foundation

Network connecting worldwide communities of practice

  • Ulrich Paquet, Deep Learning Indaba, DeepMind
  • Nuria Oliver,  ELLIS Unit Alicante Foundation

Network reach across all UN regions

  • Alexandre F. Barbosa, Director Cetic
  • Emmanuel Letouzé, Director Data-Pop Alliance

10:30 – 10:50 A virtual press conference will be organised after the event, with speakers available for questions:

  • John Shawe-Taylor, Director IRCAI
  • Emmanuel Letouzé, Director Data-Pop Alliance

Organizers

Event Contact

Programme

Time Speaker Description
09:20 – 09:30 Paul Lukowicz Introduction to Humane AI Net
09:30 – 10:00 Hideki Koike Skill Acquisition and Transfer System using Computer Vision, Deep Learning, and Soft Robotics
10:00 – 10:30 Elisabeth André Augmentative Technologies for People with Special Needs
10:30 – 11:00 Shinichi Furuya Beyond expertise of experts: novel sensorimotor training specialized for expert pianists
11:30 – 12:00 Asa Ito The paradox in skill acquisition: what does it mean for a body to be able to do what it could not do?
12:00 – 12:30 Albrecht Schmidt Amplifying the Human Intellect through Artificial Intelligence
12:30 – 13:00 Jun Rekimoto Human-AI Integration: Using Deep Learning to Extend Human Abilities and Support Ability Acquisition

Background

Are you interested in the latest advancements and research trends in human augmentation? Join us for the “Symposium on Interaction with Technologies for Human Augmentation” organized by LMU Munich and HumaneAI. This one-day event will feature keynotes from leading experts in the field, a lab tour with demonstrations of current research projects, and opportunities for discussion and networking. The event will be held on Monday, Feb 20, 2023 at the premises of LMU Munich. In addition, we plan to broadcast the talks for registered remote participants. Register now!

Organizers

Event Contact

Programme

Time Speaker Description
7 PM CET Start

Background

Artificial intelligence (AI) is expected to contribute over $15 trillion to the global economy by 2030 (PwC) and shape the future of human society. A critical challenge for the industry to live up to its potential is the lack of diversity in the development, research, application, and evaluation of new AI technology. With their series “Remarkable Women in AI,” the AI Competence Center at German Entrepreneurship and the Transatlantic AI eXchange invite all genders to a series of inspirational, educational, collaborative, and global discussions on gender diversity in AI – with the aim of inspiring attendees to take steps in their respective roles to address the gender gap in AI.

Target Audience
This event is directed towards students of all genders, entrepreneurs, and women in research institutions and corporations.

Organizers

Event Contact

Programme

Time Speaker Description
18:00 Start

Background

#ShareTheFailure - We change the view on failure.

On February 6th we will continue with FuckupNights® Munich Vol. 3: Artificial Intelligence.

◉ F_ckuppers tell stories about burned money, personnel decisions that led to total failure, and products that had to be recalled. They tell it all, with us on stage.

🔥 The idea comes from Mexico, where five friends realized that it takes a relaxed, drink-fueled evening to speak openly about failure instead of flattering each other over professional successes. In other words, to tell the stories that no one puts on their résumé. Because without failure there is no success. ◉

◉ Schedule:

✅ Check-in 6:00 p.m.

✅ Start 6:30 p.m.

✅ End approx. 9:00 p.m.

This project aims to take seriously the fact that the development and deployment of AI systems is not above the law, as decided in constitutional democracies. This feeds into the task of addressing the question of incorporation of fundamental rights protection into the architecture of AI systems including (1) checks and balances of the Rule of Law and (2) requirements imposed by positive law that elaborates fundamental rights protection.

A key result of this task will be a report on a coherent set of design principles firmly grounded in relevant positive law, with a clear emphasis on European law (both EU and Council of Europe). To help developers understand the core tenets of the EU legal framework, we have developed two tutorials, one in 2020 on Legal Protection by Design in relation to EU data protection law [hyperlink to Tutorial 2020] and one in 2021 on the European Commission’s proposal of an EU AI Act [hyperlink to Tutorial 2021]. In the Fall of 2022 we will follow up with a Tutorial on the proposed EU AI Liability Directive.

Our findings will entail:

  • A sufficiently detailed overview of legally relevant roles, such as end-users, targeted persons, software developers, hardware manufacturers, those who put AI applications on the market, platforms that integrate service provision both vertically and horizontally, and providers of infrastructure (telecom providers, cloud providers, providers of cyber-physical infrastructure, smart grid providers, etc.);

  • A sufficiently detailed legal vocabulary, explained at the level of AI applications, covering legal subjects, legal objects, legal rights and obligations, private law liability, and fundamental rights protection;

  • High-level principles that anchor the Rule of Law: transparency (e.g. explainability, preregistration of research design), accountability (e.g. clear attribution of tort liability, fines by relevant supervisors, criminal law liability), and contestability (e.g. the repertoire of legal remedies, the adversarial structure of legal procedure).

Lecture series for Tutorial 2021 AI Act

 

This tutorial explains, in the form of slides with audio, the proposal for an EU AI Act, as proposed by the European Commission in the Spring of 2021. It does not discuss the subsequently proposed amendments.

Key issues discussed are: (1) the overall architecture of the AI Act, (2) the pragmatic approach to the definition of AI systems (the Act regulates ‘AI systems’, not ‘AI’ as such), (3) the different roles, notably that of the providers of these systems, (4) the emphasis on high-risk AI systems, and (5) the details of the requirements that must be met by all high-risk systems. It also explains which AI practices are prohibited and which transparency requirements must be met by a small set of AI systems.

Lectures series for Tutorial 2020 Legal Protection by Design

Organizers

Event Contact

  • Rita P. Ribeiro (INESC TEC)

Programme

Time Speaker Description
9:00 Virginia Dignum Responsible AI: from Principles to Action

Background

The workshop intends to attract papers on how Data Science can and does contribute to social good in its widest sense.

Topics of interest include:

  • Government transparency and IT against corruption

  • Public safety and disaster relief

  • Access to food, water, sanitation and utilities

  • Efficiency and sustainability

  • Climate change

  • Data journalism

  • Social and personal development

  • Economic growth and improved infrastructure

  • Transportation

  • Energy

  • Smart city services

  • Education

  • Social services, unemployment and homelessness

  • Healthcare and well-being

  • Support for people living with disabilities

  • Responsible consumption and production

  • Gender equality, discrimination against minorities

  • Ethical issues, fairness, and accountability.

  • Trustworthiness and interpretability

  • Topics aligned with the UN development goals

The major selection criteria will be the novelty of the application and its social impact. Position papers are welcome too.

We are also interested in applications that have built a successful business model and are able to sustain themselves economically. Most Social Good applications have been carried out by non-profit and charity organisations, conveying the idea that Social Good is a luxury that only societies with a surplus can afford. We would like to hear from successful projects, which may not be strictly "non-profit" but have Social Good as their main focus.

Accepted papers will be published by Springer as joint proceedings of several ECML PKDD workshops.

Organizers

Event Contact

Programme

Time Speaker Description
1:00 to 4:30 pm Dr. Julian Wörmann

Background

Experts agree that artificial intelligence is as disruptive as electricity or the Internet. But what does that mean for you and your mid-sized company? Which use cases are really relevant for you? And what do you need to do to actually implement them? We would like to discuss these questions with you again this year and provide answers.

The aim of the event is to accompany small and medium-sized companies in particular as they enter the field of artificial intelligence. The benefits and potentials of the technology for your company will be highlighted, and entrepreneurs who have already successfully tackled the topic of artificial intelligence will show how AI can succeed.

Expect leading AI experts and users to share their knowledge with you, contacts to support you with your projects and questions, and interactive formats where you can learn how to harness the potential of AI for your business today.

Organizers

Event Contact

Programme

Time Speaker Description
14:00 – 16:00 Mireille Hildebrandt Tutorial on the proposal for an AI Act

Background

All partners need to prepare for the tutorial. To make this easy, a small library of presentations discusses the most important players, concepts, structure, and obligations in the proposal.

The presentations consist of slides with audio, explaining the text.

The library can be found at the internal service of the HAI network.

During the session, Hildebrandt will present a general introduction to the proposal, highlighting its architecture and its links and deep links with the existing framework (product safety) and the upcoming frameworks (Digital Markets Act, Digital Services Act, Data Governance Act). This introduction will form slide-set 0, which will be added to the library after the event.

TUTORIAL Library:

A series of Slide-sets with Audio:

    1. AIA General provisions, definitions and prohibitions
      Title I – art. 1-4, Annex I and Title II – art. 5
    2. AIA What are high-risk AI systems?
      Title III, chapter 1 – art. 6-7, Annex II and III
    3. AIA Risk management system for high risk systems
      Title III, chapter 2 – art. 9
    4. AIA Data and data governance of high risk systems
      Title III, chapter 2 – art. 10
    5. AIA Transparency of high risk systems: What information must be provided to whom and how?
      Title III, chapter 2 – art. 13
    6. AIA Human oversight of high risk systems
      Title III, chapter 2 – art. 14
    7. AIA Accuracy, robustness and cybersecurity of high risk systems
      Title III, chapter 2 – art. 15
    8. AIA Obligations of providers of high risk systems
      Title III, chapter 3 – art. 16-23
    9. AIA Obligations for users of high risk systems
      Title III, chapter 3 – art. 29
    10. AIA Notification, conformity assessment of high risk systems
      Title III, chapter 5 – art. 42-44 and 48-49
    11. AIA Transparency for medium risk systems
      Title IV – art. 52
    12. AIA Remaining issues

The EU-funded HumanE-AI-Net project brings together leading European research centres, universities, and industrial enterprises into a network of centres of excellence. Leading global artificial intelligence (AI) laboratories will collaborate with key players in areas such as human-computer interaction and the cognitive, social, and complexity sciences. The project aims to draw researchers out of their narrowly focused fields and connect them with people exploring AI on a much wider scale. The challenge is to develop robust, trustworthy AI systems that can ‘understand’ humans, adapt to complex real-world environments, and interact appropriately in complex social settings. HumanE-AI-Net will lay the foundations for designing the principles of a new science that will make AI based on European values and closer to Europeans.

Organizers

  • Mohamed Chetouani (Sorbonne University)

Event Contact

  • Mohamed Chetouani (Sorbonne University)

Programme

Time Speaker Description
9:30 Mohamed Chetouani (Sorbonne University) Introduction & Objectives
9:45 Paul Lukowicz (DFKI) HumanE AI NET
10:00 Ioannis Pitas (AI4MEDIA) Lessons learnt from the AI4Media Curriculum formation exercise
10:20 Helena Lindgren (Umeå University) Human-Centered AI Education Addressing Societal Challenges
10:50 Wendy Mackay (INRIA) Participatory Design for Human-Centered AI
11:10 Andrea Aler Tubella (Umeå University) How to teach Trustworthy AI? Challenges and recommendations from expert interviews.
11:30 Loïs Vanhée (Umeå University) Towards a GEDAI academy - Growing Ethical Designers of AI
11:50 Martin Welß (Fraunhofer Institute) AI4EU Experiments (alias VirtualLab in HumaneAI)
12:10 Mohamed Chetouani (Sorbonne University) De-briefing and Conclusions

Background

Objectives:
Human Centric AI should be beneficial to individuals and society as a whole; trustworthy, ethical, and value-oriented; and focused on enhancing users’ capabilities and empowering them to achieve their goals.

Human Centric AI requires new approaches to train current and future actors in AI, human-machine interaction, cognitive science, and the social sciences. These approaches are central to HumanE AI Net and should now be translated into Human Centric AI curricula that can be used to derive local curricula.

The focus of this workshop is the design of coherent Human Centric AI curricula by defining disciplines, strategies, methods, and learning outcomes aligned with the needs of society.

Zoom Link
https://us02web.zoom.us/j/89703561322?pwd=SEQ2V3BvbGNZWXpWN2pvbFpJRjFTQT09

ID : 897 0356 1322
Code: 269442