Organizers

  • Joongi Shin (Aalto University, Finland)
  • Janin Koch (LISN, Université Paris-Saclay, CNRS, Inria, France)
  • Andrés Lucero (Aalto University, Finland)
  • Peter Dalsgaard (Aarhus University, Denmark)
  • Wendy E. Mackay (LISN, Université Paris-Saclay, CNRS, Inria, France)

Event Contact

  • Janin Koch (LISN, Université Paris-Saclay, CNRS, Inria, France)

Programme

Time Speaker Description
Morning session All participants Co-design the roles of AI in human-human collaborative ideation
Afternoon session All participants Co-design the process and form of human-human collaborative ideation

Background

People can generate more innovative ideas when they collaborate with one another, collectively exploring ideas and exchanging viewpoints. Advancements in artificial intelligence have opened up new opportunities in people's creative activities, where individual users ideate with diverse forms of AI. For instance, AI agents and intelligent tools have been designed as ideation partners that provide inspiration, suggest ideation methods, or generate alternative ideas. However, what AI can bring to collaborative ideation among a group of users is not yet fully understood. Compared to ideating with an individual user, ideating with multiple users requires understanding the users' social interaction, transforming individual efforts into a group effort, and, ultimately, leaving users satisfied that they collaborated with the other group members. This workshop aims to bring together a community of researchers and practitioners to explore the integration of AI in human-human collaborative ideation. The exploration will center on identifying the potential roles of AI as well as the process and form of collaborative ideation, considering what users want to do with AI and what they want to do with other humans.

Organizers

  • Florian Müller (Ludwig-Maximilians-University Munich)
  • Yuanting Liu (fortiss GmbH)
  • Andreas Keilhacker (Start2)

Event Contact

  • Yuanting Liu (fortiss GmbH)

Programme

Time Speaker Description
Day 1, 11:00 - 11:10 Florian Müller (LMU), Yuanting Liu (fortiss), Andreas Keilhacker (Start2) Welcome
11:10 - 11:35 Joongi Shin (Aalto University) Keynote 1: Can LLMs make people more creative?
11:35 - 12:00 Albrecht Schmidt (LMU) Keynote 2: Symbiotic Creativity: The Fusion of Human Ingenuity and Machine Intelligence
12:00 - 13:00 Lunch
13:00 - 13:30 Thomas Pfau (Aalto University) The Task and Technology Stack
13:30 - 20:00 Hackathon Time!
Day 2, 09:00 - 11:00 Finalization of the Projects
11:00 - 11:30 Coffee Break
11:30 - 13:00 Pitches and Demos
13:00 - 14:00 Lunch
14:00 - 15:00 Closing and Awards

Background

This hackathon seeks to develop systems that utilize the creative potential of large language models (LLMs) and support users in generating creative texts together with them. These systems should promote creativity through the clever use of prompts, targeted questions to the user, and similar techniques. The solutions will be evaluated by a jury, which will pay particular attention to the extent to which they support joint creativity between humans and machines.
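
As one illustration of the kind of prompt-based support such a system might provide, the following minimal Python sketch alternates between LLM suggestions and user edits on a shared draft. The complete() function is a hypothetical placeholder for whichever LLM API a team chooses, and the prompt wording is only one possible technique for keeping the human creatively involved; it is a sketch of the idea, not a reference implementation.

"""Minimal sketch of a human-LLM co-writing loop (illustrative only)."""

from typing import Callable


def complete(prompt: str) -> str:
    # Hypothetical placeholder: replace with a call to a real LLM API.
    return f"[LLM suggestion based on: ...{prompt[-60:]!r}]"


def co_write(seed: str, llm: Callable[[str], str], rounds: int = 3) -> str:
    """Alternate between LLM suggestions and user edits on a shared draft."""
    draft = seed
    for i in range(rounds):
        # Ask the model to extend the draft and pose a question back to the
        # user, one simple way of prompting for joint creativity.
        prompt = (
            "You are a co-writing partner. Continue the draft below with one or two "
            "sentences, then ask the user one question that could take the text in "
            f"an unexpected direction.\n\nDraft:\n{draft}"
        )
        suggestion = llm(prompt)
        print(f"\n--- Round {i + 1}: LLM suggestion ---\n{suggestion}")
        user_turn = input("Your addition (press Enter to accept the suggestion): ")
        # The user always decides what actually enters the shared draft.
        draft += "\n" + (user_turn if user_turn else suggestion)
    return draft


if __name__ == "__main__":
    print(co_write("A lighthouse keeper finds a map drawn in invisible ink.", complete))

The loop deliberately leaves the final decision about what enters the draft to the user, which is the kind of joint human-machine creativity the jury will be looking for.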

Organizers

  • Florian Müller (Ludwig-Maximilians-Universität München)
  • Jana Kümmel (fortiss GmbH)
  • Manuela Lambacher (fortiss GmbH)
  • Yuanting Liu (fortiss GmbH)
  • Andreas Keilhacker (Start2)

Event Contact

  • Jana Kümmel (fortiss GmbH)
  • Manuela Lambacher (fortiss GmbH)

Programme

Time Speaker Description
14:00 - 14:30 Dr. Holger Pfeifer (fortiss) + Dr. Sabine Wiesmüller (Start2) Welcome and introduction
14:30 - 15:15 Zhiwei Han Process optimization in practice: Opportunities through LLMs for SMEs
15:30 - 16:15 Thomas Weber Effective prompts for SMEs: Hands-on techniques for using LLMs

Background

In the workshop event “LLM4SME” you will learn how to use LLMs and how to apply them to different business models:

  • Discover the many possible applications of LLMs for small and medium-sized enterprises (SMEs), from customer service automation to content development and market research.
  • Through real-world examples, you will develop a better understanding of how LLMs can help increase efficiency, drive innovation, and gain competitive advantage.
  • Get practical examples and learn how other companies are successfully using LLMs.
  • Take part in interactive discussions and exchange ideas with other SME representatives.

Organizers

Event Contact

Programme

Time Speaker Description

Background

The Research Seminar Ethics and AI, led by Prof. Karen Joisten and Dr. Ettore Barbagallo, was aimed at PhD students, postdoctoral scholars, and research fellows, and took place at the University of Kaiserslautern-Landau (RPTU) during the winter term 2023-2024 (October 2023 to February 2024). The seminar focused, on the one hand, on the language used in both media and academic settings to describe AI systems and human-AI interaction; on the other hand, it addressed the ethical consequences of this language use for AI and human society. Prof. Joisten and Dr. Barbagallo presented their research findings, which were based on the analysis of texts and documents such as the Ethics Guidelines for Trustworthy AI (AI HLEG 2019), and engaged in discussions with seminar participants. Their research began with the observation of the increasing resemblance between AI systems and biological systems, particularly human beings. This phenomenon, they argued, makes it increasingly natural for people to attribute human qualities to AI systems. Prof. Joisten and Dr. Barbagallo demonstrated that the primary ethical risk of using anthropomorphic language in relation to AI is not the humanization of AI, but rather the mechanization of human life.

Organizers

Event Contact

  • Sónia Teixeira (INESC TEC)

Programme

Time Speaker Description
9:00 - 12:30 Parallel sessions
14:00 - 15:30 Roundtable
15:45 - 17:30 Parallel sessions
17:30 Networking

Background

INESC TEC organized the HumanE-AI Metrics for Ethics Workshop on June 26, 2024, in Porto, bringing together consortium members and industry guests. In the morning, parallel discussion sessions were held on “Methods and Tools” and “Critical Multidisciplinary Studies.” In the afternoon, alongside the continuation of the morning sessions, a roundtable took place with representatives from the HumanE-AI project and leading companies. The roundtable discussed the “Trustworthy Assessment for Companies” questionnaire, developed within the AI4EU project. The event fostered an intense exchange of ideas and common challenges, resulting in suggestions for improvements to the dashboard and questionnaire and inspiring reflections on the challenge of developing ethical and trustworthy AI systems.

Organizers

  • Juan M Duran (TUDelft)

Event Contact

  • Juan M Duran (TUDelft)

Programme

Time Speaker Description
16:00 - 18:00 Manuel Barrantes Professor of philosophy at California State University Sacramento

Background

This event consisted of a single talk on social explanations, understood as explanations that abstract from the physical, biological, and psychological levels. The talk was part of the Micro Project coordinated by Dr. Ettore Barbagallo.

Organizers

  • Joao Gama (INESC TEC)

Event Contact

  • Rita P. Ribeiro (INESC TEC)

Programme

Time Speaker Description
9:00 Alex Jaimes AI & Public Data for Peacekeeping and Emergency Response

Background

The Eighth Workshop on Data Science for Social Good, SoGood 2023, was held in conjunction with ECML PKDD 2023 in Torino, Italy.

The workshop intends to attract papers on how Data Science can and does contribute to social good in its widest sense.

Topics of interest include:

  • Government transparency and IT against corruption
  • Public safety and disaster relief
  • Access to food, water, sanitation and utilities
  • Efficiency and sustainability
  • Climate change
  • Data journalism
  • Social and personal development
  • Economic growth and improved infrastructure
  • Transportation
  • Energy
  • Smart city services
  • Education
  • Social services, unemployment and homelessness
  • Healthcare and well-being
  • Support for people living with disabilities
  • Responsible consumption and production
  • Gender equality, discrimination against minorities
  • Ethical issues, fairness, and accountability
  • Trustability and interpretability
  • Topics aligned with the UN development goals

The major selection criteria will be the novelty of the application and its social impact. Position and survey papers are welcome too.

We are also interested in applications that have built a successful business model and are able to sustain themselves economically. Most Social Good applications have been carried out by non-profit and charity organisations, conveying the idea that Social Good is a luxury that only societies with a surplus can afford. We would like to hear from successful projects, which may not be strictly "non-profit" but have Social Good as their main focus.

Organizers

Event Contact

Programme

Time Speaker Description
09:00-09:10 Jennifer Renoux Welcome
09:10-09:30 Introduction Rounds + Lightning talks
09:30-10:30 Ilaria Torre Voices from the future: creating appropriate verbal and nonverbal communication methods for Human-Robot Interaction
10:30-11:00 Coffee Break
11:00-13:00 Networking + Poster Session
13:00-14:00 Lunch
14:00-15:30 World Café
15:30-16:00 Coffee Break
16:00-16:45 Plenary discussion
16:45-17:00 Wrap-up

Background

The primary goal of this workshop is to bridge disciplinary boundaries between various fields, including but not limited to AI, HRI, and HCI, in order to gather a multi-perspective view on the topic of Communication in Human-AI Interaction. In particular, we are interested in exploring the core characteristics of AI communicators and human-AI communication, exchanging research methods, and fostering long-term collaboration between practitioners of different fields.
As the study of communication in human-AI interaction is by nature multidisciplinary, we aim for this workshop to be a multidisciplinary platform where researchers can learn to work together and pave the way to impactful research. We also wish to use this opportunity to draw a tentative disciplinary map of the topic of Communication in Human-AI Interaction, describing different perspectives, research directions, and methods, and how these perspectives relate to one another within the research area as a whole.

The morning will focus on networking. First, participants will introduce themselves, and those with an accepted position paper will present it in a round of lightning talks. After the keynote and the coffee break, we will organize a poster session for participants to discover each other's research. The afternoon will be organized as a World Café in which participants will reflect on topics related to Communication in Human-AI Interaction. Depending on the number of participants, we will hold between three and five rounds of discussion.

Event Description


Contribution 1: Workshop on #EuroGen: Mapping the Future with Generative AI

  • 35+ experts from the Networks of Excellence, ADRA, AI4Europe, and the European Commission attended and contributed to identifying core research challenges for advancing GenAI in Europe.
  • We had four expert talks on the core challenges for GenAI in Europe: Paul Lukowicz (DFKI) spoke about grounding GenAI in the real world. Rudolph Triebel (DLR) and Michael Beetz (DLR) complemented this vision and spoke about ways to give robots perception and cognition. John Shawe-Taylor concluded the pitches and spoke about human-centred human-AI collaboration.
  • Given the recent funding initiatives by the European Commission, which aim to allocate 3 billion euros towards the development of GenAI until 2027, we identified the most pertinent scientific challenges for advancing GenAI in Europe: low-resource multimodal GenAI, human-robot interaction, and the physical grounding of AI in the real world.
  • We identified the top application areas, from the domains proposed by the European Commission, that can benefit from GenAI: health, digital industry, and climate change.

 

Contribution 2: Panel on Harnessing Generative AI for Inclusive Global Education

 

Contribution 3: Sharing experiences about using AI on Demand

 

Organizers

Event Contact

Background

Follow-up Tutorial on the final text of the AI Act, with a focus on the introduction of legal obligations for the placing on the market or putting into use of General Purpose AI Models and General Purpose AI Systems, and their relevance for Human-Centric AI.
The Tutorial builds on the HAI-NET Tutorial of 2021, which explained the structure of the proposal of the AI Act. See here to access the 2021 Tutorial.
The 28 June 2024 Tutorial is based on the final text of the AI Act, which is in force from 10 July 2024 and becomes applicable within two years, depending on the part. See here for the final text.
The objectives of the Tutorial are to help computer scientists better understand
  • the main goals of the Act in the context of the EU internal market (harmonisation)
  • the applicability with regard to General Purpose AI Models and Systems (new compared to the 2021 proposal)
  • some of the legal obligations with respect to the design of these models and systems
  • the relevance of the AI Act for human-centric AI models and systems
You can find the recording of the Tutorial here.
After the Tutorial, we finalised a series of seven audio slide decks, which you can find below:

organized by Prof. Mireille Hildebrandt and Dr. Gianmarco Gori
Research Group of Law, Science, Technology & Society studies (LSTS)
Vrije Universiteit Brussel

12.00 – 14.00 Online
Those who wish to register should send an email to Bert.Frans.P.De.Bisschop@vub.be by Thursday 27 noon CEST.
They will receive the link on Friday morning.

 

The Tutorial is organised by the Legal Partner of the HAI-NET. The focus will be on the introduction of legal obligations for placing on the market or putting into use General Purpose AI Models and General Purpose AI Systems.

The Tutorial will build on the HAI-NET Tutorial of 2021, which explained the structure of the proposal of the AI Act. To access the 2021 Tutorial, see: http://www.vernon.eu/wiki/AI_Act_Tutorial.

The 2024 Tutorial will be based on the final text of the AI Act, which will be in force within weeks from now (June 2024) and become applicable two years after that (though some parts will become applicable earlier). For the final text, see: https://data.consilium.europa.eu/doc/document/PE-24-2024-INIT/en/pdf

The objectives of the Tutorial are to help computer scientists better understand:

  • the main goals of the Act in the context of the EU internal market (harmonisation)
  • the applicability with regard to General Purpose AI Models and Systems (new compared to the 2021 proposal, targeting Large Whatever Models)
  • some of the legal obligations with respect to the design of these models and systems
  • the relevance of the AI Act for human-centric AI models and systems

We need to emphasise that our objective is to give our audience a first taste of the legal regime that applies to real-world human-centric AI systems that integrate generative AI. For a more in-depth understanding, we refer to the report that Dr. Gori is preparing and to the Chapter that Dr. Gori and Prof. Hildebrandt are co-authoring on the subject in the Handbook of Generative AI for Human-AI Collaboration, eds. Mohamed Chetouani, Andrzej Nowak and Paul Lukowicz (Springer, forthcoming).

Organizers

Event Contact

Programme

Time Speaker Description
12.00-14.00 Dr. Gianmarco Gori and Prof. Mireille Hildebrandt Prof. Mireille Hildebrandt is a Research Professor of 'Interfacing Law and Technology' at the Law & Criminology Faculty of Vrije Universiteit Brussel and holds the Chair of 'Smart Environments, Data Protection and the Rule of Law' at the Science Faculty of Radboud University in the Netherlands. Dr. Gianmarco Gori is a guest professor and postdoctoral researcher at the Research Group of Law, Science, Technology and Society (LSTS) at the Law Faculty of Vrije Universiteit Brussel.

Background

In this tutorial we will focus on the extent to which Generative AI, based on ‘Large Whatever Models’, falls within the scope of the AI Act and on the kind of legal obligations that should be taken into account by the developers of Generative AI that is meant to contribute to human-centric AI.

To this end we will first unpack the legal definitions of General Purpose AI Models (GPAI Models) and General Purpose AI Systems (GPAI Systems) and explain what kind of models qualify as GPAI models and what kind of systems qualify as GPAI systems. This will be followed by an inquiry into when a GPAI system is – legally speaking – a high risk AI system and into when a GPAI model is – legally speaking – an AI model generating systemic risk.

Second, we will elicit a small set of requirements that must be met by providers and/or deployers of GPAI Systems that integrate GPAI Models. As the HAI-NET is focused on contributing to real-world human-centric AI, we will not focus on the research exemption that may apply to HAI-NET research. The whole point of legal protection by design is to ensure that such protection is built into the design phase. This means that developers must be aware of the requirements that providers and/or deployers of real-world applications of their models face.

Finally, we need to emphasise that our objective is to give our audience a first taste of the legal regime that applies to real-world human-centric AI systems that integrate generative AI. For a more in-depth understanding, we refer to the HAI-NET report that Dr. Gori is preparing on the subject and to the Chapter that Dr. Gori and Prof. Hildebrandt are co-authoring in the Handbook of Generative AI for Human-AI Collaboration, eds. Mohamed Chetouani, Andrzej Nowak and Paul Lukowicz (Springer, forthcoming).