HumanE AI Net January 2023 microproject call

 General Information

This is the call for microproject proposals in place from January 2023. It reflects the HumanE AI Net research agenda (see HAI-Net-Deliverable-D6.1.pdf) designed last year. Although individual topics have been driven by different WPs (or WP combinations), each topic is open to all WPs; submissions across WPs, as well as proposals working across topics, are encouraged. There is no deadline for the call. Instead, this call is handled as a “living document” that will be updated as new topics and insights emerge.

The administrative rules of the microprojects are as before (at least two partners; ideally 2-6 months duration; 2-4 PMs per partner; obligation to produce a tangible result to be made available through the AI4EU (AI4Europe) platform). The following is new:

  1. We will use the HumanE AI Net website for submission. This is where the most current versions of the call will be available.
  2. Now that COVID is hopefully over for good, the travel requirement that was originally (in the proposal) a core part of the concept is reinstated. This means that each microproject should include a few days of visits between the sites (the longer the better; after all, this is about networking).
  3. Partners who have run out of funds can now apply for microprojects from our “reserve fund”. The following applies:
    1. We expect typical requests per partner to be between 10K and 20K Euro. This should give most partners 2-3 PMs and should allow us to finance around 30 “grants”.
    2. Preference will be given to projects that involve external partners (who, as described below, will receive funding from a separate “pot”) and to those that involve industry.
  4. We need to intensify interaction with groups outside the project. Microprojects that include at least one HumanE AI Net partner and one external group will be given priority, especially when reserve funds are requested for the HumanE AI Net group. For the external group, travel funds (including subsistence for longer stays) will be paid from the project. This includes inviting external partners to spend even the entire MP duration at a HumanE AI Net site. No person months can be paid for external partners, though.
  5. Evaluation Criteria:
    1. Alignment with call objectives and the specific research direction
    2. Feasibility, innovation and societal relevance of the proposed approach
    3. Measurable impact potential of the solution on theories, methods, and societal or economic/business impact
    4. Quality of the proposed collaboration and partnership

Establishing Common Ground for Collaboration with AI Systems (WP 1-2 motivated)

The focus of this topic is on ‘Collaborative Artificial Intelligence’ as described in Section 3.1 of the updated HumanE AI Research Agenda. We are interested in micro-projects and clusters of micro-projects that seek to provide practical demonstrations, tools, or new theoretical models for AI systems that can collaborate with and empower individuals or groups of people to attain shared goals.

Recent progress in machine learning has provided a variety of powerful new enabling technologies for intelligent systems.  We seek to harness these advances in areas such as large language models, generative systems, adversarial learning, self-supervised learning, visual object detection and natural language understanding to provide new foundations for establishing common ground for collaboration with AI systems.

We are interested in AI systems that can both communicate and understand descriptions of situations, goals, intentions or operational plans in order to establish shared understanding for collaboration. Descriptions may be expressed by sounds, motion, mechanical forces, visual displays, natural language, or any other communication mode, but must be expressed in a manner that is comprehensible to a human partner. We are particularly interested in theories and demonstrations of systems that can explain their internal models by providing additional information to justify statements and answer questions such as who, what, where, when, why and how.

Examples of targeted outcomes could include tools, demonstrations or theoretical models for topics such as, but not limited to:

  1. Systems that use sounds and visual displays and/or mechanical forces to guide and assist human operations in dynamically changing environments;
  2. Systems that use natural language or other interactions to obtain a model of the goals and intentions of a partner in order to provide information, explanations, warnings, or suggest possible courses of actions;
  3. Systems that can interpret narrative descriptions of events in order to verify facts, answer questions, provide explanations for events, or provide additional information.

The ambition should be to enable interaction between AI systems and humans that is two-way, constructing shared representations and allowing new representations for perceptions, actions, or events to be defined. This has the advantage that the shared representation does not need to be finalised before interaction begins but can be adapted in response to the exchange of information, analogous to the way humans often synchronise their understanding by explaining different terms, concepts, or situations. Micro-projects that explore this or other aspects of interactive alignment are encouraged, even if they aim only at modest complexities of representation. In general, we encourage applications that aim to demonstrate modest but measurable steps towards the more ambitious goals!

We are keen that micro-projects demonstrate tangible progress towards the more ambitious goals described above, and we therefore encourage applicants to provide concrete measures of progress towards Collaborative AI that their project will monitor. At the same time, in order to keep the projects aligned with the larger goals, we also encourage projects to document how the work addresses those goals. If appropriate, you may also include in your submission applications for follow-on MPs that build on the work of the initial MP, should it prove successful. We would consider giving pre-approval to follow-on projects, conditional on the initial project successfully meeting its objectives.

Creation/Augmentation of realistic Datasets (WP 6 motivated)

Having access to data and ensuring data privacy are difficult to realize at the same time. While data protection laws, such as the GDPR, provide a form of safety for users, they can also create challenges for data engineers and AI practitioners. Often, data is available within a company but cannot be accessed by other departments due to legal restrictions.

This leads to a chicken-and-egg problem. Researchers cannot provide sufficient reasoning for data access as the merit of analyzing the data is not known a priori. Yet this merit can only be assessed if access to the data is granted.

Consequently, there is a need for datasets that do not fall under these restrictions. One option is anonymized data. However, completely anonymizing data is often complex and very costly. In this call for microprojects, we propose an alternative option: generating artificial data that has the same characteristics as restricted personal data. This artificial data could be used for preliminary data analysis, possibly warranting access to the real data. Apart from the technical challenges, this call encompasses selected ethical and societal challenges:

  • Creating a latent representation of the original data that can be used to generate artificial data
  • Evaluating the quality of artificial data in terms of its usefulness but also its degree of anonymization
  • User modeling/personalization based on a latent representation instead of personal data
  • Ethical aspects and legal boundaries of modeling users via digital twins (a latent representation of their personal data)

We invite micro-projects covering one or more of the aforementioned challenges. Additionally, micro-projects should focus on conducting applied research within industrial applications and with societal use cases in mind. While relevant for our overall research agenda (responsible usage of data), this call particularly addresses pillars 2 (providing usable data for multimodal perception and modeling) and 5 (ethical aspects and legal boundaries of artificially created data).
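
As a deliberately minimal sketch of the first two challenges above (fitting a representation of the original data and checking the utility of the sampled artificial data), the toy example below uses a simple Gaussian model as a stand-in for a real generative model such as a VAE; the data, column names, and all numbers are purely illustrative assumptions, not part of the call:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "restricted" data set: age and income for 1000 people.
# In a real project this would be the data that cannot be shared.
real = rng.multivariate_normal(mean=[40.0, 3000.0],
                               cov=[[100.0, 800.0], [800.0, 250000.0]],
                               size=1000)

# The simplest possible "latent representation": the empirical mean and
# covariance of the original data (a stand-in for e.g. a trained VAE).
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Generate artificial records from the fitted model; no real record is copied.
synthetic = rng.multivariate_normal(mu, sigma, size=1000)

# Crude utility check: summary statistics of the artificial data should
# track those of the real data.
print(np.allclose(synthetic.mean(axis=0), mu, rtol=0.1))
```

Evaluating the degree of anonymization (e.g. resistance to membership-inference attacks) is a separate, harder problem than this utility check, which is precisely why it appears as its own challenge above.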

Interactive Grounding (WP3 motivated)

Recent breakthroughs in AI have shown proficiency in interactions with the natural world and with language; however, coordination and collaboration with human partners remain an open challenge.

This topic focuses on methods, theories, studies, and techniques for common ground. Common ground refers to shared (common) beliefs and goals related to a shared activity. When there is no common ground, repairs and compensations may be needed. Participants must correct misunderstandings or take time to re-establish common ground.

This call invites proposals considering both technical and human aspects of grounding. Based on the Stockholm workshop, topics in 2023 include but are not limited to:

  • Exploiting context-awareness for grounding
  • Grounding with Large Language Models
  • Pragmatics, including linguistic and embodied aspects
  • Affordances for grounding
  • Co-adaptive processes in grounding
  • Storytelling and narratives
  • Information retrieval
  • Speech-based and multimodal interaction with AI
  • Cultural factors affecting grounding
  • Empirical measurements of grounding
  • Design processes for grounding
  • Special application areas with specific requirements for grounding, such as translation, games, explainable AI, etc.

We invite micro-projects covering one or more of the aforementioned challenges. Micro-projects with a focus on industrial applications and societal use cases are also welcome. All proposals must make clear how they contribute to the theme of the call: interactive grounding.

Measuring, modeling, and predicting the individual and collective effects of different forms of AI influence in socio-technical systems at scale (WP4 motivated)

The rise of large-scale socio-technical systems (STS) in which humans interact with AI systems, including assistants and recommenders, multiplies the opportunities for the emergence of collective phenomena and tipping points, with unexpected, possibly unintended, consequences. A better understanding is needed of the impact of AI systems on complex STS and the unique feedback loop they generate: the past evolution of a complex system influences the training of the AIs, which in turn influences the complex system’s future evolution. For example, a navigation system’s suggestions may create chaos if too many drivers are directed onto the same route, and personalised recommendations on social media may amplify polarisation, filter bubbles, and radicalisation. On the other hand, we may learn how to foster “wisdom of crowds” and collective-action effects to face social and environmental challenges.

This topic focuses on methods for measuring, modeling, and predicting the individual and collective effects of different forms of AI influence in socio-technical systems at scale. In order to understand the impact of AI on socio-technical systems and design next-generation AIs that team with humans to help overcome societal problems rather than exacerbate them, we need to lay the foundations of Social AI, a new discipline at the intersection of Complex Systems, Network Science, AI and the (Computational) Social Sciences.

Activities that will be funded include, but are not limited to, case studies, experiments, simulations, and novel models and methods exploring the frontier of Social AI along the following dimensions:

  • How can we describe STS rigorously in mathematical terms?
  • What is the impact of AIs on individual and collective goals?
  • What are the new network effects and collective phenomena due to the interacting human-AI system?
  • How to design next-generation human-centered AI architectures that balance individual and collective goals with platform sustainability?
  • How to make people aware of the impact of AIs on collectivity?
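The feedback loop described above (past behaviour trains the AI, whose suggestions shape future behaviour) can be illustrated with a deliberately minimal simulation. The popularity-based recommender, the 80/20 split, and all numbers below are illustrative assumptions, not a model endorsed by the call:

```python
import random

random.seed(1)

# Toy feedback loop: a hypothetical recommender that always suggests the
# currently most-clicked item. The recommender is "trained" on past clicks,
# and its suggestions shape future clicks -- a filter bubble in miniature.
counts = [1, 1, 1]                      # initial click counts for three items
for _ in range(1000):
    if random.random() < 0.8:           # 80% of users follow the recommendation
        choice = counts.index(max(counts))
    else:                               # 20% pick an item independently
        choice = random.randrange(3)
    counts[choice] += 1

# A tiny initial advantage is amplified into near-total dominance.
print(counts)
```

Even this crude sketch exhibits a tipping point: once one item leads, the loop never lets the others recover, which is the kind of collective phenomenon the dimensions above ask proposers to describe, measure, and redesign against.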

ELS evaluation projects (WP5 motivated)

This topic focuses on micro-projects that aim to assess, evaluate, and monitor the implementation of and adherence to ELS (ethical, legal, societal) principles and guidelines. The micro-projects should involve collaboration between at least one partner from the network and at least one external organization or industrial player. Micro-projects are expected to take 2-4 months and deliver a tangible output (e.g. a demo, dataset, or publication).

Activities that will be funded include, but are not limited to:

  • Research on methods and tools for ELS assessment and monitoring, with particular relevance for those that address the European Trustworthy AI guidelines or the AI Act
  • Implementation and testing of ELS principles and guidelines in real-world scenarios
  • Development and validation of metrics to evaluate ELS principles
  • Dissemination and communication of the results and impact to relevant stakeholders
  • Methods for the explanation and justification of the outputs of machine learning systems

Innovation projects (WP6&7 motivated)

We recognize that many research results remain in the laboratory and never reach the market or end-users. This is why we aim to run a call for innovation projects to transfer research results and generate outcomes that benefit society across various domains, including healthcare, finance, transportation, and more.

This topic aims to support the development and implementation of innovative AI solutions that not only represent significant technological advances but also have a measurable impact on society and the economy. Activities should address real-world challenges and opportunities in various domains such as healthcare, transportation, energy, and agriculture, among others.

We invite proposals for microprojects that aim to develop and implement innovative AI solutions with significant socio-economic impact. The microprojects should involve collaboration between at least one partner from the network and at least one external organization or industrial player.

Activities that will be funded include, but are not limited to:

  • Research and development of innovative AI solutions
  • Implementation and testing of the solutions in real-world scenarios
  • Measuring and evaluating the socio-economic impact of the solutions
  • Dissemination and communication of the results and impact to relevant stakeholders

Evaluation Criteria:

  • Feasibility and innovation of the proposed solution
  • Relevance and alignment with the specific research direction
  • Impact potential of the solution on society and the economy
  • Quality of the proposed collaboration and partnership

Education & training projects (WP8 motivated)

Human-Centered AI mobilizes several disciplines such as AI, human-machine interaction, philosophy, ethics, law and social sciences.

The ambition of HumanE AI Net is to establish a training agenda to improve the education of a new generation of creative researchers and innovators, knowledgeable and skilled in Human-Centered AI. This call for micro-projects aims to create and distribute relevant dissemination and knowledge-spreading materials such as Human-Centered AI curricula, lectures, practicals, tutorials, and MOOCs, which could take the form of online materials as well as training events.

We encourage micro-projects engaging external partners, in particular micro-projects conducted in the context of the International Artificial Intelligence Doctoral Academy (AIDA), which gathers the four ICT-48 networks (AI4Media, ELISE, HumanE AI NET, TAILOR) and the VISION project.

The call for submissions is closed.