Dynamic Funding with Internal Calls

Innovative funding mechanisms for AI projects

The HumanE AI preparatory action proposal was based on the observation that an operational Flagship can follow one of two opposite funding models. The first is a conventional RIA model with most of the funds “statically” assigned to partners for specific tasks defined in the DOW; in other words, a normal RIA just scaled up. In the second, a Flagship works more like a funding agency than a RIA: it has a core group setting the agenda and distributing most funds through open calls. Both models have advantages but also significant limitations, especially considering the specifics of AI and the AI community.

The “scaled up RIA” model is not well suited to such a highly dynamic, extremely fast-evolving field as AI. Even for a typical RIA with a 3- to 4-year lifetime, plus about one year from proposal idea to project start, new developments can make ideas and approaches defined at the time of writing not just outdated but plain obsolete. Over a longer run, mechanisms for dynamic refinement of the research agenda and/or approach are indispensable. In other words, the proposal needs to define the broad research direction and expected results, while the detailed research questions and approaches need to be continuously evolved and refined on the basis of both project-internal results and insights and general developments in the field.

The above considerations motivated the “funding agency” model in our case. Unfortunately, this model only works within well-defined, highly specialized communities; within very broad, loose communities it is difficult to avoid extreme oversubscription of the calls and a dilution of the community. An example of a functioning “funding agency” model is the quantum technology Flagship: the ability to produce a proposal on a well-defined topic in quantum technology is limited to a manageable set of groups, and those groups are exactly the community that the Flagship wants to address and keep together. By contrast, it is sometimes said that “anyone who can program in Python and install TensorFlow considers himself an AI expert”. This means that open calls run the risk of being swamped by submissions, creating huge overhead and making it difficult to build and maintain a focused community that has the right mix of competences and, above all, the required scientific and technological excellence.

In the proposal we put forward a concept for combining the two approaches described above. This concept has been further developed during the preparatory action, as detailed below. It is based on the following ideas:

Core group funding

The backbone of a project is a stable group of core partners recruited from top European AI groups in such a way that it includes adequate representation from all relevant communities and covers (at least at the top level) all relevant competences. Such a stable core guarantees coherence, consistency and continuity over the course of the project. In a large-scale, long-term initiative, 10-20% of the annual budget would be devoted to financing the core group’s activities, with a focus on management, agenda setting, communication and integration of results.

Network funding

At the center of the S&T activities would be a large network of excellent academic and industrial research groups (up to 100 organizations with up to 400 individual groups). The members of the network should, as basic funding, receive small contributions, mostly travel- and dissemination-oriented grants (for a total of 5-10% of the annual budget). In addition, they should have the right to apply for more significant research funds from the third pillar (see below). The set of institutions in the network should remain relatively stable, but should be subject to regular reviews and updates. Network membership could, for example, default to three years, with the need to reapply afterward. This means that less active groups could be dropped, allowing emerging stars to join as new members. Thus, even in a highly dynamic field such as AI, long-running initiatives such as the original Flagships could always ensure the participation of top groups in the field.

Internal competitive calls

Members of the network would implement the research agenda through medium-sized, medium-length research projects granted through competitive internal calls to network members (about 60-70% of the annual budget). The calls would be issued in accordance with the project’s long-term research vision, adapting it to the evolving state of the art. Thus, the calls would provide a bridge between the long-term vision and expected impact on the one side and, on the other, short-term developments in the research field as well as emerging ideas and project results. They would be driven by the core partners with input from the larger network, the community as a whole and the project officers. The calls should be lightweight and focused on scientific excellence, but they should include requirements and incentives to foster community building, exploitation, multiplier effects and other “political” concerns. Most projects should be collaborations between at least two network partners and require the inclusion of external partners to broaden the community. There should also be incentives (e.g., more funds, the ability to apply for more projects) for aligning additional research around the calls (e.g., parallel applications to different funding agencies, PhD theses, etc.). Finally, all projects should have to present their results at regular plenary project meetings, in addition to demonstrating other success factors (e.g., publications, software, etc.).
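To make the three-pillar budget split concrete, the following is a minimal illustrative sketch (in Python) of how an annual budget could be allocated within the percentage ranges given above. Only the per-pillar ranges come from the text; the absolute budget figure, the chosen shares and the function name are hypothetical placeholders, not figures from the proposal.

# Hypothetical illustration only: the ranges restate the three funding pillars
# described above; the budget figure and chosen shares are assumed placeholders.

PILLAR_RANGES = {
    "core group funding": (0.10, 0.20),          # management, agenda setting, integration
    "network funding": (0.05, 0.10),             # travel/dissemination grants for members
    "internal competitive calls": (0.60, 0.70),  # medium-sized research projects
}

def split_budget(annual_budget, shares):
    """Split an annual budget across the pillars, checking that each chosen
    share stays within its stated range and the total does not exceed 100%."""
    for pillar, share in shares.items():
        low, high = PILLAR_RANGES[pillar]
        if not low <= share <= high:
            raise ValueError(f"{pillar}: share {share:.0%} outside {low:.0%}-{high:.0%}")
    if sum(shares.values()) > 1.0:
        raise ValueError("chosen shares exceed 100% of the annual budget")
    return {pillar: annual_budget * share for pillar, share in shares.items()}

# Example with an assumed 100 M EUR annual budget and one admissible allocation.
allocation = split_budget(100_000_000, {
    "core group funding": 0.15,
    "network funding": 0.10,
    "internal competitive calls": 0.70,
})
for pillar, amount in allocation.items():
    print(f"{pillar}: {amount / 1e6:.1f} M EUR")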