Organizers

  • Frank Dignum (UMU)
  • Virginia Dignum (UMU)

Event Contact

  • Frank Dignum (UMU)

Programme

Time Speaker Description
12:15 Carlos Zednik Upcoming seminar Friday 6 October: Does Explainable AI Need Cognitive Models?

Background

#frAIday is a series of inspiring talks on Artificial Intelligence organised by TAIGA, the Centre for Transdisciplinary AI at Umeå University. Participating in #frAIday is your opportunity to share your knowledge about AI, learn more, and discuss a wide range of perspectives on AI. Join us and meet interesting new people!

Location: Live Digital Event
Host: STI Forum – 7th Multi-stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals
Date: 4 May 2022, 9:00 AM – 10:30 AM (ET), 02:00 PM London (GMT+1:00), 10:00 PM Tokyo (GMT+9:00)

Hosted by IRCAI and the Permanent Mission of Slovenia to the UN, co-sponsored by the Permanent Mission of Japan to the UN and the Permanent Mission of South Africa to the UN


Description

The Permanent Mission of Slovenia to the UN and the International Research Centre on Artificial Intelligence under the auspices of UNESCO (IRCAI) are organizing a side event launching a Global Network of Excellence Centres in artificial intelligence (AI) for sustainable development goals (SDGs).

AGENDA

9:00 – 9:10 ET Opening and Introduction (3 minutes each)

  • Ambassador Boštjan Malovrh, Permanent Representative of Slovenia to the UN
  • Ambassador Tetsuya Kimura, Permanent Representative of Japan to the UN
  • Tshilidzi Marwala, Vice Chancellor and Principal, University of Johannesburg
  • Marielza Oliveira, Director for Partnerships and Operational Programme Monitoring, Communications and Information Sector, UNESCO

9:10 – 9:15 ET Keynote

  • Maria Fasli, UNESCO Chair in Analytics and Data Science, Executive Dean, Faculty of Science and Health at University of Essex

9:15 – 9:20 ET Introduction to the Network

  • John Shawe-Taylor, Director IRCAI

9:20 – 10:30 ET Flash talks presenting the history, aim, objectives, composition, activities, programmes, and technology focus of the Network (3 minutes each)

Network portfolio and history

  • Samuel Kaski, Aalto University, ELISE Network Coordinator
  • Paul Lukowicz, DFKI, HumaneAI Network Coordinator

Network as a catalyst for research and innovation

  • Matthew Smith, Senior Program Specialist, IDRC
  • Nelson González, Head Global Impact Computing, AWS

Network solutions and evidence of impact in real-life

  • Kathleen Siminyu, Machine Learning Fellow, Mozilla Foundation
  • Nuria Oliver, ELLIS Unit Alicante Foundation

Network connecting worldwide communities of practice

  • Ulrich Paquet, Deep Learning Indaba, DeepMind
  • Nuria Oliver, ELLIS Unit Alicante Foundation

Network reach across all UN regions

  • Alexandre F. Barbosa, Director Cetic
  • Emmanuel Letouzé, Director Data-Pop Alliance

10:30 – 10:50 A virtual press conference will be organised after the event, with speakers available for questions:

  • John Shawe-Taylor, Director IRCAI
  • Emmanuel Letouzé, Director Data-Pop Alliance

Organizers

Event Contact

Programme

Time Speaker Description
7 PM CET Start

Background

Artificial intelligence (AI) is expected to contribute over $15 trillion to the global economy by 2030 (PwC) and to shape the future of human society. A critical challenge for the industry, if it is to live up to this potential, is the lack of diversity in the development, research, application, and evaluation of new AI technology. With their series “Remarkable Women in AI,” the AI Competence Center at German Entrepreneurship and the Transatlantic AI eXchange invite all genders to a series of inspirational, educational, collaborative, and global discussions on gender diversity in AI – with the aim of inspiring attendees to take steps in their respective roles to address the gender gap in AI.

Target Audience
This event is directed towards students and entrepreneurs of all genders, as well as women in research institutions and corporations.

Organizers

Event Contact

Programme

Time Speaker Description
Sept 4 Application Deadline
Oct 25 Andreas Keilhacker Pitch Day

Background

The startup-investor matching event Cashwalk gives startups and investors an exclusive, distraction-free platform to meet potential partners and kick off prosperous relationships.

Is your startup planning the next seed or series A funding round? Then Cashwalk is your time and stage to shine! Apply by September 4 to pitch in front of 100 investors!

You will pitch your business live on the virtual stage. During networking breaks, the participating investors have the chance to meet you at your online startup booth.

Organizers

Event Contact

Programme

Time Speaker Description
September 1 Thomas Sattelberger Keynote: Deep Tech Horizon 2030
September 1 Co-Founder Matching
September 2 Northstar Workshop
September 3 Mentor Madness
September 3 Deep Dive Workshop
September 4 Pitch Day

Background

We are announcing Deep Tech Momentum: the go-to conference for deep tech enthusiasts and founders in the early startup stages.

This event is designed to

👉 help you find co-founders,

👉 build your business faster through mentorship and

👉 receive world-class fundraising advice.

Techstars Berlin, RWTH Aachen, German Entrepreneurship and Audi Denkwerkstatt are designing this next-level European experience in collaboration with the industry’s most innovative Deep Tech experts and top Deep Tech VCs.

Deep Tech Momentum aims to connect the best European early stage deep tech founders with potential co-founders and investors. With our first conference we aim to accelerate and connect 25+ top deep tech startups and 50+ "deep tech enthusiasts".

Interested? Or do you know any Deep Tech enthusiasts or founders? Check out the website.

Organizers

  • Mohamed Chetouani (Sorbonne University)

Event Contact

  • Mohamed Chetouani (Sorbonne University)

Programme

Time Speaker Description
9:30 Mohamed Chetouani (Sorbonne University) Introduction & Objectives
9:45 Paul Lukowicz (DFKI) HumanE AI NET
10:00 Ioannis Pitas (AI4MEDIA) Lessons learnt from the AI4Media Curriculum formation exercise
10:20 Helena Lindgren (Umeå University) Human-Centered AI Education Addressing Societal Challenges
10:50 Wendy Mackay (INRIA) Participatory Design for Human-Centered AI
11:10 Andrea Aler Tubella (Umeå University) How to teach Trustworthy AI? Challenges and recommendations from expert interviews.
11:30 Loïs Vanhée (Umeå University) Towards a GEDAI academy - Growing Ethical Designers of AI
11:50 Martin Welß (Fraunhofer Institute) AI4EU Experiments (alias VirtualLab in HumaneAI)
12:10 Mohamed Chetouani (Sorbonne University) De-briefing and Conclusions

Background

Objectives:
Human Centric AI should be beneficial to individuals and to society as a whole, trustworthy, ethical and value-oriented, and focused on enhancing users’ capabilities and empowering them to achieve their goals.

Human Centric AI requires new approaches to train current and future actors in AI, human-machine interaction, cognitive science and the social sciences. These approaches are central to HumanE AI Net and should now be translated into Human Centric AI curricula that can be used to derive local curricula.

The focus of this workshop is the design of coherent Human Centric AI curricula by defining disciplines, strategies, methods and learning outcomes aligned with the needs of society.

Zoom Link
https://us02web.zoom.us/j/89703561322?pwd=SEQ2V3BvbGNZWXpWN2pvbFpJRjFTQT09

ID: 897 0356 1322
Code: 269442

Organizers

Event Contact

Programme

Time Speaker Description
18:00 Welcome and Introduction
18:05 TBA Keynote
18:30 Albrecht Schmidt, Virginia Dignum, Arno De Bois, Marc Hilbert Panel on the European AI Act

Background

Regulation to foster or hinder the AI community? Join our discussion on the proposed European AI Act with Albrecht Schmidt, Virginia Dignum, Arno De Bois and Marc Hilbert on March 3, 18:00 CET (12pm EST, 9am PST).

Organizers

Event Contact

Programme

Time Speaker Description
18:05 Roberto Di Cosmo Keynote
18:30 Roberto Di Cosmo, Ana Trisovic, Sebastian Feger, Feng Wang Panel: OpenData Business Cases – What is the Value of Data?

Background

Panel on OpenData Business Cases – What is the Value of Data?

18:00 – Welcome, Introduction
Albrecht Schmidt

18:05 – Keynote: Roberto Di Cosmo, Founder and CEO of Software Heritage
Roberto Di Cosmo

18:30 – Panel: OpenData Business Cases – What is the Value of Data?
Roberto Di Cosmo, Ana Trisovic, Sebastian Feger, Feng Wang

19:30 – Wrap up and Closing
Albrecht Schmidt

Background

A collective intelligence exercise towards shaping the research questions of Social AI, driven by societal challenges. It is implemented through a structured conversation among inter-disciplinary scientists, looking at the relationship between AI and society from multiple perspectives.

For human-AI scientists and social scientists, the challenge is how to achieve a better understanding of how AI technologies could support or affect emerging social challenges, and how to design human-centered AI ecosystems that help mitigate harms and foster beneficial outcomes oriented towards the social good.

Social Artificial Intelligence

As increasingly complex socio-technical systems emerge, made of people and intelligent machines, the social dimension of AI becomes evident. Examples range from urban mobility, with travellers helped by smart assistants to fulfill their agendas, to the public discourse and the markets, where diffusion of opinions as well as economic and financial decisions are shaped by personalized recommendation systems. In principle, AI could empower communities to face complex societal challenges. Or it can create further vulnerabilities and exacerbate problems, such as bias, inequalities, polarization, and depletion of social goods.

The point is that a crowd of (interacting) intelligent individuals is not necessarily an intelligent crowd. On the contrary, it can be stupid in many cases, due to network effects: the sum of many individually “optimal” choices is often not collectively beneficial, because individual choices interact, influence each other, and compete for common resources. Navigation systems suggest directions that make sense from an individual perspective, but may create a mess if too many drivers are directed onto the same route. Personalized recommendations on social media often make sense to the user, but may artificially amplify polarization, echo chambers, filter bubbles, and radicalization. Profiling and targeted advertising may further increase inequality and monopolies, perpetuating and amplifying biases, discrimination, and “tragedies of the commons”.
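To make the network effect concrete, the following is a minimal illustrative sketch in Python (an illustration added here, not part of the workshop material), based on Pigou's classic two-route congestion example: when every driver independently picks the route that is best for them, the average travel time ends up worse than under a coordinated split.

    # Illustrative sketch: Pigou's two-route congestion example.
    # Route A always takes 1 hour; route B takes x hours when a
    # fraction x of all drivers uses it.

    def average_time(share_on_b: float) -> float:
        """Average travel time for a given fraction of drivers on route B."""
        time_a = 1.0            # constant travel time on route A
        time_b = share_on_b     # congestion-dependent travel time on route B
        return (1 - share_on_b) * time_a + share_on_b * time_b

    # Selfish equilibrium: route B is never slower than route A for any single
    # driver, so everyone ends up on B and each driver needs 1 hour.
    selfish = average_time(1.0)

    # Coordinated optimum: search over possible splits for the lowest average.
    coordinated = min(average_time(x / 100) for x in range(101))

    print(f"average time, selfish routing:   {selfish:.2f} h")      # 1.00 h
    print(f"average time, coordinated split: {coordinated:.2f} h")  # 0.75 h

In this toy setting the coordinated split (half of the drivers on each route) cuts the average travel time from 1.00 to 0.75 hours, even though no single driver can improve their own time by switching routes unilaterally.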

The network effects of AI and their impact on society are not sufficiently addressed by AI research, first of all because they require a step forward in the transdisciplinary integration of AI, data science, network science and complex systems with the social sciences. How can we understand and mitigate the harmful outcomes? How can we design “social AI” mechanisms that help towards agreed collective outcomes, such as sustainable mobility in cities, diversity and pluralism in the public debate, and a fair distribution of resources?

Registration

Please register here

Organizers

  • Dino Pedreschi (University of Pisa)
  • Chiara Boldrini (IIT-CNR)
  • Letizia Milli (University of Pisa)
  • Laura Sartori (University of Bologna)

In collaboration with:

  • SoBigData++, the European Research Infrastructure for Big Data and Social Mining
  • SAI, the CHIST-ERA project “Social eXplainable Artificial Intelligence”
  • XAI, the ERC Advanced Grant "Science and technology for the eXplanation of AI decision making"

Event Contact

  • Dino Pedreschi (University of Pisa)

Programme

Time Speakers
16:00 – 17:00 Setting-the-stage – plenary session

Fire-start addresses by AI scientists and social scientists (see "Meet the speakers" below).

17:00 – 18:00 Breakout – four parallel brainstorming rooms
  • Bias (video)
    • Mentors: Katharina Kinder-Kurlanda (Univ. Klagenfurt) and Salvatore Ruggieri (Univ. Pisa), Rapporteur: Anna Monreale (Univ. Pisa)
  • Inequality (video)
    • Mentors: Laura Sartori (Univ. Bologna) and Mark Coté (King’s College), Rapporteur: Luca Pappalardo (ISTI-CNR)
  • Polarization (video)
    • Mentors: Kalina Bontcheva (Univ. Sheffield) and János Kertész (Central European Univ. Vienna), Rapporteur: Chiara Boldrini (IIT-CNR)
  • Social good (video)
    • Mentors: Mohamed Chetouani (Sorbonne Univ.), Frank Dignum (Umea Univ.), Andrzej Nowak (Univ. Warsaw), Rapporteur: Michele Bezzi (SAP)
18:00 – 18:30 Restitution – plenary session (video)
Reports from the mentors and rapporteurs of the breakout sessions, and wrap-up

Meet the speakers

Alex 'Sandy' Pentland

Professor Alex 'Sandy' Pentland directs MIT Connection Science, an MIT-wide initiative, and previously helped create and direct the MIT Media Lab and the Media Lab Asia in India. He is one of the most-cited computational scientists in the world, and Forbes recently declared him one of the "7 most powerful data scientists in the world" along with the Google founders and the Chief Technical Officer of the United States. He is on the Board of the UN Foundation's Global Partnership for Sustainable Development Data, co-led the World Economic Forum discussion in Davos that led to the EU privacy regulation GDPR, and was central in forging the transparency and accountability mechanisms in the UN's Sustainable Development Goals. He has received numerous awards and prizes such as the McKinsey Award from Harvard Business Review, the 40th Anniversary of the Internet award from DARPA, and the Brandeis Award for work in privacy. He is a member of advisory boards for the UN Secretary General, the UN Foundation, and the American Bar Association, and previously for Google, AT&T, and Nissan. He is a member of the U.S. National Academy of Engineering and a council member within the World Economic Forum.

Laura Sartori

Laura Sartori is an Associate Professor of Sociology at the Department of Political and Social Sciences at the University of Bologna. She holds a Ph.D. in Sociology and Social Research from the University of Trento (2002) and has since worked on several topics related to the social and political implications of technology, from ICTs to AI. Her current projects concern (1) inequalities and the public perception of Artificial Intelligence, and (2) money and complementary currencies.

Stuart Russell

Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is a recipient of the IJCAI Computers and Thought Award and from 2012 to 2014 held the Chaire Blaise Pascal in Paris. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with an emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.

Mona Sloane

Mona Sloane is a sociologist working on design and inequality, specifically in the context of AI design and policy. She is a Senior Research Scientist at the NYU Center for Responsible AI, an Adjunct Professor at NYU’s Tandon School of Engineering, a Fellow with NYU’s Institute for Public Knowledge (IPK) and The GovLab, and the Director of the "This Is Not A Drill" program on technology, inequality and the climate emergency at NYU’s Tisch School of the Arts. She is principal investigator on multiple research projects on AI and society, and holds an affiliation with the Tübingen AI Center at the University of Tübingen in Germany. Mona is also the convener of the IPK Co-Opting AI series and serves as editor of the technology section at Public Books. Follow her on Twitter @mona_sloane.

Dino Pedreschi

Dino Pedreschi is a professor of computer science at the University of Pisa, and a pioneering scientist in data science and artificial intelligence. He co-leads with Fosca Giannotti the Pisa KDD Lab - Knowledge Discovery and Data Mining Laboratory http://kdd.isti.cnr.it, a joint research initiative of the University of Pisa and the Italian National Research Council - CNR. His research focus is on big data analytics and mining, machine learning and AI, and their impact on society: human mobility and sustainable cities, social network analysis, complex social and economic systems, data ethics, discrimination-preventing and privacy-preserving data analytics, and explainable AI. He is currently shaping the research frontier of Human-centered Artificial Intelligence, as a leading figure in the European network of research labs Humane-AI-Net (scientific director of the line “Social AI”). He is a founder of SoBigData.eu, the European H2020 Research Infrastructure “Big Data Analytics and Social Mining Ecosystem” www.sobigdata.eu. Dino is currently the Italian member of the Responsible AI working group of GPAI – the Global Partnership on AI, a member of the OECD Network of Experts in AI, and the coordinator of the working group “Big Data & AI for Policy” of the Italian Government’s “data-driven” Taskforce for the Covid-19 emergency. Twitter: @DinoPedreschi

Organizers

Event Contact

Watch the Recording

You can now watch the recording of the entire event at: https://youtu.be/pwOy6KKh_tk.
The total duration of the video is 2 hours 40 minutes.

Attend the Event

Register for an e-mail reminder: https://forms.gle/LamUhKpzN2N9FfPG7

Event Description

Recent developments have enabled humans and AI-based systems to cooperatively work towards joint goals in interactive and collaborative settings. They have not only showcased various application domains and use-cases for such interactive capabilities but also highlighted several issues and opportunities. Experts from Psychology, HCI, AI, and Computer Science will discuss some current progress, challenges, opportunities, and a vision for the future of such systems from a human-centered perspective.

Programme

Time Speaker Description
14:00–14:05 Kashyap Todi Welcome
14:05–14:30 Wendy Mackay Plenary Talk: Human–Computer Partnerships
14:30–14:55 Janet Rafner & Jacob Sherson Plenary Talk: Hybrid Intelligence
14:55–15:00 Break
15:00–15:10 Alessandro Saffiotti Short Talk: Human-AI in artistic co-creation
15:10–15:20 Janin Koch Short Talk: Visual Design Ideation with Machines
15:20–15:30 Silvia Miksch Short Talk: Guide Me in the Analysis: How can Visual Analytics enriched by guidance contribute to gaining insights and decision making
15:30–15:40 Mohamed Chetouani Short Talk: Social Learning Agents: Role of Human Behaviors
15:40–16:00 Panel Discussion
16:00 Event Close

Meet the Speakers and Organisers

Abstracts

Human–Computer Partnership (Wendy Mackay)

In this talk, Wendy Mackay will discuss moving beyond the traditional 'human-in-the-loop' perspective, which focuses on using human input to improve algorithms. She will share her vision for 'computer-in-the-loop', where intelligent algorithms serve to enhance human capabilities.

Hybrid Intelligence: First Rate Humans, Not Second Class Robots (Janet Rafner & Jacob Sherson) 

In light of the recent deep-learning-driven success of AI in both corporate and social life, there has been a growing fear of human displacement and a related call to develop IA (intelligence augmentation) rather than pure AI. In reality, most current AI applications have a significant human-in-the-loop (HITL) component and are therefore arguably more IA than AI already. From here, there are currently two trends in the field. In one trend, increasing machine autonomy is pursued, first by placing the human on the loop in order to verify the result of the machine computation, and then by hoping to take the human completely out of the loop, as in the pursuit of artificial general intelligence. Two main challenges of this approach are a) the value-alignment problem (how do we ensure that the machine satisfies human preferences when we often cannot even express or agree on these ourselves) and b) the extensive human deskilling that often accompanies algorithmic advances. In our talk, we will discuss how these two challenges may potentially be overcome by the second trend: the pursuit of increasingly intertwined human-machine operation. We will present and give examples of an operational and ambitious framework, hybrid intelligence (HI), in which the two interact synergistically and continually learn from each other.

Human-AI Collaboration in Artistic Co-creation (Alessandro Saffiotti)

Live artistic performance, like music, dance or acting, provides an excellent domain to observe and analyze the mechanisms of human-human collaboration. In this short talk, I use this domain to study human-AI collaboration. I propose a model for collaborative artistic performance, in which an AI system mediates the interaction between a human performer and an artificial one. I will illustrate this model with case studies involving different combinations of human musicians, human dancers, robot dancers, and a virtual drummer.

Visual Design Ideation with Machines (Janin Koch)

In this short talk, Janin Koch will talk about 'MayAI', 'ImageSense', and her current postdoctoral research on how humans and machines can collaborate during visual design ideation, and how this collaboration enhances the creative process and results.

Guide Me in the Analysis: How can Visual Analytics enriched by guidance contribute to gaining insights and decision making (Silvia Miksch)

Visual Analytics is "the science of analytical reasoning facilitated by interactive visual interfaces." Guidance is a "computer-assisted process that aims to actively resolve a knowledge gap encountered by users during an interactive visual analytics session." I will illustrate how guidance-enriched Visual Analytics contributes to gaining insights and making decisions.

Social Learning Agents: Role of Human Behaviors (Mohamed Chetouani)

There is an increasing number of situations in which humans and AI systems are acting, deciding and/or learning together. In this short talk, we discuss approaches and models able to capture the specific strategies humans use while they are teaching agents. We will see how social-learning-based approaches make it possible to take such strategies into account in the development of interactive machine learning techniques, in particular when it comes to social robotics.

Network

The HumanE AI Net project, funded by the European Union's Horizon 2020 programme, aims to bring together the European AI community to develop the scientific foundations and technological breakthroughs needed to shape the AI revolution in a direction that is beneficial to humans both individually and societally, and that adheres to European ethical values and social, cultural, legal, and political norms. Key specific questions that the project addresses are:

  • AI systems that "understand" humans
  • AI systems that can interact in complex social settings
  • AI systems that enhance human capabilities
  • AI systems that empower both individuals and society as a whole, carefully balancing individual benefits and the social impact of their functionality
  • AI systems that respect human autonomy and self-determination
  • Ethics and legal protection "by design" in complex dynamic AI systems

Organizers

The organizers of this event are Prof. Virginia Dignum and other consortium members in charge of the work package on AI Ethics and Responsible AI. The research in WP5 will deal with various ethical issues such as transparency and the question of whether biases are pre-programmed, unintentionally introduced by the algorithm, or the result of disproportionate data.

About the event

In this virtual event, we'll discuss the issue of defining AI for regulatory and policy purposes. There is an increasing realisation that researchers, regulators and policymakers are struggling to identify what exactly they are addressing, with views ranging from 'magic' to the whole of computing, and from robotics to very narrow statistical techniques, which renders any attempt at regulation or policy guidance quite useless.

The result of this event will be a research brief proposing a definitional framework to inform the current discussion around AI regulation. Our primary focus is on the current regulatory efforts at the European Parliament and the European Commission, but we hope the brief will also be useful to a wider audience, including for proposals that shape education, auditing and industry views on AI.

Register here

Participation is free of charge, but registration is required in order to organise the round table discussions. A link to the Zoom meeting will be sent to all registered participants prior to the event.

Programme

17:00‑17:30 Welcome, fire-start presentations and Q&A
Marko Grobelnik

There won't be any perfect definition of AI, but we urgently needed a 'good enough' one yesterday

Eva Kaili

EU approach to AI regulation

Catelijne Muller

TBA

Francesca Rossi

Can we really define AI?

Michael Wooldridge

When is an algorithm AI? And if we can't answer that, how can we regulate AI?

17:45‑18:45 Round table discussions
18:45‑19:00 Summary and conclusions

Meet the Speakers

Marko Grobelnik, Artificial Intelligence Laboratory, JSI

Marko Grobelnik is a researcher in the field of Artificial Intelligence. Marko co-leads the Artificial Intelligence Lab at the Jozef Stefan Institute, co-founded the UNESCO International Research Centre on AI (IRCAI), and is the CEO of Quintelligence.com. He collaborates with major European academic institutions and with organisations such as Bloomberg, British Telecom, the European Commission, Microsoft Research, and the New York Times. Marko is a co-author of several books, a co-founder of several start-ups, and is or was involved in over 70 EU-funded research projects in various fields of Artificial Intelligence. Marko represents Slovenia in the OECD AI Committee (ONE AI), the Council of Europe Committee on AI (CAHAI), and the Global Partnership on AI (GPAI). In 2016 Marko became the Digital Champion of Slovenia at the European Commission.

Eva Kaili, Member of the European Parliament

Eva Kaili is a Member of the European Parliament, part of the Hellenic S&D Delegation since 2014. She is the Chair of the Future of Science and Technology Panel in the European Parliament (STOA) and the Centre for Artificial Intelligence (C4AI), and a Member of the Committees on Industry, Research and Energy (ITRE), Economic and Monetary Affairs (ECON), Budgets (BUDG), and the Special Committee on Artificial Intelligence in a Digital Age (AIDA). Eva is a member of the delegation to the ACP-EU Joint Parliamentary Assembly (DACP), the delegation for relations with the Arab Peninsula (DARP), and the delegation for relations with the NATO Parliamentary Assembly (DNAT). In this capacity, she has been working intensively on promoting innovation as a driving force of the establishment of the European Digital Single Market. She has been the draftsperson of multiple pieces of legislation in the fields of blockchain technology, online platforms, big data, fintech, AI and cybersecurity, as well as the ITRE draftsperson on the Juncker plan EFSI2 and, more recently, the InvestEU programme. She has also been the Chair of the Delegation to the NATO PA in the European Parliament, focusing on the defence and security of Europe. Prior to that, she was elected as a Member of the Hellenic Parliament (2007-2012) with the PanHellenic Socialist Movement (PASOK). She also worked as a journalist and newscaster prior to her political career. She holds a Bachelor's degree in Architecture and Civil Engineering and a postgraduate degree in European Politics.

Catelijne Muller, ALLAI

Catelijne Muller is President and co-founder of ALLAI, an independent organisation that promotes the responsible development, deployment and use of AI. She is a former member of the EU High-Level Expert Group on AI, which advised the European Commission on economic, social, legal and ethical strategies for AI. She is AI Rapporteur at the EESC and was Rapporteur of the EESC opinion on Artificial Intelligence and Society, the EESC opinion on the EU White Paper on AI, and the upcoming EESC opinion on the EU AI Regulation. From 2018 to 2020 she headed the EESC Temporary Study Group on AI, and she is a member of the EESC Digital Single Market Observatory. She is a member of the OECD Network of Experts on AI (ONE.AI). She advises the Council of Europe on the impact of AI on human rights, democracy and the rule of law. Catelijne is a Master of Laws by training and worked as a Dutch-qualified lawyer for over 14 years prior to committing her efforts to the topic of Responsible AI.

Michael Wooldridge, Oxford University

Michael Wooldridge (Oxford University) is a Professor of Computer Science and Head of Department of Computer Science at the University of Oxford, and a programme director for AI at the Alan Turing Institute. He has been an AI researcher for more than 30 years, and has published more than 400 scientific articles on the subject, including nine books. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of AI (AAAI), and the European Association for AI (EurAI). From 2014-16, he was President of the European Association for AI, and from 2015-17 he was President of the International Joint Conference on AI (IJCAI). 

Francesca Rossi (IBM)

Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is an AI scientist with over 30 years of experience in AI research, and has published more than 200 articles in top AI journals and conferences. She co-leads the IBM AI Ethics Board and actively participates in many global multi-stakeholder initiatives on AI ethics. She is a member of the board of directors of the Partnership on AI and the industry representative on the steering committee of the Global Partnership on AI. She is a fellow of both the worldwide AI association (AAAI) and the European one (EurAI), and she will be the next president of AAAI.

Organizers

Background

Every year, students develop numerous ideas to solve societal problems using Artificial Intelligence (AI). But the majority of these valuable ideas are never pursued further or turned into businesses. This event provides a stage for outstanding student projects and seeks to promote them and match them with leading professionals. AI experts from research, business, and the startup scene evaluate participants' ideas and highlight opportunities for further development. The most promising ideas will receive an award.

Don’t miss the chance to get an overview of exciting ideas and a great networking opportunity with high-potential students as well as experts from the AI ecosystem.

Do you have an idea you want to present? Share your idea and take part in the AI Prize! Please send an email to Sebastian Feger (sebastian.feger@um.ifi.lmu.de) with a short description and a link to a video that showcases your idea or prototype.

If you want to attend, please register for the reminder mail.

The prizes include:

  • An expert coaching session – helping you take your idea to the next level.
  • A team dinner event – celebrating your first step towards starting your business.
  • A smart speaker – communication with AI.

Register here

Programme

17:00‑17:15 Welcome by Albrecht Schmidt and Jan Alpmann

Intro to HumaneAI Net and today’s event

17:15‑17:25 Guest talk by Timon Ruban

Our journey of building an AI Startup

17:25‑17:30 Setting the stage by Albrecht Schmidt
17:30‑18:30 Pitches including Q&A, with jury members:
  • Matthias Notz
  • Albrecht Schmidt
  • Timon Ruban
  • Bernd Blumoser
  • Gülce Cesur
18:30‑18:40 Short Break
18:40‑19:00 Panel discussion – From AI ideas to businesses
19:00‑19:15 Award ceremony & farewell by Albrecht Schmidt and Jan Alpmann
19:15‑19:30 Open Networking

Meet the Jury

  • Prof. Dr. Albrecht Schmidt, Ludwig Maximilian University of Munich
  • Matthias Notz, CEO of German Entrepreneurship
  • Bernd Blumoser, Innovation Head of AI Lab
  • Gülce Cesur, VW Data Lab
  • Timon Ruban, Co-Founder of Luminovo