The German Entrepreneurship’s AI Beyond Borders Awards have been created to recognize the top three AI startups that are not only pushing the boundaries of artificial intelligence technology, but are also about to embark on an exciting journey across borders to expand internationally.
AI (Artificial Intelligence) startups face particular challenges when they take their business across international borders, whether national data regulations or simply the cultural acceptance of artificial intelligence technology. It is important for startups to plan for internationalization early, to educate themselves on potential markets, to make early connections, and to forge long-term partnerships in each new country.
Legal Protection Debt in the ML Pipeline
Panel discussion on the HAI Report on 'Legal Protection Debt'
12 September 2023 Brussels
ARI Conference: Local & Sustainable AI, Data, and Robotics
Mireille Hildebrandt (moderator), Gianmarco Gori (introducing the Report as author), Paul Lukowicz (coordinator HAI Net), Masha Medvedeva (CS), Tom Lenaerts (CS), Johannes Textor (CS), Irena Kamera (Law), Michael Veale (CS, STS, Law)
Background
The machine learning pipeline kicks off with the collection and curation of training data. Besides the environmental impact of generating, storing and processing ever more data, there are further reasons to question the sustainability of this type of data-driven AI system. This panel will engage with a report written by Gianmarco Gori for the Human AI Network that is focused on groundbreaking research into human-centred AI. The report investigates how a ‘legal protection debt’ builds up in the ML pipeline, where upstream design decisions may have a major downstream impact on the fundamental rights and freedoms of natural persons. The report confronts the legal duties of the GDPR and the legal framework for open data and open science, highlighting the need for responsible AI from the perspective of legal protection by design. The panel hosts computer scientists, developers and legal scholars, promising an animated debate about thorny issues in data-driven research.
Please consult the report at this link: https://www.cohubicol.com/assets/uploads/hai-net-report.pdf
European Summer School on Artificial Intelligence (ESSAI)
Advanced Course on AI and 1st European Summer School on Artificial Intelligence
Starting this year, EurAI has made the ACAI school part of a larger European Summer School on Artificial Intelligence (ESSAI), structured following the successful model developed by ESSLLI.
Core local organizing committee
Aleksander Sadikov (University of Ljubljana)
Vida Groznik (University of Primorska, University of Ljubljana)
Sašo Džeroski (Jožef Stefan Institute)
Jure Žabkar (University of Ljubljana)
Founders 4 Impact Night
Connecting early-stage entrepreneurs and talents looking to join their mission
Are you ready to embark on a transformative journey to tackle today’s challenges? “Founders4impact Night – Co-founder Match” connects early-stage entrepreneurs with outstanding talents who are ready to leverage their expertise to create impactful ventures.
Get inspired by thrilling pitches, engage in networking over free drinks and food, and join dedicated discussion tables to get to know startup enthusiasts who share your values and interests. Find someone to join your mission or make your fellow founder’s mission your own!
Over the course of an exciting evening, we facilitate this first step in the successful journey of outstanding founding teams. We want to empower entrepreneurs to take action in a world calling out for innovative solutions.
Join us at the event for free and kickstart your entrepreneurial journey!
Imagining the AI Landscape after the AI Act
This workshop (open to interdisciplinary researchers) aims at analyzing how the new regulation will shape the AI technologies of the future.
June 27, 2023 Munich, Germany
The second International Conference on Hybrid Human-Artificial Intelligence
This workshop aims at analyzing how this new regulation will shape the AI technologies of the future. We will cover issues such as the ability of the AIA requirements to be operationalized, privacy, fairness, and explainability by design, individual rights and the AIA, AI risk assessment, and much more. The workshop will bring together legal experts, tech experts and other interested stakeholders for constructive discussions. We aim at stakeholder and geographical balance. The workshop's main goal is to help the community understand and reason about the implications of an AI regulation: what problems it solves, what problems it does not solve, and what problems it causes; to discuss the newly proposed amendments to the text of the AI Act; and to propose new approaches that may not have been tackled yet.
Papers are welcome from academics, researchers, practitioners, postgraduate students, the private sector, and anyone else with an interest in law and technology. Submissions with an interdisciplinary orientation are particularly welcome, e.g. works at the boundary between ML, AI, human-computer interaction, law, and ethics. Submissions can include regular papers, short papers, working papers, and/or extended abstracts.
Science4Impact Entrepreneurship Bootcamp
A dynamic workshop format that empowers visionary minds in the field of AI to take the first step in their entrepreneurial journey
Discover the Science4Impact Bootcamp for AI Researchers—a dynamic workshop format that empowers visionary minds in the field of AI to take the first step in their entrepreneurial journey.
Explore groundbreaking ideas, apply agile methods and gain the skills to create real-world impact with your knowledge.
To ensure the best possible experience for our startups and target audience, we reserve the right to cancel tickets if we determine that a participant’s intentions do not align with the purpose of the event. Please note that any commercial use of video and photo recordings of the event requires explicit permission from German Entrepreneurship.
HumaneAI delivering a one-day event @ European Parliament
Presenting the new science of Artificial Intelligence that can put Europe on the world stage
Please note that the agenda is subject to minor changes
14:00
Welcome and setting the stage: State of Play in European Regulation on Artificial Intelligence by Irena Joveva MEP, Committee on Culture and Education.
ELISE represented by Cees Snoek, University of Amsterdam, Netherlands.
TAILOR represented by Fredrik Heintz, Linköping University, Sweden.
AI4Media represented by Ioannis Kompatsiaris, CERTH
14:25
Scientific Foundations of Large Language Models (LLMs) by Hermann Ney, RWTH Aachen, Germany.
14:40
Panel on Industrial and Research Potential of AI in Europe
MODERATOR: Lenneke Hoedemaker, Moderator and Presenter.
Virginia Dignum, Professor in Responsible AI, Umea University, Sweden and the Scientific Director of WASP-HS.
Ieva Martinkenaite, Senior Vice President and Head of Research and Innovation, Telenor Group.
Francesca Rossi, IBM Fellow and AI Ethics Global Leader and AAAI President.
15:10
Message from the Co-Sponsoring Organisations
CLAIRE and EurAI are represented by Holger Hoos, RWTH Aachen University, Germany.
IRCAI-UNESCO and OECD views are represented by Marko Grobelnik, International Research Centre on Artificial Intelligence (IRCAI) under the auspices of UNESCO, Jozef Stefan Institute, Slovenia.
15:20
Panel on the Societal Impact and AI Policies in Europe
MODERATOR: Lenneke Hoedemaker, Moderator and Presenter.
Catelijne Muller, President and co-founder of ALLAI.
Brando Benifei, Member of the European Parliament.
Clara Neppel, Senior Director of the IEEE European office.
Dino Pedreschi, Professor at the University of Pisa, Italy and GPAI member.
15:55 - 16:00
Closing remarks by Cécile Huet, Deputy Head of the Unit “Robotics and Artificial Intelligence Innovation and Excellence” at the European Commission
16:30 – 18:00
Social Reception: Networking and Informal Scientific Discussions (Attendance is in-person only by invitation. The location will be shared by email.)
What is the purpose of the HumaneAI European Parliament event?
We would like to suggest a half-day event on May 25th at the European Parliament, Paul-Henri Spaak Building, Rue Wiertz 60, 1047 Brussels, Belgium, titled Beyond ChatGPT: How can Europe get in front of the pack on Generative AI Models?, organized by a broad consortium from science and civil society, including the HumanE-AI-Net European Network of Centres of Excellence in Artificial Intelligence.
HumanE-AI-Net is a research network of leading European universities, AI institutes, and corporations funded by the Future and Emerging Technologies (FET) programme and dedicated to empowering people through the scientific and technological development of AI, in accordance with European ethical, social, and cultural values.
Other European partners and communities that support this event and will be involved in its organization are the International Research Centre on Artificial Intelligence (IRCAI) under the auspices of UNESCO, the Confederation of Laboratories for Artificial Intelligence in Europe (CLAIRE), and other ICT-48 networks such as TAILOR, AI4Media, and VISION, and language projects like ELG and ELE.
The Beyond ChatGPT event aims to bring together AI experts, policymakers, and other stakeholders to demystify and critically examine some of the key concepts and concerns, and to provide an opportunity for a well-grounded discussion of what needs to be done to ensure that European economies and societies will benefit from the development and deployment of AI technologies, such as LLMs.
Which questions will be addressed?
To what extent does Europe have the capability and capacity to compete with US-based industries on LLMs and similarly impactful AI technologies?
What can and must be done to ensure European competitiveness in this area?
How can we best harness the opportunities afforded by the latest AI technologies for the benefit of European economies and societies? What role does the proposed AI Act play in this context?
Which of the widely debated risks are real, and how should these be addressed? Is there a need for a moratorium or similar restrictions on research and innovation in key areas of AI?
What is the rationale?
Artificial intelligence has been an intense focus of public debate in recent years, and – following recent progress in so-called large language models (LLMs), such as ChatGPT – is now an area of increasingly vigorous economic activity and societal concern.
Europe carries the responsibility of shaping the AI revolution. The choices we face today are related to fundamental ethical issues about the impact of AI on society, in particular, how it affects labor, social interactions, healthcare, privacy, fairness, and security. The ability to make the right choices requires new solutions to fundamental scientific questions in AI and human-computer interaction (HCI).
What is the vision?
This vision closely follows the ambitions articulated by the EC in its Communication on AI: a European brand of AI that, by design, is trustworthy, adheres to European ethical, political, and social norms, and focuses on the benefit to European citizens as individuals, European society, and the European economy. At the heart of our vision is the understanding that those ambitions can be achieved neither by legislation or political directives alone nor by traditional research in established disciplinary “silos”. Instead, they require fundamentally new solutions to core research problems at the interface of AI, human-computer interaction (HCI), and social science, combining theory, real-world use cases, and innovation-oriented research.
What are we trying to achieve?
The HumaneAI community aims to develop the scientific foundations and technological breakthroughs needed to shape the ongoing artificial intelligence (AI) revolution to fit the above vision. Key challenges include: learning complex world models; building effective and fully explainable machine learning systems; adapting AI systems to dynamic, open-ended real-world environments; achieving an in-depth understanding of humans and complex social contexts; and enabling self-reflection within AI systems.
What will be the impact?
The HumanE AI community has mobilized a research landscape far beyond the direct project funding and brought together a unique innovation ecosystem. This has the potential for significant disruption across its socio-economic impact areas, including Industry 4.0, health & well-being, mobility, education, policy, and finance. We aim to spearhead the efforts required to help Europe achieve a step-change in AI uptake across the economy.
Why are we the best to do it?
The project consortium, with 53 institutions across 20 European countries, advocates that Artificial Intelligence is made by us humans, European researchers and citizens, who care deeply about the future of AI in Europe and its use for the benefit of all Europeans.
Launching a Global Network of Excellence Centres in AI and Sustainable Development
Network for Artificial Intelligence, Knowledge and SUStainable development – a nexus and central meeting point between AI and SDGs
4 May 2022, 9:00 AM – 10.30 AM (ET), 02:00 PM London (GMT+1:00), 10:00 PM Tokyo (GMT+9:00) Online
STI Forum – 7th Multi-stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals
Hosted by IRCAI and Permanent Mission of Slovenia to the UN, co-sponsored by Permanent Mission of Japan to the UN and Permanent Mission of South Africa to the UN
9:00 – 9:10 ET Opening and Introduction (3 minutes each)
Ambassador Boštjan Malovrh, Permanent Representative of Slovenia to the UN
Ambassador Tetsuya Kimura, Permanent Representative of Japan to the UN
Tshilidzi Marwala, Vice Chancellor and Principal, University of Johannesburg
Marielza Oliveira, Director for Partnerships and Operational Programme Monitoring, Communications and Information Sector, UNESCO
9:10 – 9:15 ET Keynote
Maria Fasli, UNESCO Chair in Analytics and Data Science, Executive Dean, Faculty of Science and Health at University of Essex
9:15 – 9:20 ET Introduction to the Network
John Shawe-Taylor, Director IRCAI
9:20 – 10:30 ET Flash talks presenting the history, aim, objectives, composition, activities, programmes, and technology focus of the Network (3 minutes each)
Network portfolio and history
Samuel Kaski, Aalto University, ELISE Network Coordinator
Paul Lukowicz, DFKI, HumaneAI Network Coordinator
Network as a catalyst for research and innovation
Matthew Smith, Senior Program Specialist, IDRC
Nelson González, Head Global Impact Computing, AWS
Network solutions and evidence of impact in real-life
Kathleen Siminyu, Machine Learning Fellow, Mozilla Foundation
Nuria Oliver, ELLIS Unit Alicante Foundation
Network connecting worldwide communities of practice
Ulrich Paquet, Deep Learning Indaba, DeepMind
Nuria Oliver, ELLIS Unit Alicante Foundation
Network reach across all UN regions
Alexandre F. Barbosa, Director Cetic
Emmanuel Letouzé, Director Data-Pop Alliance
10:30 – 10:50 A virtual Press Conference will be organised after the event, with speakers available for questions:
John Shawe-Taylor, Director IRCAI
Emmanuel Letouzé, Director Data-Pop Alliance
Symposium on Interaction with Technologies for Human Augmentation
This one-day event will feature keynotes from leading experts in the field, a lab tour with demonstrations of current research projects, and opportunities for discussion and networking.
Skill Acquisition and Transfer System using Computer Vision, Deep Learning, and Soft Robotics
10:00 – 10:30
Elisabeth André
Augmentative Technologies for People with Special Needs
10:30 – 11:00
Shinichi Furuya
Beyond expertise of experts: novel sensorimotor training specialized for expert pianists
11:30 – 12:00
Asa Ito
The paradox in skill acquisition: what does it mean for a body to be able to do what it could not do?
12:00 – 12:30
Albrecht Schmidt
Amplifying the Human Intellect through Artificial Intelligence
12:30 – 13:00
Jun Rekimoto
Human-AI Integration: Using Deep Learning to Extend Human Abilities and Support Ability Acquisition
Background
Are you interested in the latest advancements and research trends in human augmentation? Join us for the “Symposium on Interaction with Technologies for Human Augmentation” organized by LMU Munich and HumaneAI. This one-day event will feature keynotes from leading experts in the field, a lab tour with demonstrations of current research projects, and opportunities for discussion and networking. The event will be held on Monday, Feb 20, 2023 at the premises of LMU Munich. In addition, we plan to broadcast the talks for registered remote participants. Register now!
Remarkable Women in AI
A series of global discussions and workshops on diversity in AI beyond training data.
Artificial intelligence (AI) is expected to contribute over $15 trillion to the global economy by 2030 (PwC) and shape the future of human society. A critical challenge for the industry to live up to its potential is the need for more diversity in the development, research, application, and evaluation of new AI technology. With their series “Remarkable Women in AI,” the AI Competence Center at German Entrepreneurship and the Transatlantic AI eXchange invite all genders to a series of inspirational, educational, collaborative, and global discussions on gender diversity in AI – with the aim of inspiring attendees to take steps in their respective roles to address the gender gap in AI.
Target Audience: This event is directed towards students of all genders, entrepreneurs, and women in research institutions and corporations.
FuckupNights®Munich Vol.3 Artificial Intelligence
#ShareTheFailure - We change the view on failure. On February 6th we will continue with FuckupNights®Munich Vol.3 Artificial Intelligence.
February 6, 2023 German Entrepreneurship Center, Haus 19, B, Balanstr. 73, 81541 München
◉F_ckuppers tell stories about burned money, personnel decisions that led to total failure and products that had to be recalled. They tell it all, with us on stage.
🔥 The idea comes from Mexico, where five friends realized that it takes a boozy evening to open up about your failures instead of congratulating each other on professional successes. In other words, to tell the stories that no one includes in their résumé. Because without failure there is no success.◉
This project aims to take seriously the fact that the development and deployment of AI systems is not above the law, as decided in constitutional democracies. This feeds into the task of addressing how fundamental rights protection can be incorporated into the architecture of AI systems, including (1) the checks and balances of the Rule of Law and (2) the requirements imposed by positive law that elaborates fundamental rights protection.
A key result of this task will be a report on a coherent set of design principles firmly grounded in relevant positive law, with a clear emphasis on European law (both EU and Council of Europe). To help developers understand the core tenets of the EU legal framework, we have developed two tutorials, one in 2020 on Legal Protection by Design in relation to EU data protection law [hyperlink to Tutorial 2020] and one in 2021 on the European Commission’s proposal of an EU AI Act [hyperlink to Tutorial 2021]. In the Fall of 2022 we will follow up with a Tutorial on the proposed EU AI Liability Directive.
Our findings will entail:
- A sufficiently detailed overview of legally relevant roles, such as end-users, targeted persons, software developers, hardware manufacturers, those who put AI applications on the market, platforms that integrate service provision both vertical and horizontal, providers of infrastructure (telecom providers, cloud providers, providers of cyber-physical infrastructure, smart grid providers, etc.);
- A sufficiently detailed legal vocabulary, explained at the level of AI applications, such as legal subjects, legal objects, legal rights and obligations, private law liability, fundamental rights protection;
- High-level principles that anchor the Rule of Law: transparency (e.g. explainability, preregistration of research design), accountability (e.g. clear attribution of tort liability, fines by relevant supervisors, criminal law liability), contestability (e.g. the repertoire of legal remedies, adversarial structure of legal procedure).