Professor of philosophy at California State University Sacramento
Background
This was a single talk on social explanations, understood as those that abstract from the physical, biological, and psychological levels. The talk was part of the Micro Project coordinated by Dr. Ettore Barbagallo.
Second Tutorial on the AI Act
Follow-up Tutorial on the final text of the AI Act
Follow-up Tutorial on the final text of the AI Act, focusing on the new legal obligations for placing on the market or putting into use General Purpose AI Models and General Purpose AI Systems, and their relevance for Human-Centric AI.
The Tutorial builds on the HAI-NET Tutorial of 2021, which explained the structure of the proposed AI Act. See here to access the 2021 Tutorial.
The 28 June 2024 Tutorial will be based on the final text of the AI Act, which is in force from 10 July 2024 and becomes applicable in stages over the following two years, depending on the provision. See here for the final text.
The objectives of the Tutorial are to help computer scientists better understand
the main goals of the Act in the context of the EU internal market (harmonisation)
the applicability with regard to General Purpose AI Models and Systems (new compared to the 2021 proposal)
some of the legal obligations with respect to the design of these models and systems
the relevance of the AI Act for human-centric AI models and systems
You can find the recording of the Tutorial here.
After the Tutorial we finalised a series of seven audio slide decks, which you can find below:
Tutorial: The AI Act’s relevance for the use of Generative AI in Human-Centric AI
The focus will be on the introduction of legal obligations for placing on the market or putting into use General Purpose AI Models and General Purpose AI Systems
12.00 – 14.00 Online
Those who wish to register should send an email to Bert.Frans.P.De.Bisschop@vub.be by noon CEST on Thursday 27 June. They will receive the link on Friday morning.
The Tutorial is organised by the Legal Partner of the HAI-NET. The focus will be on the introduction of legal obligations for placing on the market or putting into use General Purpose AI Models and General Purpose AI Systems.
The Tutorial will build on the HAI-NET Tutorial of 2021, which explained the structure of the proposed AI Act. See here to access the 2021 Tutorial: http://www.vernon.eu/wiki/AI_Act_Tutorial.
The 2024 Tutorial will be based on the final text of the AI Act, which will enter into force within weeks from now (June 2024) and become applicable two years after that (though some parts will apply earlier). See the final text here: https://data.consilium.europa.eu/doc/document/PE-24-2024-INIT/en/pdf
The objectives of the Tutorial are to help computer scientists better understand:
the main goals of the Act in the context of the EU internal market (harmonisation)
the applicability with regard to General Purpose AI Models and Systems (new compared to the 2021 proposal, targeting Large Whatever Models)
some of the legal obligations with respect to the design of these models and systems
the relevance of the AI Act for human-centric AI models and systems
We need to emphasise that our objective is to give our audience a first taste of the legal regime that applies to real-world human-centric AI systems that integrate generative AI. For a more in-depth understanding we refer to the report that Dr. Gori is preparing and to the chapter that Dr. Gori and Prof. Hildebrandt are co-authoring on the subject in the Handbook of Generative AI for Human-AI Collaboration, eds. Mohamed Chetouani, Andrzej Nowak and Paul Lukowicz (Springer, forthcoming).
HAI-NET Tutorial on the AI Act’s relevance for generative AI
In this tutorial we will focus on the extent to which Generative AI, based on ‘Large Whatever Models’, falls within the scope of the AI Act
Prof. Mireille Hildebrandt is a Research Professor of 'Interfacing Law and Technology' at the Law & Criminology Faculty of Vrije Universiteit Brussel and holds the Chair of 'Smart Environments, Data Protection and the Rule of Law' at the Science Faculty of Radboud University in the Netherlands. Dr. Gianmarco Gori is a guest professor and postdoctoral researcher at the Research Group on Law, Science, Technology and Society (LSTS) at the Law Faculty of Vrije Universiteit Brussel.
Background
In this tutorial we will focus on the extent to which Generative AI, based on ‘Large Whatever Models’, falls within the scope of the AI Act and on the kind of legal obligations that should be taken into account by the developers of Generative AI that is meant to contribute to human-centric AI.
To this end we will first unpack the legal definitions of General Purpose AI Models (GPAI Models) and General Purpose AI Systems (GPAI Systems) and explain what kind of models qualify as GPAI models and what kind of systems qualify as GPAI systems. This will be followed by an inquiry into when a GPAI system is – legally speaking – a high risk AI system and into when a GPAI model is – legally speaking – an AI model generating systemic risk.
Second, we will elicit a small set of requirements that must be met by providers and/or deployers of GPAI Systems that integrate GPAI Models. As the HAI-NET is focused on contributing to real world human-centric AI, we will not focus on the research exemption that may apply to HAI-NET research. The whole point of legal protection by design is to ensure that such protection is built into the design phase. This means that developers must be aware of the requirements that providers and/or deployers of real-world applications of their models face.
Finally, we need to emphasise that our objective is to give our audience a first taste of the legal regime that applies to real-world human-centric AI systems that integrate generative AI. For a more in-depth understanding we refer to the HAI-NET report that Dr. Gori is preparing on the subject and to the chapter that Dr. Gori and Prof. Hildebrandt are co-authoring in the Handbook of Generative AI for Human-AI Collaboration, eds. Mohamed Chetouani, Andrzej Nowak and Paul Lukowicz (Springer, forthcoming).
SCOPE Retreat: How to Use (Generative) AI to Speed Up Social Science Research?
A hands-on experience teaching social scientists the power of GenAI
Material: see the Agenda section below
In this two-day workshop we aim to provide participants from all social and behavioural sciences with tools and approaches showing how artificial intelligence can be used profitably throughout the scientific process. It covers tutorials by experts on relevant topics such as literature research, automated transcription of interviews, paper writing, classification and coding of qualitative data, statistical analysis, and data visualisation.
Agenda
Thursday 6th of June (9:00 - 17:30)
09:00 – Opening & setting goals
09:15 – Talk 1: Debunking myths about LWMs
by Paul Lukowicz: Professor @ RPTU and Director @ German Center for Artificial Intelligence (DFKI)
10:00 – Flash introductions (introduce yourself in 30 secs)
10:30 – Coffee break (30 mins)
11:00 – Talk 2: Qualitative analysis using AI
by Fiona Draxler: Postdoc @ Mannheim University
11:50 – Lunch break (1.5 hours)
13:30 – Talk 3: How to use AI tools for quantitative analysis, literature review, and the paper-writing cycle
by Razia Aliani: Consultant @ University of Sheffield & top research skills voice on LinkedIn
16:30 – Coffee break (30 mins)
16:45 – Group activity 1: Rewrite a paper you know using the tools you've learned
17:30 – End of the day
19:00 – Dinner (location will be sent later)
Friday 7th of June (9:00 -15:45)
09:00 – Talk 4: Creating synthetic users
by Hugo Alves: Co-Founder and Chief Product Officer at the Synthetic Users company
09:45 – Talk 5: Resources to do research using AI
by Passant Elagroudy: Postdoc @ German Center for Artificial Intelligence (DFKI) + Project manager for Humane AI Net
09:55 – Coffee break (15 mins)
10:10 – Group activity 2, Part 1: Rethink an upcoming research paper (research planning, data collection, analysis, & paper writing)
12:00 – Lunch break (~1.5 hours)
13:25 – Group activity 2, Part 2: Present the research projects + how you changed them with AI
14:45 – Group activity 3: Takeaways
15:00 – Coffee break (15 mins)
15:15 – SCOPE General Assembly (project planning)
15:45 – End of the day
Recordings
Coming soon on #AIonDemand Platform :)
Publicity
By attending the event, you consent to the capturing and sharing of photos and videos taken during the event, both online and offline. The content is shared through Humane AI social media accounts (Linkedin, Facebook, X) and SCOPE social media accounts (Linkedin and X)
Tag us to re-share your posts: @humaneainet , @scope_rptu
Location
Mannheim Library: Schloss Ehrenhof Ost, Mannheim, Raum EO 162
This project aims to take seriously the fact that, as decided in constitutional democracies, the development and deployment of AI systems is not above the law. This feeds into the task of addressing how fundamental rights protection can be incorporated into the architecture of AI systems, including (1) the checks and balances of the Rule of Law and (2) the requirements imposed by positive law that elaborates fundamental rights protection.
A key result of this task will be a report on a coherent set of design principles firmly grounded in relevant positive law, with a clear emphasis on European law (both EU and Council of Europe). To help developers understand the core tenets of the EU legal framework, we have developed two tutorials, one in 2020 on Legal Protection by Design in relation to EU data protection law [hyperlink to Tutorial 2020] and one in 2021 on the European Commission’s proposal of an EU AI Act [hyperlink to Tutorial 2021]. In the Fall of 2022 we will follow up with a Tutorial on the proposed EU AI Liability Directive.
Our findings will entail:
- A sufficiently detailed overview of legally relevant roles, such as end-users, targeted persons, software developers, hardware manufacturers, those who put AI applications on the market, platforms that integrate service provision both vertically and horizontally, and providers of infrastructure (telecom providers, cloud providers, providers of cyber-physical infrastructure, smart grid providers, etc.);
- A sufficiently detailed legal vocabulary, explained at the level of AI applications, such as legal subjects, legal objects, legal rights and obligations, private law liability, and fundamental rights protection;
- High-level principles that anchor the Rule of Law: transparency (e.g. explainability, preregistration of research design), accountability (e.g. clear attribution of tort liability, fines by relevant supervisors, criminal law liability), and contestability (e.g. the repertoire of legal remedies, the adversarial structure of legal procedure).
HumanE AI Network
Tutorial on the proposal for an AI Act
Helping developers and computer scientists to better understand the objectives, architecture and content of the proposed AI Act
All partners need to prepare for the tutorial, made easier by a small library of presentations that discuss the most important players, concepts, structure and obligations in the proposal.
The presentations consist of slides with audio, explaining the text.
The library can be found on the internal webserver of the HAI-NET.
During the session Hildebrandt will present a general introduction to the proposal, highlighting its architecture and its links with the existing framework (product safety) and the upcoming framework (Digital Markets Act, Digital Services Act, Data Governance Act). This introduction will form slide-set 0, which will be added to the library after the event.
This is an internal event, for which all partners need to prepare themselves, based on a small library of presentations that:
introduce the tutorial and
provide a small set of concepts (and legal norms) core to the GDPR. The presentations consist of slides with audio, explaining the text. The library can be found at the internal webserver of the HAI-NET
The library also provides access to the Textbook Law for Computer Scientists and Other Folk that contains relevant literature, notably in chapters 5 and 10.
TUTORIAL Library:
The Open Access Textbook:
Law for Computer Scientists and Other Folk (OUP 2020, available in Open Access)
SoBigData++ project: an ecosystem for Ethical Social Mining - This talk introduces the SoBigData++ project, with the aim of giving participants context by presenting the main objectives of the project and the consortium of experts working on the vertical contexts: Societal Debates and Online Misinformation, Sustainable Cities for Citizens, Demography, Economics & Finance 2.0, Migration Studies, Sports Data Science, Social Impact of Artificial Intelligence, and Explainable Machine Learning. Part of this presentation will be a description of the ethical approach to data science that is a pillar of the SoBigData++ project.
14:10 - 14:25 – Valerio Grossi
SoBigData RI Services - An overview of the SoBigData RI services will be shown, including the Exploratories (vertical research contexts), the resource catalogue, the training area, and the SoBigData Lab.
14:25 - 14:55 – Giulio Rossetti
Hands-on JupyterHub service and SoBigData Libraries - This first hands-on session focuses on the libraries and methods developed within the SoBigData consortium. Code examples and case studies will be introduced by leveraging a customised JupyterHub notebook service hosted by SoBigData. Using this freely accessible coding environment, we will discuss a subset of the functionalities available to SoBigData users to design and run their experiments.
14:55 - 15:10 – Massimiliano Assante
Hands-on computational engine & technologies - In this second hands-on session, the tutorial will focus on the computational engine provided by SoBigData. Real examples will be presented to highlight the functionalities for deploying an algorithm and running it on the cloud.
15:10 - 15:25 – Giovanni Comandè
Legality Attentive Data Science: it is needed and it is possible!
15:25 - 15:35 – Francesca Pratesi
FAIR: an e-learning module for GDPR compliance and ethical aspects
15:35 - 16:00 – Beatrice Rapisarda (moderator)
An open discussion to give more details on specific aspects according to the requests of the audience (not already addressed during the tutorial or presentations).
Objectives
The objectives of the tutorial are to show how the SoBigData RI can support data scientists in doing cutting-edge science and experiments. Our target audience therefore also includes people interested in big data analytics, computational social science, digital humanities, city planning, wellbeing, migration, sport, and health, within the legal/ethical framework for responsible data science and artificial intelligence applications. With its tools and services, the SoBigData RI expands the possibilities that new generations of researchers have for executing large-scale experiments on the cloud, making them accessible and transparent to a community. Moreover, specialised libraries developed in the SoBigData++ project will be freely accessible to enable cutting-edge science in a cross-field environment.
Format: The tutorial will be 3 hours containing:
1 hour of presentations describing the European project SoBigData++, the RI Services, and the Responsible Data Science principles and tools;
1 hour and 45 minutes of practical use of the RI with real examples of analysis in a dedicated Virtual Research Environment;
20 minutes for an open discussion with the attendees on the various aspects presented.