To develop a trustworthy AI model for situational awareness using mixed reality in police interventions.

PURPOSE AND AIMS:
We will address the ethical and societal elements of the ELS theme of HumanE-AI-Net, focusing on how to construct Artificial Intelligent Systems (AIS) whose functions aim to support citizen security and safety units. Citizen security and safety units are those closest to the citizens and have the largest number of officers. Their tasks range from helping a disoriented elderly person and managing traffic to facing dangerous situations such as gang fights or shootings. Their training is generalist: in contrast to specialized units, they lack training and support tools for dealing with certain situations. These units need to train situational awareness (the ability to maintain a constant, clear mental picture of the relevant information and the tactical situation in all types of situations). We aim to provide AI tools that facilitate their work, improve their safety and efficiency, protect citizens’ rights and enhance citizens’ trust. Transparency and trustworthiness are the most limiting factors in the development of AI solutions for public safety and security forces, and any development should be useful to citizens and officers alike. We will carry out tests using the mixed reality paradigm (i.e., HoloLens) with police officers to collect data and make an assessment of the implementation of the Trustworthy AI requirements in the following police intervention scenario:

Vehicle stop. Police officers usually patrol a city in police cars, facing all types of situations that at any moment can escalate from low risk (preventive traffic tasks) to high risk (tracking a possible crime suspect travelling at high speed). This scenario represents a very common activity for police officers. The project’s interest is to address the perception and impact of technology that could support the security forces in these daily tasks, making their work safer (e.g., using drones to track suspects in case they are armed). We select this scenario because of its multiple implications, in order to assess the relationship of public security officers with AI and to address several possible societal and legal challenges:
- Societal: use of AI to detect potentially life-threatening situations or vehicles related to crimes, ensuring that fundamental rights such as privacy and non-discrimination are preserved while at the same time guaranteeing public safety.
- Legal: personal data protection issues, possible fundamental rights violations related to the use of cameras, and legal barriers concerning aerial robot regulation and the use of UAVs in public spaces.

THE CALL TOPICS:

The micro-project aims to address the ethical and societal elements of the ELS theme of HumanE-AI-Net, focusing on how to construct Artificial Intelligent Systems (AIS) whose functions support citizen security and safety units.

During the micro-project, we will organize the following activities:

· Research on methods and tools for assessing and monitoring ELS principles in police interventions
· Analysis of the European Trustworthy AI guidelines and the AI Act as they apply to AI for public security forces
· Implementation and testing of ELS principles and guidelines in police interventions
· Development and validation of metrics to evaluate ELS principles for security forces
· Dissemination and communication of project findings in journals and international conferences

Output

TANGIBLE RESULTS:

· At least one international conference paper to disseminate findings; possible venues: AAAI, AAMAS, IJCAI, ECAI, etc.
· As a continuation of the project findings, we aim to submit a proposal for a Horizon Europe call related to Artificial Intelligence and Trustworthy AI.

ESTIMATED TARGET IMPACT:
By making an assessment of the Trustworthy AI requirements for AI-powered tools aimed at security forces and police students in the European countries of the study, we estimate that the results of our research and follow-up projects could reach the following potential target numbers:

• EU Police Officers: 21,000 in Sweden and ca. 29,000 in Catalunya, Spain (Mossos d’Esquadra: 17,888 + Local police: 11,167). Total: ca. 50,000 police officers
• EU Police Students: 9,596 in Catalunya and 4,000 in Sweden. Total: 13,596 police students

RESEARCH VISIT DATES AND OBJECTIVES

For this micro-project we have planned two visits with the following schedule:

Locations:

1. Barcelona and Mollet del Vallès (Spain) in October 2023
2. Umeå (Sweden) in February 2024

Each visit will last one week.

Visit Objectives

A. Barcelona and Mollet del Vallès (Spain):

To organise a visit to the COMET and ISPC locations, with the following objectives:
1. To carry out tests using the mixed reality paradigm (i.e., HoloLens) with 10 police officers from the Catalan Police School (Spain), to collect data (reactions and feedback) on the use of AI in the police intervention scenario described above.
2. To analyze the data from these tests in order to elaborate a methodology for assessing the implementation of the Trustworthy AI requirements in police interventions.
3. To research the possible data privacy and civil rights violations and the legal framework applicable to the use of AI-powered tools in the police intervention scenario described above.
4. To organize a partners’ meeting to discuss and evaluate the data collected during the tests and to elaborate a methodology for the assessment of the Trustworthy AI requirements in police interventions.
5. To discuss the project dissemination strategy and work plan.

B. Umeå University and Police Education Unit (Sweden):

1. To carry out tests using the mixed reality paradigm (i.e., HoloLens) with 10 police officers from the Umeå Police Education Unit (Sweden), to collect data (reactions and feedback) on the use of AI in the police intervention scenario described above.
2. To analyze the data from these tests in order to elaborate a methodology for assessing the implementation of the Trustworthy AI requirements in police interventions.
3. To research the possible data privacy and civil rights violations and the legal framework applicable to the use of AI-powered tools in the police intervention scenario described above.
4. To organize a partners’ meeting to discuss and evaluate the data collected during the tests and to elaborate a methodology for the assessment of the Trustworthy AI requirements in police interventions.
5. To discuss dissemination actions on the project findings and partners’ participation (publications, international conferences, etc.).
6. To discuss the project findings and conclusions among partners with a view to future follow-up projects.

Project Partners

  • Umeå University – Computing Science Department, Juan Carlos Nieves
  • Umeå University – Police Education Unit, Jonas Hansson
  • Comet Global Innovation-COMET, Eduardo García Laredo
  • Institut de Seguretat Pública de Catalunya – ISPC, Lola Valles Port

Primary Contact

Juan Carlos Nieves, Umeå University – Computing Science Department