Responsibility and Accountability for the use of AI in Law Enforcement in the European Union

Lost in Negotiations?

Authors

Steven Kleemann, Milan Tahraoui

DOI:

https://doi.org/10.60935/mrm2025.30.2.28

Keywords:

AI Act, Law Enforcement, Fundamental Rights, Accountability, Contestability, Risk Regulation, Digital Policy

Abstract

This paper examines the EU AI Act’s application to law enforcement, highlighting how this sector is incorporated into the risk-based approach and assessing the extent to which such incorporation could weaken safeguards for individuals. It argues that, although the newly created accountability framework is complex, it offers only limited remedies for affected individuals. To ensure genuine protection of fundamental rights, the exceptions (‘backdoors’) embedded in the framework must be critically examined, contestability mechanisms must be strengthened, and the responsibilities of providers and deployers of high-risk AI must be clarified. Where appropriate, a rights-based approach should be integrated into the risk-based approach to underscore that fundamental rights are non-negotiable. This integration is essential to align the use of AI with the AI Act’s twin objectives of protecting fundamental rights and promoting innovation.

Author Biographies

Steven Kleemann, University of Potsdam

Steven Kleemann is a doctoral researcher at the Faculty of Law at the University of Potsdam as well as a researcher and policy advisor on digitalisation, AI & human rights at the German Institute for Human Rights. At the time of writing this article, he was a researcher at the Berlin Institute for Safety and Security Research (FÖPS Berlin), working on a project concerning legal aspects of trustworthy AI for police applications. His research focuses on international law, human rights, AI, and security law.

Milan Tahraoui, Centre Marc Bloch; Paris 1 Panthéon-Sorbonne University; Free University of Berlin

Milan Tahraoui is a doctoral researcher associated with the Centre Marc Bloch, as well as a Ph.D. candidate at both the Paris 1 Panthéon-Sorbonne University and the Free University of Berlin. At the time of writing this article, he was a researcher at the Berlin Institute for Safety and Security Research (FÖPS Berlin), working on a project examining the issues surrounding the use of so-called trustworthy AI applications in law enforcement. His research focuses on international and European law, human rights, digital surveillance, AI, and security law.

Published

2026-02-17

Section

Contributions