AI in law enforcement and criminal justice contexts: A preliminary rights-based assessment by the European Parliament

  • Faculty of Law, Economics and Finance (FDEF)
    19 August 2025
  • Category
    Research
  • Topic
    Law

Reviewed presentation

The European Parliament’s resolution on the use of AI by the police and judicial authorities in criminal matters1 (“the Resolution”) preceded the adoption of the Union’s AI Act2, but remains to date the most relevant expression of concern voiced by an EU institution about the compatibility of such AI applications with fundamental rights and principles, as enshrined in the Charter of Fundamental Rights of the EU (“the Charter”) and the European Convention on Human Rights (“the ECHR”).

Although the Resolution neither lays down an exhaustive list of AI applications in law enforcement and the judiciary nor categorizes uses of AI according to their risk level, it does offer a ground for qualifying AI applications as high-risk: “the potential to significantly affect the lives of individuals”. Transposed to the context of a criminal case, this qualification criterion could better read as “the potential to significantly affect the legal position of a criminal suspect or defendant”. This reading seems justified in light of the risks that the Resolution identifies for the individual subjected to an AI system in criminal matters, in relation to defense rights and fair trial safeguards, as well as other fundamental rights, e.g. non-discrimination and personal data protection. It is also guided by the proclamation in the General Data Protection Regulation3 and the Law Enforcement Directive4 of a data subject’s right not to be subject to a decision based solely on automated processing which produces legal effects concerning him or her or similarly significantly affects him or her.

In particular, the Resolution acknowledges the risk that AI-enabled technologies may be deployed as mass-surveillance tools or may exacerbate biases and social inequalities, but, even more interestingly from the standpoint of criminal trials, that they may compromise defense rights and the overall fairness of criminal proceedings, as interpreted within the framework of Articles 47 and 48 of the Charter and Article 6 of the ECHR. It points out that the difficulty for individuals under investigation in obtaining meaningful information on the functioning of AI tools, and the resulting difficulty in challenging their outputs in court, risk undermining a suspect’s and subsequently a defendant’s right to the presumption of innocence, the right to silence, and the right to an effective remedy and a fair trial, including the latter’s underlying principles of equality of arms and respect for the adversarial process.

While acutely aware of the inescapable need to balance fundamental rights with the effectiveness of policing and crime investigation, the drafters of the Resolution aptly characterize the relationship between AI deployers and the subjects of AI-powered investigations as asymmetrical, in terms of the influence AI-based solutions may wield in the decision-making process against the targeted person. To ensure that AI applications in judicial and law enforcement contexts comply with fundamental rights, and specifically to uphold defense and fair trial rights, the Resolution

  • underlines the requirement for any use of AI systems by law enforcement and judicial authorities to comply with the Union’s data protection requirements.
  • calls for a uniform regulation of safeguards against misuses of AI systems in law enforcement and judicial activities across the Union.
  • emphasizes the importance of human oversight and endorses the adoption of a clear regime of legal responsibility and accountability for all applications of AI in the context of law enforcement and criminal justice.
  • underscores the significance of regulated public procurement processes and rules on transparency obligations of judicial and law enforcement authorities vis-à-vis the private companies providing them with AI systems, including the obligation of public disclosure on public-private partnerships.
  • highlights the role of traceability and auditability through extensive mandatory documentation and periodic independent auditing exercises, respectively, and militates against the use of proprietary AI software.
  • advocates for a compulsory fundamental rights impact assessment to be carried out before the implementation or deployment of AI systems in law enforcement and judicial contexts.
  • finally, stresses the need for a new legal framework to govern the use of AI in the field of law enforcement and the judiciary, including the rights of affected individuals: access to the data collection process, access to the analytical process and the conclusions produced, and access to effective remedies.

With regard to the final proposition, which emerges as the most consequential one from the perspective of criminal procedure, the Resolution takes a stand for the recalibration of fair trial rights. Although it does not spell out the specifics of such a reconfiguration (e.g. will the right to examine a witness5 be expanded to bring the source code of the AI system, its developer or its operator within the scope of “witness” examination?), it is welcome as a call to action to preempt the prospect of an exponential digitalization of the modus operandi of law enforcement and judicial authorities for the purpose of investigating, prosecuting and preventing crime, unaccompanied by sufficient safeguards to act as a counterweight to the overwhelmingly intrusive and control-augmenting capabilities of AI systems.

Although the AI Act, as far as high-risk AI systems are concerned, echoes some of the Resolution’s suggestions, such as the introduction of technical documentation6 and record-keeping7 provisions, of human intervention and monitoring obligations8, and of the requirement of a fundamental rights impact assessment9, all of these apply in a general and overarching manner, without being necessarily tailored to the context of law enforcement and the judiciary. The implications of employing high-risk AI systems in the realms of law enforcement and the judicial system for criminal procedure norms are therefore yet to be addressed by the EU legislators.

1 European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (2020/2016(INI))
2 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024
3 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, Art. 22(1)
4 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016, Art. 11(1)
5 ECHR, Art. 6(3)(d)
6 AI Act, Art. 11
7 AI Act, Art. 12
8 AI Act, Art. 14
9 AI Act, Art. 27