The digital witness: how AI evidence is reshaping criminal justice

  • Faculty of Law, Economics and Finance (FDEF)
    19 February 2025
  • Category
    Research
  • Topic
    Law

Five years ago, when the prospect of having large language models and generative AI at our fingertips – let alone in a courtroom – was still remote, criminal law experts began to question how these emerging technologies would reshape the legal system.

In 2020, Prof. Katalin Ligeti obtained the prestigious Core Grant from the Luxembourg National Research Fund to start the CRIM_AI research project. An expert team investigates how key elements of criminal justice, including fair trials, data protection and privacy rights, can be safeguarded in a future shared with proprietary AI systems. The stakes are high: “The European Union foresees a digitalisation of the justice system, so the adoption of new technologies will only ramp up in the near future”, explains Prof. Ligeti, Dean of the Faculty of Law, Economics and Finance.  

The rise of AI, fuelled by revolutionary generative AI technology, occurred in tandem with the researchers’ work, allowing them to assess ongoing changes to legal proceedings in real time.

“How do we ensure a fair trial and uphold human rights when proprietary and ‘black-box’ AI systems are used to generate or process evidence? These pressing questions must be dealt with head-on.”

Katalin Ligeti

Dean of the Faculty of Law, Economics and Finance
Professor of Law

Uncovering the different kinds of AI evidence

The research team identified three categories of AI evidence: evidence that is gathered through the assistance of an AI tool, evidence that has been generated or enhanced by AI, and evidence collected with the support of AI-generated leads.   

AI tools are employed for tasks that are time-consuming, error-prone or psychologically burdensome: for instance, sifting through large amounts of data in search of potential evidence, such as incriminating email exchanges, or flagging suspicious activities and content, like child sexual abuse videos. The data filtered out or flagged by the AI tool can then be introduced, in its original, unaltered form, as evidence in court.

The second category, which Prof. Ligeti refers to as “AI-generated evidence”, includes, for instance, forensic probabilistic genotyping — where AI interprets mixed and fragmented DNA samples to match suspects — as well as AI-enhanced pictures, where AI is used, for example, to enlarge or improve the quality of a CCTV frame. But AI-generated evidence is not confined to forensic applications. AI-integrated consumer products, such as smart watches, virtual assistants or health tech, can generate data, patterns and conclusions that may become relevant to criminal proceedings.

The third type occurs when AI systems are employed at the investigative stage to generate forensic leads, potentially triggering further analysis carried out by humans. One example is using a facial recognition tool to build a line-up, then confirming the suspect’s identification through traditional eyewitness testimony. However, unless it is explicitly indicated in the case file, neither the judge nor the defendant will know that AI was used to ‘guide’ what is later presented in court as human analysis.

Researchers have called this the “lead paradox”, and the team is now working on policy recommendations for lawmakers, Prof. Ligeti shares.

How do different countries balance regulation and innovation? 

When striking a balance between regulation and innovation in dealing with AI technologies, states tip the scales according to national priorities. A comparative analysis shows that the United States is already reviewing legislative proposals to adapt evidence rules and standards to the challenges of AI-generated evidence. In addition, cases on the admissibility of AI evidence are already being litigated before US courts.

Europe takes a more cautious stance. The AI Act entered into force in August 2024 and will be implemented gradually by member states. This sweeping piece of legislation provides legal definitions, prohibits certain use cases, and holds manufacturers and marketers to specific rules and regulations. In criminal proceedings, AI has primarily been used as a supportive tool confined to the investigative stage; following the US example, however, this approach may soon evolve.

What the future holds

Technology can also be a positive force in criminal justice, Prof. Ligeti points out. Using AI to review evidence could remove human bias and thereby ensure neutrality in some situations. Virtual reality may also have its place in the courtroom of the future: crime scenes could be recreated and viewed by all parties through VR headsets. Such an experience, says Prof. Ligeti, which only advanced technology such as AI can offer, may even have the power to change the outcome of criminal trials.

In June 2025, the research group will publish their findings, providing the basis for comparative analysis and the development of concrete policy proposals.

“The role of academics is to show problem cases to policymakers, to help determine both the risks and the advantages that artificial intelligence can bring to the criminal justice system”, states Prof. Ligeti. Evidence gathered with the use of AI must remain credible and reliable. This is not only a question of procedure but also a safeguard for fundamental and human rights, a cornerstone of criminal justice.