Financial crime is becoming increasingly complex, shaped by both rapid technological advances and evolving legal frameworks. In this edition of Across Disciplines, computer scientist Raphaël Frank and legal scholar Stanisław Tosza explore how artificial intelligence and criminal law intersect, and what happens when algorithms enter the world of compliance.
1. Your research fields approach financial crime from very different angles. How would each of you explain your work to someone outside academia?
Raphaël Frank: Our research group's focus is technology-centric. We aim to use novel AI technologies to improve and accelerate legally required compliance tasks while respecting current regulations.
Stanisław Tosza: My research group focuses on criminal law, compliance and enforcement, particularly in contexts shaped by digital technologies. We currently focus on how private actors – such as banks or funds – perform functions meant to detect and prevent financial crime.
Two fields, one problem: Financial crime today is tackled not only through law, but also through data and algorithms. Effective solutions require both perspectives.
2. Has working closely with someone from a completely different field changed how you approach problems in your own discipline?
Raphaël Frank: Yes. It made me aware that it is important to regulate how technologies are used, and it made me reflect on the advantages and disadvantages of regulating first and innovating later, which is how the EU often likes to proceed. However, in my personal opinion, this might not always be the best strategy.
Stanisław Tosza: Yes, working closely with computer scientists and technology experts has proven crucial for understanding the real challenges involved in implementing technological solutions for compliance and enforcement. For a legal scholar, collaborating closely with computer scientists means learning a completely different language, one in which familiar terms often carry entirely different meanings and technological implications. This ongoing experience has given me a much deeper understanding of legal rules concerning technology, and a sharper perception of their shortcomings.
What we learned: Interdisciplinary work exposes blind spots. It challenges assumptions and leads to more grounded, practical solutions.
3. Legal systems rely on clear reasoning and evidence. How do you reconcile that with AI systems whose decisions are often impossible to fully explain?
Raphaël Frank: The common perception is that AI systems are black boxes. However, the technology has evolved rapidly over the past few years. Modern AI systems are increasingly able to provide transparent, fact-based reasoning, reducing the risk of hallucinations.
Stanisław Tosza: This is one of the central questions in our current research, as black-box systems clash with the general requirement that legal decisions be transparent and accountable, which is linked to the possibility for those affected to challenge them. We are exploring what legal requirements for explainability should look like, where the appropriate threshold should be set, and how legal reasoning can guide interactions between AI systems and human decision-makers.
In a nutshell: AI may improve detection, but without explainability, it risks undermining legal accountability and trust.
4. Can algorithms fight financial crime, and who is accountable when they fail?
Raphaël Frank: From a technology point of view, I am convinced that technology can make the process more efficient and the investigation much faster through automation. However, it is important that humans remain in the loop, supervising the recommendations of the agentic system and making the final decision.
Stanisław Tosza: Digital compliance in the financial sector increasingly relies on algorithms, which significantly improve efficiency in detecting and preventing crime, especially in data-intensive contexts. Accountability, however, must ultimately remain with human actors. At the same time, the increased delegation to automated systems raises new legal questions, which are now being addressed through emerging regulatory frameworks such as the EU AI Act. We are closely studying these new rules to understand how well they address those concerns, or whether entirely new regulatory frameworks will be needed.
Takeaway: Algorithms can flag, but humans must decide – and remain answerable for the outcome.
5. Is regulation a barrier or a catalyst for FinTech innovation?
Raphaël Frank: This goes somewhat in line with my previous answer. I see a trade-off here: too much regulation can kill innovation, while too little regulation might have severe societal consequences. Finding the right balance is difficult.
Stanisław Tosza: It is ultimately a question of balancing innovation-friendly regulation with the protection of the rights of people who could be affected. Technology offers enormous opportunities to improve our lives, but as we have seen in recent years, it can also be very detrimental to human well-being and democratic processes. This balance must be reflected not only in the substance of the rules but also in the methods, in how they are designed and implemented. In this respect, we see compliance as a key area where this balance needs to be effectively developed. Finding that balance is precisely what drives our research.
What this shows: Regulation is not just a constraint, it is a tool for shaping trustworthy innovation.
6. What is the one question you think your field is not asking loudly enough right now?
Raphaël Frank: Is the field of computer science sufficiently considering how humans will interact with these new agentic technologies?
Stanisław Tosza: How should we design the rules to make sure we efficiently use agentic technologies for crime prevention and enforcement in a way that safeguards fundamental rights?
Good to know: The most urgent AI questions are not technical, they are human.
About FutureFinTech
FutureFinTech, founded by researchers from the Interdisciplinary Centre for Security, Reliability and Trust (SnT) and the Faculty of Law, Economics and Finance (FDEF) at the University of Luxembourg, takes a holistic approach to fintech innovation. By combining technical expertise with financial knowledge and regulatory understanding, FutureFinTech develops practical solutions that address real-world needs in our rapidly evolving financial landscape.
FutureFinTech is supported by the Luxembourg National Research Fund and the Ministry of Finance.