Artificial Intelligence (AI) is all the rage, but it is not without risk. Whether technical, societal or even economic, the risks of using AI are real: these tools can generate false, outdated or statistically biased results.
Maxime Cordy, a Research Scientist at SnT, began his career in software quality assurance and has concentrated on AI tools since 2019, watching their use move from niche to mainstream. Today, the challenges in terms of ethics, privacy, transparency and security are unavoidable, especially as the European authorities are regulating these tools. This is why SnT is committed to responsible AI.
“Today’s AI makes decisions based on statistics. The problem is that because the amount of information injected into these AI systems is so large, there is a real risk of losing control. For instance, the system could make wrong decisions that we hadn’t foreseen,” explains Dr. Cordy. He is clear: totally secure or unbiased AI is impossible today, because humans cannot anticipate all the possible reactions of an AI system. On the other hand, it is entirely realistic to limit these risks.
From the financial centre to other business sectors
This is the mission of his team within the Security, Reasoning and Validation (SerVal) research group at SnT. His industrial partners include BGL BNP Paribas. At the beginning of 2024, the Luxembourg bank presented a metamodel for monitoring AI solutions within the regulatory framework specific to finance.
The SerVal team has developed a responsible AI solution capable of assessing the quality of deployed AI systems. “Our aim is for this tool to be usable by all companies, whatever their field of activity,” adds Cordy.
His work focuses on three areas:
1) Fundamental research to improve AI system analysis methods
2) Applied research to target the specific needs and challenges of the players involved
3) Technology transfer to provide the most comprehensive tool possible, meeting the broadest possible demand in the business world
“Our work is entirely in line with the regulatory changes currently affecting Europe”
The research scientist has also benefitted from the Fonds National de la Recherche (FNR) JUMP programme for the Secure Valuables Against Malicious Neural Networks (SVALINN) project. This solution for combating AI-fuelled threats involves making small changes to files that are invisible to the human eye, so that AI cannot extract information from them or alter them maliciously. For example, a photo shared on a social network cannot be reused by AI software and modified into a deepfake.
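The article does not detail SVALINN's exact technique, but protection schemes of this general family typically add a tiny, bounded perturbation to an image so that pixel values change imperceptibly while downstream models are disrupted. The sketch below is an illustration only, not SnT's method: the function name `protect_image`, the bound `epsilon`, and the toy gradient are all assumptions for demonstration.

```python
import numpy as np

def protect_image(image, grad, epsilon=2 / 255):
    """Illustrative, FGSM-style protection: add a perturbation bounded by
    epsilon in each pixel channel, following the sign of a gradient that
    a hypothetical extraction model would produce. The result stays a
    valid image in [0, 1] and is visually indistinguishable from the
    original for small epsilon."""
    perturbation = epsilon * np.sign(grad)
    return np.clip(image + perturbation, 0.0, 1.0)

# Toy example: an 8x8 RGB image in [0, 1] and a random stand-in gradient
# (in practice the gradient would come from the model being defended against).
rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))
grad = rng.standard_normal((8, 8, 3))

protected = protect_image(image, grad)

# Every pixel moved by at most epsilon, so the change is imperceptible.
assert np.max(np.abs(protected - image)) <= 2 / 255 + 1e-9
```

The key design point is the hard bound: clipping keeps the perturbation small enough that humans see the same photo, while a model relying on fine-grained pixel statistics receives systematically shifted inputs.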
“Discussions are currently underway with other potential industrial partners wishing to integrate this responsible AI solution developed at SnT,” says Dr. Cordy. “Many fields need to ensure the quality of their AI systems, and our work is entirely in line with the regulatory changes currently affecting Europe,” he concludes.