Young researchers shape our future. Bringing their innovative ideas into our projects, they contribute not only to the research excellence of the University of Luxembourg’s Interdisciplinary Centre for Security, Reliability and Trust (SnT), but also to our impact in society. They take our research to the next generation.
In this edition of the series, we feature Dr. Salijona Dyrmishi and her research on securing artificial intelligence (AI) systems.
Dr. Salijona Dyrmishi, postdoctoral researcher in the Security, Reasoning and Validation (SERVAL) group, gave us some insights into the research projects she is working on, reflected on how her work will shape the future, and shared her future plans with us.
Salijona, what is the motivation for your research?
AI developments have disrupted many industries; however, serious concerns have been raised regarding the trustworthiness of these systems. These concerns are being addressed with regulatory frameworks in several countries, and the EU is paving the way by adopting the EU AI Act in May 2024, the first law in the world to regulate the use of AI. In addition to regulatory approaches, technical solutions are also needed, especially to assess and improve the robustness and security of AI systems.
What are you working on in your research?
My research focuses on one of these security challenges for AI: robustness against adversarial attacks. Such attacks introduce subtle perturbations into the input data that may go unnoticed by humans but can lead AI models to incorrect results. The risk is especially high in domains such as the financial sector, where many critical business processes are being automated with AI. For example, in credit lending, a mapping function converts historical customer data, together with the current request, into feature vectors: numerical representations of the real data that a computer can process easily. Given such a vector, the AI model decides whether to accept or reject the credit request. A well-trained model should be able to distinguish between these two cases. However, an attacker can exploit vulnerabilities in the model: by strategically manipulating their transactions, they could get a loan application wrongly approved, causing financial damage to the bank.
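To make this concrete, here is a minimal Python sketch of the idea, not an actual banking pipeline: the features, labels, and model below are hypothetical placeholders, and the attack is a simple nudge along the model's weight direction.

```python
# A minimal sketch (not the actual research pipeline) of how a tiny,
# targeted change to a feature vector can flip a credit-scoring model's
# decision. Features and labels here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [income, debt_ratio, n_transactions_last_month]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # 1 = approve, 0 = reject

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.1, 0.3, 1.0]])
print(model.predict(applicant))            # -> [0], rejected as-is

# The attacker nudges the features in the direction of the model's
# weights: a small step, barely visible in the raw data.
step = 0.5 * model.coef_ / np.linalg.norm(model.coef_)
adversarial = applicant + step
print(model.predict(adversarial))          # -> may flip to [1], approved
```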
What are the solutions to this problem?
To mitigate such risks, it is essential to ensure that AI models are robust against adversarial attacks. To achieve this, we take on the adversarial role ourselves, rigorously testing our AI systems with thousands of adversarial examples. The successful adversarial examples are then added to the models' training data, so that the models learn to recognise such inputs properly. This process increases the robustness of the models and is commonly known as adversarial hardening.
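The following is a minimal sketch of that adversarial-hardening loop, assuming a PyTorch-style setup with placeholder data and a simple one-step attack (FGSM); the attacks and models used in practice are more sophisticated.

```python
# A minimal sketch of adversarial hardening: attack the current model,
# then mix the adversarial examples back into the training batches so
# the model learns to classify them correctly. Data is a placeholder.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(256, 3)                    # placeholder feature vectors
y = (X[:, 0] - X[:, 1] > 0).long()         # placeholder labels

def fgsm(x, y, eps=0.1):
    """One-step attack: perturb x along the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

for epoch in range(50):
    x_adv = fgsm(X, y)                     # attack the current model
    x_mix = torch.cat([X, x_adv])          # clean + adversarial examples
    y_mix = torch.cat([y, y])
    opt.zero_grad()
    loss_fn(model(x_mix), y_mix).backward()
    opt.step()
```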
One way of generating adversarial examples is to use deep generative models such as the well-known Generative Adversarial Networks (GANs). However, such off-the-shelf methods often fail to account for domain-specific constraints and therefore produce irrelevant, unrealistic adversarial examples; for instance, a customer's total number of transactions cannot be smaller than their number of transactions in the previous month. Such examples, which cannot occur in reality, distract our robustness-improvement efforts from the real adversarial threats. To address this limitation, we propose a novel approach: injecting domain knowledge into deep generative models through a constraint repair layer.
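As a toy illustration of the constraint-repair idea, and not the published layer itself, the sketch below appends a final layer to a generator that projects each sample back onto the feasible region, so that the example constraint from above always holds.

```python
# A toy illustration of a constraint repair layer (the real layer is
# more general): a final module that projects generated samples onto
# the feasible region so the domain constraint always holds.
import torch
import torch.nn as nn

class RepairLayer(nn.Module):
    """Enforce the example constraint: total transactions (feature 0)
    must be at least last month's transactions (feature 1)."""
    def forward(self, x):
        total, last_month = x[:, 0], x[:, 1]
        # Differentiable repair: raise the total where the constraint
        # is violated, leave satisfying samples untouched.
        repaired_total = torch.maximum(total, last_month)
        return torch.stack([repaired_total, last_month], dim=1)

# A generator (e.g. a GAN's generator network) followed by the repair.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
repair = RepairLayer()

z = torch.randn(4, 8)              # latent noise
raw = generator(z)                 # may violate the domain constraint
samples = repair(raw)              # guaranteed feasible
assert (samples[:, 0] >= samples[:, 1]).all()
```

Because the repair is differentiable, it can sit inside the generative model and be trained end to end, steering generation towards realistic adversarial examples.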
How does your approach shape the future?
My work emphasises the importance of studying adversarial attacks in realistic settings, beyond the idealised theoretical environments in which they are commonly evaluated. Overlooking realism can lead to a skewed understanding of the actual threat level and the risk landscape against which models must defend. With a proper robustness assessment of AI models, we can better defend against these threats and make the models a bit more trustworthy for general use.
What inspired you to work in research at SnT?
I initially joined SnT for my Master’s internship in 2019, where I had the opportunity to familiarise myself with the work environment here. I was impressed by its multicultural atmosphere, the flexibility in conducting research, the strong support for research activities, and the alignment of research topics with my interests. Later on, my Ph.D. thesis supervisor, Dr. Maxime Cordy, and I applied for an FNR AFR grant that combined my interests in cybersecurity and data science. Fortunately, the research proposal was accepted, and this marked the beginning of my journey with SnT.
What are your future plans?
My future plans involve broadening my work on the trustworthiness aspects of AI and expanding my focus beyond adversarial attacks. In this respect, I envision for myself a blend of research and community engagement: developing the latest techniques to assess and improve the trustworthiness of AI systems, while staying close to the local AI community by fostering interactions through working groups, science communication, and joint events. Through these efforts, I hope to cultivate a collaborative environment that enhances the understanding and implementation of trustworthy AI, making a meaningful impact both within and outside the academic sphere.
About Salijona: Salijona received her Ph.D. in Informatics in June 2024 from the University of Luxembourg, where she is currently working as a postdoctoral researcher. Her work explores ways to make AI systems based on machine learning more secure and less vulnerable to malicious attacks. Before that, she earned a Master of Science degree from the University of Tartu in Estonia and a Bachelor of Science degree from the University of Tirana in Albania.
