Our Research
IRiSC conducts forward-looking theoretical and experimental research on cybersecurity and online privacy, drawing on the interdisciplinary expertise of its members. IRiSC acknowledges the complexity of building reliable, secure, and trustworthy systems and the multiple factors that lead to misuse and cyberattacks. To address this complexity, it integrates methods from the social sciences and legal analysis into computer science.
Understanding the Disinformation Phenomenon

Our Projects
- Duration: 2021-2024
- Funding source: Luxembourg National Research Fund (FNR) and AFR
- Researchers: Prof. Gabriele Lenzini, Amin Rakeei
- Partners:
  IRiSC, SnT, University of Luxembourg (https://irisc-lab.uni.lu)
  LIG, University of Grenoble Alpes (https://www.liglab.fr/en)
  LIMOS, University Clermont Auvergne (https://limos.fr)
  LORIA, University of Lorraine (http://www.univ-lorraine.fr)
- Description: SEVERITAS aims to define threat models and security properties for e-T&AS, extend existing tools for formal analysis, design and implement run-time monitoring solutions, and develop and implement new usable and secure e-T&AS protocols, in order to address the security demands of players in the assessment-systems business and in education.
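As an illustration of the run-time monitoring objective, the minimal sketch below checks two example security properties over a hypothetical e-exam event log. The event names ("register", "open", "submit", "close") and the properties themselves are assumptions made for illustration; they are not the project's actual specifications or tools.

```python
from dataclasses import dataclass

# Hypothetical event format for an e-exam log; field names are illustrative.
@dataclass
class Event:
    time: float
    kind: str        # "register", "open", "submit", "close"
    candidate: str = ""

def monitor(events):
    """Flag two example violations at run time:
    (1) a submission from a candidate who never registered,
    (2) a submission outside the exam window."""
    registered, exam_open, violations = set(), False, []
    for e in sorted(events, key=lambda e: e.time):
        if e.kind == "register":
            registered.add(e.candidate)
        elif e.kind == "open":
            exam_open = True
        elif e.kind == "close":
            exam_open = False
        elif e.kind == "submit":
            if e.candidate not in registered:
                violations.append((e, "submission by unregistered candidate"))
            if not exam_open:
                violations.append((e, "submission outside the exam window"))
    return violations

if __name__ == "__main__":
    log = [Event(0, "register", "alice"), Event(1, "open"),
           Event(2, "submit", "alice"), Event(3, "close"),
           Event(4, "submit", "bob")]
    for event, reason in monitor(log):
        print(f"violation at t={event.time}: {reason}")
```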
- Duration: 2021-2024
- Funding source: European Commission – Marie Curie Innovative Training Network
- Researchers: Prof. Gabriele Lenzini, Xengie Doan, Soumia El Mestari, Dr. Maria W. Botes
- Partners:
  University of Luxembourg
  Scuola Superiore Sant’Anna
  Université Toulouse III
  Vrije Universiteit Brussel
  Jagiellonian University
  University of Piraeus Research Centre
  Centro Nazionale delle Ricerche
- Description: The EU-funded LeADS project has the ambition to train early-stage researchers to become legally attentive data scientists (LeADS): experts in data science, law, and digital ethics. They will be able to develop innovative solutions within the realm of law and to expand the legal frontiers based on innovation needs. The project will create the theoretical framework and the practical implementation of a common language for co-processing basic notions shared by computer scientists and legal experts. LeADS will also produce a comparative and interdisciplinary lexicon.
- Duration: 2021-2024
- Funding source: Luxembourg National Research Fund (FNR)
- Researchers: Prof. Gabriele Lenzini, Dr. Monica Arenas, Dr. Huseyin Demirci
- Partners:
- Description: NOFAKES Authentication Technology focuses on the development of algorithms to capture, extract, and process the information encoded in Cholesteric Spherical Reflectors (CSRs). When coated in a transparent matrix, CSRs behave as optical unclonable functions, called CSR-tags. CSR-tags can potentially be used for anti-counterfeiting purposes because they produce unpredictable and unique optical reflection patterns that can be captured with inexpensive cameras. Feature extraction, as well as the authentication protocol, are key components in deciding whether a tag is authentic or fake.
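The project's optical feature-extraction algorithms are not shown here; the sketch below only illustrates the final threshold decision of an authentication check, assuming the reflection patterns have already been reduced to binary feature vectors. The vector length, threshold, and noise model are arbitrary assumptions.

```python
import numpy as np

def authenticate(candidate_features, enrolled_features, threshold=0.15):
    """Toy threshold check: accept the tag when the relative Hamming distance
    between the candidate and enrolled binary feature vectors is below the
    threshold (tolerating noise from imperfect camera captures)."""
    candidate = np.asarray(candidate_features, dtype=bool)
    enrolled = np.asarray(enrolled_features, dtype=bool)
    distance = np.count_nonzero(candidate ^ enrolled) / candidate.size
    return distance <= threshold, distance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = rng.integers(0, 2, size=256).astype(bool)     # enrolled reference
    noisy = enrolled.copy()
    noisy[rng.choice(256, size=10, replace=False)] ^= True   # ~4% capture noise
    print(authenticate(noisy, enrolled))                              # accepted
    print(authenticate(rng.integers(0, 2, size=256).astype(bool), enrolled))  # rejected
```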
- Duration: 2021-2024
- Funding source: Luxembourg National Research Fund (FNR)
- Researchers: Prof. Gabriele Lenzini, Dr. Marietjie Botes, Emre Kocyigit; Prof. Vincent Koenig, Dr. Kerstin Bongard-Blanchy, Dr. Anastasia Sergeva, Lorena Sanchez Chamarro; Philippe Valoggia
- Partners: Human Computer Interaction, University of Luxembourg; LIST
- Description: Dark patterns are manipulative designs aimed at influencing the decisions users take online about their purchases, their use of time, and the disclosure of their personal data. Most websites and applications employ dark patterns, thus exposing users to privacy harms and impacting collective welfare, with repercussions on competition and consumer trust. The DECEPTICON team aims to 1) study the effects of dark patterns on users' behaviours and choices online; 2) develop data science techniques and formal methods to automatically recognize dark patterns; and 3) develop procedures and tools to assess the presence of dark patterns in online services and their compliance with regulations.
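As a toy illustration of objective 2, the sketch below flags a few common dark-pattern phrasings in user-interface text using hand-written heuristics. The categories and regular expressions are assumptions for illustration and do not represent the project's data-science or formal-methods techniques.

```python
import re

# Illustrative heuristics only; the patterns and labels are assumptions.
RULES = {
    "false urgency": re.compile(r"\b(only \d+ left|offer ends (soon|today)|hurry)\b", re.I),
    "confirmshaming": re.compile(r"\bno thanks,? i (don.?t|do not) (want|like|care)\b", re.I),
    "preselection": re.compile(r"\b(checked|selected) by default\b", re.I),
}

def flag_dark_patterns(ui_text: str):
    """Return the heuristic dark-pattern categories matched in a UI string."""
    return [label for label, pattern in RULES.items() if pattern.search(ui_text)]

if __name__ == "__main__":
    print(flag_dark_patterns("Hurry! Only 2 left in stock."))
    print(flag_dark_patterns("No thanks, I don't want to save money."))
```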
- Duration: 2022-2026
- Funding source: Luxembourg National Research Fund (FNR) – Belgian Fund for Scientific Research (F.R.S.–FNRS)
- Researchers: Prof. G. Lenzini
- Partners: UCLouvain, Université Saint Louis Brussels
- Description: The project aims to design automated means that question the origin and the integrity of a piece of information and reconstruct its information flow in a verifiable yet privacy-preserving manner; to provide innovative regulatory frameworks and legally compliant socio-technical solutions to counter online disinformation and its effects; and to analyse the legal instruments that can regulate the phenomenon of fake news while balancing them against users' freedom of expression.
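One minimal way to picture verifiable information-flow reconstruction is a chain of hash-linked provenance records, as sketched below: tampering with either the content or the forwarding path breaks the chain. The record format is purely hypothetical, and a real solution would add digital signatures and the privacy-preserving mechanisms the project investigates.

```python
import hashlib, json

def link(content: bytes, forwarder: str, prev_digest: str = "") -> dict:
    """Append one hop to a provenance chain: each record commits to the
    content hash, the forwarder's identifier, and the previous record."""
    record = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "forwarder": forwarder,
        "prev": prev_digest,
    }
    record["digest"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def verify_chain(content: bytes, chain: list) -> bool:
    """Recompute every digest; any change to the content or the flow breaks the chain."""
    prev = ""
    for rec in chain:
        fields = dict(rec)
        digest = fields.pop("digest")
        recomputed = hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()
        if (digest != recomputed
                or fields["prev"] != prev
                or fields["content_hash"] != hashlib.sha256(content).hexdigest()):
            return False
        prev = digest
    return True

if __name__ == "__main__":
    article = b"Claim: X happened on date Y."
    chain = [link(article, "origin@newsroom")]
    chain.append(link(article, "relay@platform", chain[-1]["digest"]))
    print(verify_chain(article, chain))               # True
    print(verify_chain(b"tampered content", chain))   # False
```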
- Duration: 2021-2023
- Funding source: Huawei Technologies Düsseldorf GmbH
- Researchers: Prof. G. Lenzini
- Abstract: The project aims to design and create a proof-of-concept prototype of a high-performance and configurable Trust Level Evaluation Engine (TLEE) for subjective trust networks, used to make decisions in uncertain environments. This is particularly beneficial in complex and dynamic environments consisting of potentially untrustworthy sources of information, where situational knowledge is partial and subjective, such as automated driving.
- Description: The project Subjective Trust Network Evaluation Engine (STRUNEE), a collaboration between the Interdisciplinary Centre for Security, Reliability and Trust of the University of Luxembourg (SnT/UL) and Huawei Technologies Düsseldorf GmbH (Huawei), aims to design and create a proof-of-concept prototype of a high-performance and configurable Trust Level Evaluation Engine (TLEE) for trust networks.
The project STRUNEE has two main objectives:
- Design and Implementation of Efficient Algorithms for Trust Evaluation: the project will study algorithms that can evaluate trust using a fusion of Bayesian and subjective networks. Bayesian networks are probabilistic models that can represent and reason about uncertain relationships, while subjective networks incorporate subjective assessments and opinions into the trust evaluation process. By combining these two approaches, the project seeks to develop more robust and accurate trust evaluation algorithms (a sketch of opinion fusion follows this list).
- Design of a Software Architecture for Subjective Trust Networks: we aim to design Subjective Trust Networks that are capable of supporting a trust evaluation engine in Zero Trust scenarios. Zero Trust refers to the security concept in which no pre-assigned trust is assumed between entities, and access is granted based on continuous verification and monitoring. The goal is to create Subjective Trust Networks that can effectively assess and manage trust in such environments, helping to make informed decisions about granting access and managing risks.
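To give a concrete feel for how Bayesian evidence and subjective opinions can be combined, the sketch below maps Beta-style evidence counts to a binomial subjective-logic opinion and merges two opinions with the standard cumulative fusion operator. It is a minimal illustration under the assumption of equal base rates, not the TLEE design.

```python
from dataclasses import dataclass

W = 2.0  # non-informative prior weight used in subjective logic

@dataclass
class Opinion:
    """Binomial subjective-logic opinion: belief, disbelief, uncertainty, base rate (b + d + u = 1)."""
    b: float
    d: float
    u: float
    a: float = 0.5

    @staticmethod
    def from_evidence(r: float, s: float, a: float = 0.5) -> "Opinion":
        """Map Bayesian (Beta) evidence -- r positive and s negative observations -- to an opinion."""
        total = r + s + W
        return Opinion(r / total, s / total, W / total, a)

    def projected_probability(self) -> float:
        return self.b + self.a * self.u

def cumulative_fusion(x: Opinion, y: Opinion) -> Opinion:
    """Cumulative belief fusion of two independent opinions about the same proposition.
    Assumes equal base rates; the degenerate case u_x = u_y = 0 is not handled."""
    k = x.u + y.u - x.u * y.u
    return Opinion(
        (x.b * y.u + y.b * x.u) / k,
        (x.d * y.u + y.d * x.u) / k,
        (x.u * y.u) / k,
        x.a,
    )

if __name__ == "__main__":
    # Two sources report on the same road hazard with different amounts of evidence.
    source_a = Opinion.from_evidence(r=8, s=1)
    source_b = Opinion.from_evidence(r=3, s=2)
    fused = cumulative_fusion(source_a, source_b)
    print(fused, fused.projected_probability())
```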