Our Research
CVI2 conducts research in real-world applications of computer vision, image analysis, and machine intelligence, with extensive development of AI approaches. Typical fields of application are space, Industry 4.0, surveillance, cybersecurity, healthcare, and automotive. The expertise of CVI2 spans all stages of computer vision, from acquisition through processing and analysis to decision-making.

Our Recent Projects
ELITE
- Duration: 36 months +12 – 01/09/2022
- Funding source: FNR CORE
- Researchers: Prof. Djamila Aouada, Dr. Enjie Ghorbel, Michele Jamrozik, Peyman Rostami
- Partners: LMO, Melbourne Space Laboratory (MSL) at University of Melbourne
- Description: The primary goal of the project “ELITE: Enabling Learning and Inferring compact deep neural network Topologies on Edge devices” is to investigate new ways of building compact DNNs from scratch by 1) using efficient latent representations and their factors of variation, and 2) exploiting Neural Architecture Search (NAS) techniques for minimal deep architectural design. The final objective is to construct compact DNN models suitable for the edge devices used in space missions (a brief illustrative sketch follows this list).
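One building block that compact architectures of this kind typically draw on is the depthwise-separable convolution. The sketch below shows that primitive with hypothetical layer sizes; it illustrates the sort of search-space component involved, not ELITE's actual design.

```python
# Illustrative only: a depthwise-separable block, a common primitive in
# compact-DNN search spaces. Layer sizes are hypothetical.
import torch
import torch.nn as nn

class SeparableBlock(nn.Module):
    """Depthwise 3x3 + pointwise 1x1 convolution: far fewer parameters
    than a standard Conv2d with the same receptive field."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.bn(self.pointwise(self.depthwise(x))))

block = SeparableBlock(32, 64)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```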

MEET-A
- Duration: 36 months +12 – 01/01/2021
- Funding source: FNR Bridges
- Researchers: Dr. Vincent Gaudilliere, Mohamed Ali, Prof. Djamila Aouada (PI)
- Partners: LMO
- Description: Autonomous satellite rendezvous is the next big revolution in space. It starts by endowing satellites with the capability of accurately and robustly determining their relative pose without cooperating with other spacecraft. Existing solutions are not yet accurate enough to be deployed in space. To enhance these approaches and enable their applicability, the MEET-A project proposes multi-modal fusion of passive electro-optical data from thermal and visible-range cameras. With the appropriate fusion strategy, this richer information will be exploited effectively to introduce two key innovations (a toy fusion sketch follows this list).
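To make the fusion idea concrete, here is a hypothetical two-branch network that encodes visible and thermal frames separately and concatenates the features before regressing a pose. It is a generic mid-level fusion sketch, not MEET-A's actual strategy.

```python
# Hypothetical mid-level fusion of visible and thermal streams (PyTorch).
import torch
import torch.nn as nn

def make_encoder() -> nn.Module:
    # Tiny CNN encoder; both modalities use the same shape here.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class FusionPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.visible = make_encoder()
        self.thermal = make_encoder()
        # Concatenated features regress a 7-D pose: quaternion (4) + translation (3).
        self.head = nn.Linear(32 + 32, 7)

    def forward(self, vis: torch.Tensor, thr: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.visible(vis), self.thermal(thr)], dim=1)
        return self.head(fused)

net = FusionPoseNet()
print(net(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)).shape)  # (1, 7)
```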

In-Orbit Servicing Space Situational Awareness (SSA) Payload
- Duration: 16 months – 01/11/2021
- Funding source: ESA
- Researchers: Dr. Arunkumar Rathinam, Dr. Leo Pauly, Prof. Djamila Aouada (PI)
- Partners: LMO
- Description: LMO, in partnership with SnT (University of Luxembourg), is developing a Space Situational Awareness (SSA) payload for in-orbit servicing. The SSA payload autonomously estimates the 6-Degrees-of-Freedom (DoF) pose of a target space resident object under any illumination condition, and is part of the spacecraft Guidance, Navigation and Control (GNC) system. The objective of this project is the design, development, verification, and qualification of the SSA payload for space application (a minimal pose-estimation sketch follows this list).
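For readers unfamiliar with 6-DoF pose estimation, the classical model-based formulation recovers rotation and translation from correspondences between known 3-D points on the target and their 2-D image projections. Below is a minimal sketch using OpenCV's solvePnP with made-up coordinates and intrinsics; the actual payload pipeline is considerably more involved.

```python
# Minimal model-based 6-DoF pose sketch with OpenCV's solvePnP.
# All coordinates and intrinsics below are made-up illustration values.
import numpy as np
import cv2

# Known 3-D keypoints on the target, in its body frame (metres).
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.5, 0.0, 0.0],
                       [0.0, 0.5, 0.0],
                       [0.0, 0.0, 0.5]], dtype=np.float64)
# Their detected pixel locations in the image.
image_pts = np.array([[320.0, 240.0],
                      [420.0, 238.0],
                      [322.0, 140.0],
                      [300.0, 260.0]], dtype=np.float64)
# Pinhole camera intrinsics.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
print(ok, rvec.ravel(), tvec.ravel())  # rotation (Rodrigues vector) + translation
```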

Spacecraft Pose Estimation for Space Situational Awareness
- Duration: 48 months – 01/01/2020
- Funding source: LMO
- Researchers: Dr. Vincent Gaudilliere, Mohamed Adel Mohamed Ali, Prof. Djamila Aouada (PI)
- Partners: LMO
- Description: The general objective of this project is to develop computer vision solutions for Space Situational Awareness, with a focus on spacecraft pose estimation using multiple modalities.

FREE-3D
- Duration: 36 months +12 – 01/05/2022
- Funding source: FNR Bridges
- Researchers: Dr. Anis Kacem, Sk Aziz Ali, Ahmet Serdar Karadeniz, Prof. Djamila Aouada (PI)
- Partners: Artec3D
- Description: Recently, efforts have been made to propose AI algorithms that learn Computer-Aided Designs (CADs) of real objects. The idea of these methods is to scan objects with 3D scanners and infer the corresponding CAD procedures. However, current solutions either require input from designers or are limited to simple objects, and they are not compliant with modern CAD workflows. The FREE-3D project focuses on automatically learning modern CAD procedures by learning parametric representations from 3D scans, leveraging their geometrical and topological structure, and exploiting the sequential nature of CAD procedures (a toy example of such a procedure follows this list).
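To ground the phrase "sequential nature of CAD procedures", the toy sketch below encodes a CAD model as an ordered list of parametric operations (sketch a profile, then extrude it). The two-operation vocabulary is hypothetical and far simpler than real CAD formats.

```python
# Toy encoding of a CAD procedure as a sequence of parametric operations.
# The operation vocabulary here is hypothetical and greatly simplified.
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class SketchCircle:
    center: Tuple[float, float]  # 2-D position on the sketch plane
    radius: float

@dataclass
class Extrude:
    distance: float              # extrusion depth, in model units

Op = Union[SketchCircle, Extrude]

# A cylinder as a "program": draw a circle, then extrude it.
program: List[Op] = [SketchCircle(center=(0.0, 0.0), radius=1.0),
                     Extrude(distance=2.0)]

for step, op in enumerate(program):
    print(f"step {step}: {op}")
```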

3D Scan-to-CAD with Geometric Deep Learning
- Duration: 48 months – 01/12/2020
- Funding source: Artec3D
- Researchers: Dr. Anis Kacem, Sk Aziz Ali, Elona Dupont, Ahmet Serdar Karadeniz, Prof. Djamila Aouada (PI)
- Partners: Artec3D
- Description: The general idea of this project is to investigate recent advances in geometric deep learning to lift raw 3D scans to higher-level representations. One of the main goals is to infer Computer-Aided Design (CAD) models directly from 3D scans. To that end, multiple aspects are being considered, such as 3D scan refinement and parametrization, geometrical detail enhancement, etc. As geometric deep learning methods are the focus of this project, a unique dataset, CC3D, of more than 50k scan/CAD-model pairs has been collected and can be requested (an encoder sketch follows this list).
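As a concrete example of the geometric deep learning ingredients involved, here is a minimal PointNet-style encoder that maps an unordered 3-D scan (a point set) to a fixed-size feature vector via a shared per-point MLP and a symmetric max-pool. Layer sizes are illustrative; this is not a CC3D baseline.

```python
# Minimal PointNet-style point-cloud encoder (PyTorch). Illustrative sizes.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared MLP applied to every point independently.
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (batch, num_points, 3); max over points gives order invariance.
        return self.mlp(pts).max(dim=1).values

enc = PointEncoder()
scan = torch.randn(2, 1024, 3)  # two scans of 1024 points each
print(enc(scan).shape)          # torch.Size([2, 256])
```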

FakeDeTeR
- Duration: 36 months +12 – 01/03/2022
- Funding source: FNR Bridges
- Researchers: Dr. Enjie Ghorbel, Kankana Roy, Prof. Djamila Aouada (PI)
- Partners: POST
- Description: With the fast advances in artificial intelligence, deepfake videos are becoming more accessible and realistic-looking. Their malicious use constitutes a threat to society. Existing deepfake detection methods mostly rely on exploiting discrepancies caused by a given generation method. The goal of FakeDeTeR is to define a more generic approach that captures even deepfakes generated by unknown methods. To that end, discriminative learning-based spatio-temporal-spectral representations are investigated. Leveraging geometric, dynamic, and semantic models as priors will ensure that the smallest relevant deviations are captured. Coupling videos with sound in a cross-modal representation will further empower the proposed solution (a spectral-feature sketch follows this list).
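To illustrate the "spectral" part of such representations, one classic cue is the radially averaged power spectrum of a frame, where generative upsampling can leave periodic artefacts. The helper below is a generic illustration of that feature, not FakeDeTeR's method.

```python
# Generic spectral feature for frame-level analysis: the radially averaged
# log-power spectrum. Illustrative only.
import numpy as np

def radial_power_spectrum(frame: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """frame: 2-D grayscale image -> 1-D radially averaged log-power profile."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    h, w = spec.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)  # distance from spectrum centre
    idx = np.digitize(r.ravel(), np.linspace(0, r.max(), n_bins + 1)) - 1
    profile = np.bincount(idx, weights=np.log1p(spec).ravel(), minlength=n_bins + 1)
    counts = np.bincount(idx, minlength=n_bins + 1)
    return profile[:n_bins] / np.maximum(counts[:n_bins], 1)

print(radial_power_spectrum(np.random.rand(128, 128)).shape)  # (64,)
```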

IDform
- Duration: 42 months – 01/05/2018
- Funding source: FNR CORE PPP
- Researchers: Dr. Anis Kacem, Kseniya Cherenkova, Prof. Djamila Aouada (PI)
- Partners: Artec3D
- Description: Being constrained to keep a straight face should no longer be a condition for a well-performing face recognition system. IDform proposes to robustly identify people from their faces under fully dynamic conditions. The idea is to build on the success of today’s best-performing face recognition systems, which use deep learning; however, instead of chasing the biggest datasets, the strategy is to use efficient facial models that can provide stable statistical information.

UNFAKE
- Duration: 36 months +12 – 01/09/2021
- Funding source: FNR IF
- Researchers: Nesryne Mejri, Dr. Enjie Ghorbel, Prof. Djamila Aouada
- Partners: POST
- Description: Given the threat of deepfakes, significant efforts have been made to propose deepfake detection methods. Nevertheless, these methods are not yet mature enough for real-world deployment, as they usually specialise in detecting one type of deepfake, which limits their generalisation capability, and they typically rely on very large models. Hence, UNFAKE aims to provide a more realistic deepfake detection framework that generalises across different types of deepfakes by using an unsupervised, explainable, and lightweight learning framework to learn richer deep representations.

Skytrust
- Duration: 12 months – 01/03/2021
- Funding source: ESA
- Researchers: Dr. Enjie Ghorbel, Dr. Laura Lopez Fuentes, Kankana Roy, Prof. Djamila Aouada (PI)
- Partners: POST, Intech, Codare
- Description: The objective of the Skytrust project is to build and improve trust in any digital asset by securing the authenticity of its content from its creation, through a novel and scalable solution associated with a blockchain infrastructure. Our specific role in this project is to design an artificial intelligence solution that detects anomalous behaviours using metadata collected from a mobile device, and detects any manipulation of the digital asset's content (an anomaly-detection sketch follows this list).
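As a sketch of what metadata-based anomaly detection can look like, the snippet below fits an Isolation Forest (scikit-learn) on nominal capture metadata and flags an implausible record. The three feature columns are hypothetical placeholders, not Skytrust's actual schema.

```python
# Hedged sketch: flagging anomalous capture metadata with an Isolation Forest.
# Hypothetical feature columns: [duration_s, gps_speed_mps, timestamp_gap_s].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[5.0, 1.0, 0.2], scale=[1.0, 0.3, 0.05], size=(500, 3))
suspect = np.array([[5.0, 25.0, 30.0]])   # implausible speed and timestamp gap

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspect))          # [-1] means flagged as anomalous
```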

Digital Asset Integrity
- Duration: 36 months – 01/09/2021
- Funding source: POST
- Researchers: Dr. Enjie Ghorbel, Dr. Laura Lopez Fuentes, Kankana Roy, Nesryne Mejri, Pavel Chernakov, Prof. Djamila Aouada (PI)
- Description: In order to restore confidence in the growing digital ecosystem, especially in this new era of fake news and deepfakes, the goal of this project is to strengthen confidence in the integrity of a digital asset through its content (e.g., videos, photos, audio). The project will investigate the latest approaches for detecting manipulation of images and videos, and new approaches will be proposed that improve on existing solutions.

On-the-Edge Image Analytics
- Duration: 48 months – 01/01/2020
- Funding source: Datathings
- Researchers: Joe Lorentz, Inder Pal Singh, Prof. Djamila Aouada (PI)
- Description: The project objective is to investigate the use of Deep Neural Networks (DNNs) for on-the-edge image analytics, i.e., product quality control in the industrial domain. The ever-growing throughput and quality demands of modern manufacturing make it impossible to rely on the human eye for a rising number of quality assessment procedures. This development has led to the introduction of computer vision algorithms, widely used in different fields, e.g., the food industry and the production of printed circuit boards. This project will investigate ways to improve the explainability of DNN-driven classification. An additional target is to investigate methods enabling classifiers to adapt to varying production conditions (a compression sketch for edge deployment follows this list).
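One standard step when moving a trained classifier onto constrained edge hardware is post-training quantization. The sketch below applies PyTorch's dynamic quantization to a toy model; it shows the mechanism only, not this project's deployment pipeline.

```python
# Post-training dynamic quantization (PyTorch): weights stored as int8,
# shrinking the model for edge deployment. Toy model, illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)  # same interface, smaller weights
```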

Explainable DNNs for Industry 4.0
- Duration: 48 months – 01/01/2020
- Funding source: FNR IF
- Researchers: Joe Lorentz, Prof. Djamila Aouada
- Partners: Datathings
- Description: The current evolution of the manufacturing domain towards so-called Industry 4.0 demands more flexible solutions. Deep Neural Networks (DNNs) provide this flexibility by automatically learning high-level features. However, their widespread application in industry is mainly hampered by two factors: high hardware demands and a lack of explainability of classification decisions. Neural networks tend to rely heavily on features that are unintuitive to human perception, which makes it difficult to justify decisions without profound knowledge of the technology. As a consequence, DNNs are currently unsuited for human-machine interaction, which is a major design principle of Industry 4.0 (a basic explainability sketch follows this list).
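A first step toward explaining a DNN decision is a vanilla gradient saliency map: the gradient of the predicted class score with respect to each input pixel. The sketch below shows the mechanism on a toy network; research-grade explainability goes well beyond this.

```python
# Vanilla gradient saliency on a toy classifier (PyTorch). Illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                      nn.Linear(64, 10))
image = torch.randn(1, 1, 28, 28, requires_grad=True)

score = model(image)[0].max()   # score of the highest-scoring class
score.backward()                # d(score)/d(pixel) for every pixel
saliency = image.grad.abs().squeeze()
print(saliency.shape)           # torch.Size([28, 28]): importance per pixel
```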

Smart Schoul 2025
- Duration: 42 months – 01/01/2019
- Funding source: FNR PSP Flagship
- Researchers: CVI2 researchers, Prof. Djamila Aouada (PI)
- Partners: LESC, SCRIPT, DataThings, Artec3D, FNR
- Description: SnT has partnered with the Ministry of National Education, represented by its department for the Coordination of Educational and Technological Research and Innovation (SCRIPT), and the Lycée Edward Steichen in Clervaux (LESC) to define the Smart Schoul 2025 project. The goal of this project is to create a fertile environment in which pupils are motivated to participate in designing digital tools and solutions. Being exposed to computer science at an early age could trigger the switch and inspire a digital consumer to become a digital creator or, at least, an ICT enthusiast.

The Sound of Data
- Duration: 24 months – 01/12/2020
- Funding source: Esch 2022
- Researchers: CVI2 researchers, Prof. Djamila Aouada (PI)
- Partners: FNR, Rockhal, LIST, Uni.lu
- Description: The Sound of Data explores new ways of creating, performing, and experiencing music and art by using multi-source data as the building blocks of the creative process, i.e., remixing the scientific and artistic approaches involved. It is centered on the idea of using datasets obtained in different contexts as a core determinant of a musical composition. The developed concepts will be applied to different datasets (traffic data, historical data, crowdsourcing data, and 3D body-scan data) to focus on topics of high societal relevance to the focus area and to address the artistic headlines of Esch 2022.