Event

PhD Defence: Supporting Safety Analysis of Deep Neural Networks with Automated Debugging and Repair

  • Speaker  Hazem FAHMY

  • Location

    Campus Kirchberg, JFK Building, room E004/E005

    Luxembourg

You are all cordially invited to attend the PhD defence of Mr. Hazem FAHMY on 3rd May 2023 at 9:00 am.

The PhD defence will take place in seminar room E004/E005 (JFK Building, Campus Kirchberg).

Members of the defense committee:

  • Prof. Dr. Lionel Briand, University of Luxembourg (Chairman)
  • Dr. Thomas Stifter, IEE S.A. (Vice-Chairman)
  • Prof. Dr. Fabrizio Pastore, University of Luxembourg (Supervisor)
  • Prof. Dr. Vincenzo Riccio, Università di Udine (Member)
  • Prof. Dr. Andrea Stocco, Technical University of Munich (Member)

Abstract:

Deep Neural Networks (DNNs) are increasingly adopted in a variety of safety-critical systems, including Advanced Driver-Assistance Systems (ADAS), Automated Driving Systems (ADS), medical devices, and industrial control systems. DNNs are particularly useful in the perception layer of these systems, where they analyze images and other sensor data to make decisions. However, ensuring the functional safety of DNN-based components remains a challenge because standard machine learning performance metrics, such as accuracy, do not distinguish among the different scenarios that lead to DNN failures (i.e., the unsafe execution scenarios of a DNN under test).

To address these challenges, we propose four approaches that leverage the intuition that unsafe scenarios might be cost-effectively identified by automatically clustering images that lead to DNN failures and share commonalities. We call such image clusters root cause clusters (RCCs). The proposed approaches are:

• Heatmap-based Unsupervised Debugging of DNNs (HUDD), a white-box approach that identifies RCCs by applying a clustering algorithm to heatmaps capturing the relevance of every DNN neuron to the DNN outcome. HUDD also composes a dataset for retraining and improving the DNN by selecting images from an improvement set based on their closeness to the generated RCCs (a minimal sketch of this step follows the list).

• Simulator-based Explanation for DNN failurEs (SEDE), a search-based approach that automatically generates explicit descriptions of hazard-triggering events from real-world images. SEDE provides such descriptions as logical expressions constraining the configuration parameters of the simulator used to train the DNN (the second sketch below illustrates the form of these expressions).

• Safety Analysis using Feature Extraction (SAFE), a black-box approach that generates RCCs without relying on DNN-internal information. SAFE combines transfer-learning models with dimensionality reduction techniques to extract important features from failure-inducing inputs (the third sketch below outlines this feature pipeline).

• A pipeline for the generation of RCCs that can integrate different machine learning components. We rely on such a pipeline to assess the effectiveness of combining transfer-learning models, autoencoders, heatmaps of neuron relevance, dimensionality reduction techniques, and various clustering algorithms to derive RCCs (the last sketch below shows one such composition).
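
The sketches below are illustrative only: class names, parameter values, and data are placeholders, not the thesis artifacts. First, a minimal sketch of HUDD's clustering and retraining-set selection, assuming random stand-in heatmaps (HUDD derives real heatmaps from neuron relevance) and plain agglomerative clustering:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Placeholder heatmaps: one relevance map per failing image. HUDD computes
# real heatmaps from the relevance of DNN neurons to the DNN outcome; random
# arrays stand in here purely to illustrate the clustering step.
rng = np.random.default_rng(0)
heatmaps = rng.random((200, 16, 16))        # 200 failing images, 16x16 maps

# Flatten each heatmap into a feature vector and group similar failure
# patterns into root cause clusters (RCCs).
features = heatmaps.reshape(len(heatmaps), -1)
rccs = AgglomerativeClustering(n_clusters=5).fit_predict(features)

# Retraining-set selection: pick improvement-set images whose heatmaps are
# closest to an RCC centroid (Euclidean distance stands in for HUDD's
# similarity metric).
improvement = rng.random((500, 256))        # hypothetical unlabeled pool
centroid = features[rccs == 0].mean(axis=0)
selected = np.argsort(np.linalg.norm(improvement - centroid, axis=1))[:50]
```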
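
Second, the form of a SEDE explanation. SEDE's evolutionary search over the simulator's configuration space is beyond a short sketch; the hypothetical parameter names and ranges below only show how a cluster's parameter sample can be summarized as a logical expression:

```python
import numpy as np

# Hypothetical simulator configuration parameters for the images of one RCC
# (names and ranges are invented for illustration).
param_names = ["head_pose_deg", "light_intensity", "camera_height_m"]
rng = np.random.default_rng(1)
rcc_params = rng.uniform([0.0, 0.1, 1.0], [30.0, 0.4, 1.5], size=(40, 3))

# One conjunct per parameter: the tightest interval covering the cluster.
lo, hi = rcc_params.min(axis=0), rcc_params.max(axis=0)
expression = " AND ".join(
    f"{l:.2f} <= {name} <= {h:.2f}"
    for name, l, h in zip(param_names, lo, hi)
)
print(expression)   # e.g. "0.19 <= head_pose_deg <= 29.94 AND ..."
```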
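
Third, an approximation of SAFE's black-box pipeline with off-the-shelf components. The sketch assumes a pretrained VGG16 backbone from torchvision (one plausible transfer-learning choice), PCA, and DBSCAN; the input tensors and eps value are placeholders:

```python
import torch
from torchvision.models import vgg16
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

# Pretrained VGG16 as the transfer-learning feature extractor (downloads
# ImageNet weights on first use); the DNN under test is never inspected.
backbone = vgg16(weights="IMAGENET1K_V1").features.eval()
images = torch.rand(64, 3, 224, 224)        # placeholder failing inputs
with torch.no_grad():
    feats = backbone(images).flatten(1).numpy()

# Dimensionality reduction followed by density-based clustering; eps and the
# number of components are illustrative values, not the thesis settings.
reduced = PCA(n_components=10).fit_transform(feats)
rccs = DBSCAN(eps=3.0, min_samples=3).fit_predict(reduced)   # -1 = noise
```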
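
Finally, a sketch of the component-composition pipeline, assuming interchangeable scikit-learn reduction and clustering stages compared with a generic cluster-quality score (the thesis may use different components and metrics):

```python
from itertools import product
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

# Placeholder feature matrix: in the pipeline these rows could come from
# transfer-learning embeddings, autoencoder codes, or neuron-relevance
# heatmaps of failure-inducing images.
rng = np.random.default_rng(2)
features = rng.random((300, 128))

reducers = {"pca": PCA(n_components=8),
            "proj": GaussianRandomProjection(n_components=8, random_state=2)}
clusterers = {"kmeans": KMeans(n_clusters=5, n_init=10, random_state=2),
              "dbscan": DBSCAN(eps=0.5)}

# Exhaustively compose reduction x clustering stages and compare the
# resulting RCC candidates.
for (rname, reducer), (cname, clusterer) in product(reducers.items(),
                                                    clusterers.items()):
    reduced = reducer.fit_transform(features)
    labels = clusterer.fit_predict(reduced)
    if len(set(labels)) > 1:                # silhouette needs >= 2 clusters
        print(rname, cname, round(silhouette_score(reduced, labels), 3))
```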