You are all cordially invited to attend the Doctoral defence of Qiang Hu.
Title: Label-Efficient Deep Learning Engineering
Members of the defence committee:
- Prof. Dr. Michail Papadakis, University of Luxembourg, Chairman
- Dr. Maxime Cordy, University of Luxembourg, Vice-chairman
- Prof. Dr. Yves Le Traon, University of Luxembourg, Supervisor
- Prof. Dr. Paolo Tonella, Università della Svizzera Italiana, Switzerland, Member
- Prof. Dr. Lei Ma, The University of Tokyo, Japan, Member
Abstract:
Applying deep learning (DL) to science has become a clear trend in recent years, which makes DL engineering an important problem. A typical DL engineering workflow consists of data preparation, model architecture design, model training and testing, model deployment, and model maintenance. Unfortunately, all of these steps are complex and costly. One factor that strongly affects the efficiency of DL engineering is the large data-labelling effort: although collecting unlabelled raw data can be fully automated, labelling it requires substantial human effort. Thus, how to build DL systems efficiently with less human effort (label-efficient DL engineering) is the key question we want to answer. To tackle this problem, this dissertation makes the following contributions.
- For label-efficient model design, we propose LaF, a label-free model selection method that uses a Bayesian model to infer each candidate model's speciality from its predicted labels alone.
- For label-efficient model training, we conduct an empirical study to explore the limitations of existing active learning methods. In addition, we build the first benchmark of active learning for code models, named active code learning, to help software engineers train their code models with less effort.
- For label-efficient model testing, we propose Aries, a label-free model performance estimation method that automates the entire testing process. Aries relies on the assumption that a model has similar prediction accuracy on data that lie at similar distances to its decision boundary (see the illustrative sketch after this list).
- For label-efficient model maintenance, we first conduct a systematic empirical study to reveal the impact of the retraining process and data distribution on model enhancement. Based on our findings, we then propose a novel distribution-aware test (DAT) selection metric that produces models robust to distribution shift.
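To illustrate the assumption behind Aries, the following minimal sketch estimates a model's accuracy on unlabelled data by grouping inputs according to a margin-based proxy for their distance to the decision boundary and reusing the per-group accuracy measured on a small labelled reference set. This is a hypothetical simplification written for this announcement, not the actual Aries implementation; the function names and the choice of margin proxy are assumptions.

import numpy as np

def margin_distance(probs):
    # Difference between the top-1 and top-2 predicted class probabilities,
    # used as a simple proxy for an input's distance to the decision boundary.
    sorted_probs = np.sort(probs, axis=1)
    return sorted_probs[:, -1] - sorted_probs[:, -2]

def estimate_accuracy(ref_probs, ref_labels, new_probs, n_buckets=10):
    # Bucket inputs by margin distance and assume the per-bucket accuracy
    # observed on the labelled reference set transfers to the unlabelled set.
    edges = np.linspace(0.0, 1.0, n_buckets + 1)
    edges[-1] += 1e-9  # include margins equal to 1.0 in the last bucket
    ref_margin = margin_distance(ref_probs)
    new_margin = margin_distance(new_probs)
    ref_correct = ref_probs.argmax(axis=1) == ref_labels

    estimate = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_ref = (ref_margin >= lo) & (ref_margin < hi)
        in_new = (new_margin >= lo) & (new_margin < hi)
        if not in_new.any():
            continue
        # Accuracy of this bucket on the labelled reference data; fall back
        # to the overall reference accuracy when the bucket is empty there.
        bucket_acc = ref_correct[in_ref].mean() if in_ref.any() else ref_correct.mean()
        estimate += bucket_acc * in_new.sum() / len(new_margin)
    return estimate

# Usage (hypothetical): given softmax outputs ref_probs with labels ref_labels
# and outputs new_probs on unlabelled data,
#   acc_hat = estimate_accuracy(ref_probs, ref_labels, new_probs)
# approximates accuracy on the new data without requiring any new labels.

Under the stated assumption, each bucket's reference accuracy is simply weighted by the fraction of unlabelled inputs falling into that bucket, which is why no additional labelling is needed at testing time.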
In summary, this dissertation demonstrates promising ways to build DL systems efficiently with acceptable labelling effort across the different stages of DL engineering.