The Doctoral School in Science and Engineering is happy to invite you to Badr SOUANI’s defence entitled
Testing AI fairness and robustness in legal applications
Supervisor: Prof Yves LE TRAON
The intersection of artificial intelligence (AI), digitalization, and the legal field presents a dynamic and transformative landscape. AI applications have become increasingly pervasive in legal practice, offering efficiency and accuracy in various legal processes. However, this integration raises critical challenges related to fairness and robustness. This abstract summarizes my doctoral research on the testing and assurance of AI fairness and robustness in legal applications.
The ongoing digitization of legal processes enhances the accessibility and efficiency of legal services. AI-driven tools have been employed for legal research, contract analysis, predictive analytics, and even judicial decision support. These systems often inherit biases present in their training data and may lack the robustness required to perform effectively in real-world, adversarial, or novel situations. Some learning techniques used during training may even exacerbate fairness issues in the resulting models.
The initial phase of my research is dedicated to the examination of biases present in Large Language Models (LLMs) that have gained prominence in the legal domain. Prominent LLMs such as BERT, LLaMA, and ChatGPT, while renowned for their capacity to process and generate human-like text, frequently harbor inherent biases that stem from the vast textual data used in their training. These biases pose a significant concern when such models are deployed in legal applications. To address this issue, I employ advanced techniques in natural language processing and machine learning to identify and quantify these biases.
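One common way to quantify such biases, shown here as a minimal illustrative sketch rather than the method used in the thesis, is counterfactual template probing: score the same sentence template with two values of a protected attribute and measure the gap. The scoring function below is a deliberately biased toy stand-in for a real LLM's plausibility score (names and the template are hypothetical).

```python
# Illustrative sketch: quantifying counterfactual bias with template pairs.
# toy_sentence_score is a hypothetical stand-in for a real model's score
# (e.g., a pseudo-log-likelihood from a masked language model).

def toy_sentence_score(sentence: str) -> float:
    """Toy plausibility scorer, deliberately biased for demonstration."""
    score = 1.0
    if "he" in sentence.split():
        score += 0.3  # simulated preference for the masculine term
    return score

def counterfactual_gap(template: str, group_a: str, group_b: str) -> float:
    """Score one template with two protected-attribute values and
    return the absolute score difference (0 means no measured bias)."""
    return abs(toy_sentence_score(template.format(group_a))
               - toy_sentence_score(template.format(group_b)))

template = "{} is likely to win the custody case"
print(f"counterfactual score gap: {counterfactual_gap(template, 'he', 'she'):.2f}")
```

In a real study the toy scorer would be replaced by queries to the LLM under test, and gaps would be aggregated over many templates and attribute pairs.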
Once identified, the subsequent pivotal phase of my research involves the development of strategies to rectify and mitigate these biases within LLMs. I explore various techniques, such as retraining with debiased data, fine-tuning methods tailored to bias mitigation, and adapting model architectures to minimize bias. The overarching objective is to create LLMs that not only demonstrate high accuracy and efficiency but also uphold the ethical and legal standards expected within the legal field.
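As a minimal sketch of the first of these strategies, the snippet below rebalances a training set so that each protected group is equally represented before retraining; the dataset records and the `group` field are hypothetical, and real debiasing pipelines involve far more than downsampling.

```python
# Illustrative sketch: debiasing training data by rebalancing groups.
import random

def rebalance(dataset: list[dict], group_key: str) -> list[dict]:
    """Downsample every protected group to the size of the smallest one."""
    groups: dict[str, list[dict]] = {}
    for example in dataset:
        groups.setdefault(example[group_key], []).append(example)
    n = min(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(random.sample(members, n))  # keep n per group
    random.shuffle(balanced)  # avoid group-ordered batches
    return balanced

# Hypothetical skewed dataset: 8 examples of group A, 2 of group B.
data = ([{"text": "case a", "group": "A"}] * 8
        + [{"text": "case b", "group": "B"}] * 2)
print(len(rebalance(data, "group")))  # 4: two examples per group
```

Downsampling trades data volume for balance; alternatives such as upweighting minority-group examples preserve the full dataset at the cost of reusing samples.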
In summary, my research comprises two essential stages: identifying biases in LLMs used in the legal domain and devising effective strategies to reduce these biases. The aim is to enhance the performance of LLMs while ensuring their compliance with the ethical and legal standards crucial to the legal profession.