Doctoral Defence: Towards Trustworthy Artificial Intelligence in Privacy-Preserving Collaborative Machine Learning

  • Location

    JFK Building/Room E004/E005 (Nancy/Metz)

    L-1855, Kirchberg, Luxembourg

  • Topic(s)
    Computer Science & ICT, Finance
  • Type(s)
    Doctoral defences

You are cordially invited to attend the Doctoral Defence of Mary Rosziel.

Title: Towards Trustworthy Artificial Intelligence in Privacy-Preserving Collaborative Machine Learning

Members of the Defence Committee

  • Prof. Dr Gilbert Fridgen, University of Luxembourg, Chairman
  • Dr Vijay Gurbani, Illinois Institute of Technology, Chicago, USA, Deputy Chairman
  • Prof. Dr Radu State, University of Luxembourg, Supervisor
  • Dr Jean Hilger, University of Luxembourg, Member
  • Dr Andrey Martovoy, ABBL, Luxembourg, Member
  • Dr Beltran Borja Fiz Pontiveros, University of Luxembourg, Expert in an Advisory Capacity

Abstract

Artificial Intelligence (AI) systems are proliferating in our society due to their capacity to simulate human intelligence, behaviours, and processes. Their increasing use, especially in high-risk settings such as autonomous systems and healthcare, has been accompanied by growing concern about their societal impact. In recent years, vulnerabilities to algorithmic bias, adversarial attacks, and data breaches have prompted critical assessment of how AI systems can be designed to be inherently trustworthy.

This dissertation presents the key concepts of trustworthiness in AI systems, with a focus on identifying the challenges associated with designing, developing, and deploying collaborative AI. To this end, key elements of trustworthy AI are identified, culminating in a set of concise guidelines that developers can apply when building trustworthy AI systems. Further, this dissertation explores how techniques initially created solely for privacy, specifically federated learning, can be leveraged to build trust in machine-learning environments.

Federated learning is assessed for its implications for trustworthy-AI principles, with a particular focus on how privacy is established so that participants can collaborate without sharing private data. Its security is then examined by demonstrating the impact of targeted model poisoning attacks and by evaluating Byzantine-tolerant defence mechanisms that prevent and mitigate such attacks. Finally, the potential for federated learning to support compliance with regulatory requirements is assessed.
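To make these ideas concrete, the minimal sketch below (not taken from the dissertation) illustrates the setting the abstract describes: several clients train on private data, a server aggregates only their model updates, one malicious client submits a poisoned update, and a Byzantine-tolerant aggregator is used in place of plain averaging. The toy linear model, the single scaled attacker, and the choice of coordinate-wise median as the defence are all illustrative assumptions; the dissertation's actual attacks and defences may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data, targets, lr=0.1):
    """One local gradient step of linear regression on a client's private data."""
    grad = data.T @ (data @ weights - targets) / len(targets)
    return weights - lr * grad

def fedavg(updates):
    """Plain federated averaging: element-wise mean of client updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Byzantine-tolerant aggregation: the coordinate-wise median is robust
    to a minority of arbitrarily poisoned updates."""
    return np.median(updates, axis=0)

# Synthetic private datasets for 5 honest clients sharing one true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(50):
    # Honest clients train locally; only their updated weights leave the client.
    updates = [local_update(global_w, X, y) for X, y in clients]
    # Targeted model poisoning: one attacker submits a scaled update aimed at
    # an arbitrary target model instead of training honestly.
    updates.append(10.0 * np.array([-5.0, 5.0]))
    # Swap in fedavg(...) here to see the poisoned update dominate the model.
    global_w = coordinate_median(np.stack(updates))

print("aggregated model:", global_w, "— true model:", true_w)
```

Under these assumptions the median-based aggregation recovers a model close to the true one despite the attacker, whereas plain averaging is pulled far off target, which is the trade-off between naive and Byzantine-tolerant aggregation that the abstract refers to.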