About
Artificial Intelligence is playing an increasingly important role in our lives: from recommending products and websites to us, to predicting how we will vote in elections, to driving our vehicles. It is also being used in ethically and socially important domains such as healthcare, education and criminal justice. AI has the potential to greatly increase our knowledge by helping us make new scientific discoveries, prove new theorems and spot patterns hidden in data. But it also poses a potential threat to our knowledge and reasoning: by ‘nudging’ us towards some kinds of information and away from others, creating ‘internet bubbles’; by reinforcing biases present in ‘Big Data’; by helping to spread and target political propaganda; and by creating ‘deep-fake’ images and videos as well as increasingly sophisticated and human-like texts and conversations.

The fundamental aim of this project is to investigate how we can rationally respond to the outputs of artificial intelligence systems and what is required to understand and explain AI systems. This topic requires an interdisciplinary approach, drawing on computer science to investigate the details of recent AI advances and on philosophy to investigate the nature of rationality, understanding and explanation.

The issues here are especially pressing since many of the most powerful recent advances in AI have been achieved by training ‘Deep Neural Networks’ on vast amounts of data using machine learning techniques. This creates the unusual situation where even the designers and creators of these AI systems admit that they do not fully understand their internal processes or how the systems will process new data. It is therefore vital that we investigate how we might produce explanations of the behaviour of these systems that humans can actually use and understand. It is equally important to investigate when and how it can be rational for human consumers to trust the outputs of systems trained via machine learning, despite the fact that we lack full knowledge of their internal functioning or of the data used to train them.
Organisation and Partners
- Department of Humanities
- Faculty of Humanities, Education and Social Sciences (FHSE)
- Institute of Philosophy
Project team
- Thomas RALEIGH, PI
- Leon VAN DER TORRE, PI
- Aleks KNOKS, Project member
- Johan LARGO, Project member
- Amro NAJJAR, Project member, LIST (external)