Research at the Institute of Philosophy
At the Institute of Philosophy we pursue research into Modern and Contemporary Philosophy, with particular research strengths in Kant & German Idealism, Epistemology, Philosophy of Mind, Philosophy & Ethics of Artificial Intelligence, and Philosophy of Normativity. We typically have a number of major research projects running at any one time; please see below for more details.
Some of our projects
-
Start date
01/07/2019
-
Duration in months
48
-
Funding
FNR
-
Project Team
Frank Hofmann (PI); Yannick Kohl (PhD Researcher)
-
Abstract
The project’s main goal is to critically assess the powers and limitations of rational reflection, where ‘rational reflection’ is understood as the human capacity to improve one’s view rationally with the help of a reflective grasp (self-knowledge) of one’s own attitudes, reasons, and cognitive processing. The focus is on the views we endorse ‘on reflection’, as the saying goes, rather than on non-reflective perceptual beliefs or intuitive beliefs that we form spontaneously. The relevant self-knowledge involved in rational reflection can, but need not, be arrived at by introspection; it can also stem from other sources, most importantly testimony. Rational reflection is thus not reducible to mere introspection. There are specific challenges to understanding how rational reflection can work, such as questions surrounding its role in the proper functioning of defeaters, in critical reasoning, and in mental self-regulation. Contemporary empirical research on human meta-cognition also needs to be considered. In general, one should be careful not to fall prey to a naïve, overly optimistic assessment of our reflective capacities: rational reflection is fallible, just as first-order cognition is. Yet there is good reason to think that rational reflection enables specific and significant achievements. The overall aim is to develop an account of rational reflection that is conceptually and empirically up to date and adequate. The project’s orientation is primarily foundational: furthering our understanding of rational reflection, arguably a particularly complex aspect of the human mind and one that characterizes humans as persons, is the overall goal.
-
Start date
01/09/2023
-
Duration in months
48
-
Funding
FNR
-
Project Team
Thomas Raleigh (PI); Leon Van Der Torre (Co-PI); Aleks Knoks (Post-Doctoral Researcher)
-
Partners
Luxembourg Institute for Technology (LIST)
-
Abstract
Artificial Intelligence is playing an increasingly important role in our lives: from recommending products and websites to us, to predicting how we will vote in elections, to driving our vehicles. It is also being used in ethically and socially important domains such as healthcare, education and criminal justice. AI has the potential to greatly increase our knowledge by helping us make new scientific discoveries, prove new theorems and spot patterns hidden in data. But it also poses a potential threat to our knowledge and reasoning: by ‘nudging’ us towards some kinds of information and away from others, creating ‘internet bubbles’; by reinforcing biases present in ‘Big Data’; by helping to spread and target political propaganda; and by creating ‘deep-fake’ images and videos as well as increasingly sophisticated and human-like texts and conversations. The fundamental aim of this project is to investigate how we can rationally respond to the outputs of artificial intelligence systems and what is required to understand and explain AI systems. This topic requires an interdisciplinary approach, drawing both on computer science, to investigate the details of recent AI advances, and on philosophy, to investigate the nature of rationality, understanding and explanation. The issues are especially pressing because many of the most powerful recent advances in AI have been achieved by training ‘Deep Neural Networks’ on vast amounts of data using machine-learning techniques. This creates the unusual situation in which even the designers and creators of these AI systems admit that they do not fully understand the systems’ internal processes or how the systems will process new data. It is vital, then, that we investigate how we might produce explanations of the behaviour of these systems that humans can actually use and understand.
It is also vitally important to investigate when and how it can be rational for human consumers to trust the outputs of systems trained via machine learning, given that we lack full knowledge of their internal functioning and of the data used to train them. One of our main hopes for this project is to develop new ways of measuring how explainable or how trustworthy an AI system is, measures that could eventually be implemented by computers.
-
Start date
01/01/2014
-
Duration in months
72
-
Funding
Internal
-
Project Team
Dietmar Heidemann (PI); Sabrina Bauer (Post-Doctoral Researcher)
-
Partners
Berlin-Brandenburgische Akademie der Wissenschaften
-
Abstract
As part of the long-term project “Neuedition, Revision und Abschluss der Werke Immanuel Kants”, supervised by the Berlin-Brandenburgische Akademie der Wissenschaften, this editorial project prepares a new critical edition of the Critique of Pure Reason, one of the key texts of philosophy. The project consists of: 1) producing a reader-friendly, scholarly accurate text of the first and second editions of the Critique of Pure Reason (A: 1781; B: 1787); 2) producing a page-by-page and line-by-line critical apparatus (documenting relevant textual changes, corrections, additions, etc.); 3) identifying, page by page and line by line, the sources Kant explicitly or tacitly used in writing the Critique of Pure Reason (this information, provided as annotations, takes the form of documentation rather than commentary); and 4) providing, as standard, additional information on the evolution of the text of the Critique of Pure Reason and on its editorial history and development. The outcome will be a text that serves as the new standard edition of the Critique of Pure Reason (A, B).