Articles

Artificial intelligence and history: an introduction 

  • Luxembourg Centre for Contemporary and Digital History (C2DH)
    14 January 2026
  • Category
    Explained
  • Topic
    Artificial intelligence, Digital hermeneutics, Digital tools

The use of AI for historical research can allow for more insightful access to sources, but many questions remain open about using AI to help create a historical narrative. C²DH assistant professor Frédéric Clavert and postdoctoral researcher Finola Finn share their thoughts on these challenges and their implications, both for current historians and those of the future.

A Microsoft Research study published in July 2025 listed historians among the occupations at greatest risk of having their jobs jeopardised by AI. Frédéric Clavert has a strong opinion about this possibility: “It’s bullshit.” For now, he adds, it’s absolutely not true, but such discourse can create social or political risks. “If a chatbot is able to generate texts about history that aren’t good but good enough to fit the ideological point of view of one party, then there’s a risk,” he adds.

“If a chatbot is able to generate texts about history that aren’t good but good enough to fit the ideological point of view of one party, then there’s a risk.”

Frédéric Clavert

Both Clavert and Finola Finn are interested in the way in which AI will impact the work of historians. Finn, a cultural historian who joined the C²DH in 2024, previously worked as the co-PI on the Machine Discovery and Creation project at Leibniz University Hannover, where she collaborated with philosophers to investigate the epistemological and ethical implications of using AI in historical and creative practices. The findings of this work, which she is continuing at C²DH, are available in several publications, including AI and Ethics, Philosophy of Science, and soon Cambridge Forum on AI: Culture and Society.

Clavert’s research is increasingly focused on AI as a producer of primary sources, and he has been analysing the intersection of collective memory and AI via chatbots. The managing editor of the Journal of Digital History, he also co-founded the C²DH AI Working Group with his colleague Sean Takats, a full professor in digital history, over a year ago. This group uses a bottom-up approach to address AI issues and questions. Through a survey launched earlier this year, the group discovered that over 90% of the 60 respondents were using AI; only four individuals stated they hadn’t used any AI platforms. There’s a wide range of AI experimentation within the C²DH, so the group has met regularly to keep on top of the discussion and has written an AI Manifesto.

Implications for historians

With machine learning and generative AI, historians can apprehend sources on a more global scale, even if the question of whether results are reproducible remains unanswered. There are also open questions about how to deal with probabilistic systems. “The important thing is documentation. If historians are going to use AI in any way, then it’s really important that we, as a community, come up with standardised ways of documenting what is done,” Finn says, adding that transparency in one’s findings is a foundational aspect of working in history, even when AI isn’t used. She also points out that clarity is key, both in the design and use of AI. If chatbots are developed to help users access archives, for example, it’s important to be clear on whether the chatbot is intended for navigating an archive or analysing its contents.

There are also risks around the use of AI for writing, including questions of authorship and even potential plagiarism. Additionally, both Clavert and Finn are curious about whether AI will eventually standardise the way everyone writes, although they believe it’s too soon to tell. Finn studies so-called conceptual disruptions, looking into how new technologies, including AI, put key concepts we use to understand scientific and creative practices under pressure. In her research, she has worked on a framework for attributing credit for images made using generative AI (later expanding the framework to texts and other outputs) and highlights the many agents involved along the way, from the user with a prompt to the model creating the output, the developers, plus all the individuals who created the training data. “It’s very interesting seeing in the different contexts how the understanding of ‘creator’ shifts,” she adds.

The researchers also note that most of the AI being used tends to be Western-centric, and minority languages aren’t yet well represented, although there is some experimentation taking place to try to make AI more inclusive and linguistically diverse.

Another potential impact of AI will be on public engagement, with AI-generated content providing “instant history”, which the researchers say could shift the public’s relationship with history and possibly their understanding of the complexity of how it is written. As Finn notes, “The historian’s process involves many steps. It’s very multi-layered, and there’s very much a human element there in creating a narrative about the past.”

“The historian’s process involves many steps. It’s very multi-layered, and there’s very much a human element there in creating a narrative about the past.”

Finola Finn

Future historians

Clavert describes the flipped classroom model as one way of working on critical thinking and problem-solving during classroom hours. While students may be using AI to varying degrees, “we have to find the right pedagogical tools because the fact is that you can get pretty good results with a chatbot if you know how to write the question. But to write the question, you need to learn history in a good way. We need to teach students how to use AI, but it must be very progressive.”

Both Clavert and Finn wonder how historians of the future will look back at current times, with so much AI-generated content, and are curious how they’ll apply source criticism to those materials and evaluate their biases. “I think our traditional methods of source criticism are quite challenged there,” Finn says.

Clavert agrees. “In 20 years, facing a primary source created today, it’s going to be a challenge to determine what’s the human or artificial part, what part was automated and what wasn’t,” he says. “I don’t know if we’re facing something that will change all our methods, or something we can face with simple adaptations of our methods.”

Author(s)

  • Assist. Prof Frédéric CLAVERT
    Assistant professor / Senior research scientist

  • Dr. Finola FINN
    Postdoctoral researcher