Articles

AI Manifesto

  • Luxembourg Centre for Contemporary and Digital History (C2DH)
    29 July 2025
  • Category
    Insight
  • Topic
    Artificial intelligence, Digital tools, Methodology

AI working group report, synthesised by Frédéric Clavert and Sean Takats in December 2024. The AI working group is composed of members of the C²DH who are interested in AI. They discuss the major challenges posed by AI, particularly generative AI, to the practice of history.

We all already use (generative) AI in all kinds of ways. This is not an overstatement, and we are not the only ones. Of the 60 members of the C²DH who answered our survey on generative AI platforms some months ago, only 4 had not already used them, and even they had plans to do so. While text-oriented platforms like ChatGPT and Claude dominate public discourse around AI, our internal AI working group showcases the enormous range of AI experimentation and deployment within the C²DH: software coding, creative writing, contextualizing and analyzing archival sources, and much more, even generating folk embroidery patterns.

We must think about and discuss AI. Knowing how AI can be used or is already being used is not enough: our knowledge should be informed by usage, with critical reflection drawing on our own empirical experience. Generative AI today can already augment or even replace core operations at the heart of historians’ research practices, such as primary source transcription, quantitative analysis, proposal writing, and peer review, threatening to radically transform, if not render obsolete, much of the training that we ourselves received and that we continue to offer our own students. Understanding the benefits and risks of AI is essential for deciding how best it might be used in our own research, teaching, and public outreach efforts.

We need infrastructure. Our uses of generative AI platforms and of other AI-based tools are diverse and creative, but they are scattered. Most of us are directly using large AI platforms (mostly ChatGPT and Claude), which are efficient, but the way we access them is not: C²DH and UL more broadly have no institutional subscriptions beyond Copilot, which is not designed for research, resulting in a patchwork of paid individual subscriptions, project-based team accounts, and free accounts. Compounding the issue, data management varies across commercial platforms and by account type — for example, “free” accounts typically allow for all data involved to be used for further training — raising privacy, copyright and research reproducibility issues. For now, as the market is rapidly evolving, we should pursue a hybrid approach which preserves options for the future, including using at least one major commercial platform collectively, but also keeping an eye (and an infrastructure) focused on open-weight and open-source models.

We need humans. We must attract and retain people like data scientists, research engineers, and computer scientists. Different perspectives building on diverse expertise will bring to our team the innovative thinking that can address our needs to develop state-of-the-art tools and methodologies based on AI. To empower our colleagues to follow new paths in historical research, we must develop enough internal expertise to create customized AI-driven pipelines and software to support text analysis, image recognition, named entity recognition, predictive modeling, and network analysis, to name a few. To make best use of these new methods we should support an experimental research culture that promotes ethical use and benchmarkable outcomes, which may require new or adapted HR profiles.
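To make concrete one of the pipeline components mentioned above, here is a minimal, purely illustrative sketch of named entity recognition using a dictionary (gazetteer) lookup. The entities and labels below are hypothetical examples; a real pipeline would instead use a trained model (e.g. spaCy or a fine-tuned transformer), which is precisely the kind of work that requires the internal expertise described here.

```python
import re

# Hypothetical gazetteer: entity string -> label. A production pipeline
# would learn these from annotated data rather than hard-code them.
GAZETTEER = {
    "Luxembourg": "PLACE",
    "Esch-sur-Alzette": "PLACE",
    "Robert Schuman": "PERSON",
}

def tag_entities(text: str) -> list[tuple[str, str]]:
    """Return (entity, label) pairs found in the text via dictionary lookup."""
    found = []
    for name, label in GAZETTEER.items():
        if re.search(re.escape(name), text):
            found.append((name, label))
    return found

print(tag_entities("Robert Schuman was born near Luxembourg."))
```

Even a toy like this illustrates the design questions (entity inventories, ambiguity, historical spelling variants) that a data scientist or research engineer would help us answer at scale.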

We must learn. Using AI-based platforms is easy… at first glance. On closer inspection it’s anything but. Effective use of LLM chat interfaces, for instance, demands prompt engineering. For more intensive usage, it’s necessary to work with those platforms’ APIs, which in turn requires familiarity if not comfort with basic programming. Getting into fine-tuning, retrieval-augmented generation, and other (present and future) techniques to tailor a model requires knowing how to use platforms like Hugging Face or Ollama, coding skills, and (at least at this stage) frequent training, as those technologies are rapidly evolving. We should be prepared to embrace failure as an integral part of learning, given the experimental nature of today’s state of the art. Beyond its impact on our research practices, our learning will inform our training of bachelor, master, and doctoral students.
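To give a sense of what retrieval-augmented generation involves, here is a deliberately simplified sketch. All documents and names are hypothetical, and retrieval is done by keyword overlap so the example stays self-contained; a real setup would use vector embeddings for retrieval and send the assembled prompt to a model API (e.g. via Hugging Face or Ollama).

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt: retrieved sources first, then the question."""
    sources = "\n".join(f"- {doc}" for doc in context)
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

# Hypothetical mini-corpus of historical notes.
corpus = [
    "The 1921 census records list 128 foundry workers in Esch-sur-Alzette.",
    "Folk embroidery patterns from the Moselle region use eight-point stars.",
    "Steel production in Luxembourg peaked in the interwar period.",
]

query = "How many foundry workers appear in the 1921 census?"
prompt = build_prompt(query, retrieve("foundry workers 1921 census", corpus))
print(prompt)
```

The point of the sketch is the workflow, not the toy retriever: grounding a model’s answer in retrieved sources is what makes the technique attractive for working with archival material, and doing it well requires exactly the coding skills and platform familiarity discussed above.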

We need partners. By collaborating we can exploit our comparative advantage as a humanities research center and exchange good practices, tips, and tricks. We should leverage UL’s internal collaboration with other ICs, with the IAS or the newly founded Institute of Digital Ethics, and with faculties and their departments, because the challenges and opportunities confronting researchers, whatever their discipline, around AI are enormous and diverse. We should also identify partners in Luxembourg, the Greater Region and beyond. We need collaboration for research projects with AI, and we need collaboration for research projects on AI (including ethical and environmental considerations).

Author(s)

  • Assist. Prof Frédéric CLAVERT

    Assistant professor / Senior research scientist

  • Prof Sean TAKATS

    Full professor / Chief scientist 1 in Digital History