News

Powering discovery: supercomputing at Uni.lu

  • Faculty of Science, Technology and Medicine (FSTM)
    25 March 2026
  • Category
    Research
  • Topic
    High Performance Computing (HPC)

Within the Maison du Savoir on Belval campus, an intense heat washes over any visitor to IRIS, while AION subtly hums, emitting a soft orange glow. These are not mythological figures, but the high-performance computers of the University of Luxembourg, constantly running and crunching numbers for their users. They are indispensable tools that scientists from all faculties use to accelerate their research.

Dr. Julien Schleich and his team maintain both systems and assist researchers in maximising their capabilities. In recent years, the computational resources at the University have grown substantially and will continue to do so, underscoring the increasing importance of digital discovery.

Not your average machine

High Performance Computing means using powerful, specialised computers that work together to solve extremely demanding calculations. AION and IRIS are no exception: while each has its own unique name, both are collections of computers that can run calculations together or separately.

How are they different from regular computers, like your laptop or smartphone? It comes down to their specialised components and processing power. HPC systems are designed to handle large datasets, complex simulations and calculations that would take ordinary computers years to complete, or that are downright impossible for them to perform at all.

The brains behind it all

At the heart of every computer lies a tiny but mighty component that does all the thinking: the Central Processing Unit (CPU). On your computer, the CPU handles most tasks that you want to perform; it’s basically the brain of the computer. Its two most important properties are its frequency and number of cores.

The frequency, or clock speed, determines how fast a single task is processed: it defines how many operations the CPU can perform per second. The number of cores, on the other hand, describes how many workers are available in the CPU. Each core can handle its own tasks independently.
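As a minimal sketch of these two properties, Python's standard library can report how many cores a machine has, and a timed loop hints at how quickly a single core works through one task (the timing figure is illustrative and varies between machines):

```python
# Inspect the two CPU properties described above: core count and
# single-core speed. Illustrative only; numbers differ per machine.
import os
import time

cores = os.cpu_count() or 1      # number of independent workers in the CPU
print(f"This machine has {cores} cores")

start = time.perf_counter()
total = sum(range(10_000_000))   # one task, handled by a single core
elapsed = time.perf_counter() - start
print(f"One core summed 10 million numbers in {elapsed:.2f} s")
```

Running the same loop on a higher-frequency core finishes sooner; adding cores does not help here, because this single task has not been split up yet.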

Team effort

Multiple cores working together can be helpful when running different programmes simultaneously or when a single large task can be broken down into multiple smaller tasks. In that case, each core contributes to the overall result, significantly accelerating its completion. 

Think of it like a grocery store with a long queue at the checkout. One cashier can only handle so many customers at a time. The obvious solution is to open more checkouts, so customers are served simultaneously. That’s exactly how parallel computing works.

Now, take that idea to the level of a supercomputer like AION. While normal computers might have 4 to 8 cores (sometimes up to 16), AION’s CPUs pack up to 64 cores each. But there’s more: each compute node, a self-contained minicomputer inside the supercomputer, has two CPUs, giving it 128 cores. And when dozens or even hundreds of nodes work together, a single programme can use thousands of cores at once, finishing in hours or days problems that would take normal computers years.
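The checkout analogy can be sketched in a few lines of code. This is a toy illustration using Python's standard multiprocessing module on one machine, not how jobs actually run on AION or IRIS (real HPC programmes typically use frameworks such as MPI to spread work across many nodes): one large task, summing squares, is split into chunks and each worker core handles its own chunk.

```python
# Toy parallel computing sketch: split one big task into chunks,
# give each worker process (each "cashier") its own chunk, then
# combine the partial results.
from multiprocessing import Pool

def sum_squares(bounds):
    """Sum i*i for i in [start, stop) -- one worker's share of the task."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

def parallel_sum_squares(n, workers=4):
    # Split the range [0, n) into one chunk per worker.
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with Pool(workers) as pool:
        # Each core computes its chunk independently; results are combined.
        return sum(pool.map(sum_squares, chunks))

if __name__ == "__main__":
    n = 1_000_000
    print(parallel_sum_squares(n))
```

The result is identical to computing the sum on one core; the point is that each chunk can be processed at the same time, just as opening more checkouts serves customers simultaneously.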

To make this work, all cores must communicate quickly with each other; otherwise, there is no benefit to using multiple cores. Even if a task is split across many processors, slow communication between them can become a bottleneck, limiting overall performance.

This is why High Performance Computing systems are designed with extremely fast connections between CPUs and nodes, allowing them to work together efficiently as if they were a single, large machine.

Down memory lane

Supercomputers like IRIS and AION don’t just process data quickly. They can also handle an extraordinary amount of it. Part of that comes from their Random Access Memory (RAM), the short-term memory that lets processors access and manipulate data in real time. In scientific computing, this is crucial, as some simulations request large amounts of temporary memory to hold complex grids of numbers or intermediate results, for example. Some nodes on IRIS even have multiple terabytes of RAM, more than the entire storage space of most personal computers.

Then, there’s long-term storage, which acts more like a vast digital library. It’s where large amounts of data are kept that don’t have to be accessed rapidly during calculations. Combined, IRIS and AION can store multiple petabytes of information. That’s enough to hold the data from thousands of everyday laptops.
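A quick back-of-the-envelope calculation shows where the laptop comparison comes from. The 500 GB drive size is an assumption for a typical everyday laptop, used purely for illustration:

```python
# Rough check of the storage comparison above, assuming a typical
# laptop holds a 500 GB drive (decimal units: 1 PB = 1,000,000 GB).
PETABYTE_GB = 1_000_000
laptop_gb = 500                  # assumed everyday laptop drive size

laptops_per_petabyte = PETABYTE_GB // laptop_gb
print(laptops_per_petabyte)      # 2000 laptops per petabyte
```

At roughly 2,000 laptops per petabyte, multiple petabytes of combined storage indeed correspond to thousands of everyday laptops.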

This capacity allows researchers at the University of Luxembourg to work with larger, more complex datasets and store their many results.

The right tool for the job

High Performance Computing is designed to handle extremely demanding computational tasks. Using such powerful machines for everyday applications would be a waste of resources. In research, however, it becomes indispensable, helping scientists tackle problems, run complex simulations, and explore questions that ordinary computers can’t manage.