
Thinking out loud: AI can help restore communication for people with disabilities

  • Faculty of Science, Technology and Medicine (FSTM)
    26 March 2026
  • Category
    Research
  • Topic
    Artificial Intelligence

Imagine being able to communicate just by thinking, without needing to make a sound. 

Researchers Dr. Saravanakumar Duraisamy, Dr. Mateusz Dubiel and Prof. Luis A. Leiva from the University of Luxembourg, in collaboration with Dr. Maurice Rekrut (Saarland University, DFKI), have taken a big step towards making this a reality.

Making silent speech BCIs easier and more effective

Have you heard of Brain-Computer Interfaces (BCIs) before? In essence, they are systems that establish a direct communication pathway between the brain and an external device. They do this by capturing, analysing, and translating brain signals into commands that can control devices such as computers, robotic limbs, or other technologies. This technology holds immense potential for helping people who have difficulty speaking.
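To make that capture-analyse-translate loop concrete, here is a minimal Python sketch. Everything in it (the function names, the simulated signals, the toy classifier) is an illustrative stand-in, not the researchers' actual hardware or software.

```python
import numpy as np

def capture_eeg(n_channels=8, n_samples=256):
    """Capture: stand-in for an EEG amplifier read, one window of raw signals."""
    return np.random.randn(n_channels, n_samples)

def extract_features(eeg_window):
    """Analyse: reduce the raw window to a feature vector (here, per-channel power)."""
    return np.mean(eeg_window ** 2, axis=1)

def translate(features, classifier):
    """Translate: map the feature vector to a discrete command for a device."""
    return classifier(features)

# Placeholder classifier: pick a 'command' from the strongest channel.
# A real BCI would use a model trained on the user's brain signals.
commands = ["left", "right", "select"]
classifier = lambda f: commands[int(np.argmax(f)) % len(commands)]

window = capture_eeg()
print(translate(extract_features(window), classifier))
```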

The great news is that the team’s recent work has found a way to make BCIs for silent speech much easier to use and more effective. The traditional way to train BCIs for imagined speech requires many long and tiring sessions where users silently repeat words. This can be a slow and frustrating process. But University of Luxembourg researchers have come up with a clever solution: they trained an AI system on brain signals from when people were speaking aloud and then used that knowledge to understand their silent thoughts.

Training the brain’s translator

BCI technology has been used before to recognise what people were silently thinking, but for the first time the system's performance is within a usable range of accuracy. The team used advanced techniques to analyse brain waves recorded with electroencephalography (EEG), focusing on two important features: the Hilbert envelope and the temporal fine structure. These features were fed into a custom AI model, a bidirectional long short-term memory network (BiLSTM), which learned to detect the patterns in brain activity related to different words.
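For readers curious what those two features look like in practice, here is a small, hypothetical sketch using standard scientific Python tools (scipy and PyTorch). The Hilbert transform splits each EEG channel into a slowly varying envelope and a rapidly oscillating fine structure, which then feed a BiLSTM word classifier. The model name (WordDecoder), layer sizes, channel counts, and data are invented for illustration and will differ from the paper's actual architecture and preprocessing.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import hilbert

def hilbert_features(eeg, axis=-1):
    """Split each EEG channel into Hilbert envelope and temporal fine structure.

    The analytic signal a(t) = x(t) + j*H{x(t)} yields the envelope |a(t)|
    and the fine structure cos(arg a(t)), so x(t) ~= envelope * fine_structure.
    """
    analytic = hilbert(eeg, axis=axis)
    envelope = np.abs(analytic)
    fine_structure = np.cos(np.unwrap(np.angle(analytic), axis=axis))
    return envelope, fine_structure

class WordDecoder(nn.Module):
    """Illustrative BiLSTM word classifier over per-timestep EEG features."""
    def __init__(self, n_features, n_words, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_words)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the final timestep

# Toy usage: 2 trials, 64 EEG channels, 512 samples each.
eeg = np.random.randn(2, 64, 512)
env, tfs = hilbert_features(eeg)
feats = np.concatenate([env, tfs], axis=1)                     # (2, 128, 512)
x = torch.tensor(feats, dtype=torch.float32).transpose(1, 2)   # (batch, time, features)
logits = WordDecoder(n_features=128, n_words=10)(x)
print(logits.shape)  # torch.Size([2, 10])
```

Because the BiLSTM reads the feature sequence in both time directions, it can pick up patterns that span an entire word rather than a single instant.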

The results were impressive. The AI could understand overt speech with 86.44% accuracy and, using the knowledge it gained, could also understand imagined speech with 79.82% accuracy. This performance marks a significant improvement over previous attempts in the field.

By translating brain activity into speech, this technology turns silent thoughts into a usable communication signal, with no voice required.

Dr. Saravanakumar Duraisamy

Postdoctoral researcher

Faster training and new communication possibilities

This research is incredibly important because it reduces the need for lengthy training sessions for imagined speech BCIs. This makes the technology more practical and less burdensome for users, opening up new possibilities for developing communication tools for people with speech disabilities and even new ways for humans to interact with technology. 

For instance, this research offers new hope to people with locked-in syndrome, a neurological condition in which a person is fully conscious and aware but cannot move or communicate verbally due to near-complete paralysis of the voluntary muscles. The core of this breakthrough lies in transfer learning, which allows an AI model trained on a wide database of other people’s brain patterns to be “pre-tuned” for a new patient who cannot speak. Even after an accident, a patient can use a generalised neural decoder that already understands the fundamental relationship between brainwaves and language. It requires only minimal “silent” calibration rather than hours of vocal recording, which could be impossible or very taxing for someone with a neurological condition. The technology is therefore designed to be accessible to people with sudden-onset paralysis, as the AI learns the “language of the brain” from existing datasets rather than relying solely on the patient’s own past recordings.
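The two-stage recipe described above can be sketched as follows. This is a hedged, self-contained toy in PyTorch, not the study's pipeline: the Decoder model, the data shapes, and the choice to freeze the recurrent layers during calibration are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Toy stand-in for a generalised neural decoder (not the study's model)."""
    def __init__(self, n_features=128, n_words=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_words)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

def train(model, x, y, params, epochs=10):
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = Decoder()

# Stage 1: pretrain on a large overt-speech corpus from many speakers
# (simulated here with random tensors).
x_overt, y_overt = torch.randn(200, 50, 128), torch.randint(0, 10, (200,))
train(model, x_overt, y_overt, model.parameters())

# Stage 2: freeze the shared representation and calibrate only the output
# head on a handful of the new user's silent (imagined-speech) trials.
for p in model.lstm.parameters():
    p.requires_grad = False
x_silent, y_silent = torch.randn(20, 50, 128), torch.randint(0, 10, (20,))
train(model, x_silent, y_silent, model.head.parameters())
```

Freezing the shared layers is one common way to keep the calibration data requirement small; the researchers may well adapt the model differently, but the principle, a generic pretrained decoder plus a brief per-user tune-up, is the same.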

So what are the future challenges for this research? Most importantly, decoding the brain’s highly personal response to words in EEG signals. Future research should explore why one person’s brain activity for a word looks different from another’s, and even how an individual’s own brain response can change from day to day. A specific project to tackle this challenge is already in the works. Ultimately, this deeper understanding will pave the way for brain-computer interfaces that are more adaptable and user-friendly, with the potential to make a real difference in people’s lives.

This study has been supported by the FNR and the European Union’s Horizon 2020 research and innovation programme under grant number CHIST-ERA-20-BCI-001 (BANANA), as well as the European Innovation Council under grant number 101071147 (SYMBIOTIK).

This work was presented at ICASSP 2025 (the IEEE International Conference on Acoustics, Speech and Signal Processing, a flagship venue in the field), held on 6-11 April 2025. Building on this momentum, related research on imagined speech decoding has also been presented at INTERSPEECH 2025, and further advances are set to be showcased at ICASSP 2026, including work on “Graph-Biased EEG Transformers for Silent Speech Decoding”.