‟ If my teenage self could see what I’m doing right now, he would be so surprised! For a long time, I thought I was going to be an accountant. But after talking to a family member about the challenges he was facing with his software company, I got curious.”
This is Alan’s story. From switching career paths to diving deep into the complexities of Artificial Intelligence (AI), his journey shows how unexpected turns can lead to fascinating destinations. Originally from Merida in Mexico, he now works on making AI systems more transparent and understandable at SnT. In a world where AI increasingly influences our daily decisions, his research couldn’t be more timely. Here’s what he shared about his work, his inspirations, and why explainable AI matters for everyone.

Relive the conversation—transcript below!
Tell us about yourself and what brought you to Luxembourg.
I have a bachelor of engineering in informatics and digital businesses from Universidad Anahuac in Mexico, and an Erasmus Mundus joint master's degree in green networks and cloud computing. I arrived in Luxembourg in February 2025 to join SnT as a doctoral researcher.
What inspired you to pursue a career in tech research?
It was during my master's thesis. That was the first time I had an actual deep, real, hands-on experience in research, and I found it fascinating. The technical challenges that push your cognitive skills to new heights, solving complex problems, that heart-racing moment when you finally get a breakthrough – the code works, the results align – it was truly rewarding. But above all, the idea that I was contributing to the advancement of my field made it meaningful. I got hooked.
‟ That heart-racing moment when you finally get a breakthrough, the code works, the results align – it was truly rewarding.”
What are you exploring in your research today?
I’m working on making artificial intelligence systems more understandable. More specifically: counterfactual explanations for multivariate time series. A counterfactual explanation answers the question, “What minimal changes in the input would have led to a different model prediction?”
Right now, AI models can be like black boxes – they make decisions, but we don’t always know why. My research is about creating tools that help explain these decisions in a practical way. I’m doing this work in collaboration with a manufacturing company called Ceratizit. This collaboration makes my research very interesting, because I can jump from theory to a practical business case in an instant. So I get to see how these ideas can help solve real-world problems in industry.
‟ “Challenging” is the first word that comes to mind when I think about my research.”
Can you give an everyday example of how this works?
So for example, if you go to the bank asking for a loan and the bank denies it, your first question would be “why?” The bank could simply say “because you don’t earn enough,” but that’s not a sufficient explanation. A good counterfactual explanation would be “if you were earning 100 euros more per month, you would get the loan.” This gives you clear information about what needs to change. That’s the kind of practical explanation I’m trying to get AI systems to provide – not just what happened, but what could be done differently.
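To give a sense of what such an explanation looks like in practice, here is a minimal Python sketch of the loan example. It is purely illustrative and not Alan’s actual method: the “model” is a toy income-threshold rule standing in for a trained system, and the function names (approves_loan, income_counterfactual) and the numbers are invented for this example.

```python
# A minimal, hypothetical sketch of the loan counterfactual described above.
# The "model" is a toy income-threshold rule standing in for a trained model;
# all names and numbers here are invented for illustration.

def approves_loan(monthly_income: float) -> bool:
    """Toy stand-in for a black-box model: approve above a fixed income threshold."""
    return monthly_income >= 2500


def income_counterfactual(monthly_income: float, step: float = 10.0, max_extra: float = 5000.0):
    """Search for the smallest extra monthly income that flips a rejection into an approval."""
    if approves_loan(monthly_income):
        return 0.0  # already approved: no change needed
    extra = step
    while extra <= max_extra:
        if approves_loan(monthly_income + extra):
            return extra  # the minimal change found by the search
        extra += step
    return None  # no counterfactual found within the search budget


# "If you were earning 100 euros more per month, you would get the loan."
print(income_counterfactual(2400))  # -> 100.0
```

In Alan’s research the same idea is applied to multivariate time series, where the “input” is a set of signals over time rather than a single number, but the question is identical: what is the smallest change that would have led the model to a different prediction?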
Why is this important today?
Well, Artificial Intelligence is now almost everywhere. But AI models are so complex that sometimes not even the developers or data scientists fully understand how the model came to its conclusion. Explainable AI helps developers debug their models better, and it helps people understand if they should trust the AI’s advice. If you’re watching Netflix and wondering why it’s suggesting a particular movie, it might not be so important. But in scenarios like healthcare, financial decisions, or safety systems, the stakes are much higher. That’s why we need to understand why these AI models are behaving the way they are.
Do you have a role model?
My role model is my dad. His resilience, his dedication, and the way he handles problems with wisdom and integrity have always inspired me. He shaped my work ethic and my values, and I really strive to follow his example in both my personal and professional life.
What would you tell someone who thinks they don’t fit the mould of a tech researcher?
I would say that there is no such thing as a mould. We all come in different sizes, shapes, and backgrounds, and it’s precisely in such diverse environments that innovation thrives.
‟ If you feel that you’re different, I’ll tell you to bring your uniqueness to the table because your background will never be your weakness. It will always be your strength.”
How has the SnT community supported your research journey?
Moving to a new country for research can be pretty overwhelming, but the support I found here made such a difference. I was lucky because I had already worked with some colleagues during my master's thesis – they had this amazing “you can do it” attitude that really helped build my confidence. And when I joined full-time, everyone from HR to my supervisor to the whole team was incredibly welcoming and attentive. Having this kind of supportive community doesn’t just make the transition easier – it makes you believe you can actually do this work and contribute something meaningful.
Your research explained to a five-year-old in one word?
Exciting.
Supported by the Luxembourg National Research Fund