You studied Artificial Intelligence at Radboud University, where you later discovered the Cybersecurity & AI specialization. What appealed to you about this direction, and how did that eventually lead you to the German Aerospace Center?
After taking courses in AI security during my Bachelor’s, I knew I wanted to pursue that direction for my Master’s. I discovered that Radboud University offers a specialization in Cybersecurity and AI — a rare combination, and exactly what I was looking for.
My connection with the German Aerospace Center (DLR) followed naturally. They have an Institute for AI Safety and Security, and internships that bridge both fields are hard to find. When I came across DLR, I was immediately enthusiastic — not least because I’ve always been fascinated by space as a hobby.
What makes DLR an exciting place to work?
They work on a huge variety of projects that all have real-world impact, even though they’re embedded in research. I love that combination. I’m very interested in academia, but I think research can feel too abstract if there’s no application. At DLR, you always see why your work matters.
For your internship, you developed a new clean-label image-scaling attack on traffic-sign classification models. In simple terms, what makes your attack unique?
I developed an attack that is almost invisible to the human eye. During training, I subtly altered images at pixel level so that after scaling they change — but before scaling, nothing looks unusual. What makes this attack unique is that it also works in the real world. Placing one specific sticker on a traffic sign causes the classifier to misidentify it, but only that exact sticker works. Any other sticker — different colour, shape or placement — leaves the model functioning normally. This makes the vulnerability extremely difficult to discover.
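The core trick behind image-scaling attacks in general can be illustrated with a small sketch. This is not the clean-label method from the internship, only a generic demonstration of the principle: a downscaler samples just a fraction of the pixels, so an attacker who overwrites exactly those pixels can make the full-size image and its downscaled version show different content. The function names and the nearest-neighbour scaler are illustrative assumptions.

```python
import numpy as np

def embed_scaling_payload(source, target, factor):
    """Hide `target` inside `source` so that nearest-neighbour
    downscaling by `factor` reveals it.

    Only the pixels the scaler actually samples (every `factor`-th
    row and column) are overwritten; all other pixels keep their
    original values, so the full-resolution image barely changes.
    """
    attacked = source.copy()
    attacked[::factor, ::factor] = target
    return attacked

def nearest_downscale(img, factor):
    """Nearest-neighbour downscale that samples every `factor`-th pixel."""
    return img[::factor, ::factor]

# Demo: a 256x256 "benign" image that downscales to a chosen 32x32 payload.
rng = np.random.default_rng(0)
source = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
target = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

attacked = embed_scaling_payload(source, target, factor=8)

# Before scaling: at most 1 in 64 pixels differs from the original.
changed_fraction = float(np.mean(attacked != source))
# After scaling: the model sees exactly the payload.
assert np.array_equal(nearest_downscale(attacked, 8), target)
```

Real preprocessing pipelines use bilinear or bicubic scaling rather than nearest-neighbour, which makes the optimization harder but leaves the underlying mismatch between "what the human sees" and "what the model sees" intact.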
Afterwards, you shifted to automatic speech recognition (ASR) for maritime radio communication. Why?
It was something entirely new — and that was exciting. Image classification has been studied extensively, but ASR systems for maritime communication hardly at all. I had a lot of creative freedom because no one had explored this attack surface before.
Maritime communication also has very specific constraints: messages must remain unencrypted in emergency situations. That makes them more vulnerable, and much harder to protect. I wanted to understand how to expose those vulnerabilities and, importantly, how to make the system more robust.
You were the first to describe a universal black-box attack on ASR systems, and you also proposed a new semantic defense. How does it work?
A universal attack uses one noise vector that can be applied to any audio sample, which often scrambles the transcript into non-English output. Instead of using heavy frequency-based defenses, I found that a lightweight classifier distinguishing English-like language from non-language was highly effective. It’s simple, efficient, and surprisingly robust — making it a very promising defense.
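The idea of flagging non-language output can be sketched with a toy plausibility check. This is only an illustration of the defense's intuition, not the trained classifier from the research; the word list and threshold are invented for the example.

```python
# Toy sketch: flag ASR transcripts that do not look like English.
# A universal attack that scrambles output into non-language tends to
# produce tokens that match almost no common English words.
COMMON_WORDS = {
    "the", "a", "at", "to", "is", "this", "we", "are", "our", "over",
    "vessel", "ship", "position", "requesting", "assistance", "mayday",
    "course", "speed", "port", "starboard",
}

def looks_like_english(transcript, threshold=0.5):
    """Return True if enough tokens appear in the common-word list."""
    tokens = transcript.lower().split()
    if not tokens:
        return False
    known = sum(t.strip(".,!?") in COMMON_WORDS for t in tokens)
    return known / len(tokens) >= threshold

clean = "mayday this is vessel at position"
scrambled = "zzkq vrpt hxl gnaw bzzt"
```

A production defense would replace the word list with a lightweight learned classifier over character or token statistics, but the pipeline stays the same: accept transcripts that look like language, reject or re-check the rest.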
Are your findings already being implemented?
I’m not sure. My work has been published, and the relevant institutes are looking into it. But maritime systems are old and largely analogue — you can’t change them overnight. This week, we’ll present our findings to the Institute for Maritime Infrastructures, who could implement defenses in practice. I hope they will take it further.
Your research relates directly to real-world safety. How do you view the societal responsibility of data scientists?
I think we absolutely have a responsibility. My field is a constant cycle: every new attack requires a new defense, and every defense inspires a new attack. Researchers must stay ahead — otherwise malicious actors will.
If we don’t think about the vulnerabilities in the data we use, we leave systems open to manipulation with potentially harmful consequences. Data scientists play a crucial role in closing that gap.
You are also active as a hacker. Does that mean you break the law?
(laughs) No, of course not. I compete in Capture-the-Flag competitions. These involve breaking systems that are intended to be broken — isolated sandboxes with built-in vulnerabilities. It’s like solving complex puzzles with friends. These competitions taught me a lot about how systems fail, and I apply that knowledge constantly in my research.
What did you learn most during your research — technically and personally?
Technically, I learned how incredibly difficult it is to secure a system in the real world. There are countless attack vectors, especially when components interact with larger infrastructures. You have to break the problem into pieces and work step by step.
Personally, I learned a lot about how research works — writing papers, collaborating, and explaining my work clearly. Science communication is essential: if you can’t explain your findings to someone outside your field, you probably don’t understand them well enough yourself.
Finally: how do you see the future of AI security, and what role would you like to play in it?
The future will be exciting — and demanding. AI systems are being deployed at a rapid pace, and their security will become increasingly critical. For now, I see myself exactly where I want to be: finding vulnerabilities and helping patch them. I hope the field grows, and that companies start integrating security during development rather than afterwards. There’s so much work to do — and I’m excited to be part of it.
