← Back to overview

How Inti De Ceukelaire changed how I view AI

For years, we've been told that people are the weakest link in cybersecurity. But does that still hold true now that companies are deploying countless AI bots? Can artificial human-like intelligence also fall victim to phishing, scams, and manipulation?

Published on May 17, 2025
How Inti De Ceukelaire changed how I view AI

In today's world, artificial intelligence (AI) has become increasingly prevalent in various aspects of our lives. However, a recent talk by renowned Belgian ethical hacker Inti De Ceukelaire sparked my curiosity about the inner workings of AI and its potential vulnerabilities. During his presentation, he delved into the probabilistic approach used by AI to generate output, highlighting how this technology can be deceived and manipulated. This raised important questions: how does AI actually work, and what are the limitations that make it susceptible to exploitation? In this blog post, I'll explore Inti's findings and examine why we shouldn't always assume AI-based processes are foolproof.

Start of the event

The event was hosted in the newest campus building of Howest, BST A. At the reception, drinks were provided free of charge before and after the presentation while the actual event took place in a large auditorium. I was a bit surprised by its size — it was a massive space with four flights of chairs and two beamers. I guess that was also needed though: there were a lot of people attending. I don't know the exact number, but I'd say at least 250 people were there.

After a brief overview of what to expect that evening, the stage was set and Inti could start his talk. That is, if he was seemingly willing to start. He sat in the audience all the way on the left just until the last moment before he needed to get going. When the moment came, he suddenly bolted out of the room without a word and was gone for about 2 minutes. Nobody understood why and it was kind of awkward yet funny at the same time. It looked a bit as if he was shy to get going and the anxiety got the better of him.

The show was only starting though. As it turns out, he was cleverly getting ready to make his introduction count. Out of nowhere they turned off the lights. An instrumental version of Blue by Eiffel 65 started playing and he came in, excited as ever, with about the most dramatic opening of a presentation I've seen to date, straight up looking like a Power Ranger.

Predicting and manipulating behaviour

He started by explaining the great effects of marketing in today's society and noted that if he would ask someone for their favourite color at that moment, more people would tend to say blue because he intentionally picked the hit song to push people towards that answer. He claims that AI works in a similar way. If you were to give it subtle nudges in a certain direction, it might answer in a way that is predictable. It made me think of my own experiments where I tried overloading a chatbot with texts about fruits to see if it would react differently to the question of what its favourite food was. When I did that, it tended to jump to healthier and more natural foods rather than something like fries.

Inti then noted that AI often focuses too much on certain details like edges and colors, making it so that when a feature is not present or is slightly altered, that it might not recognize a subject or mistake it for something else entirely. He used the example of a stop sign with rectangular stickers added to it: while a human can still easily recognize the original object, an AI system might be fooled into thinking it's a completely different traffic sign than it actually is.

Not only is this a grave concern for when something is partially blocked, altered or otherwise distorted on its own, it can also be easily abused on purpose. I was amazed by the example that he gave where you could for instance trick a camera with license plate recognition by plastering printed license plates on your shirt. Even though the data of these cameras is always checked for mistakes, it shouldn't take long to find a system where these checks don't exist. This raised real concerns about the potential security implications of using AI recognition technology and shows that when you don't think outside the box when designing a system, you might easily miss an opportunity for someone to misuse it. I would recommend you to watch this short TikTok he made to emphasize this point.

Interactive activity

After exploring how humans can be manipulated into confessing their crimes, Inti demonstrated how these same tactics can be applied to AI systems. To illustrate this, he created an AI chatbot with a fake background story: a murder investigation. Our task was to extract three pieces of information from the chatbot: its pin code, password, and confession.

With some guidance from Inti, we successfully cracked the pin code and uncovered the password. Although we didn't have time to find out the confession, Inti provided insight into how it could be obtained by pretending to be the AI's lawyer, gathering information to build a case. This cleverly manipulated the AI into confessing its "crime".

We ended off the evening with a free drink at the venue entrance, talking for a while with the people there. It was a great way to cap off an engaging and informative event.

Final thoughts

The discussion on hacking artificial intelligence is crucial. Inti's presentation wasn't just about the limitations of the technology, but also highlighted the need to understand its potential vulnerabilities. It was nice to see a change from the usual hype or exaggeration surrounding it all and it left me with a fresh perspective on how AI actually works. It's a truly fascinating topic and Inti made that clear.