Lessons from a cyberattack with Troy Hunt

Troy Hunt, creator of Have I Been Pwned. Image courtesy of Troy Hunt.

Even the founder of Have I Been Pwned, a global database that lets users check if their credentials have been compromised, isn’t immune to phishing. For Troy Hunt, the experience underscored how human error, not just technical flaws, continues to expose enterprises to risk.

In this conversation with Frontier Enterprise, he discusses what organisations still get wrong about breach disclosure, why technology alone won’t stop cyberattacks, and how AI could strengthen both offence and defence in cybersecurity.

How is Asia-Pacific being targeted in the evolving era of generative and agentic AI?

What I find amusing about this issue is that if you’re on the internet, you’re already a potential target. Everywhere I go, every region is heavily targeted simply because so much value now exists online.

The reality is that cyberattacks aren’t limited by geography or sector; if assets are online, they’re exposed. That said, in places like Singapore, where there’s a high concentration of large multinational organisations and significant wealth, it makes sense that attackers would focus their efforts there.

What do credential dumps reveal about enterprise data privacy and security habits?

Enterprises continue to fall victim to a wide range of cyberattacks, with ransomware still among the most common. These incidents almost always trace back to a vulnerability within the organisation, and often, that weakness is human. People make mistakes that put their companies at risk.

Even beyond ransomware, many attacks exploit reused credentials from previous breaches. It’s common for employees to use the same password they might use for a gaming account, often tied to their personal email address, and sometimes even their corporate one. This overlap gives attackers multiple data points to connect, allowing them to target an organisation through individual lapses in security.
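Hunt's own Pwned Passwords service lets clients check whether a password appears in known breach corpora without ever transmitting the password itself, using a k-anonymity range query: only the first five characters of the password's SHA-1 hash are sent, and the client scans the returned candidate suffixes locally. A minimal sketch of the client-side logic (the function names here are illustrative, not part of any official SDK):

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix is ever sent to the Pwned Passwords
    API; the full hash never leaves the client.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return sha1[:5], sha1[5:]

def count_in_response(suffix: str, response_text: str) -> int:
    """Scan the API's 'SUFFIX:COUNT' lines for our hash suffix.

    Returns the breach count, or 0 if the suffix is absent.
    """
    for line in response_text.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

In practice the prefix would be used to fetch `https://api.pwnedpasswords.com/range/<prefix>` and the response body passed to `count_in_response`; a nonzero count means the password has appeared in a breach and should be rejected or rotated.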

Why are enterprises investing in AI yet overlooking the human attack surface?

The first thing to recognise is that security is a shared responsibility. We want to build technical controls such as antivirus software, firewalls, and more advanced tools, including those powered by AI. But we also need humans to make sound decisions, particularly when facing social engineering attacks. This year, we’ve seen many incidents from groups such as Scattered Spider targeting major multinational organisations by convincing help desk operators to do something they shouldn’t.

Security, therefore, is shared. We can’t just blame people and say, “We should stop doing stupid things.” I fell victim to a phishing attack myself earlier this year, which I wrote about. It happens, but everyone has a role to play in maintaining security.

How are enterprises misjudging the real risks and defences of AI-driven attacks?

From my conversations with organisations, I haven’t seen many real, practical AI-driven threats yet, certainly not compared to traditional attacks. Many recent incidents still involve social engineering, often carried out by humans rather than machines. In several cases, these are young adults or even kids impersonating staff or help desk members and asking for information they shouldn’t have. That’s not AI; that’s a creative kid with too much time on their hands exploiting vulnerabilities in the organisation.

So far, one of the things that sounds scary but hasn’t yet materialised is the idea that AI will make these attacks worse. At the same time, we’re seeing strong opportunities for AI to help defend against such threats. AI is extremely good at identifying patterns and deviations from those patterns. For instance, when an organisation or individual usually communicates in a certain way, and that suddenly shifts to include a request that might be malicious, humans might not notice, but machines can. I’m optimistic about the opportunities for AI to play a positive role in cybersecurity.

How should organisations change the way they disclose data breaches?

The main change I’d like to see from organisations is the recognition that data breaches are an inherent risk of doing business online, and with that understanding, the need to be prepared for when they happen. Most organisations respond poorly to breaches, but occasionally one does it exceptionally well, usually because it has planned ahead. Just as companies prepare for business continuity or a data centre failover, they should also prepare for a breach.

Good breach disclosure follows established best practices: reacting with urgency, being transparent about what happened, and genuinely prioritising the victims. That last part is the hard one, since organisations are typically accountable to shareholders first. Too often, this focus on protecting the company leads to delayed or opaque communication, which is detrimental to individual breach victims.

How did your own phishing experience shape your view on security policy?

The first thing it shows is that everyone has moments of weakness. In my case, it was a mix of being jetlagged, tired, and seeing a phishing email that triggered fear of losing something. It reinforces my earlier point about needing both technical and human controls. If we know that anyone can have a lapse in judgment, what safeguards will protect us when that happens? That’s something many organisations overlook.

In a perfect world, people would never click malicious links or enter credentials where they shouldn’t, and technical controls would always step in if they did. Looking at something like my Mailchimp account, for example, there were some technical controls in place, but I bypassed them because I was used to websites changing their URLs. My password manager didn’t auto-complete, as it often doesn’t. And in Mailchimp’s case, the platform doesn’t offer phishing-resistant two-factor authentication — no option for passkeys, for instance. That’s a good example of how, even as such controls become mainstream, their absence leaves a service at higher risk.