AI resistance more harmful than good: Qualtrics CSO

Assaf Keren, Senior Vice President and Chief Security Officer, Qualtrics. Photo courtesy of Qualtrics.

The introduction of AI into the cybersecurity equation has been both a blessing and a curse to organisations. On one hand, threat actors can now launch smarter and faster attacks, while on the other, businesses gain new capabilities to fight looming threats. In the end, it’s about data — attackers want to capture it, and organisations need to protect it at all costs. 

What, then, are the repercussions for businesses that still resist AI simply because they do not understand it? Assaf Keren, Senior Vice President and Chief Security Officer of experience management company Qualtrics, sat down with Frontier Enterprise in Singapore to talk about the intersection of data, trust, and AI, as well as his role as a member of the Monetary Authority of Singapore's (MAS) Cyber and Technology Resilience Experts (CTREX) Panel.

You were previously PayPal's CISO and have held several other roles. Could you explain your approach to creating a data protection framework?

For PayPal specifically, I've had the privilege of standing on the shoulders of giants. When I joined PayPal right before the split from eBay, we were still running on the security principles, capabilities, and tooling built by some of the founders.

Initially, I think they had identified very correctly that online transactions in the financial environment are really based on trust, and if you don't create security, then you're actually going to create an environment where trust cannot operate. A lot of their baseline thinking was really important and holds true to this day. I had the privilege of being a shepherd and custodian of that for a few years before I moved on.

I think the baseline understanding is that trust is woven into everything that the company does, because it’s part of the brand promise that PayPal offers to its customers. 

With regard to Qualtrics, do you think differently about customer data versus employee data?

In the basic sense, it's not just a matter of customer data versus employee data. When I think about data management and how to protect data, the first principle is that our customers' data is theirs, not ours. The less access Qualtrics and other people have to customer data, the better. Our role is to provide our customers with the tools and capabilities to manage their data, including sensitive data, in a robust way. With that structure in place, the security team is stepping into the world of deciding what feature sets and capabilities we need to have, or would like to have.

We have shared accountability with our customers. I want to empower my customers to decide on their own how they manage their data, so that it's really their security, compliance, and governance teams dealing with it, not us.

How do you see the AI revolution within the context of security for Qualtrics?

What's happening right now is a continuation of something I've been calling the condensation of the threat model. If you go back 20 years, the successful attackers were usually groups, state-sponsored or not, that held a multitude of capabilities in-house. You needed people who could code and write malware, people who could create a phishing website, and people who could exploit a network, take the data, monetise it, and then send it out.

That has fragmented over time. We have marketplaces now where people sell specific services: you can hire specific capabilities, buy data, or buy access into networks. Therefore, it's becoming easier to be an attacker. When LLMs came in, suddenly we had an environment where, if you want to write a phishing email, you just ask for one and you get it. You can do it in any language you want, so the barrier to entry for attackers is much lower than it used to be. The same now goes for deepfakes, voice cloning, and building convincing phishing websites.

Along that same axis, it's going to keep getting easier to be an attacker. Large attack organisations are going to use this technology to become more efficient and achieve more coverage, and people with economic problems are going to find it easier to step into this world. You don't really have to be a good developer anymore. We still haven't seen agentic AI red teaming or self-exploitation capabilities; that's probably two to three years away, but it's coming.

On the other hand, you also have organisations starting to run to AI. I think we can and should govern the usage of AI in the organisation: have controls around it, decide who's using it, how they're using it, and whether any sensitive data is involved. Blocking it outright, by contrast, invites people to bypass your controls and turn to shadow AI, which is a real thing, and you create more risk for the organisation. Partnering with the IT team, and with the teams that are already growing AI capabilities, to ensure that we have the right controls in place is a far better outcome.

In my mind, that means stepping into areas like model risk management and model inventory, which are not very different from how we handle SaaS or cloud capabilities. It's the same journey we've had with the SaaS migration and transformation. There's also the threat of rogue AI capabilities in the organisation, and of data flowing to the wrong place. All of those things, I think, are solvable within the control sets that security teams have today. I don't know if that's a very common view among security teams, especially in regulated industries, but we'll see how that evolves.

In the same vein, I think the solution to a lot of this is for security teams to use AI technologies to reduce the amount of manual work being done, and to focus people and human intelligence on big problems rather than on repeatable, manual, day-to-day tasks. We just don't have enough people in security around the world to tackle the coverage problems we have, plus the rapid growth of AI. Therefore, we've got to pivot into this technology to make sure that we're doing the right thing.

You mentioned shadow AI. How serious is this organisational resistance to AI?

I think there are two things here. One is that we have an employee experience trend report which found that around 60-something percent of employees use AI, whether or not it's approved by their companies. My advice to the world is to lean in and give people the ability to use it in areas where you have controls, rather than letting them just run off to ChatGPT and do things that are not approved.

The other piece is that we are getting phishing emails generated by ChatGPT today, and somebody who isn't using ChatGPT may not know how to recognise that an email is AI-generated. By blocking access to AI technologies, we're building a less resilient workforce.

As an adviser to MAS on the CTREX panel, could you tell us how you see the evolution of cybersecurity, and what nations like Singapore need to do from a tech and policy perspective to ensure they have the necessary controls in place?

I really appreciate the MAS. We used to be regulated by the MAS when I was at PayPal, and now I'm advising them. I really appreciate how they operate and how they partner both with regulated entities and with people around the world. The fact that they held an event like the one this week, and that they are seeking a conversation with the industry, is something I value. I think Singapore is doing a lot of really good things: bringing in varied voices from around the world, having conversations, getting that input, and listening.

I think Singapore, much like every other country, is dealing with this big shift that's happening right now. How do you take the threats of the future and start creating resilience and robustness in the environment? How do you advise, and then regulate, local and global financial institutions to make sure they're doing the right things? As you can imagine, a lot of the conversations are around supply chain management, AI, and risk management, which are the right conversations to have right now.

This is my point of view: we are living in such an interconnected world right now that the relationships between companies and their providers, third parties, cloud providers, and AI hyperscalers are deeply intertwined. We need to start shifting to a place where we do a lot more continuous monitoring, build more robust relationships, and create a framework of trust between those parties. I'm saying this both as a supplier and as a consumer of third-party services.