The security gap is widening as AI-driven cybercrime surges, with about two-thirds (65%) of IT leaders surveyed admitting their defenses are outdated and unable to withstand AI-enabled attacks.
Lenovo research also found that just 31% of IT leaders feel confident defending against such attacks.
The findings are based on a survey of 600 IT leaders conducted in October and November 2024, with respondents from the United States, Canada, United Kingdom, France, Germany, India, Japan, Singapore, Brazil, Mexico, Australia, and New Zealand. All respondents were IT leaders at companies with at least 1,000 employees, drawn from a range of sectors.
Findings show that while AI is driving business improvements and efficiency gains, it is also fueling a new wave of cybercrime that most businesses are ill-equipped to defend against.
This underscores the critical need for enterprises to adopt AI-driven strategies capable of countering threats that learn, adapt, and evolve in real time.
“AI has changed the balance of power in cybersecurity. To keep up, organizations need intelligence that adapts as fast as the threats. That means fighting AI with AI,” said Rakshit Ghura, VP and general manager of Lenovo Digital Workplace Solutions.
“With intelligent, adaptive defenses, IT leaders can protect their people, assets, and data while unlocking AI’s full potential to drive business forward,” he said.
The advance of generative AI has supercharged cybercriminal strategies, enabling hyper-agile attacks. Modern AI-driven threats can mimic legitimate behavior, mutate to avoid detection, and span multiple domains – from cloud to endpoints, applications, and data repositories.
The report highlights the top concerns among IT leaders. First are AI-powered external threats – from polymorphic malware and AI-driven phishing to deepfake impersonation, these attacks are faster, more convincing, and harder to detect.
Second are insider risks – 70% of IT leaders surveyed see employee misuse of AI as a major risk, and more than 60% say AI agents create a new class of insider threat they are unprepared to manage.
Third is protection of AI itself – models, training data, and prompts are now high-value targets that must be defended against manipulation and compromise.