Despite wide adoption, generative AI is beyond firms’ full grasp


Organisations have widely adopted generative AI within their teams, and those with advanced approaches have significant budgets, resources, and authority, positioning them well to embrace cutting-edge tools and technologies, according to Splunk.

However, new research from the cybersecurity firm found that despite this widespread adoption, many organisations lack a clear generative AI policy or full grasp of the technology’s broader implications. 

Researchers surveyed 1,650 security executives between December 2023 and January 2024. Respondents were based in Australia, France, Germany, India, Japan, New Zealand, Singapore, the United Kingdom, and the United States.

Findings also show that cybersecurity leaders are divided on who will gain the upper hand in leveraging generative AI tools — cybersecurity defenders or threat actors.

Among security leaders, 93% said public generative AI was in use across their respective organisations, and 91% reported using generative AI specifically for cybersecurity operations.

Despite high adoption, 34% of surveyed organisations say they do not have a generative AI policy in place, and 65% of respondents admit to not fully understanding the implications of generative AI.

Also, 44% of respondents rank generative AI as a top initiative for 2024, ahead of cloud security.

Further, cybersecurity leaders are split over who has the advantage when it comes to generative AI. While 45% of respondents believe generative AI will be a net win for threat actors, 43% said generative AI will give cybersecurity defenders the edge.

“We are in an AI gold rush, with bad actors and security professionals both trying to seize the advantage,” said Patrick Coughlin, SVP for global technical sales at Splunk. “The introduction of generative AI creates new opportunities for organisations to streamline processes, increase productivity, and limit staff burnout.”

“Unfortunately, generative AI also presents unprecedented advantages for threat actors,” said Coughlin. “To combat this new threat landscape, defenders must outpace threat actors in the race to harness and securely deploy the power of generative AI.”

Cybersecurity hiring has proven to be a considerable challenge in recent years, especially for entry-level workers seeking to break into the industry. 

The research indicates that generative AI is a possible solution to this problem as it helps organisations discover and onboard entry-level talent more efficiently. Additionally, the majority of cybersecurity professionals anticipate that generative AI will enhance their speed and productivity.

Among cybersecurity leaders, 86% say generative AI can enable them to hire more entry-level talent to fill the skills gap, and 58% say onboarding that talent will be quicker thanks to generative AI.

Nine in 10 believe entry-level talent can lean on generative AI to develop their skills in the Security Operations Centre (SOC), and 65% believe the technology will help seasoned cybersecurity professionals become more productive.

The majority of security professionals are also facing growing compliance pressures. Stricter compliance requirements have significantly raised the stakes, particularly for security leaders who may personally face repercussions for their organisations' violations.

This changing compliance landscape underscores the need for increased vigilance and accountability within the security sector.

More than three-quarters (76%) of respondents say personal liability has made cybersecurity a less attractive field, and 70% have considered leaving the field due to job-related stress.

Close to two-thirds (62%) of professionals report having already been impacted by changing compliance mandates requiring disclosure of material breaches. 

Meanwhile, 86% of security professionals say they will shift budgets to prioritise meeting compliance regulations over security best practices.

Many respondents also expect their organisations to be more risk-averse, with 63% expecting that organisations will err on the side of caution and overreport breaches as material to avoid penalties.