CyberArk sets up AI lab to bolster security


CyberArk has established an Artificial Intelligence Center of Excellence (CoE) and is expanding R&D and product development resources to advance the use of generative AI in improving security for its more than 8,000 customers worldwide.

According to the recently published CyberArk 2023 Identity Security Threat Landscape Report, 93% of security professionals surveyed expect AI-enabled threats to affect their organisation in 2023, with AI-powered malware cited as the No. 1 concern.

The CoE works in close collaboration with CyberArk Labs, which has been researching the impact of generative AI on attacker innovation to help evolve AI-powered defences.

With a team of data scientists, software architects and engineers, the CoE is first exploring opportunities to embed AI into existing CyberArk products. 

Meanwhile, CyberArk continues to execute a comprehensive long-term roadmap that builds on its AI foundation and commitment to innovation.

CyberArk recently announced new AI-powered policy creation automation, combining its privileged access management expertise and comprehensive least privilege toolsets to automatically process the data collected by CyberArk Endpoint Privilege Manager for immediate risk reduction.

Also, CyberArk is working to harness AI/ML to transform identity security approaches in several ways, including supporting identity risk analysis, risk reduction plans and other recommendations; simplifying various user-intensive tasks through automation; and giving users natural-language interaction with the system, easier access to documentation and more.

“The time is right to expand our efforts now, especially as we continue to see evidence of generative AI boosting productivity on both sides of the cybersecurity equation – for attackers and defenders,” said Peretz Regev, chief product officer at CyberArk. 

“One of our goals is to better enable customers to use in-product AI-based capabilities to create advantages for their defensive strategies,” said Regev. “We see great potential for AI and its ability to influence use cases associated with areas like policy optimisation, risk reduction and threat detection.”