Security, privacy concerns make firms think twice on adopting AI

Companies are optimistic about AI, but AI adoption requires attention to privacy and security, productivity, and training, according to a survey commissioned by GitLab.

Responses were collected from 1,001 individual contributors and leaders in development, IT operations, and security across a mix of industries and business sizes worldwide in June 2023.

Among respondents, 83% said that implementing AI in their software development processes is essential to avoid falling behind. However, 79% noted they are concerned about AI tools having access to private information or intellectual property.

Two in every five (40%) of all respondents cited security as an existing key benefit of AI, but 40% of security professionals surveyed were concerned that AI-powered code generation would increase their workload.

Nine in every 10 (90%) reported using AI in software development or planning to, while 81% said they need more training to use AI in their work.

“The transformational opportunity with AI goes way beyond creating code,” said David DeSanto, chief product officer, GitLab. 

DeSanto said that only 25% of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60% of developers’ day-to-day work. 

“To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software, not just developers, to benefit from the efficiency boost,” he said. “GitLab’s AI-powered DevSecOps platform delivers a privacy-first, single application to help teams deliver secure software faster.”  

Results found that although organisations are enthusiastic about implementing AI, data privacy and intellectual property are key priorities when adopting new tools.

Among senior technology executives surveyed, 95% said they prioritise privacy and protection of intellectual property when selecting an AI tool.

Meanwhile, 32% of respondents were “very” or “extremely” concerned about introducing AI into the software development lifecycle. Of those, 39% were concerned that AI-generated code may introduce security vulnerabilities, and 48% were concerned that AI-generated code may not be subject to the same copyright protection as human-generated code.

Security professionals also worry that AI-generated code could result in more security vulnerabilities, adding to their workload.

Only 7% of developers’ time is spent identifying and mitigating security vulnerabilities and 11% is spent on testing code.

Developers were significantly more likely than security professionals to identify faster cycle times as a benefit of AI, at 48% compared to 38%.

More than half (51%) of all respondents already cited productivity as a key benefit of AI implementation.

Further, while respondents remain optimistic about their company’s use of AI, the data indicates a discrepancy between organisations’ and practitioners’ satisfaction with AI training resources. 

Despite 75% of respondents saying their organisation provides training and resources for using AI, a roughly equal proportion also said they are finding resources on their own, suggesting that the available resources and training may be insufficient. 

Four in every five (81%) said they require training to successfully use AI in their daily work.

About two-thirds (65%) who use, or are planning to use, AI for software development said their organisation hired or will hire new talent to manage AI implementation.

When asked what types of resources are being used to build AI skills, the top responses were books, articles, and online videos (49%); educational courses (49%); practice with open-source projects (47%); and learning from peers and mentors (47%).