If I were to choose two words to describe the impact of AI on the technology ecosystem, they would be “disruptive” and “revolutionary.” The numbers support this claim: according to IDC, nearly two-thirds (64%) of organisations in Asia-Pacific and Japan are actively exploring ways to accelerate AI adoption and maturity in applications such as chatbots, Q&A capabilities, and proactive notifications.
An uncertain economy coupled with workforce fatigue is raising expectations of AI, particularly in software development, a field often perceived as the exclusive domain of engineers and the technologically savvy. Singapore is emerging as the fastest-growing coding community in the region. Fuelling this growth is the influx of AI code generators, which are democratising development by bridging tech skill gaps, enhancing productivity, and fostering collaboration. In fact, 70% of developers are expected to use AI-powered coding tools by 2027, up from less than 10% today.
However, technology-related risks arise as businesses work out how developers can ensure the quality and security of AI-generated code. AI is a valuable complement to human developers, with the potential to free them to maximise their time and focus on the projects they are most passionate about, but companies must implement appropriate safeguards to ensure the technology is used effectively.
AI is complementary to human developers
AI code generators enable developers to outsource a portion of code writing, alleviating the mounting pressures to deliver large amounts of code quickly. However, with the increased use of AI comes the greater responsibility to conduct quality checks — ensuring that the generated output is as expected and does not cause security incidents or downtime.
Like any other technology, AI code generators have their own limitations and risks. One is a lack of source knowledge and context; another is that tracing and assigning ownership is difficult, since AI draws on data from many sources. Security is a further concern: there is no guarantee that generated code is safe, bug-free, or up to the quality standards of the existing codebase. In fact, a Cornell University study found that participants with access to an AI assistant were more likely to write insecure code. All of these factors can add to the technical debt companies were already grappling with before the introduction of AI.
Adopting a “trust but verify” approach, where the deployment of AI is accompanied by human review of its output, can help companies harness the benefits of AI without excessive risk. The most effective way to achieve this is for developers to ensure that any AI-generated code they use adheres to clean-code attributes (i.e., consistent, intentional, adaptable, responsible), resulting in high-quality, secure, reliable, and maintainable software.
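As one concrete illustration of the “verify” half of that approach, the sketch below (entirely hypothetical: the flagged patterns and function names are this article's inventions, not any vendor's tooling) statically inspects an AI-generated Python snippet and lists points a human reviewer should examine before accepting it:

```python
import ast

# Illustrative patterns that might warrant mandatory human review
# (a real policy would cover far more than these two).
FLAGGED_CALLS = {"eval", "exec"}

def review_flags(source: str) -> list[str]:
    """Return a sorted list of review flags for a code snippet."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"line {err.lineno}: does not parse ({err.msg})"]
    flags = []
    for node in ast.walk(tree):
        # Flag risky built-in calls such as eval()/exec().
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in FLAGGED_CALLS):
            flags.append(f"line {node.lineno}: call to {node.func.id}()")
        # Flag bare `except:` clauses, which can hide failures.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            flags.append(f"line {node.lineno}: bare except clause")
    return sorted(flags)

# An AI-generated snippet a reviewer is about to accept:
snippet = "try:\n    result = eval(user_input)\nexcept:\n    pass\n"
for flag in review_flags(snippet):
    print(flag)  # flags the eval() call on line 2 and the bare except on line 3
```

The point of the sketch is not the two rules themselves but the workflow: machine checks surface the suspect lines, and the human reviewer makes the final call.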
Responsible AI in software development
Provided AI is used safely and effectively, and in tandem with a clean-code approach, developers can treat the technology as a valuable asset. AI can produce code faster than ever before, supporting developers across their many projects and timelines, but only if it is leveraged with a focus on quality.
Teams must be equipped with the right tools to support the rigorous code review cycles needed to ensure AI-generated code meets the required standards. Automating this process is crucial: developers are not meant to be copy editors; they're meant to be innovators. Automation is also essential for addressing the growing accountability crisis: AI-generated code is often trusted and pushed into production, but when issues arise no one takes ownership, further delaying resolution.
As AI becomes an integral part of daily work in software development, organisations that fail to harness it risk falling behind. To incorporate AI effectively, however, developers must verify the accuracy of AI-generated code before deploying it to production.
Generative AI requires automated tools as safeguards
While progress has been made, it would be optimistic to assume that AI-generated code is entirely clean and fit for production; human skills are still required to check the output. Implementing automated scanning and monitoring of AI-generated code lets developers adopt AI in the development process effectively, in a way that empowers them to take ownership of the code.
Whether code is human-written or AI-generated, introducing automated scanning and testing into the continuous integration and continuous delivery (CI/CD) workflow helps identify bugs and issues early. As developers worldwide generate ever-greater volumes of code with the adoption of AI, tools that support quality assurance efficiently free developers to prioritise the creative and strategic aspects of software development.
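To make that concrete, here is one minimal sketch of how such a quality gate might sit in a CI/CD pipeline, under the assumption that it runs as a step over the files changed in a merge request. All names are this article's invention, and Python's own `py_compile` syntax check stands in for a real linter or security scanner:

```python
import py_compile
import sys

def gate(paths) -> int:
    """Return a CI exit code: 0 if every file passes the scan, else 1.

    py_compile (a syntax check) stands in for real analysers; a
    production gate would also run linters and security scanners,
    and the pipeline would call sys.exit(gate(changed_files)).
    """
    failures = []
    for path in paths:
        try:
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError as err:
            failures.append(f"{path}: {err.msg}")
    for failure in failures:
        # A nonzero exit code below is what actually fails the build;
        # the messages give developers something actionable to fix.
        print(f"FAIL {failure}", file=sys.stderr)
    return 1 if failures else 0

# Demo: a file with a syntax error fails the gate.
import tempfile
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
    tmp.write("def broken(:\n")
exit_code = gate([tmp.name])
print("pipeline exit code:", exit_code)  # prints: pipeline exit code: 1
```

Because the gate applies identically to every changed file, it checks human-written and AI-generated code to the same standard, which is exactly the point of the paragraph above.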
Effective adoption of AI will encourage business growth
We’ll see where AI takes us in the years to come, although predictions abound. What’s undeniable is the crucial role it will play in the future of software. While AI is already streamlining processes and freeing developers from mundane tasks, we shouldn’t assume that AI-generated code is automatically high quality or secure. Developers need a degree of automation to deliver code that takes full advantage of AI while keeping their workload manageable.
With the right safeguards in place, developers can guide AI to create clean code that aids businesses rather than introducing more costly problems. A combination of human skills, AI, and rigorous code review to ensure quality gives organisations a winning advantage in the increasingly competitive business landscape.