How AI has impacted data privacy and cybersecurity

“Have you tried using ChatGPT?”

A phrase so common these days that some might say it almost rivals “googling” at work or school. After all, everyone from big tech to advertisers and media (even nation-states) is scrambling to establish a foothold in the inevitable era of AI.

The proliferation and increased accessibility of AI have undoubtedly opened up vast opportunities for organisations across industries, enabling new technological solutions and growth and further transforming the way businesses operate.

However, as with most innovation, it’s a double-edged sword: The AI boom has also escalated safety and security concerns, creating unprecedented complexities in the realm of data privacy and cyberwarfare.

Fueled by the AI boom, data privacy breaches and regulations will rapidly evolve

There is no AI without data. Thus, as AI continues to advance, data privacy breaches are becoming similarly sophisticated. In response, regulators will enact stricter compliance requirements and governance frameworks for how organisations must handle this issue.

Just recently, Singapore announced its National AI Strategy 2.0, updating the initial blueprint launched in 2019 with a more systematic way to harness the benefits of AI for the public good while mitigating its impact on jobs and livelihoods. Similarly, Australia released its strategy roadmap towards becoming a world leader in cybersecurity by 2030, aiming to enable Australian citizens and businesses to prosper and to bounce back quickly from cyberattacks.

Going into 2024, we can expect such regulations to be adapted, refined, and rolled out by other forward-thinking nations and organisations, driven by the need to stay ahead of the evolving threat landscape.

However, this rapid evolution will likely produce an increasingly fragmented regulatory landscape, with varying requirements across regions. As a result, many established companies may struggle to comply everywhere they operate, and some may simply choose to exit certain regions because of the complexity and cost of compliance.

Power to the people: Threats will become more distributed and democratised

While the proliferation of AI has made the technology more accessible to legitimate actors, it has also democratised cyberthreats: hackers of all skill levels can now weaponise AI, and established threat actors can diversify their attack portfolios.

This shift puts a heavier burden on security experts to keep up with not just more sophisticated hackers, but also a larger number of them. Similarly, it poses new challenges for organisations as they need to enhance their security measures against a larger variety of attacks.

Ransomware in particular has become an increasingly significant threat. More than 83% of security leaders surveyed in Splunk’s 2023 global CISO report said their organisation had paid the attackers in a ransomware attack.

APAC was hit especially hard: the region was found to be the most affected by disruptive cyberattacks, and organisations there were notably more likely than peers elsewhere to pay over a million dollars, putting a very tangible number on the consequences of not taking the threat seriously.

In 2024, the stakes will get higher as new types of cyberattacks emerge, such as commercial and economic disinformation campaigns aimed at damaging companies’ reputations and brands. Nor is AI the only technology opening the door to new forms of attack across a wider range of industries; 5G will also give cybercriminals opportunities to expand in both scale and reach.

To prepare for these challenges, organisations should start defining their responsibilities and strategies for their edge infrastructure and distributed networks within the next year, as part of their resilience plan.

They should also leverage the power of AI and data analytics to enhance their security capabilities and counter the threats posed by malicious actors. By doing so, they can harness the benefits of AI and 5G, while minimising the risks and costs of cyberattacks.

Fighting fire with fire: Countering AI with AI 

Experts are divided between excitement and reluctance when it comes to AI. Most agree that we are on the verge of major disruption, and that it cannot possibly be all good news; still, the benefits of AI should not be underestimated.

In this AI-driven era, set against a volatile economic outlook, we foresee CIOs and CTOs cutting back on architecture and infrastructure spending. Business leaders will therefore need to explore smart AI technology cautiously to manage increasingly sophisticated threats and stay ahead of potential bad actors, while also addressing skills gaps and talent shortages.

Automation and AIOps (artificial intelligence for IT operations) tools, powered by AI/ML, can improve the efficiency and effectiveness of security operations. These tools aid organisations in detecting and responding to threats faster. Also, better visibility across systems and networks can provide security professionals with a clearer understanding of potential vulnerabilities, allowing for the implementation of proactive countermeasures.
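While the article does not prescribe any particular tooling, the core idea behind ML-assisted detection can be illustrated with a deliberately minimal sketch: flag time windows whose event counts deviate sharply from the observed baseline. The function name, sample data, and threshold below are illustrative assumptions, not any vendor’s API; real AIOps platforms use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return indices of time windows whose event count deviates more than
    `threshold` standard deviations from the mean (a simple z-score rule)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # no variation in the data, nothing to flag
    return [i for i, count in enumerate(event_counts)
            if abs(count - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts, with a spike in window 5
hourly_failures = [12, 9, 11, 10, 13, 250, 11, 10]
print(flag_anomalies(hourly_failures))  # → [5]
```

Even this toy rule shows why visibility matters: the baseline has to be computed from the data before an outlier can stand out, which is exactly what broader system and network telemetry provides.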

However, it is also crucial for organisations to consider the ethical implications of deploying AI. Factors to consider include:

  • Embedding data governance, security, and user privacy capabilities into their AI systems.
  • Selecting AI technology that mitigates bias and discrimination in model algorithms.
  • Prioritising transparency, accountability, privacy, and fairness to ensure responsible and ethical use of AI technology.

While AI introduces its own set of complications and issues, it is ultimately the organisation’s responsibility to keep up with corresponding solutions.

It’s a given that organisations must invest in robust countermeasures, comply with evolving regulations, and prioritise digital resilience and adaptability to effectively navigate the challenges posed by AI in the modern world. However, to build our brightest future, business and technology leaders need to expect the unexpected.

Two decades from now, there will be a massive transformation in the human-to-technology interface, where systems will have the ability to self-engineer, self-heal, and self-automate tasks. In the era of AI, business and technology leaders must continue to influence and inspire with ingenuity and accountability. Only then will we be able to harness the full potential of current and upcoming innovations.