Before the advent of writing, the oldest method of transmitting human knowledge relied on memory: songs, stories, and oral instruction. Methods of collecting, preserving, and securing data have evolved ever since, each bringing its own challenges, from physical decay to obsolete technology (such as magnetic tape) to human error.
Keeping data both safe and easily accessible is an age-old challenge that has only increased in complexity. As the sheer volume of data collected, stored, and used continues to grow with widespread enterprise adoption of AI tools and workloads, so too does the challenge.
In response, governments worldwide are scrambling to keep up, introducing new regulations seemingly every year. This puts organisations under increased pressure to ensure data resilience as they come to grips with this new age of AI. They have been left to walk a tightrope: ensuring that data remains accessible for business use while also keeping it secure and resilient.
Not all data is created equal
According to the latest McKinsey Global Survey on AI, 65% of respondents worldwide reported that their organisations are regularly using AI. It’s no secret that AI relies on data, and the more accurate and relevant the data, the better. As the adage goes: garbage in, garbage out. While some AI applications might only need to be trained once, most require live access to a data pool to analyse and react to changes in real time. The most useful data is often sensitive, mission-critical, or customer data and, compared with anonymised data, can yield more personalised, targeted, and effective results.
This has given companies a competitive edge across multiple industries. For one, banks like Standard Chartered are leveraging AI to drive hyper-personalisation, using detailed client data such as investment portfolios, tax information, and insurance plans to deliver tailored insights and strategies that align with the specific goals and circumstances of each individual.
However, the sensitive nature of this data calls for careful management — specifically, in terms of resilience, security, and accessibility. Organisations must navigate a regulatory landscape that increasingly demands responsible data stewardship, especially in AI applications. The EU AI Act, the world’s first binding AI law, sets a precedent that APAC markets are expected to follow, building on frameworks like Singapore’s and evolving into more stringent regulations in countries like India.
There is an increased responsibility on organisations to ensure data security, and rightly so. This new wave of data regulation focuses largely on extending the chain of custody that organisations have over their data, requiring them to consider how it will be secured when plugged into AI and other new technologies. While these considerations fall primarily to information governance teams, achieving compliance with AI-related regulations will require effort across the entire organisation, all while ensuring that relevant teams have access to the data they need to innovate and grow.
No need to throw away the key
Organisations are walking a tightrope between ensuring a suitable speed of access to data and maintaining data resilience in line with evolving regulations. While this might seem like a Herculean task, they can return to the fundamentals for guidance.
- Collaboration across teams
Compliance is a business-wide priority, especially as AI becomes integrated into every facet of an organisation. From data governance to IT and production, it’s essential that teams collaborate to develop a new set of business risk assessments.
- Proactive monitoring and adjustment of risk levels
Organisations shouldn’t wait for new regulations to prompt a re-evaluation of their data security practices. Monitoring and adjusting risk levels should be a regular, ongoing process, especially when a new technology such as AI comes into the picture.
Take, for example, a guideline published by Singapore’s Cyber Security Agency (CSA) on securing AI systems. Outlining how AI should be designed with security in mind, it recommends conducting a thorough risk assessment before implementing an AI solution to evaluate potential threats, prioritise security resources, and customise defence strategies based on specific use cases. Such assessments are foundational to every business and should be conducted regularly.
- Backups in AI data security and compliance
Ultimately, as in so many cases, it comes back to backups. Already a key aspect of modern data regulation in their own right, backups will play a larger role in AI-specific regulation in the future, providing teams developing AI and LLMs a much-needed anchor in a constantly changing environment.
Not only do backups ensure that data remains accurate, secure, and usable at all times, but they can also provide a comprehensive record with which organisations prove their adherence to regulations. They are an invaluable source of truth when dealing with AI, whose very nature makes it difficult to account for exactly how the data it has been fed or trained on has been used. By maintaining data backups, organisations can account for the state and security of their data at any given time, no matter where it is being used.
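In practice, the auditable record described above can be as simple as a cryptographic manifest generated alongside each backup: a timestamped list of files and their hashes that proves what data existed, and in what state, at backup time. The sketch below is a minimal illustration of the idea, not a production tool; the function names and manifest layout are hypothetical, and a real deployment would rely on a dedicated backup platform with immutability and access controls.

```python
import hashlib
import time
from pathlib import Path


def build_backup_manifest(backup_dir: str) -> dict:
    """Hash every file in a backup directory to create an auditable record.

    The manifest captures which files existed and their exact contents
    (via SHA-256 digests) at backup time, giving auditors a fixed point
    of reference for later compliance checks.
    """
    manifest = {
        "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "files": {},
    }
    for path in sorted(Path(backup_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest["files"][str(path.relative_to(backup_dir))] = digest
    return manifest


def verify_backup(backup_dir: str, manifest: dict) -> bool:
    """Re-hash the backup and confirm it still matches the recorded state."""
    return build_backup_manifest(backup_dir)["files"] == manifest["files"]
```

Verification at any later date is then a matter of re-hashing the stored copy and comparing it against the manifest: a match demonstrates the data is intact, while a mismatch flags tampering or corruption.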
Of course, total security can never be fully achieved when dealing with data, and there will always be a balancing act between risk and reward for organisations. However, sticking to these fundamentals provides assurance that there is a safety net to fall back on, especially when venturing into the unknown.