When the pandemic struck, offices emptied and IT had a clear mandate: get employees online ASAP. We made great strides in doing so, but our region is now in flux. Businesses are debating whether workforces should return to the office, policies have changed, and IT requirements are becoming increasingly ambiguous.
While careful data management was crucial during the pandemic, data mobility has now become the key to ensuring businesses can respond swiftly. In this environment, businesses must consider myriad options, such as moving their data from the public cloud to the private cloud, reassessing cloud strategies, or exploring alternative data storage providers.
While long-term planning has its merits, the pandemic has taught us the importance of flexibility in optimising a company’s financial, technical, and security posture based on its current needs. By carefully weighing the advantages and disadvantages of both cloud and on-premises solutions, the significance of data mobility becomes evident. It goes beyond facilitating smoother migrations; it has the potential to transform your business.
The cloud advantage vs covering all bases on the ground
In the post-“Great Relocation” era, the Asia-Pacific public cloud services market saw a remarkable 36.3% growth in 2021. The demand for a continuous flow of services and data has surged. The cloud, offering unparalleled scalability, enables businesses to dynamically adjust their storage capacity based on demand. This flexibility not only reduces capital expenditure but also removes the limitations of physical infrastructure.
For companies with remote workforces or decentralised structures, the public cloud is increasingly advantageous. If businesses have already shifted away from physical infrastructure during the pandemic, the prospect of repurchasing and maintaining it may not be cost-effective. In such cases, companies aim to optimise costs within a cloud environment. This can be achieved by re-architecting their systems into more cloud-native solutions, such as platform as a service (PaaS) or managed database services. These solutions alleviate concerns about managing hardware, operating systems, and patches, allowing businesses to focus on core operations.
However, companies should be aware of certain pitfalls, including the risks of cloud “lock-in” and “lock-out.” Cloud lock-in occurs when integrations with proprietary services and application programming interfaces (APIs) become difficult to replicate elsewhere. Relying on vendor-specific expertise can also restrict a team’s ability to work with alternative cloud providers. Another factor is “data gravity,” where a company’s heavy reliance on a single cloud makes it challenging to migrate workloads to another platform en masse.
Additionally, IT teams may unintentionally lock themselves out of other environments by building architectures that are incompatible elsewhere. While it might be possible to remove a workload from its current cloud, it may not easily fit into another environment.
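One common defence against both lock-in and lock-out is to keep vendor-specific calls behind a thin, portable interface. The sketch below is purely illustrative (the `ObjectStore` protocol, `InMemoryStore` backend, and `archive_report` function are hypothetical names, not any vendor’s API): application code depends only on the interface, so the backend can later be swapped for another cloud’s object storage, or an on-premises system, without rewriting the workload.

```python
# Illustrative sketch: isolating storage behind a small interface so a
# workload is not welded to one cloud provider's proprietary API.
from typing import Protocol


class ObjectStore(Protocol):
    """Minimal, vendor-neutral contract for object storage."""

    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class InMemoryStore:
    """Stand-in backend for the sketch; a real deployment would wrap a
    cloud object store or an on-premises equivalent behind the same API."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def archive_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # Application code talks to the interface, never to a specific vendor,
    # so migrating means swapping the backend, not rewriting this logic.
    store.put(f"reports/{report_id}", body)
```

The trade-off, of course, is that an abstraction like this forgoes some proprietary features; teams must decide per workload whether portability or vendor-native capability matters more.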
Some organisations are now seeing the value in migrating applications back on-premises as employees return to the office. The escalating costs of cloud services may no longer seem justified, especially when idle physical servers are available. In such cases, it becomes logical for these organisations to move their workloads and data back on-site, leveraging existing hardware investments. These organisations often want greater control and security over their data, with their own security measures, encryption protocols, and data management practices to implement.
The flexibility of on-premises solutions allows for custom hardware configurations and network setups that optimise performance and scalability while minimising latency. This customisation enables organisations to tailor their infrastructure to meet their specific needs and ensure optimal operation.
The transformative power of data mobility
When it comes to choosing the best data storage configuration, there is no one-size-fits-all approach. Organisations adopting hybrid or multi-cloud strategies can select the most suitable environment for each workload on a case-by-case basis. However, this task is not necessarily straightforward. Many businesses have faced challenges when first migrating to the cloud, even with a basic “lift and shift” approach. Finding the right balance between on-premises and cloud environments is crucial, and that’s where data mobility comes into play. It ensures organisations can move their workloads when needed. Think of it this way: while you might not move houses often, it’s beneficial to have the ability to quickly move furniture for renovations or upgrades.
Beyond facilitating easier migrations, data mobility is transformative. It allows teams to replicate and host workloads and applications in separate environments for activities like testing and analytics, without affecting daily operations. This capability enables businesses to unlock the value of their data more effectively. By embracing data mobility, organisations can enhance operational agility, optimise resource utilisation, and extract valuable insights from their data assets.
Recoverability is the linchpin of data mobility
As organisations reassess their cloud strategies, ensuring safe and seamless data movement and recovery between environments is vital to avoid potential loss or temporary unavailability of critical workloads.
Cyber incidents can vary from small-scale issues, such as a deleted virtual machine, to large-scale catastrophes like site-wide failures, natural disasters, or ransomware attacks. Regardless of the scale, the key question is, “Where will the recovery take place?” According to Veeam’s 2023 Ransomware Trends Report, 74% of APAC organisations plan to recover to cloud-hosted infrastructure or disaster recovery as a service (DRaaS), while 73% plan to recover to servers within a data centre. These percentages add up to well over 100%, indicating that most organisations’ disaster recovery and cyber resilience strategies include multiple location types, depending on the crisis.
To ensure comprehensive protection, organisations must also guard against reinfection during recovery. Scanning data at every step of the recovery process is imperative. Organisations adopting a combined approach of data verification and staged recovery can significantly reduce the risk of data compromise. Fortunately, the same report found that nearly half (44%) of organisations in Asia-Pacific and Japan (APJ) first restore to an isolated test area or “sandbox” before reintroduction to production, as a preventive measure against reinfection.
Ultimately, it’s crucial to have a prepared, seamless, and efficient plan for data movement to minimise downtime. The minutes and hours following a critical outage are not the ideal time for learning new lessons. Being prepared for any eventuality will serve as the contingency that protects your customers, mitigates damage to your brand’s reputation, and keeps your business running.