Managing and protecting a cloud-native future

Cloud-native adoption and the use of Kubernetes are on the rise. 451 Research notes that nearly three-quarters of organisations globally are currently using Kubernetes or planning to adopt it within the next two years. Many businesses, particularly in industries such as financial services — which produce and consume massive amounts of data — were already looking for ways to speed up development cycles before the pandemic. With so much business moving online in 2020 and organisations looking to build or extend their digital offerings, this need has been accentuated further still.

To understand why Kubernetes is experiencing such growth and what it means for businesses, we need to understand the pets-versus-cattle analogy that is so familiar to many in IT. At a fairly basic level, the idea is that some IT managers view the servers and systems within their organisation’s IT infrastructure as pets. They name them, care for them, and devote their working lives to keeping them happy, healthy, and alive. As organisations’ IT provisions scaled up, their menagerie of three or four servers became 10-20 physical servers, perhaps a few virtual machines (VMs), and a couple of different clouds.

What we now have is a herd of cattle rather than a few pets. Yes, we look after them, but as individual entities they’re replaceable.

To continue the analogy, modern IT teams now manage something more akin to an industrial farming facility. We can’t count or see all our animals anymore. Vast numbers live on other farms and we pay other people to look after them, even though it’s still our responsibility if they get lost, stolen, or sick. In fact, nowadays it doesn’t matter where they are or what they’re kept in. All we care about is what they produce — or, returning to the world of technology — what they enable. This, of course, is the modern digital infrastructure, consisting of physical, virtual, and cloud workloads.

Containerisation accelerating DevOps

In recent years, we’ve added containers. Whereas each VM runs its own guest operating system on shared hardware, containers allow multiple workloads to run on a single OS instance. This makes them lighter, more agile, and faster to spin up than VMs, which run on their own OS and have larger storage footprints. While IT decision-makers (ITDMs) are not as engrossed in the speeds and feeds of their storage infrastructure as they once were, they are very focused on the performance of their applications and the experience of their end users (internal or external).

This is where Kubernetes as a platform becomes invaluable, as it allows IT to group the containers that make up an application into logical units. Running Kubernetes gives IT teams the ability to accelerate and scale application delivery reliably and with minimal risk.
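
To make this concrete, here is a minimal sketch, using the official Kubernetes Python client, of how the containers that make up an application can be grouped into a single Deployment and treated as one logical, scalable unit. The application name, namespace, and image tags are hypothetical.

```python
# A minimal sketch, assuming the official Kubernetes Python client
# (pip install kubernetes) and a kubeconfig already pointing at a cluster.
# The "payments-api" name, "banking" namespace, and image tags are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

# Group the containers that make up one application into a single
# Deployment: Kubernetes treats this as one logical, scalable unit.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="payments-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "payments-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "payments-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="api", image="registry.example.com/payments-api:1.4.2"),
                client.V1Container(name="metrics-sidecar", image="registry.example.com/metrics:0.9"),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="banking", body=deployment)
```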

They can also automate application delivery, reducing the risk of change and enabling continuous improvement, refreshment, and replacement while removing repetitive, manual processes. Kubernetes gives IT teams greater agility and flexibility in balancing capacity against demand fluctuations, continuously adding value to applications, and running several applications on different platforms simultaneously.
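
As one hedged illustration of balancing capacity against demand, the sketch below uses the Kubernetes Python client to attach a HorizontalPodAutoscaler to the hypothetical deployment above, letting the platform add or remove replicas as load changes.

```python
from kubernetes import client, config

config.load_kube_config()

# Let Kubernetes balance capacity against demand automatically:
# scale the (hypothetical) payments-api Deployment between 2 and 10
# replicas, targeting roughly 70% average CPU utilisation.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="payments-api"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="payments-api"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="banking", body=hpa
)
```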

Finally, Kubernetes strengthens the link between development, quality assurance and operations teams. DevOps is about facilitating collaboration and breaking down silos within these teams, uniting them to achieve a common goal — creating more value for the organisation and its customers. Ultimately, this is the very essence of what Kubernetes can deliver to a business: the ability to deliver applications faster, at greater scale, and with greater accuracy.

DevOps, at its very core, describes a process of doing things in a cloud-native manner, so Kubernetes fits like a glove into the broader aim of any DevOps organisation working towards a common goal. The potential benefits are beyond the imagination of many organisations. DevOps tapping into the automation and scalability that Kubernetes offers means faster development cycles. In layman’s terms, businesses can upgrade, patch and refresh applications far more frequently than they could before.
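
A brief, hypothetical sketch of what upgrading and patching far more frequently looks like in practice: updating the container image on a Deployment triggers a rolling update, so a new release reaches users without planned downtime. Names and tags are illustrative only.

```python
from kubernetes import client, config

config.load_kube_config()

# Rolling out a patched release: changing the container image on the
# (hypothetical) payments-api Deployment triggers a rolling update,
# replacing pods gradually rather than all at once.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "api", "image": "registry.example.com/payments-api:1.4.3"}
                ]
            }
        }
    }
}

client.AppsV1Api().patch_namespaced_deployment(
    name="payments-api", namespace="banking", body=patch
)
```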

In financial services, for example, this is a key advantage. When bank branches across the world were forced to close in 2020, the vast majority were ready to serve their customers digitally through online and mobile banking. This level of digital sophistication is partly due to the disruption by challenger banks over the past decade, as companies like Monzo and Revolut have forced the hand of the global powerhouses. A consequence of these events is that banking apps and services now need to be updated and improved on a monthly basis rather than a few times a year.

Moving forward, technologies such as AI and machine learning will further automate how we bank, making it easier to manage our personal finances, save money, and keep track of spending. This is an area where cloud-native platforms and DevOps will enable fast-paced and extensive innovation, as banks compete to offer the best apps and most personalised services.

Modern data protection

When we talk about the scalability that cloud-native platforms and Kubernetes provide, we can also refer to the repeatability and accuracy with which new containerised environments can be spun up.

Staying with the example of financial services, as we emerge from the pandemic, we will see physical retail branches change in their makeup, requiring more advanced digital and contactless systems. Introducing new technologies and devices in-store will become part of the new norm as people return to the high street but expect a digital-first experience. This will likely prompt some level of IT refresh across multiple branches to ensure customers can bank on getting a consistent experience across every location.

Approaches such as infrastructure as code (IaC) will therefore become vital to organisations looking to provide a consistent and inclusive ‘in-person’ experience across physical sites. IaC refers to the practice of managing and provisioning infrastructure through machine-readable definition files rather than manual configuration, which is prone to human error.
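
As a minimal illustration of the idea, assuming the Kubernetes Python client and a hypothetical, version-controlled definition file, applying infrastructure from code can be as simple as:

```python
from kubernetes import config, utils

# Infrastructure as code in miniature: the desired environment lives in a
# version-controlled, machine-readable definition file rather than being
# configured by hand. The file name is hypothetical.
api_client = config.new_client_from_config()  # default kubeconfig context
utils.create_from_yaml(api_client, "branch-stack.yaml")
```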

IaC provides the ability to take a repeatable task and run it the same way every single time. In the old days, replicating IT environments across multiple sites could really only be done by configuring and setting up one site, then using the exact same team and process for every subsequent site. In reality, this isn’t achievable if you have over 100 retail branches on high streets across Hong Kong, let alone globally.

IaC means that the configuration used for the first site is essentially defined in software code, which can be lifted and used to create an exact replica over and over. Furthermore, for businesses with Platform Ops teams, which provide operational services to development teams in a way that allows them to self-serve, spinning up workloads is no longer a lengthy task. Whether those workloads are in the cloud, on-premises, virtual, or containerised, IaC offers greater speed and efficiency whilst also making the process repeatable. This not only speeds up the process of rolling out digital infrastructure across multiple sites, it also reduces the possibility of human errors, which may not be malicious but can lead to system outages and cybersecurity vulnerabilities.
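
Extending the sketch above, and assuming each branch site runs its own cluster addressed by a kubeconfig context (the context names below are hypothetical), the same definition file can be applied identically to every site:

```python
from kubernetes import config, utils

# Repeat the exact same definition across every site: each branch cluster
# is addressed by its own kubeconfig context (names hypothetical), and the
# same version-controlled manifest is applied to each one in turn.
BRANCH_CONTEXTS = ["branch-central", "branch-kowloon", "branch-island-east"]

for context in BRANCH_CONTEXTS:
    api_client = config.new_client_from_config(context=context)
    utils.create_from_yaml(api_client, "branch-stack.yaml")
    print(f"Applied branch-stack.yaml to {context}")
```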

In financial services, as in so many other industries, data protection challenges are undermining digital transformation efforts, with backup failures and incompletions leaving 58% of organisations’ data potentially unprotected, according to the Veeam Data Protection Report 2021. Kubernetes and cloud-native platforms are fundamental to organisations’ continuous digital transformation, but they do not remove the need for data management. If anything, deploying and versioning applications through code poses more nuanced data protection challenges, because stateful data is written to those applications from external sources such as databases and end users. This data is not contained within the code itself, so it must be protected either as part of the CI/CD pipeline, with a native API triggering a backup before any code change, or through a policy defined to take backups using native tools built for Kubernetes.
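
As a hedged sketch of the first option, the snippet below shows a hypothetical pre-deployment step in a CI/CD pipeline that requests a backup through a Kubernetes custom resource before any code change is rolled out. The resource group, version, and kind are placeholders for whichever Kubernetes-native backup tool is in use, not a specific product’s API.

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical pre-deployment step in a CI/CD pipeline: request a backup of
# the application's stateful data before any code change is rolled out.
# The custom resource group/version/kind/plural below stand in for whichever
# Kubernetes-native backup tool is in use; they are assumptions, not a
# specific product's API.
backup_request = {
    "apiVersion": "backups.example.com/v1alpha1",
    "kind": "BackupRequest",
    "metadata": {"name": "payments-api-pre-deploy"},
    "spec": {"namespace": "banking", "includeVolumes": True},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="backups.example.com",
    version="v1alpha1",
    namespace="banking",
    plural="backuprequests",
    body=backup_request,
)
```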

As containers continue to grow in terms of both popularity and impact, businesses must ensure that they have the ability to protect and back up data across physical, virtual, cloud, and Kubernetes environments. This is why businesses looking to take advantage of the agility, scalability, and automation that Kubernetes offers cannot overlook the need to modernise their data protection strategies and capabilities in tandem.