Cloud models for a distributed future

In a time of major change, the cloud has been one of the key technologies that has helped keep business processes running amidst the massive lockdowns and disruptions of COVID-19. As more and more businesses look towards cloud solutions, how should they decide what cloud models would be most applicable for their businesses, and how should they support their transition to the cloud? In a fireside chat at the Frontiers of Work 2021 conference, Anthony Hodge, Head of Cloud Governance at Standard Chartered Bank, broke it down.

Before we talk about the different cloud models available, what are your thoughts about cloud migration and the different strategies for migrating into the cloud?

It depends entirely on where you or your organisation is in its journey. If you are starting a company now, you are going to start in the cloud: you can buy on-demand capacity when you need it and dial it up or down. Unless you are already a big company, you are not going to invest in a data centre.

Next, all the productivity tools we use, such as Microsoft 365 or Zoom, are already backed by the cloud. If you are new, you are just going to be cloud native, and that means not just running on the cloud, but adopting it in the way your software development life cycle works. You will use modern methodologies, pipelines, automated code deployment and so on.

All of this also trickles back into the traditional organisations, those that have been around for a longer period of time. These organisations cannot make the entire shift in one go, but these tools are going to come in by the back door anyway, so they need a strategy to deal with them, such as a combination of process and technical controls around the perimeter. And because we are all using online kanban boards, video conferencing and messaging tools, the key as an enterprise is not to say no, but to embrace them, rationalise them, then figure out how to support them, because they are coming regardless.

If you have a lot of monolithic applications when migrating to the cloud, how do you deal with the containerisation issues that arise, or do you transform them into microservices in the process?

Containers are just another type of technology. I think the key is that it doesn’t matter whether you are deploying to a VM or to a container; what you need is a modern software development life cycle. You need pipelines: you need to be able to write code, push it forward, get it checked and peer reviewed, all the way through to testing, with all of those steps automated. It doesn’t matter if it is one large app or hundreds of microservices, as long as it is pipeline capable.
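To make the “pipeline capable” point concrete, here is a minimal sketch of an automated pipeline runner in Python. The stage commands (flake8, pytest, a docker build, a deploy script) and the image tag are illustrative placeholders under stated assumptions, not a prescription for any particular CI product.

```python
# Minimal sketch of an automated build-and-deploy pipeline.
# Assumes a repository with standard lint/test/build commands; the commands
# below are placeholders, not a specific vendor's CI syntax.
import subprocess
import sys

STAGES = [
    ("lint",   ["python", "-m", "flake8", "."]),                  # static checks
    ("test",   ["python", "-m", "pytest", "-q"]),                 # automated tests
    ("build",  ["docker", "build", "-t", "myapp:latest", "."]),   # container image (placeholder tag)
    ("deploy", ["./deploy.sh", "staging"]),                       # hypothetical deploy script
]

def run_pipeline() -> None:
    """Run each stage in order; stop at the first failure so nothing
    unreviewed or untested reaches an environment."""
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"Pipeline failed at stage '{name}'")
    print("Pipeline complete: the path is the same whether the target is a VM or a container")

if __name__ == "__main__":
    run_pipeline()
```

The same stages apply whether the artefact at the end is one monolith or hundreds of microservices; only the build and deploy commands change.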

There is a great book by Kief Morris called Infrastructure as Code. You deliver your software and your infrastructure via code, so if things go wrong you can roll back to the last known good state. If you are in a situation where you have a monolithic code base, you need to think about what you are doing to your pipeline. If you are still doing manual deployment, don’t bother, because you have just brought on another expensive vendor to manage, and a third party at that. Get back to basics, get your pipeline sorted, get your software development sorted, then think about how to progress, because your pipelines will work whether you are on-premise or off-premise.
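As a rough illustration of the “last known good” idea, the sketch below assumes the infrastructure definition lives in version control as a JSON file; apply_infrastructure is a hypothetical stand-in for a real tool such as Terraform or Pulumi.

```python
# Sketch of rolling infrastructure back to the last known good state,
# assuming the infrastructure is described declaratively and versioned in git.
import json
import subprocess

def git_show(ref: str, path: str) -> dict:
    """Read the infrastructure definition as it existed at a given commit."""
    raw = subprocess.run(
        ["git", "show", f"{ref}:{path}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(raw)

def apply_infrastructure(spec: dict) -> None:
    # Hypothetical stand-in: in practice this step would be `terraform apply`,
    # `pulumi up`, or similar, driven from the pipeline rather than by hand.
    print(f"Applying {len(spec.get('servers', []))} server definitions")

# Normal deployment: apply whatever is at the tip of the main branch.
desired = git_show("main", "infra/servers.json")
apply_infrastructure(desired)

# Rollback: re-apply the previous commit, i.e. the last known good state.
last_known_good = git_show("main~1", "infra/servers.json")
apply_infrastructure(last_known_good)
```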

What are some considerations that determine what cloud providers you go for? Are there any rules of thumb that can guide our decisions?

It is always use-case driven. Are you looking for a database? Compute? Data analytics? There is a lot of research out there, but I am not going to stand up for one cloud service provider over another, as they all have their strengths and weaknesses. Many organisations tend to start with grid computing, because it is stateless and on demand. From that point, they dip their toes into the water and get used to the automation element, because DevOps and cloud are two sides of the same coin. If you don’t have a modern software development life cycle and have not adapted your infrastructure engineering, all you have done is pay for another vendor.

The advantages come from having atomic consistency and scalability, where you can deploy one server or a thousand based on a parameter rather than having to hire more build engineers for your servers. And if you are starting now, why would you not start with the cloud? Your email, analytics and build pipelines are all there. One of the most sensible things Microsoft did was to buy GitHub and its developer experience tools, because they want you in the ecosystem. So go back to your use case and think about the cost and what you are trying to achieve, but always think cloud-first.
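The “one server or a thousand from a parameter” point can be sketched with the AWS SDK for Python (boto3); the AMI ID, instance type and region below are placeholder values, not recommendations.

```python
# Sketch of parameter-driven scaling using boto3 (AWS SDK for Python).
# The AMI ID and instance type are placeholders.
import boto3

def launch_fleet(count: int, region: str = "ap-southeast-1") -> list[str]:
    """Launch `count` identical instances; the only thing that changes
    between 1 and 1,000 is the parameter, not the engineering effort."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=count,
        MaxCount=count,
    )
    return [i["InstanceId"] for i in response["Instances"]]

# launch_fleet(1) and launch_fleet(1000) are the same call with a different number.
```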

What factors should you consider when deciding whether to go to a multi-cloud approach?

At its simplest, it is practicality. With cloud, you can go for a specific SaaS-type solution, such as a finance platform or an expense platform, and that is just third-party processing. When it comes to one of the big IaaS providers like AWS, Google, or Azure, it takes time to onboard, just as it takes time to build a data centre. Your infrastructure engineering department changes to incorporate development engineers and infrastructure developers. There is a big shift in the kind of skills you need, and that takes time.

When considering multi-cloud, I would recommend getting one cloud service provider bedded in first. Do that really well and start to take advantage of the scale. Then you can start to look at multi-region, because it is all the same API call and all you are dealing with is data sovereignty or data replication issues, which are known quantities for you to deal with.
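A small sketch of the “same API call, different region” point, again using boto3 as an illustrative SDK; the bucket names are placeholders.

```python
# Sketch of multi-region deployment: the API call is identical, only the
# region parameter (and therefore where the data lives) changes.
import boto3

def create_regional_bucket(name: str, region: str) -> None:
    """Create an S3 bucket in a specific region."""
    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# Same code path, two jurisdictions; what differs is data sovereignty, not engineering.
create_regional_bucket("example-sg-data", "ap-southeast-1")
create_regional_bucket("example-uk-data", "eu-west-2")
```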

I think there is a bit of a myth about the ability to burst from one cloud service provider to another, and multi-cloud is still nascent in terms of switching workloads. What you should be looking for when it comes to multi-cloud is probably best of breed: different providers for different services, where you choose to allocate particular workloads based on the services each provider offers.

At the same time, it is essential to look at exit management strategies as part of your contingency planning. You need to have a path out of the relationship, just as when you use any other third party. If you remove some of the marketing, cloud is fundamentally third-party processing, outsourcing, and probably offshoring. So you need to think about what you are dealing with, what you need to do, and what contingency plans you have. Then you can look at it with less hype and more practicality.

When looking at the big IaaS providers, much of the engineering talent also tends to be tied to a particular platform. If you do go multi-cloud, how would your hiring strategy change, and how do you balance the demands of the different platforms?

Portability is extremely important as a principle, but you need to think about whether it is portability in principle or in actuality, and this is the same discussion people have been having with databases for years. Generally, we find people who spend time with Azure will tell you it is the best, people who spend time with AWS will tell you it is the best, and so on. A lot of it is familiarity, and you have to be practical about it when it comes to a multi-cloud estate. The more providers you have, the more it will cost you in terms of engineering, support, maintenance and general upkeep. There is a law of diminishing marginal returns, and you need to consider whether you have spread yourself too thinly if you have many cloud providers and too few engineers.

I think the cloud is becoming the norm. So when we look at graduates or developers coming out of university, if they enjoy developing, and most of them do, they will already have an AWS account or an Azure account. We do not need to train them on the basics, as they already know them. That changes the paradigm slightly, as your basic onboarding is already taken care of.

Let’s say you were on one cloud provider, but you really want some of the functions on another CSP. What should you consider when determining whether to make the leap?

The key to this is that you need a use case. Once you have a solid use case, you can start to think about what ‘good’ looks like, what your success criteria are, and how to measure expected results against actual results. Many times, people measure the outcome and then try to tweak it to meet the results that they want, but it all comes back to what you are trying to do at the start. Have you thought it through properly? Do you have a definition of ‘done’? And do you have the test cases to meet that? Once you have that, then you can have a sensible data-driven conversation.
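One way to keep expected results from being tweaked after the fact is to encode the success criteria as test cases up front. The thresholds and the measurement helpers in this sketch are hypothetical, standing in for whatever monitoring and billing data you actually have.

```python
# Sketch of turning success criteria into test cases fixed before the migration.
# Run with pytest; each criterion is a pass/fail check against an agreed threshold.
SUCCESS_CRITERIA = {
    "p95_latency_ms": 200,      # agreed up front as part of the definition of 'done'
    "monthly_cost_usd": 12000,
}

def measure_p95_latency_ms() -> float:
    # Hypothetical: in practice this would query your monitoring system.
    return 185.0

def measure_monthly_cost_usd() -> float:
    # Hypothetical: in practice this would come from the provider's billing API.
    return 11350.0

def test_latency_meets_target():
    assert measure_p95_latency_ms() <= SUCCESS_CRITERIA["p95_latency_ms"]

def test_cost_meets_target():
    assert measure_monthly_cost_usd() <= SUCCESS_CRITERIA["monthly_cost_usd"]
```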

Where it gets a bit tricky is if you have 30 different teams playing around at the same time, and they all have slightly different use cases. That is an age-old problem with enterprise architecture. How do you herd those cats in the right direction? Because you will always have outliers and people telling you why one particular shiny new tool is the best. A lot of it comes down to the need to balance practicality with cost. If it is a new thing, will it generate some revenue in the near term, or is it a moonshot? Some moonshots can be justified, but you need to think realistically about what you can do, what is sensible, and bring it back to benefit realisation.

Do you think core banking will ever go to the cloud, and how does regulation impact your choice of cloud?

Firstly, core banking has already gone to the cloud. If you look at digital banks in the UK, they are all born in the cloud, and your core banking accounts and service accounts are already there. It’s no longer a matter of when, but how many.

Going back to my earlier point, cloud without the hype is basically third-party data processing, outsourcing, and offshoring. So the question you should be considering is whether you can take data from Country A, move it to Country B, and have a back office in Country C with access to it. Generally the answer is yes, but it is not true everywhere, and there is a huge amount of regulatory fragmentation. Developing economies are still not very comfortable with the cloud, and there is an increasing number of data localisation requirements. For example, the Reserve Bank of India has regulations on payments, Indonesia has onshoring regulations, and China does not like anything to leave its borders. But these are more geopolitical issues, and you have to find a way to develop a solution around them, even though there is no one answer due to the different nuances.
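One practical way to handle that fragmentation is to encode residency rules as configuration that deployment tooling can check before placing data. The rules and regions in this sketch are placeholders for illustration, not statements of actual regulation.

```python
# Illustrative sketch: data-residency rules as configuration, checked before
# a workload or dataset is placed in a region. Rules shown are placeholders.
RESIDENCY_RULES = {
    # data origin -> regions where it may be stored or processed (hypothetical)
    "IN": {"ap-south-1"},                     # payments data kept onshore
    "ID": {"ap-southeast-3"},                 # onshoring requirement
    "CN": {"cn-north-1", "cn-northwest-1"},   # data stays within borders
    "SG": {"ap-southeast-1", "eu-west-2"},    # cross-border transfer permitted
}

def placement_allowed(data_origin: str, target_region: str) -> bool:
    """Return True if data originating in `data_origin` may be placed in `target_region`."""
    return target_region in RESIDENCY_RULES.get(data_origin, set())

print(placement_allowed("IN", "eu-west-2"))   # False under these placeholder rules
print(placement_allowed("SG", "eu-west-2"))   # True under these placeholder rules
```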

Looking at the distributed workplace and work-from-home policies that have been accelerated by the pandemic, how does the cloud impact that evolution of the future workplace?

It is a massive enabler. Now, I am sat at home livestreaming to you. I’m not in the office and I haven’t had to commute. Most knowledge workers are at home, and we are going to be for the foreseeable future, and the cloud has massively enabled this. From a user productivity and connectivity perspective, we are using video conferencing tools, emails, messaging, collaboration, online whiteboards, sticky notes, even wet-ink signature replacements. There is no need to go to the office anymore, and you can sign million-dollar invoices digitally. Two years ago, we thought this was a no-no. We needed to be in the office. There was a lack of trust, and a couple of non-believers in this space.

There will be a breach at some point. It is not if, but when. We see breaches happening all the time, but what kind of breach it is, how it is dealt with, and what the contingency plans are will be the more interesting questions. So you have to assess your use of productivity tools and how critical they are. But without the cloud, I do not think we could have pivoted to remote working the way we have without a huge amount of pain and a massive increase in costs.