Modernising infrastructure for mission-critical IT services

As IT and data workloads grow for organisations around the world, infrastructure has had to develop in tandem. This often means that layer upon layer of new technology and applications gets added to an organisation’s infrastructure, with little being taken away or deprioritised. These accumulating layers can lead management to lose sight of the digital core that is essential for mission-critical workloads.

At a recent roundtable organised by Jicara Media and hosted by Hewlett Packard Enterprise, Srinivasan Narayanan, Regional Solution Leader, Pointnext APAC, and Suresh Menon, General Manager, Mission Critical Solutions, explored the challenges and complexities of modernising infrastructure together with invited IT leaders from across different industries.

Defining the digital core

With the speed of digital transformation, many organisations are doing their best to work out how to tap into the potential of new technologies such as cloud, artificial intelligence, and machine learning. However, these developments should not change what the digital core and mission-critical services of the business are; if this core is not clear and up to date, organisations may be unable to deliver value to customers, and may even disappear. What counts as core differs by sector, but it is essentially whatever supports and executes the functions that keep the lights on for operations and keep services available to customers, along with other key functions such as disaster recovery.

A common challenge among the discussants, especially those in larger organisations, was the need to create alignment across different markets and business units when deploying new solutions. For example, Lee Kok Foong, Senior Vice President, Technology and Operations, from a major financial institution in Singapore, shared that because of the institution’s sheer size, different business units might each be using a different suite of solutions that met their own needs. Despite the large number of internal solutions, the institution still needed to decide which solutions best fit the business, and which workloads were mission-critical in each line of business.

Sourabh Chitrachar, Regional VP (Asia), IT Transformation and Strategy, at Liberty Insurance Private Limited shared a similar sentiment in a different context: the need for standardisation also applies when an organisation operates across countries and markets, with each country running its own systems and ticketing tools. This creates technical debt for the company, as more work must then be done to integrate the different systems and ensure that they speak the same language.

The ubiquity of infrastructure and increased expectations

Furthermore, the changes in work brought about by the rise of remote working have transformed the concept of infrastructure and people’s expectations of what their infrastructure can and should be doing.

Mohamed Saabir, Business Information Manager SESA, Business IT Management of AkzoNobel, shared that connectivity, as part of the digital core, is now seen as a basic part of day-to-day life in an organisation. “At the beginning of the pandemic, there was a ‘wow’ factor in letting everyone work from home efficiently,” shared Saabir. “As time passed, it became a very common thing. It is like when you go home and switch on the lights. You are not excited when they switch on anymore.” This is despite how much the infrastructure has had to be scaled up, and the stress involved in ensuring connectivity for the increased number of remote workers.

This increased connectivity brings its own challenges, as security management of devices and data becomes a key concern. “Having it is the easy part,” remarked the Associate Director of IT at a real estate giant. “The harder part is how to restrict it, and where we can restrict.” With employees completing work on a growing variety of devices, managing and policing data flows and perimeters imposes an additional hidden cost.

These security concerns apply not just to internal employees, but also to the partnerships an organisation forms with external parties and vendors. These can take the form of outsourced hosting and management of infrastructure, but also integrations with third parties and service providers in their industry. For example, the Head of IT at one of the largest real estate companies in the region described how the push towards smart buildings meant that their building management systems were increasingly integrated with other vendors’ systems. This improved the tenant experience, but also created more concerns about security.

Assessing organisational needs through key metrics

Making sense of the challenges, Narayanan explained that different companies would have different focus areas, and that it was important for them to know how to support their businesses accordingly. This can be done by applying two sets of metrics when assessing their businesses.

The first is a set of operational metrics to understand whether the digital core is supporting business needs. These typically cover connectivity, security, performance, scalability, and other dimensions that affect the day-to-day operations of the business. It is important for businesses to establish a baseline for each key metric, so that they can assess what is manageable and what needs to transform to meet changing needs.

The second set of metrics centres on the design and building of the architecture, and how the organisation manages its Day Zero and Day One, that is, the initial design and deployment phases. This includes the costs of operations, and the ability to adapt the design of the architecture to changing business requirements.

With these two sets of metrics in mind, it is then up to management to find an efficient way of tracking them, and of managing the baseline and any deviations from it.
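
To make the idea of a baseline and deviations concrete, here is a minimal sketch in Python. The metric names, targets, and tolerances are illustrative assumptions, not figures from the discussion:

```python
from dataclasses import dataclass

@dataclass
class MetricBaseline:
    """An agreed operating range for one operational metric."""
    name: str
    unit: str
    target: float
    tolerance: float  # acceptable deviation from target, in the same unit

    def check(self, observed: float) -> str:
        """Classify an observation as within baseline or as a deviation."""
        deviation = observed - self.target
        if abs(deviation) <= self.tolerance:
            return f"{self.name}: {observed}{self.unit} is within baseline"
        return f"{self.name}: {observed}{self.unit} deviates by {deviation:+.2f}{self.unit}"

# Illustrative baselines -- real targets would come from the business.
baselines = [
    MetricBaseline("availability", "%", target=99.95, tolerance=0.04),
    MetricBaseline("p99 latency", "ms", target=200.0, tolerance=50.0),
]

observed = {"availability": 99.80, "p99 latency": 180.0}
for b in baselines:
    print(b.check(observed[b.name]))
```

In practice, the targets themselves would be negotiated with the business and revisited as needs change.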

Stressors on traditional infrastructure systems

Building on the assessment metrics, Menon identified three key stressors on traditional systems, which can help companies decide whether their current infrastructure is sufficient.

First, the volume of data that companies have to deal with is growing tremendously, with data coming in from all the different business units and customer touchpoints.

Next, the velocity at which data is gathered means that companies must be able to extract value from their data in real time and solve business problems quickly.

Lastly, companies must take into account the economic aspect of their decisions. “We cannot plan infrastructure solutions for situations five years down the line,” Menon explained. “COVID-19 has shown that business dynamics can change within weeks, and even days.” Knowing the baseline metrics and where the stress might come from, companies are in a better position to make decisions about their infrastructure and digital core.

The group then applied this framework to the relevance of real-time solutions in their various industries, a class of applications that typically requires huge amounts of data collection and processing power.

Although use cases for real-time solutions were emerging across industries, Narayanan shared that the two main sectors deploying them were financial services, which needs real-time alerts for fraudulent transactions, and security and surveillance, where threat recognition must trigger quick action.
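
As a rough illustration of the fraud-alert use case, the toy sketch below flags transactions that deviate sharply from an account’s recent behaviour. The window size, threshold, and simulated data are assumptions for demonstration; a production system would use far richer features and models:

```python
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 50          # recent transactions remembered per account
MIN_HISTORY = 10     # observations needed before judging
THRESHOLD_SIGMA = 3  # flag amounts this many std-devs above the rolling mean

history = defaultdict(lambda: deque(maxlen=WINDOW))

def is_suspicious(account: str, amount: float) -> bool:
    """Flag a transaction far outside the account's recent spending pattern."""
    past = history[account]
    suspicious = False
    if len(past) >= MIN_HISTORY:
        mu, sigma = mean(past), stdev(past)
        suspicious = sigma > 0 and (amount - mu) / sigma > THRESHOLD_SIGMA
    past.append(amount)
    return suspicious

# Simulated stream: routine spending followed by an outlier.
stream = [("acct-1", 40.0 + (i % 5)) for i in range(20)] + [("acct-1", 5000.0)]
for account, amount in stream:
    if is_suspicious(account, amount):
        print(f"ALERT: {account} spent {amount}, far outside recent behaviour")
```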

Deciding on infrastructure deployment models

Having defined the digital core and how to assess it, the discussion then moved to the infrastructure deployment models that companies need to support such mission-critical workloads.

Narayanan shared that one perspective would be to look at the estate in three layers: the bottom layer holds the core mission-critical applications; the middle layer holds the co-existing applications that reside alongside them; and the top layer holds third-party integrations. Each layer can have a different deployment model, with the mission-critical applications being the ones most organisations prefer to keep closest to them, so that they can control, govern, and manage risk.
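
One way to make the layering tangible is as a simple placement map, sketched below in Python. The application names and deployment targets are hypothetical, assumed purely for illustration rather than taken from the roundtable:

```python
# A hypothetical placement map for the three layers described above.
placement = {
    "core (bottom)": {
        "apps": ["core-ledger", "payments-engine"],
        "deploy": "on-premises / private cloud, governed in-house",
    },
    "co-existing (middle)": {
        "apps": ["reporting", "internal-crm"],
        "deploy": "hybrid: on-premises with cloud burst",
    },
    "integrations (top)": {
        "apps": ["kyc-provider-api", "payment-gateway"],
        "deploy": "vendor-hosted, governed via contracts and APIs",
    },
}

for layer, detail in placement.items():
    print(f"{layer:<22} {', '.join(detail['apps'])} -> {detail['deploy']}")
```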

Once the layers are defined, most customers prefer an on-premises or hybrid deployment for their mission-critical applications. Although many assume that on-premises means owning all the risk, that risk can in fact be shared with partner organisations.

In a hybrid deployment, some applications connect back to the on-premises environment, while others connect to third-party providers. In the past, few would have considered hosting mission-critical workloads in a hybrid environment, but many now do so through multi-channel interfaces, with governance still resting primarily with in-house IT architecture and infrastructure teams. This lets them hand off some aspects to third-party applications whilst still governing and managing risk themselves.

Summarising the discussion, Narayanan shared that the key is to find the right mix, and to take the time to study and understand what the proper approach is. “We have to analyse what we deploy, where we deploy, and how we deploy,” he concluded. “This allows us to work together and get things done the correct way.”