In the past, customers bought infrastructure for key applications such as VMware, SAP HANA, or Oracle by buying servers from a vendor such as Lenovo, storage from a vendor such as Hitachi, and network switches from vendors such as Cisco or Juniper. All three would serve the same application, and if something broke, the customer had to deal with multiple vendors.
This led to multiple hand-offs between vendors, resulting in delayed problem identification and resolution – and the operational efficiency of a DIY environment was low, because each infrastructure component required its own management console, with no deployment automation when the infrastructure was onboarded.
The Need for Converged
The solution put forward for this problem was Converged, which simply meant that for a commonly used application like SAP, Oracle or VMware, you didn’t have to worry about putting the infrastructure together. A single vendor could bundle it together for you in one stock-keeping unit (SKU), with certified network, server, and storage components. If something broke, you could go to the vendor to sort it out for you. The management plane was unified to provide visibility across the inventory, health, and monitoring aspects of the entire stack. Automation capabilities were incorporated for initial deployment, as well as for performing frequent tasks on the infrastructure.
Many customers preferred to walk down this path: they dealt with a single vendor, deployed faster because the entire stack was pre-tested and validated, and gained better operational efficiency through unified management of the different infrastructure elements, all of which resulted in quicker resolution when things broke.
Birth of Hyper-converged Infrastructure
A desire to further reduce the number and complexity of vendors gave birth to hyper-convergence. Hyper-converged simply meant having the storage disks inside the servers, with software that would bind and manage those servers as a cluster. There would be no external storage.
The customer liked this even more, as they had fewer components to deal with. The entire storage life cycle was managed at the software layer, and they had resiliency against hardware failures. In some cases, even the underlying hardware life cycle could be managed by the software, either through plug-ins or natively.
Hyper-converged also brought flexibility: you could incrementally scale your compute and storage together by adding nodes in a modular fashion, thus avoiding the guesstimates that resulted in big-iron purchases up front.
Customers go for hyper-converged because it is easy to use, easy to manage, and easy to scale. A server provides compute power and, because the disks are inside it, storage performance and capacity. If more is needed, nodes are added, which makes it very easy to scale in a linear manner. It is truly software-defined, where the underlying hardware does not matter much. Customers like this.
If the application the customer is running scales linearly, so that compute and storage requirements grow together, it is a perfect workload for hyper-converged.
But hyperconvergence has some limitations.
If an application says, ‘I don’t need more storage capacity, just more compute power to serve an increasing number of user requests,’ hyper-converged cannot oblige, because every time you add a server you add both compute and storage. It is not a good fit for use cases where one needs to scale storage and compute independently. Though some vendors offer storage-only nodes, in my view this is more of a workaround.
Let’s look at the opposite scenario, and that is where I have seen it put a serious dent in the customer’s budget. When a high-investment application such as Oracle Database, which is licensed based on the number of CPUs a customer has, is running on hyper-converged and the customer only needs more storage, adding capacity also means adding CPU power. The Oracle software licence cost goes up with the number of CPUs, even though the extra compute was never required.
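To make the licensing effect concrete, here is a back-of-the-envelope calculation in Python. The node size, core count, and per-core licence price below are illustrative assumptions, not real list prices or any vendor's actual figures.

```python
# Illustrative comparison: adding 40 TB of capacity on hyper-converged
# vs. converged, for a CPU-licensed application such as Oracle Database.
# All figures below are assumed for illustration, not real list prices.

TB_PER_HCI_NODE = 20       # assumed usable storage per hyper-converged node
CORES_PER_HCI_NODE = 32    # assumed CPU cores that come with each node
LICENCE_PER_CORE = 10_000  # assumed per-core licence cost (USD)

extra_capacity_tb = 40     # capacity the customer actually needs

# Hyper-converged: capacity only comes with nodes, and nodes bring cores.
nodes_needed = -(-extra_capacity_tb // TB_PER_HCI_NODE)  # ceiling division
extra_cores = nodes_needed * CORES_PER_HCI_NODE
hci_licence_cost = extra_cores * LICENCE_PER_CORE

# Converged: external storage scales independently of compute, so a
# capacity-only expansion triggers no new cores and no new licences.
converged_licence_cost = 0

print(f"Nodes added: {nodes_needed}, unwanted cores: {extra_cores}")
print(f"Extra licence cost on hyper-converged: ${hci_licence_cost:,}")
print(f"Extra licence cost on converged:       ${converged_licence_cost:,}")
```

Under these assumed numbers, a capacity-only expansion on hyper-converged drags in 64 cores of unwanted licence spend, while the same expansion on converged adds none.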
With Converged, storage is external and one can add storage and compute independently of each other. For the Oracle Database, the customer can simply add storage capacity without beefing up CPU cores.
Converged also does well when latency is an important consideration, and environments requiring zero recovery-point objective (RPO) and recovery-time objective (RTO), such as core banking applications, prefer to utilise external storage systems.
And in some storage-intensive workloads like archive, converged scores well on data centre footprint as compared to hyperconverged. Even though converged infrastructure offers some unique benefits, it is not truly software-defined, and management is not as easy as hyper-converged.
Converged or Hyper-converged or Both?
There is clearly a huge adoption wave for hyper-converged, with spend on it growing between 40% and 70% year-on-year in Asian countries. Most customers I see today have both converged and hyper-converged environments running in separate silos. Depending on the workload that comes in, they will make an evaluation and deploy it on one or the other.
But the more islands one has, the weaker the utilisation. There is cost leakage simply from maintaining two separate environments. And sometimes workload behaviour changes: when the workload first arrived it was a good fit for hyper-converged, and maybe two years later its nature changed and it became a good fit for converged. How do you move that application from hyper-converged to converged?
Yet today, customers are more or less forced to run two separate environments, and to decide, whenever a workload comes in, where it should go. Therefore, the best answer to whether to choose hyper-converged or converged is ‘have both, integrated.’
Best of Hyper-converged and Converged, together in One Architecture
There is an option to seize the best of both worlds.
A composite architecture starts with a hyper-converged cluster and its network switches (IP and SAN), and externally connects a storage system to them. Robust software unifies the management and life cycle of all the solution elements, along with a rich set of ecosystem integrations.
If you are using VMware vSAN, you will find composite architecture very useful.
An administrator will have different storage pools available: one pool comes from external storage, another from vSAN. Depending on whether hyper-converged or converged is required, you can assign the storage pool from the respective storage area. In the VMware environment, you could have 100 VMs, for example, some of which may be getting storage from the external SAN because they are running a workload optimised for converged, while others may be getting storage from internal disks because they are a good fit for hyper-converged.
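The placement decision described above can be sketched as a small routing function. The pool names and workload traits here are hypothetical, and in a real vSphere environment this mapping would be expressed through storage policies rather than code; this is only a sketch of the logic.

```python
# Toy sketch of the placement logic an administrator applies in a
# composite architecture. Pool names and workload traits are
# hypothetical; a real environment would use vSphere storage policies.

POOLS = {
    "vsan": "internal disks (hyper-converged pool)",
    "external_san": "external storage system (converged pool)",
}

def choose_pool(workload: dict) -> str:
    """Pick a storage pool based on simple workload traits."""
    # Workloads that scale storage independently of compute, or that
    # need zero RPO/RTO, go to the external SAN (converged pool).
    if workload.get("scales_storage_independently") or workload.get("zero_rpo_rto"):
        return "external_san"
    # Workloads whose compute and storage grow together fit vSAN.
    return "vsan"

vms = {
    "oracle-db":    {"scales_storage_independently": True},
    "core-banking": {"zero_rpo_rto": True},
    "vdi-pool":     {"scales_storage_independently": False},
}

placement = {name: choose_pool(traits) for name, traits in vms.items()}
print(placement)
```

The point is that both pools sit behind one management plane, so the decision is per-VM policy rather than a choice between two separate environments.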
Composite architecture offers the best of both worlds in one architecture, whilst avoiding silos and continuing to provide a common interface for management. vCenter remains the software used to manage internal and external storage, which many customers appreciate.
Interestingly enough, despite composite architecture being available today, many organisations are not yet aware of the option, with many vendors either not offering this capability or not actively recommending it. Composite architecture enables some very compelling use cases, and customers no longer have to choose between converged and hyper-converged.