Disaggregated Hyperconverged Infrastructure – Scaling Compute and Storage Independently

By Bill Jones, Enterprise Architect

Way back in April of 2016, we took a look at hyperconverged infrastructures in our blog post Server-side Flash & Hyperconvergence. Since then, Hyperconverged Infrastructure (HCI) has undergone significant changes. The change I’m most excited about is the launch of Disaggregated Hyperconverged Infrastructure. In this blog post, I will review HCI and then dive into Disaggregated HCI.

What is Hyperconverged Infrastructure?

Traditional virtualization environments are built with physical servers that share access to a separate and distinct centralized storage solution. With the growth of software-defined storage (SDS), virtual environments can combine servers and storage together, which can ease administration and simplify deployments. When the storage is integrally managed within the virtualization solution, and when the servers and storage are sold together as a combined solution, we call this a Hyperconverged Infrastructure (HCI).

HCI is more than just using SDS with virtual workloads. It is more than overlaying SDS across existing hypervisors. HCI includes features and tools allowing administrators to manage the compute and storage resources more seamlessly (and often without administrators needing awareness of the underlying storage). For clients with a dedicated virtualization admin team, HCI can allow the virtualization team to manage their own storage. For clients where virtualization is managed by admins who “wear many hats,” HCI reduces the number of components which administrators need to manage.

Hyperconverged Infrastructure Challenges

As sci-fi author Robert A. Heinlein put it, “There ain’t no such thing as a free lunch” (a.k.a. TANSTAAFL). Hyperconverged Infrastructure (HCI) does solve several challenges, but it is not a panacea. The challenges listed below are the ones I most frequently encounter when working with clients, so the order in which they appear may differ from other people’s experiences.

The biggest technology hurdle is the “virtualization only” design of most HCI solutions. Since the servers and storage are designed to work together to support virtualization, a “virtualization only” approach makes a lot of sense. However, when centralized storage is also needed for bare-metal servers, clients may still need to maintain both a storage solution and the HCI solution – which diminishes the simplification that an integrated solution is designed to deliver.

The second largest technology hurdle is how an HCI solution is scaled. Part of the beauty of HCI is the simplicity of scaling the virtual environment using cookie-cutter components. Running out of RAM or CPU? Add a node or two. Running out of storage? Add a node or two. Growth is simple! Unfortunately, environments which are running low on RAM or CPU aren’t necessarily also low on storage, and vice versa. Being required to add more of both resources when the environment is only low on one can be costly and frustrating, because your company is investing in unused resources.
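To make the coupled-scaling cost concrete, here is a minimal sketch of the node math. The node specs and pricing are purely hypothetical figures chosen for illustration — they do not come from any vendor.

```python
from math import ceil

# Hypothetical HCI node that bundles compute and storage together.
# All figures below are illustrative assumptions, not vendor specs.
NODE_RAM_GB = 512        # assumed RAM per node
NODE_STORAGE_TB = 20     # assumed usable storage per node
NODE_COST_USD = 40_000   # assumed cost per node

def nodes_needed(ram_gb_required: int, storage_tb_required: int) -> int:
    """Nodes required when RAM and storage can only scale together:
    the larger of the two independent requirements wins."""
    by_ram = ceil(ram_gb_required / NODE_RAM_GB)
    by_storage = ceil(storage_tb_required / NODE_STORAGE_TB)
    return max(by_ram, by_storage)

# A RAM-hungry workload that is light on storage:
n = nodes_needed(ram_gb_required=4096, storage_tb_required=30)
print(f"nodes: {n}, cost: ${n * NODE_COST_USD:,}")          # 8 nodes, $320,000
print(f"storage bought: {n * NODE_STORAGE_TB} TB (needed: 30 TB)")  # 160 TB
```

In this (made-up) scenario, the RAM requirement forces eight nodes, so the environment ends up with 160 TB of storage to satisfy a 30 TB need — the “investing in unused resources” problem in a nutshell.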

The third hurdle is encountered when clients schedule hardware refreshes. Virtualization makes it easy to upgrade servers and storage independently. So, server and storage refreshes are often scheduled for different quarters. With HCI, the server and storage need to be refreshed at the same time. Bringing these refreshes into the same quarter can often introduce budget timing and implementation challenges, even when the value gains to the organization are compelling. In addition, many HCI solutions have a minimum requirement of 10GbE networking, which may bring a network refresh/upgrade cycle into the equation.

As a result, HCI is most often implemented for specific use cases. For example, some HCI products use all-flash storage, which makes them especially appealing for virtual desktop infrastructure (VDI) projects. Other HCI offerings incorporate replication or backup features, which can make them ideal for remote office/branch office (ROBO) infrastructure. Since both VDI and ROBO projects often require their own dedicated server and storage resources, the technology and budgetary hurdles are easier to mitigate. In particular, when HCI is used for ROBO projects, the ability to manage a consistent environment across all remote locations can be very valuable to an organization.

Once companies experience the benefits of HCI, many choose to use HCI as their virtualization standard. Put another way, when HCI is the right fit for an organization, it is a very, very good fit.

What is Disaggregated Hyperconverged Infrastructure?

Hyperconverged Infrastructure (HCI) brought storage and compute resources together both physically and administratively. With Disaggregated HCI, the compute and storage are managed together and are scaled independently. In addition, some Disaggregated HCI solutions allow non-virtualization workloads to access the storage. These two innovations address both technology challenges I discussed above.

Below is an overview of three current offerings in the Disaggregated HCI space.

  • HPE Nimble Storage dHCI leverages HPE Nimble Storage and HPE ProLiant servers. “dHCI” stands for disaggregated Hyperconverged Infrastructure.
  • NetApp HCI uses multi-node chassis to offer a Disaggregated HCI solution in a minimum amount of rack space. The storage is built on NetApp’s SolidFire all-flash storage.
  • Datrium DVX uses in-server flash and an external storage resource (the Datrium NetShelf). Datrium pioneered the idea of Disaggregated HCI solutions and coined the term “Open Convergence.”

Summary and Conclusion

Disaggregated Hyperconverged Infrastructure addresses the major technology challenges of traditional HCI. I predict the Disaggregated HCI space will become more crowded in the coming years.

If you have further questions about HCI, Disaggregated HCI, or questions about virtualization in general, please contact Dasher Technologies at [email protected].

This post is powered by Mix Digital Marketing