December 08, 2015

Convergence and the Software-Defined Data Center

Scott D. Lowe - ActualTech Media

This post is part of a series based on the forthcoming book, Building a Modern Data Center, written by Scott D. Lowe, David M. Davis and James Green of ActualTech Media in partnership with Atlantis Computing.

In a previous post, we saw a history of data center storage and ended by discussing the fall of the monolithic storage array. In this post, we’ll take a deeper look at the data center trends and architectures that are enabling that transition away from monolithic storage arrays. First, we must step back and look at how we got here.

The Emergence of Convergence

As IT's challenges have grown in proportion to its ever-expanding scope of responsibility, IT decision makers have often looked to outsource parts of their operation. A notable trend in data center "outsourcing" of sorts is now referred to as convergence. Put simply, convergence is the idea of multiple pieces of the infrastructure being assembled prior to delivery to the customer. Convergence saves time and frustration during the deployment phase and decreases time to value after procurement.

An example of a common form of convergence might look like this: a rack is delivered to the data center already containing a storage array, a blade chassis populated with blades, and a few top-of-rack switches. Everything is cabled up, and all the configuration of the switching and storage has been done prior to delivery. At the moment the converged stack is delivered, the data center team can roll into place, deliver power and upstream network connectivity, and the pod will be up and running. This model of growing the infrastructure in a data center is substantially faster than the traditional model of having parts delivered, assembling them, hiring consultants, troubleshooting, and so on.

Speaking of troubleshooting, there’s another important facet to this approach: the pod that comes pre-built is based on a tested and validated reference architecture. This means that the customer doesn’t need to tinker with exactly which configuration of the parts available will work for them; that design work has already been done. Also, when the pod is built at the factory, the technician building it actually makes sure that the connections are good and the infrastructure is highly available and performing as designed.

The value in convergence comes not only from the fact that the solution comes pre-assembled, but also from the fact that it includes all the pieces necessary. Half the challenge in traditional piecemeal solution building is getting all the right parts and ensuring interoperability. Convergence guarantees that with the purchase of a certain SKU, all the components contained within it will be compatible with one another, and all the necessary parts will be included.

Convergence helps the data center scale faster. But the next evolution of scale and flexibility is being created by leveraging software to define data center resources rather than hardware.

The Emergence of SDDC

As the data center continues to evolve, there's an emerging need for flexibility, agility, and control. With "web scale" come challenges that aren't found in legacy or smaller infrastructures, and those challenges require new ways of approaching the data center. The current approach to addressing these issues is the "software-defined" approach, which refers to the idea of abstracting a physical data center resource from the underlying hardware and managing it with software. An example most IT professionals would be familiar with is the virtualization of compute resources. Rather than letting the physical server be the container for data center systems, compute resources are now provided and manipulated in software; this is the new normal. The ability to create a new "server" with a few clicks or migrate a running workload between physical servers is the essence of the software-defined approach.
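To make that concrete, here is a minimal sketch, assuming a purely hypothetical SDDC management endpoint and payload format (none of this reflects a specific vendor's API), of what "creating a server" looks like when it is just a software request:

```python
# Minimal sketch of the software-defined approach: a "server" is requested by
# describing it to a management API rather than racking hardware. The endpoint,
# token, and payload fields are hypothetical placeholders, not a real product API.
import requests

SDDC_API = "https://sddc.example.local/api/v1"  # hypothetical management endpoint
HEADERS = {"Authorization": "Bearer <token>"}   # hypothetical auth token

vm_spec = {
    "name": "web-01",
    "cpu_count": 4,
    "memory_gb": 16,
    "network": "prod-vlan-120",
    "storage_policy": "tier-1-replicated",
}

# Ask the platform for a new virtual machine; it decides which physical host runs it.
response = requests.post(f"{SDDC_API}/virtual-machines", json=vm_spec, headers=HEADERS)
response.raise_for_status()
print("Provisioned VM:", response.json().get("id"))
```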

The software-defined approach took hold with compute, but it is now starting to encompass all areas of the data center, which has led to the term software-defined data center (SDDC). The SDDC isn't any one thing specifically, but rather a way of describing a data center where as many pieces as possible are abstracted into software. The SDDC is characterized by automation, orchestration, and abstraction of resources into software and code. Code executes more consistently than humans do, which means that, compared to a legacy data center, the SDDC is more secure, more agile, and able to move more rapidly. The result of abstracting physical resources across the data center is that, all of a sudden, the hardware is substantially less important to the big picture.

Commoditization of Hardware

Historically, computing has been enhanced by specialized hardware built to serve a specific purpose. Application-specific integrated circuits (ASICs) are developed, as the name suggests, to serve one specific purpose; in other words, they have one primary application. While this model of computing can lead to increased performance, lower latency, or improvements in any number of desirable metrics compared to commodity hardware, it also comes with substantial costs that must be weighed. Some notable costs of ASIC-based hardware are increased manufacturing cost, dependence on specific manufacturers, inability to repurpose hardware for dissimilar projects, and incompatibility across systems. Which is actually better: ASIC-based or commodity hardware?

Examining the cost is more of a business exercise than it is a mathematical one. The capital cost of custom hardware is generally higher; that piece is simple. But what is the cost (or risk) to an organization of becoming tied to the one particular vendor that makes the custom silicon? What if the manufacturer goes out of business? What if there's a blizzard and the parts depot can't get a replacement delivered for six days? If it were commodity hardware, it could be supplied by a different vendor that is closer or not affected by the severe weather. Commodity hardware is inexpensive and widely available, both of which are significant advantages to an IT organization.

How does this relate to the software-defined data center? Well, because the SDDC’s goal is to abstract as much physical function into software as possible, the physical equipment becomes less important. This means that platforms which would previously have required special hardware can now be emulated or replaced with software and run on commodity hardware. We'll discuss some more specific examples of this later.

Commoditization allows for standardization. When many players in the market make products that serve the same purpose, a need often arises to create standards for everyone to follow so that all the products are interoperable. This is a win-win situation: the customer experience is good, and the manufacturers learn from each other and develop better products. In the IT industry, standards are almost always a good thing. ASIC-based computing isn't devoid of standards, as electrical engineering in general has many, many standards. However, when only one vendor is creating a product, it has free rein to do as it pleases with the product.

All of this is not to say that there isn’t a case for specialized hardware. There are times when it makes sense based on the application to use custom silicon created just for that task. Hardware being a commodity also doesn’t mean certain vendors can’t set themselves apart. One vendor may beat another to integrating a new technology like NVMe, and that might make them the best choice for a project. But again, something like NVMe is a standard; it is meant to replace proprietary manufacturer-specific ways of doing things.

When hardware resources are abstracted into software, allowing commodity hardware to run the workload, IT is afforded more flexibility, more choice, longer hardware life, and likely a lower cost as well. This abstraction opens the door to the architecture of the modern data center: hyperconvergence.

Sign up for your free eBook today: Building a Modern Data Center: Principles & Strategies of Design

What Is Hyperconverged Infrastructure?

Hyperconvergence is an evolution in the data center that's only just beginning to take hold. The past couple of years have seen hyperconverged solutions developing at an incredibly rapid pace and gaining ground in data centers of all sizes. Hyperconvergence is a data center architecture, not any one specific product. At its core, hyperconvergence is a quest for simplicity and efficiency. Every vendor with a hyperconverged platform approaches this slightly differently, but the end goal is always the same: combine resources and platforms that are currently disparate, wrap a management layer around the resulting system, and make it simple. Simplicity is, perhaps, the most sought-after factor in systems going into data centers today.

A common misconception is that hyperconvergence simply means servers and storage in the same box. Pooling locally attached storage is a good example of the power of software-defined storage (SDS), which is itself a part of hyperconvergence, but it is not the whole picture. Hyperconverged infrastructure aims to bring as many platforms as possible under one umbrella, and storage is just one of them. This generally includes compute, networking, storage, and management. Hyperconvergence encompasses a good portion of what makes up the SDDC.

One Platform, Many Services

Convergence, discussed above, took many platforms and made them one combined solution. Hyperconvergence is a further iteration of this mindset in which the manufacturer turns many platforms into one single platform. Owning the whole stack allows the hyperconvergence vendor to make components of the platform aware of each other and interoperable in a way that is just not possible when two different platforms are integrated. For instance, the workload optimization engine might be aware of network congestion; this allows more intelligent decisions to be made on behalf of the administrator. As IT organizations seek to turn over more control to automation by way of software, the ability to make intelligent decisions is critical, and tighter integration with other parts of the infrastructure makes this possible.
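As a rough illustration of that kind of cross-layer awareness (the host attributes and weights below are invented for the sketch, not taken from any product), a placement engine that can see both compute headroom and network congestion might choose a host like this:

```python
# Illustrative only: pick a host for a new workload using both CPU headroom and
# observed network congestion. Attributes and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free_pct: float        # percentage of CPU currently unused
    net_congestion_pct: float  # percentage of NIC bandwidth currently in use

def placement_score(host: Host) -> float:
    # Favor free CPU, penalize congested links; the weights are arbitrary here.
    return 0.6 * host.cpu_free_pct + 0.4 * (100 - host.net_congestion_pct)

hosts = [
    Host("node-1", cpu_free_pct=70, net_congestion_pct=85),
    Host("node-2", cpu_free_pct=55, net_congestion_pct=20),
]

best = max(hosts, key=placement_score)
print(f"Place workload on {best.name}")  # node-2 wins despite having less free CPU
```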

What characterizes hyperconvergence is the building-block approach to scale. Each of the infrastructure components and services that the hyperconverged platform offers is broken up and distributed into nodes or blocks so that the entire infrastructure can be scaled simply by adding a node. Each node contains compute, storage, and networking: the essential physical components of the data center. From there, the hyperconvergence platform pools and abstracts all of those resources so that they can be manipulated from the management layer.
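A toy model of that building-block idea (the node sizes below are made up for illustration) shows how the aggregate pool simply grows each time a node is added:

```python
# Toy model of building-block scaling: every node brings compute, storage, and
# networking, and the platform presents the cluster as one aggregate pool.
# Node sizes are invented for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int
    ram_gb: int
    storage_tb: float

cluster: list[Node] = []

def add_node(node: Node) -> None:
    """Scaling the infrastructure is just appending another block."""
    cluster.append(node)

def pooled_capacity() -> dict:
    return {
        "cpu_cores": sum(n.cpu_cores for n in cluster),
        "ram_gb": sum(n.ram_gb for n in cluster),
        "storage_tb": sum(n.storage_tb for n in cluster),
    }

add_node(Node(cpu_cores=32, ram_gb=256, storage_tb=10.0))
add_node(Node(cpu_cores=32, ram_gb=256, storage_tb=10.0))
print(pooled_capacity())  # {'cpu_cores': 64, 'ram_gb': 512, 'storage_tb': 20.0}
```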

Simplicity

Makers of hyperconverged systems place an extreme amount of focus on making the platform simple to manage. If managing compute, storage, and networking was complicated when they were separate, imagine trying to manage them at the same level of complexity when they're all in one system. It would be a challenge, to say the least. This is why the most effective hyperconvergence platforms take great care to mask back-end complexity with a clean, intuitive user interface or management plugin for the administrator. By nature, hyperconvergence is actually more complex than traditional architecture in many ways. The key difference between the two is the care taken to ensure that the administrator does not have to deal with that complexity. To that end, a task like adding physical resources to the infrastructure is generally as simple as sliding the node into place in the chassis and notifying the management system that it's there. Discovery will commence, and intelligence built into the system will configure the node and integrate it with the existing environment. Because the whole platform works in tandem, other tasks, like protecting a workload, are as simple as right-clicking and telling the management interface to protect it. The platform has the intelligence to go and make the necessary changes to carry out the request.
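The "right-click to protect" experience boils down to exposing one high-level request and letting the platform translate it into the underlying snapshot, replication, and scheduling work. A sketch of that idea, using an entirely hypothetical management API, might look like this:

```python
# Sketch of masking back-end complexity behind one high-level operation.
# The client class, endpoint, and fields are hypothetical, not a vendor API.
import requests

class ManagementClient:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {token}"}

    def protect_workload(self, vm_name: str, rpo_minutes: int = 60) -> dict:
        """One call for the administrator; the platform works out replication
        targets, snapshot schedules, and bandwidth throttling behind the scenes."""
        payload = {"vm": vm_name, "rpo_minutes": rpo_minutes}
        resp = requests.post(f"{self.base_url}/protection-policies",
                             json=payload, headers=self.headers)
        resp.raise_for_status()
        return resp.json()

client = ManagementClient("https://hci.example.local/api", token="<token>")
client.protect_workload("web-01", rpo_minutes=15)
```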

Software or Hardware?

Because hyperconvergence involves both software and the physical resources required to power that software, it's often confusing to administrators who are first learning about it. Is hyperconvergence a special piece of hardware, or is it software that makes all the pieces work together? The short answer is that it's both. Depending on the hyperconvergence vendor, the platform may exist entirely in software and run on any sort of commodity hardware, or the platform may use specialized hardware to provide the best reliability or performance. Neither is necessarily better; it's just important to know the tradeoffs that come with each option.

If special hardware is included, it dramatically limits your choice with regard to what equipment can be used to run the platform, but it likely increases stability, performance, and capacity per node (all else being equal). The opposite approach, leveraging a virtual storage appliance (VSA) and no custom hardware, opens up the solution to a wide variety of hardware possibilities. While flexible, the downside of this approach is that it consumes resources from the hypervisor that would otherwise have served virtual machine workloads in a traditional design. This can add up to a considerable amount of overhead. Which direction ends up being the best choice depends on myriad variables and is unique to each environment.
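To put rough numbers on that overhead (the figures below are assumptions for illustration, not measurements from any particular product), consider a VSA that reserves 8 vCPUs and 32 GB of RAM on a node with 32 cores and 256 GB of RAM:

```python
# Back-of-the-envelope VSA overhead per node; all figures are assumed for
# illustration, not vendor specifications.
host_cores, host_ram_gb = 32, 256   # physical resources in one node
vsa_vcpus, vsa_ram_gb = 8, 32       # resources reserved by the storage VSA

cpu_overhead_pct = vsa_vcpus / host_cores * 100
ram_overhead_pct = vsa_ram_gb / host_ram_gb * 100
print(f"CPU overhead: {cpu_overhead_pct:.1f}%  RAM overhead: {ram_overhead_pct:.1f}%")
# CPU overhead: 25.0%  RAM overhead: 12.5%
```

In that admittedly simplified case, a quarter of the node's cores are spoken for before a single virtual machine is powered on, which is exactly the kind of tradeoff to weigh against the hardware flexibility a VSA-based design provides.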

The Modern IT Business

IT exists to serve the needs of the business. As such, defining business requirements and subsequently meeting them is critical for an IT organization’s success. In the next post in this series, we’ll take a look at exactly what some of those requirements are, and how they’re being addressed today. Don't forget to visit the Building a Modern Data Center eBook site now and sign up for your free copy.

SCOTT D. LOWE Contributor

Scott is Co-Founder of ActualTech Media and serves as Senior Content Editor and Strategist. Scott is an enterprise IT veteran with close to twenty years of experience in senior and CIO roles across multiple large organizations. A 2015 VMware vExpert Award winner, Scott is also a micro-analyst for Wikibon and an InformationWeek Analytics contributor.