
November 10, 2015

Data Center Evolution: The Rise & Fall of the Monolithic Storage Array

Scott D. Lowe - ActualTech Media


This post is part of a series based on the forthcoming book, ​Building a Modern Data Center​, written by Scott D. Lowe, David M. Davis and James Green of ActualTech Media in partnership with Atlantis Computing.

In our last post, we looked at the challenges and opportunities in front of IT organizations as they move into 2016. We kicked off that discussion by considering the organizational shifts happening in IT, as well as the new cost models for procuring and operating IT in the data center and how this changes what enterprises expect from their data center vendors.

Today, we’re going to begin taking a deeper look at how the data center is evolving to meet the needs of the business in a more responsive way. With a clear understanding of how these essential technologies have developed and changed, IT can more easily grasp and anticipate the impact of the changes developing through 2016. To start, let’s look at the rise and fall of the ‘monolithic storage array’.

The Rise of the Monolithic Storage Array

In the days of direct-attached storage, inefficiency at scale had two components. The first was that servers commonly used only a fraction of the computing power available to them; at the time, it was entirely normal to see a server running at 10% CPU utilization, wasting massive amounts of resources. The second was that data storage suffered from the same utilization problem. With the many islands of storage created by placing direct-attached storage with every server came a great deal of inefficiency, because each island had to leave room for growth.

As an example, imagine that an enterprise had 800 servers in its data center. If each of those servers had 60 GB of unused storage capacity to allow for growth, that would mean 48 terabytes of unused capacity across the organization. Viewed through the lens of today’s data center, paying for 48 terabytes of capacity to just sit on the shelf seems absurd, but until this problem could be solved, that was the accepted design.
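
To put rough numbers on that, here is a quick back-of-the-envelope sketch of the arithmetic, using the hypothetical 800-server / 60 GB figures from the example above rather than measurements from any real environment:

```python
# Back-of-the-envelope view of stranded capacity in a DAS-only design.
# The 800-server / 60 GB figures are the hypothetical numbers from the
# example above, not data from a real environment.

servers = 800
unused_gb_per_server = 60          # growth headroom provisioned but sitting idle

stranded_gb = servers * unused_gb_per_server
stranded_tb = stranded_gb / 1000   # decimal terabytes, as capacity is marketed

print(f"Stranded capacity: {stranded_gb:,} GB (~{stranded_tb:.0f} TB)")
# -> Stranded capacity: 48,000 GB (~48 TB)
```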

This problem was relatively easily solved, however. Rather than provision direct-attached storage for each server, disks were pooled and made accessible via the network. This allowed many devices to draw from one capacity pool and increase utilization across the enterprise dramatically. It also decreased the management overhead of storage systems, because it meant that rather than managing 800 storage silos, perhaps there were only 5 or 10.

These arrays of disks (“storage arrays”) were connected on a network segregated from the local area network. This network is referred to as a Storage Area Network, or SAN, as shown in Figure 2. The SAN used a protocol better suited to storage networking: Fibre Channel. Its "lossless," high-speed nature made it a good fit, because the purpose of a SAN is to carry and store data, and lost transmissions are unacceptable. This is why something like TCP/IP networking was not used for the first SANs.

Since the storage area network provided access to the storage array, and likely because of a misunderstanding of the whole architecture by some administrators, the term “SAN” came to be used colloquially to mean a storage array providing block-level storage. “Data is written to the SAN…” would refer to the storage array ingesting data, rather than the storage network providing transit services for the data. This incorrect usage of the term “SAN” continues to this day, and is so common that it’s accepted nomenclature by all but the most academic storage administrators.

As the industry matured and more organizations adopted a shared storage model, the value of the architecture continued to increase. Manufacturers added features to the management platforms of the storage arrays to allow operations like storage snapshots, replication, and data reduction. Again, rather than 800 places to manage file system snapshots, administrators could make use of volume-level snapshots from just a few (or even one) management console. This created new possibilities for backup and recovery solutions to complete backups faster and more efficiently. Storage systems also contained mechanisms for replicating data from one storage array to another. This meant that a second copy of the data could be kept up-to-date in a safe location, as opposed to backing up and restoring data all the time.

Perhaps one of the greatest efficiencies achieved by adopting the shared storage model was the potential for global deduplication of data across the enterprise. Even if deduplication had been available in the Direct Attached Storage (DAS) model, deduplicating 800 silos of data individually would not result in high consolidation ratios. However, deduplicating data across all 800 largely similar systems would result in much higher consolidation.
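
To see why pooling helps, consider a toy illustration: if the same operating system blocks are repeated across hundreds of servers, deduplicating each silo on its own still stores one copy per silo, while a global pool stores one copy in total. The block counts below are invented purely to show the ratio effect; they are not figures from the book.

```python
# Toy illustration of per-silo vs. global deduplication.
# Assume every server stores the same 10,000 OS/application blocks plus
# 2,000 blocks of its own unique data (figures invented for illustration).

servers = 800
shared_blocks = 10_000      # identical blocks present on every server
unique_blocks = 2_000       # data unique to each server

logical_blocks = servers * (shared_blocks + unique_blocks)

# Per-silo dedup: each silo still keeps its own copy of the shared blocks
# (in this toy, nothing repeats *within* a single silo).
per_silo_physical = servers * (shared_blocks + unique_blocks)

# Global dedup: the shared blocks are stored once for the whole pool.
global_physical = shared_blocks + servers * unique_blocks

print(f"Per-silo dedup ratio: {logical_blocks / per_silo_physical:.1f}:1")
print(f"Global dedup ratio:   {logical_blocks / global_physical:.1f}:1")
# -> Per-silo dedup ratio: 1.0:1
# -> Global dedup ratio:   6.0:1
```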

By the mid-2000s, the average data center benefited from the efficiency of sharing storage across servers and applications, combined with the added efficiency of being able to globally deduplicate that data. Performance of the shared storage systems grew as manufacturers continued to improve the networking protocols, the physical disk media, and the file systems that governed the storage array. Due to its size and scope in many organizations, managing the storage network and the storage arrays became a job for entire teams of people, each with highly specialized skill sets.

Using shared storage allowed more agility and flexibility with servers than had been possible with direct-attached storage. During this time, many organizations chose to provision the operating system disk for a server on the storage array and use a “boot from SAN” model. The benefit of deploying operating systems this way was this: if one physical server failed, a new server could replace it, be mapped to the same boot volume, and the same operating system instance and applications could be back up and running in no time. Blade servers made it even easier to replace failed servers.

As effective as all of this consolidation was at driving down costs in the data center, there was still the problem of compute resources. CPU and memory resources were still generally configured far above the actual utilization of the application the server was built for. Eliminating this problem was the second frontier in solving inefficiency in the modern data center.

Sign up for your free eBook today: Building a Modern Data Center: Principles & Strategies of Design

The No-Spin Zone: The Move from Disk to Flash

Magnetic storage media has been the dominant choice for data storage for the majority of data center history. Spinning disks have served as primary storage, and tape-based systems have served higher-capacity, longer-term storage needs. However, the performance of spinning disk eventually leveled off due to physics-induced limitations. The speed at which data on a spinning disk can be accessed depends on a few factors, but the biggest limiter is the rotational speed of the disk platter. Eventually, the platter can't be spun any faster without damaging it.

Based on what the storage industry has produced in the past few years, it would appear that 15,000 rotations per minute (15K RPM) is the fastest speed that manufacturers have been able to sustain while keeping the disk economically viable to the customer. A 15K SAS drive is a high-performing disk, to be sure. However, the number of I/O operations that any spinning disk can perform in a second isn't changing much. The fastest, most efficient spinning disks deliver fewer than 200 random I/O operations per second (IOPS). While this is more than adequate for a use case like a PC, it leaves something to be desired when serving I/O to a dense, mixed-workload virtual server or virtual desktop environment. The numbers get even trickier when RAID write penalties are factored in; depending on the RAID configuration, a number of disks may be needed to achieve 200 IOPS rather than just one. For example, on a system running RAID 6, every front-end write incurs six back-end I/O operations, so a write-heavy workload may need several disks just to deliver what a single disk could serve for reads, as the sketch below illustrates.
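
Here is a rough sketch of that math. The 180 IOPS per disk and the write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) are the commonly cited rules of thumb, not vendor specifications:

```python
# Rough front-end IOPS estimate for a group of spinning disks behind RAID.
# Each front-end write costs 'penalty' back-end I/Os (RAID 5 ~ 4, RAID 6 ~ 6).

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def frontend_iops(disks, per_disk_iops, write_fraction, raid_level):
    """Approximate front-end IOPS a disk group can sustain for a given mix."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    backend_iops = disks * per_disk_iops
    # Each front-end I/O costs 1 back-end I/O if it's a read,
    # or 'penalty' back-end I/Os if it's a write.
    cost_per_frontend_io = (1 - write_fraction) + write_fraction * penalty
    return backend_iops / cost_per_frontend_io

# A single ~180 IOPS 15K disk, 80% reads / 20% writes, RAID 6:
print(round(frontend_iops(1, 180, 0.2, "raid6")))            # ~90 front-end IOPS
# Disks needed to deliver ~200 front-end IOPS at that same mix:
print(round(200 / frontend_iops(1, 180, 0.2, "raid6"), 1))   # ~2.2 disks
```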

There's also the issue of latency. Due to the mechanical nature of a spinning disk drive, latency (the time it takes to retrieve or write the data in question) can't be pushed below a certain threshold. Tiny bits of latency added together across many drives become an issue at scale.

The solution to both the IOPS problem and the latency problem is found in flash storage. In short, flash storage media makes use of non-volatile memory to store data as opposed to magnetic platters.

Although the use of flash storage was initially troublesome due to durability issues, the performance has always been quite attractive and often worth the risk. Because flash storage is not mechanical in nature, it doesn't suffer from the same limitations as spinning disks. Flash storage is capable of latency on the order of microseconds, as opposed to spinning disk's multiple milliseconds. It's also capable of far more I/O operations per second than a handful of spinning disks. The issue of durability has been solved over time as manufacturers improve the physical memory, as storage controllers apply intelligence like wear leveling, and as different types of flash cells are developed: single-level cell (SLC), multi-level cell and enterprise-grade multi-level cell (MLC/eMLC), and triple-level cell (TLC). A typical eMLC drive on the market in 2015 is warrantied for 10 full drive writes per day over a period of 5 years. Alternatively, some manufacturers simply specify a total amount of data written; the same eMLC drive would probably be warrantied for something like 3.5 PB of data written.
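
The two warranty figures are really the same endurance number expressed two ways. A quick sketch of the conversion, assuming a hypothetical 200 GB eMLC drive (the capacity is an assumption chosen so the result lands near the ~3.5 PB figure quoted above):

```python
# Converting a drive-writes-per-day (DWPD) rating into total data written.
# The 200 GB capacity is a hypothetical figure used for illustration.

capacity_gb = 200
dwpd = 10                 # full drive writes per day under warranty
warranty_years = 5

total_written_gb = capacity_gb * dwpd * 365 * warranty_years
print(f"~{total_written_gb / 1_000_000:.2f} PB written over the warranty period")
# -> ~3.65 PB, in the same ballpark as the ~3.5 PB figure quoted above
```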

Lastly, because of the non-mechanical (or “solid state”) nature of flash storage, it requires much less power to operate than spinning disk. As data center power bills have always run high, any way to reduce power consumption is attractive to the data center manager — and the CFO! In some countries, governments offer substantial incentives for making environmentally friendly changes like reducing power consumption, or impose penalties for failing to do so. In some cases, purchasing boatloads of flash storage to reduce power consumption may be cheaper than paying the fine for non-compliance.

Flash storage becoming widely available has been a huge win for the data center industry. It allows much higher performance with substantially less power, and within the next 3 to 5 years, flash is expected to cost less per gigabyte than spinning disk. This maturing of flash storage has led data center architects to reconsider the way storage is accessed in the data center yet again. Just as the utilization and management issues of direct-attached storage gave birth to the monolithic storage array, performance issues and power/environmental concerns have birthed a new storage design. The data center of the future will likely see less of the monolithic storage array in favor of a return to direct attached storage…but with a twist.

The Fall of the Monolithic Storage Array

Monolithic storage arrays solved many of the data center's problems and allowed IT to achieve greater efficiencies and scale. Unfortunately, the things that made this architecture so attractive also eventually became its downfall. The virtualization of compute led to densities and performance requirements that storage arrays have struggled to keep up with ever since.

One of the primary challenges that manufacturers of monolithic storage arrays have been trying to solve for a number of years is the challenge of the "mixed workload." By the nature of virtualization, many different applications and operating systems share the same physical disk infrastructure on the back end. The challenge with this architecture is that operating systems, and especially applications, have widely varying workload requirements and characteristics. For example, attempting to deploy VDI on the same storage platform as server virtualization has been the downfall of many VDI projects. Because a desktop operating system and a server operating system (and the applications running on them) have drastically different I/O characteristics, they require almost completely opposite things. An average Windows server might require 80% reads and 20% writes, whereas on the exact same storage array, with the same disk layout, same cache, and so on, a virtual desktop might require 20% reads and 80% writes. Couple this problem with the fact that hundreds — perhaps thousands — of virtual machines are trying to perform these operations all at the same time and you have what the industry has dubbed “the I/O Blender.” It's a comical metaphor, but quite accurate in describing the randomness of the I/O arriving at the array.
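
A toy illustration of the blender effect: each virtual machine below issues perfectly sequential I/O against its own region of the datastore, yet once the hypervisor interleaves the streams, the array sees something that is effectively random. The VM count and block numbers are arbitrary values chosen for the sketch.

```python
# Toy illustration of the "I/O blender": several VMs issue sequential I/O,
# but the hypervisor interleaves them, so the array sees a random-looking
# stream. VM count and request counts are arbitrary.

import random

random.seed(42)

# Each VM reads 6 consecutive blocks from its own region of the datastore.
vm_streams = {vm: [vm * 1000 + i for i in range(6)] for vm in range(4)}

blended = []
while any(vm_streams.values()):
    vm = random.choice([v for v, s in vm_streams.items() if s])
    blended.append(vm_streams[vm].pop(0))   # next sequential block for that VM

print(blended)
# Each VM's blocks are still in order, but the combined stream jumps between
# distant regions of the array -- effectively random I/O at the back end.
```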

As application performance requirements go up, it has also become increasingly important to provide very low latency. So which storage model is likely to have lower latency: the one where storage is accessed across a network and shared with all other workloads, or the one where storage is inside the server doing the processing, on the SATA/SAS or PCIe bus, or even in memory? Of course, the answer is the model where the storage is local to the workload. Bus speeds and network speeds are on totally different orders of magnitude. With that in mind, some new ideas have started popping up in the data center storage market over the past few years.

One idea is the concept of server-side caching. This design is less radical than others, because it continues to make use of existing shared storage. One of the first really well-executed implementations of this technology in the enterprise was a solution that used a virtual machine to consume local DRAM and use it as a high-speed cache. Another early option was a very expensive but high-performing PCIe SSD that accelerated remote storage with local caching. These designs solved common problems like boot storms in VDI environments, because the VMs on each host were able to retrieve hot blocks from local DRAM before ever traversing the network. This technology was mimicked and improved on, and today a number of options exist for caching with local SSDs and DRAM in front of shared storage arrays.
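
A minimal sketch of the idea behind these caches, with a hypothetical read_from_array() standing in for the real back end; this is not any particular vendor's implementation, just the general pattern of serving hot blocks locally and only going to the shared array on a miss:

```python
# Sketch of a server-side read cache: hot blocks are served from local
# DRAM/SSD, and only misses traverse the network to the shared array.
# 'read_from_array' is a hypothetical stand-in for the real back end.

from collections import OrderedDict

def read_from_array(block_id):
    # Placeholder for a (slow) read across the SAN to the shared array.
    return f"data-for-block-{block_id}"

class ServerSideCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # LRU order: oldest first

    def read(self, block_id):
        if block_id in self.blocks:          # cache hit: stay local
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = read_from_array(block_id)     # cache miss: go to the array
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least-recently-used block
        return data

cache = ServerSideCache(capacity_blocks=1000)
boot_block = cache.read(42)     # first read traverses the network
boot_block = cache.read(42)     # subsequent reads are served locally
```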

A different, more radical architecture is becoming more common, however, and it calls into question the continued use of the monolithic storage array for general-purpose workloads in the future. This design is called Software-Defined Storage (SDS). The data center of the future looks (physically) a lot more like the data center of the past, in which a number of servers all contain their own direct-attached storage. The difference is that all of this locally attached storage is pooled, controlled, protected, and accelerated by a storage management platform running on the hypervisor. Local storage is just a bunch of (SSD) disks (JBOD) rather than being configured in a RAID group, and fault tolerance is handled at the node level rather than at the storage controller level.

The resilience could be thought of as a kind of network RAID, although it's more complex than that. The performance and scale implications of this model are massive: because each node added to the cluster contributes its local storage to the pool, the storage pool can grow to virtually limitless heights. Each server that is added brings its own storage controller, so controller throughput scales along with capacity rather than becoming a bottleneck. Increasing the capacity of the pool is as easy as adding disks to existing servers or adding more servers overall. All of this is controlled either by virtual storage appliances (VSAs) or by kernel-level software, and the administrator typically manages it from the hypervisor's existing management interface (like vCenter or SCVMM).
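
As a small sketch of how that node-level protection trades raw capacity for resilience, consider the calculation below. The node counts, disk sizes, and replication factor of 2 are assumptions chosen for illustration, not the defaults of any specific SDS product:

```python
# Rough usable-capacity estimate for an SDS pool that protects data by
# keeping copies on different nodes. All figures are illustrative assumptions.

def usable_capacity_tb(nodes, disks_per_node, disk_tb, replication_factor=2):
    """Raw pool capacity divided by the number of copies kept for resilience."""
    raw_tb = nodes * disks_per_node * disk_tb
    return raw_tb / replication_factor

# A 4-node starting point, each node with 6 x 1.92 TB SSDs:
print(f"{usable_capacity_tb(4, 6, 1.92):.1f} TB usable")    # ~23.0 TB
# Scaling out simply adds nodes (and their controllers) to the pool:
print(f"{usable_capacity_tb(8, 6, 1.92):.1f} TB usable")    # ~46.1 TB
```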

SDS is changing the data center in tangible ways, and as more organizations begin to adopt this architecture, vendors of monolithic storage arrays will have to innovate or pivot in order to stay relevant and survive.

What About Convergence?

In our next post, we’ll consider the emergence and impact of converged infrastructure approaches and how they helped move us forward to the changes happening now in the hyperconverged infrastructure landscape. Don't forget to visit the Building a Modern Data Center eBook site now and sign up for your free copy.

SCOTT D. LOWE Contributor

​Scott is Co-Founder of ActualTech Media and serves as Senior Content Editor and Strategist. Scott is an enterprise IT veteran with close to twenty years experience in senior and CIO roles across multiple large organizations. A 2015 VMware vExpert Award winner, Scott is also a micro-analyst for Wikibon and an InformationWeek Analytics contributor.