
March 09, 2016

The Future of the Modern Data Center

Scott D. Lowe - ActualTech Media


This post is part of a series based on the forthcoming book, Building a Modern Data Center, written by Scott D. Lowe, David M. Davis and James Green of ActualTech Media in partnership with Atlantis Computing.

The evolution of the data center, and of IT as a business unit, has been more exciting over the past few years than ever before, and the coming years are likely to bring even more change. Some of the technologies currently in development that could reach data centers in the next few years are simply mind-boggling. Enabled by ever-increasing compute power, the amount of data that will be stored and acted upon in the next decade is difficult to comprehend. New application architectures will be required, and dramatically enhanced hardware will be built to support those new requirements.

The type of storage that has changed the data center in the past few years (namely NAND flash) will likely be displaced by a successor even better suited to the performance and capacity requirements of future IT. Soon, most organizations will adopt a cloud-focused model for provisioning services, which will intensify the pressure on cloud providers to perform. There will be abundant opportunities in the services industry to help customers migrate to the cloud: cloud migration has garnered a lot of lip service lately, but so far relatively few organizations are actually doing it compared with the number talking about it.

Some of the trends that will grow in the next few years are the increased uptake of container-based service provisioning, improvements in storage capacity and performance, and new ways to pool and abstract physical resources. Advancements in the way IT organizations interact with cloud services will also make the data center of three to four years from now look quite different than it does today.

Containers

The last couple of years have seen the rise of software products that leverage Linux Containers (LXC) to deploy many instances of an application on one operating system. LXC is not a new technology; it was first released in 2008 and is viewed as relatively mature. In fact, containerization as a concept existed even earlier in Solaris Zones (2005) and AIX Workload Partitions (2007). Running applications in LXC is an alternative to running them in virtual machines.

Containers are easy to deploy and consume far fewer resources than virtual machines. Where virtual machines abstract operating systems from the underlying hardware, containers abstract applications from the operating system. This makes it possible to run many copies of the same or similar applications on top of one operating system, using a single copy of all the operating system files and shared libraries. From an efficiency standpoint, this is obviously a huge win. It also ensures consistency across the applications, since they share their dependencies rather than each carrying a potentially different version of the same dependency.
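As one concrete illustration (not taken from the book), the short Python sketch below uses Docker's Python SDK, one popular container runtime and client library, to start several instances of the same application image on a single host. The image name and container names are arbitrary examples.

```python
# Illustrative sketch only: assumes Docker is installed and the
# "docker" Python SDK is available via `pip install docker`.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Launch three containers from the same image. All three share the
# host kernel and the image's read-only layers, so each instance adds
# very little overhead compared to booting a full virtual machine.
for i in range(3):
    client.containers.run(
        "nginx:alpine",      # example application image
        name=f"web-{i}",     # hypothetical container names
        detach=True,         # return immediately, run in background
    )

# List what is now running on this single operating system instance.
for container in client.containers.list():
    print(container.name, container.status)
```

Doing the same exercise with virtual machines would mean provisioning a complete guest operating system for every instance.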

LXC's developers are now expanding on the platform with LXD, a new way of interacting with containers that exposes a REST API. This will allow much richer orchestration of containers moving forward.
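As a minimal sketch of what that REST API looks like, the snippet below lists the containers a local LXD daemon manages. It assumes LXD's unix socket lives at /var/lib/lxd/unix.socket (the default at the time) and that the third-party requests-unixsocket package is installed; newer LXD releases expose the same information under /1.0/instances.

```python
# Minimal sketch: query LXD's REST API over its local unix socket.
# Assumes a local LXD install and `pip install requests-unixsocket`.
import requests_unixsocket

# The unix socket path is percent-encoded into the URL host portion.
LXD_SOCKET = "http+unix://%2Fvar%2Flib%2Flxd%2Funix.socket"

session = requests_unixsocket.Session()

# Ask the API for the list of containers it manages.
# (Recent LXD releases use /1.0/instances instead of /1.0/containers.)
response = session.get(LXD_SOCKET + "/1.0/containers")
response.raise_for_status()

# LXD wraps results in a standard envelope; "metadata" holds the payload,
# here a list of URLs such as "/1.0/containers/my-container".
for container_url in response.json()["metadata"]:
    print(container_url)
```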

As containers become more popular, the distribution of data center software will likely focus more on containers than on virtual machines. LXC is a well-respected project, but it could be displaced by projects like libcontainer (the runtime library behind Docker), which offer efficiency and the potential to go cross-platform.

Open Source Tools

The pervasiveness of open source software in the modern data center is striking. What may surprise some is that the large majority of proprietary software implements or relies on open source software in some way. For example, many vendors choose to distribute their software as an OVA file: a prepackaged virtual machine with their software already installed, usually running a Linux variant. Right there, the proprietary product has already leveraged an open source project: the Linux distribution the virtual machine runs. A second example is the many proprietary products that turn to OpenSSL to generate SSL certificates. Although the main product is proprietary, it leans on open source projects to get the job done.
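To make that second example concrete, here is a minimal sketch of the kind of certificate generation such a product might perform on first boot, using Python's open source cryptography library (which itself builds on OpenSSL). The hostname and validity period are illustrative assumptions.

```python
# Sketch: generate a self-signed certificate the way a proprietary
# appliance might on first boot. Assumes `pip install cryptography`,
# an open source library backed by OpenSSL.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate an RSA key pair for the appliance.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Self-signed, so subject and issuer are the same (hypothetical hostname).
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "appliance.local")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))  # 1-year validity
    .sign(key, hashes.SHA256())
)

print(cert.subject)
```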

If open source software is so great, why isn't it just used for everything? That's exactly what many of the web giants of the past decade have asked as well. Google, Facebook, and the like are responsible for some of the most popular open source projects being developed today. Because of its collaborative nature, which spans verticals and geographies, open source software is commonly produced at a much faster pace and with higher quality than a competing proprietary project started around the same time. For this reason, many IT organizations are choosing to focus new initiatives on open source first, turning to proprietary software only if no open source option fits the bill. In the highly competitive environment at the top of any industry, being on the bleeding edge with open source software can give a non-trivial advantage over a competitor who is at the mercy of a proprietary software developer.

Moving into the future, open source software will be the default choice for most organizations. Examples like Docker and Hadoop make it easy to see that the community that forms around an open source project can extend well beyond the capabilities of any single organization. That community of contributors and committee members not only helps develop the project to an exceptionally high standard, it also acts in the best interest of the user base. One of the primary problems users of proprietary software face is that they're at the mercy of the software vendor. With open source software, on the other hand, the direction of the project is directly influenced by the user base: changes that aren't in the users' best interest won't be made, and new features are prioritized by the users themselves.

Try getting that level of input with most pieces of proprietary software and you’ll be sorely disappointed.

The final differentiator in the open source versus proprietary software debate in many organizations turns out to be cost. No big surprise there. Community-supported editions of open source software are commonly available for free or for a nominal fee. At scale, this difference can really add up and make open source a no-brainer.

Flash Capacity

NAND flash has set new expectations for performance in the data center, but has left something to be desired in terms of capacity. When NAND SSDs first became available to consumers, a 30 GB drive wasn't considered small. However, that's far too small to be of much practical use to enterprise data centers storing (potentially) petabytes of data. Over the last five years, the capacity issue has been addressed to some degree, but the real breakthrough in flash capacity is only happening now.

Flash-based drives store binary data in cells. The traditional approach to increasing flash capacity, given that the form factor of the drive cannot change, has been to shrink those cells: smaller cells mean more of them fit on a given chip. Unfortunately, the cells have now become so small that shrinking them further is quite challenging. With the technology referred to as 3D NAND, flash memory manufacturers have instead begun stacking multiple layers of cells on top of one another to achieve greater density in a single drive.

3D NAND is the generic name for this approach; vendors market it under proprietary names such as Samsung's V-NAND, and Intel and Micron have jointly developed a related stacked memory technology known as 3D XPoint. In each case, cells are stacked vertically, one layer on top of another, which is the origin of the "3D" portion of the name. Previous implementations of flash memory were planar, or two-dimensional; stacking multiple layers adds the third dimension. With this third dimension, the pressure to keep shrinking the cells is removed, and flash manufacturers can actually use slightly larger cells while still increasing capacity.

The capacity increases made possible by 3D NAND are substantial, to say the least. Samsung announced a 16 TB SSD in 2015 that is powered by this technology and should reach the market in the near future. Looking further ahead, technology advances will allow the size of flash cells to continue to decrease. That, in combination with the 3D design, will give SSDs the capacities needed to all but eliminate spinning magnetic disks. Technology improvements in memory and storage seem to alternate between performance and capacity.
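As a rough, purely illustrative calculation (none of these figures come from the book or from any vendor), the sketch below shows why stacking moves capacity so dramatically: with die count, per-layer cell count, and bits per cell held roughly constant, capacity scales linearly with the number of layers.

```python
# Back-of-the-envelope sketch of 3D NAND capacity scaling.
# All numbers are illustrative assumptions, not vendor specifications.

GIB = 1024**3

def drive_capacity_gib(dies, cells_per_layer, layers, bits_per_cell):
    """Raw capacity of a hypothetical drive, in GiB."""
    total_bits = dies * cells_per_layer * layers * bits_per_cell
    return total_bits / 8 / GIB

# A hypothetical planar (single-layer) design ...
planar = drive_capacity_gib(dies=16, cells_per_layer=2**36,
                            layers=1, bits_per_cell=2)

# ... versus the same die count with 32 stacked layers and slightly
# fewer, larger cells per layer (relaxed lithography).
stacked = drive_capacity_gib(dies=16, cells_per_layer=int(0.8 * 2**36),
                             layers=32, bits_per_cell=2)

print(f"planar:  {planar:,.0f} GiB")
print(f"stacked: {stacked:,.0f} GiB  ({stacked / planar:.1f}x)")
```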

The Cloud

The concept of cloud computing has revolutionized the industry and changed the way many companies do business. It has enabled agility and growth that would very likely have been impossible in a legacy environment. But cloud adoption is not done; businesses are still working through many phases of the migration. This is likely to continue for a number of years because, like any major architectural transition, it's not typically a lift-and-shift move. It's a gradual, opportunistic process in which infrastructure designs incorporate cloud components where it makes sense.

As a part of this gradual shift, a large number of data center infrastructures in the coming years will take a hybrid cloud stance. Hybrid cloud means that a portion of an organization's IT resources exists in an on-premises data center and a portion exists in a third-party cloud data center. But even in that hybrid approach, questions are still being answered. How do you handle security in the cloud? How do you make sure that data is protected and can be restored in the event of a loss?

Conclusion

This post wraps up the series. Over its course we've looked at the current state of the data center, the technologies and mindsets that are changing it, and the ways it will continue to change in the future. We took an up-close look at the software-defined data center, in particular software-defined storage and hyperconvergence. We also examined some potential starting points for your own data center transformation and discussed strategies for getting started. Good luck, and all the best in your transition toward the modern data center!

SCOTT D. LOWE Contributor

Scott is Co-Founder of ActualTech Media and serves as Senior Content Editor and Strategist. Scott is an enterprise IT veteran with close to twenty years of experience in senior and CIO roles across multiple large organizations. A 2015 VMware vExpert Award winner, Scott is also a micro-analyst for Wikibon and an InformationWeek Analytics contributor.