March 02, 2016
This post is part of a series based on the forthcoming book, Building a Modern Data Center, written by Scott D. Lowe, David M. Davis and James Green of ActualTech Media in partnership with Atlantis Computing.
The world’s largest automobile manufacturer, the Toyota Motor Corporation, is well known for producing an extraordinary number of cars. Its high production capacity is thanks to the “Toyota Production System,” which is studied worldwide and regarded as an engineering marvel. In 1924, Toyota’s founder, Sakichi Toyoda, invented an automatic loom that spun thread and wove cloth without operator intervention. The machine ran nonstop unless it detected a problem, in which case it stopped so the operator could correct the issue. Because faults were caught as they occurred, defective products were never produced. As one might guess, this methodology evolved into the Toyota auto manufacturing line that is known and respected today.
As IT evolves in the coming years, the industry will be characterized by a greater focus on removing the human element from workflows. Just as Toyota achieves great production value by using automation and orchestration to enable its just-in-time manufacturing model, data centers are looking to automation to decrease their time to value and increase the amount of infrastructure a single team member can be responsible for. On the Toyota production line, one operator can be responsible for many machines because his or her job is to monitor and provide the “human touch,” not to sit and operate a single machine. In the same way, fewer engineers are required to operate a world-class data center when many of the processes and operations are left to computers and the humans simply provide oversight.
The number of disparate systems in the data center has grown to an unmanageable level. There’s a product (or a few products) for nearly every problem, and in many environments the sheer number of moving pieces causes serious inefficiency for the IT staff. “Orchestration” is the methodical linking of related systems so that a higher-level construct becomes the single point of management, with the orchestration system controlling the linked systems beneath it. A real-world example of orchestration is provisioning servers without administrator intervention. The orchestration system might kick off a virtual machine deployment from a template, customize the guest OS with an IP address pulled from the IPAM system, register the machine with the monitoring system, update the virtual machine via the patch management system, and resolve the ticket, notifying the requestor that the build is complete and the server is available at a given address. This is a simple example, but imagine how much time the staff saves each year by not dedicating time to this task. All they need to do at this point is monitor the orchestration system to be sure tasks complete as expected, and step in if something is not working correctly.
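The provisioning workflow above can be sketched as a simple pipeline. Every function below is a hypothetical stand-in for a real system (hypervisor, IPAM, monitoring, patch management, ticketing); the names and the sample address are illustrative only, not any particular vendor's API.

```python
def deploy_from_template(template: str) -> str:
    """Stand-in for the hypervisor API: clone a VM from a template."""
    return f"vm-{template}"

def allocate_ip(vm_name: str) -> str:
    """Stand-in for the IPAM system: reserve the next free address."""
    return "10.0.0.42"  # hypothetical address

def register_monitoring(vm_name: str, ip: str) -> None:
    """Stand-in for the monitoring system's registration call."""

def apply_patches(vm_name: str) -> None:
    """Stand-in for the patch management system."""

def resolve_ticket(ticket_id: int, message: str) -> str:
    """Stand-in for the ticketing system: close the request."""
    return f"ticket {ticket_id} resolved: {message}"

def provision_server(template: str, ticket_id: int) -> dict:
    """Chain the steps in order. A production orchestrator would also
    handle failures at each step and roll back partial work."""
    vm = deploy_from_template(template)
    ip = allocate_ip(vm)
    register_monitoring(vm, ip)
    apply_patches(vm)
    note = resolve_ticket(ticket_id, f"{vm} is available at {ip}")
    return {"vm": vm, "ip": ip, "note": note}
```

The value is in the chaining: once each system exposes a callable step, the operator's job shrinks to watching this one pipeline rather than touching five consoles per request.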
Because cloud-based IT infrastructure is better suited to ephemeral workloads, automation and orchestration will become increasingly important; the sheer volume of resource creation and destruction demands it. And since the primary purpose of data center infrastructure is to serve the applications that run on it, a new culture is emerging that treats the development staff and the operations staff as a single, collaborative unit. These roles have been segregated for years, but experience has shown that there’s much more value in the teams working together. As such, the DevOps methodology will see dramatic uptake in the coming years.
DevOps is the idea of pulling the operations team into the development methodology, using their unique expertise to make the development process more rapid. It also includes implementing tools and processes that leverage all the different skill sets available. Because iteration is so frequent, DevOps culture has a heavy focus on automation. If administrators had to manually deploy each new version of code (as was likely done in the waterfall era), the task would be overwhelming and would drain the team’s productivity. Deployment is an example of a task that will likely be automated right away in a DevOps-focused organization.
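The deployment task mentioned above can be sketched as a tiny release gate: run the test suite automatically, then push to production only if it passes. The stage names and helper functions here are illustrative stand-ins, not a real CI system’s API.

```python
def run_tests(build: str) -> bool:
    """Stand-in for the automated test suite."""
    return bool(build)  # pretend any non-empty build passes

def deploy(build: str, environment: str) -> str:
    """Stand-in for the deployment tooling."""
    return f"{build} deployed to {environment}"

def release(build: str) -> str:
    """Deploy automatically, but only if the tests pass; no human
    copies artifacts or restarts services by hand."""
    if not run_tests(build):
        raise RuntimeError(f"tests failed for {build}; deployment aborted")
    return deploy(build, "production")
```

Codifying the gate this way is what makes frequent iteration safe: the pipeline, not a person, decides whether a build is fit to ship.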
As much as IT organizations are being forced to change by advancing technology, there are other variables. Technology may enable change in the data center, but business requirements are also challenging the status quo. Performance and capacity can only go so far in making a product a good fit for an organization. Beyond that, it needs to fit into the existing ecosystem of the IT environment, and it needs to be a sensible business decision from a budgetary perspective, based on the cost of the product and the desired outcomes for the business.
Historically, a large amount of technology business has been conducted based more on an executive’s personal network than on the technical merits of a solution. Unfortunately, this will never go away completely, but the environment is changing, and more transparency and justification are required these days. This puts added pressure on manufacturers to show precisely how their solution helps the business and what results the solution will deliver.
From a manufacturer’s standpoint, the IT solutions market is growing more saturated all the time. Advances in technology will perpetually open up new markets, but even so, it’s becoming nearly impossible to be a one-man show in the data center. As IT organizations adopt various products to meet their business needs, it’s vital to the overall health of the IT ecosystem that the products from different vendors are interoperable. With these needs in mind, there are a minimum of three objectives that the IT industry as a whole needs to move toward.
Open standards and interoperability — The networking segment of IT has really shown the rest of the industry how it should be done by contributing large amounts of knowledge to open standards bodies like the IEEE, IETF, and ISO. The rest of the industry should follow suit more closely.
Transparent costs — Vendors need to be transparent regarding the cost of their products. The industry (channel included) is driven by meaningless list prices and deep discounts, leaving the real price unknown to customers until they’re being smothered by six salespeople. The industry is also notorious for sneaking in licensing and add-on fees that aren’t initially clear. The next generation of IT professionals has no time for these games and just wants a reasonable number from the start. Companies that freely publish the true cost of their products are already having great success in attracting customers who don’t want to haggle for an extra 3% off.
Performance benchmarking standards — The industry needs to agree on performance benchmarking standards that mean the same thing across all manufacturers and to all users. The enterprise storage industry has become infamous for publishing IOPS figures — a number that is easily manipulated to mean whatever the publisher wants it to mean when the context isn’t included.
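To see how easily an IOPS figure can be bent, note the simple relationship throughput = IOPS × block size: the same array posts wildly different IOPS depending on the block size chosen for the test. The 400 MB/s figure below is an arbitrary example, not a measurement of any product.

```python
def iops(throughput_bytes_per_sec: int, block_size_bytes: int) -> int:
    """IOPS implied by a given throughput at a given I/O block size."""
    return throughput_bytes_per_sec // block_size_bytes

bandwidth = 400 * 1024 * 1024  # a hypothetical array sustaining 400 MB/s

marketing = iops(bandwidth, 4 * 1024)   # tiny 4 KB blocks inflate IOPS
realistic = iops(bandwidth, 64 * 1024)  # larger blocks tell another story
```

The same hardware yields 102,400 IOPS at 4 KB blocks but only 6,400 at 64 KB, which is why an IOPS claim without block size, read/write mix, and queue depth is nearly meaningless.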
The next and final article in this series will take a look at the future of the data center. Understanding where the data center has been and where we are today is vital, but only for the purposes of planning and making good decisions about where we’re headed. Stay tuned!
SCOTT D. LOWE – Contributor
Scott is Co-Founder of ActualTech Media and serves as Senior Content Editor and Strategist. Scott is an enterprise IT veteran with close to twenty years’ experience in senior and CIO roles across multiple large organizations. A 2015 VMware vExpert Award winner, Scott is also a micro-analyst for Wikibon and an InformationWeek Analytics contributor.