
How Complexity Is Killing Efficiency in Your Data Center

When it comes to operating a data center – be it a private cloud infrastructure for a global enterprise or a small server closet in an SMB – a few things are always true. One of those constant truths is that the leaner the operation can run without compromising SLAs, the better.

One of the fastest ways to take a data center operation from lean and mean to bloated and tame is to sacrifice efficiency. Giving up efficiency happens a little bit at a time, as when a developer lazily runs a full table scan instead of querying the database efficiently. But efficiency is also piddled away in bulk, as when an eager sysadmin over-engineers an environment, or when a CIO chooses a substantially less functional product to get a small price break.
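
To make the database example concrete, here is a minimal sketch using Python’s built-in sqlite3 module. The table and column names are hypothetical and exist only for illustration: the first query pulls every row back and filters in application code, forcing a full table scan, while the second lets the database do the filtering against an index.

    import sqlite3

    # Hypothetical orders table, created only for this illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                     [("acme", 120.0), ("acme", 75.5), ("globex", 310.0)])

    # Inefficient: fetch every row and filter in application code,
    # which forces the database to scan the whole table.
    acme_total = sum(total for _, customer, total in
                     conn.execute("SELECT id, customer, total FROM orders")
                     if customer == "acme")

    # More efficient: filter (and aggregate) in the query itself, and give
    # the database an index so it can avoid the full scan altogether.
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
    acme_total = conn.execute(
        "SELECT SUM(total) FROM orders WHERE customer = ?", ("acme",)
    ).fetchone()[0]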

Because running an efficient data center operation is critical to profitability, every reasonable measure should be taken to increase efficiency. Hyperconvergence can improve efficiency in almost any data center by helping to remove complexity. Complexity, as it turns out, is often the enemy of efficiency.

Difficult Troubleshooting

The opacity of a complex environment can turn a fairly straightforward configuration issue into a needle in a haystack. With each additional bolt-on software suite and each one-off PowerShell script comes another cog in the machine – another possible place to look for misconfigurations.

Because of the way a hyperconverged system – more specifically, the software platform running on it – is designed, complexity is often dramatically reduced. Consider, for instance, the traditional methods of replicating a virtual machine to a remote site. One option is to replicate the entire underlying volume where the machine is stored, which is inefficient if the goal is to replicate only that one workload. An alternative is to install third-party tools, such as a backup product and a replication tool. That approach requires another system to be deployed and integrated, and it causes the ‘haystack’ to grow.

While both of the above options are workable, a third and preferable approach is for the platform running the workload to include its own way of replicating the virtual machine. All major hyperconvergence platforms include a VM-based replication mechanism in which a single VM or a group of VMs can be replicated to a remote site without any third-party tools and without replicating an entire volume.
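
As a rough sketch of the difference in granularity (this is not any vendor’s actual API; the class and function names below are invented for illustration), volume-level replication ships every VM that happens to live on the volume, while VM-level replication is scoped to just the workloads that need protecting:

    from dataclasses import dataclass, field

    # Hypothetical model of a datastore volume hosting several VMs.
    @dataclass
    class Volume:
        name: str
        vms: list = field(default_factory=list)

    def replicate_volume(volume, remote_site):
        # Traditional approach: everything on the volume gets shipped,
        # including VMs that never needed off-site protection.
        return [(vm, remote_site) for vm in volume.vms]

    def replicate_vms(vms, remote_site):
        # VM-level approach: replication is scoped to the specific VM
        # (or group of VMs) that actually needs it.
        return [(vm, remote_site) for vm in vms]

    volume = Volume("datastore01", ["erp-db", "web01", "test-sandbox", "build-agent"])
    print(replicate_volume(volume, "dr-site"))   # replicates all four VMs
    print(replicate_vms(["erp-db"], "dr-site"))  # replicates only the VM that matters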

Increased Operating Expenses

Unfortunately, complexity often breeds complexity. When there’s a complex problem, engineers often react with a complex solution. Steve Jobs is quoted as saying, “Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.” As the environment grows more and more complex, more engineering resources are required to maintain it. More systems are required to monitor it. More support subscriptions are required to protect it. And so the story goes, until the IT organization is buried beneath the weight of its operating expenses.

This cycle can sometimes spin even more dramatically. Because capital expenses are easier to get approved in many organizations, a perceived problem is often answered by buying an additional piece of hardware or an additional software suite. Although that is sometimes the proper solution, it is just as likely to add more complexity to an already complexity-laden infrastructure.

By removing complexity, hyperconvergence can reduce the operating expenses in play in an environment, freeing resources for other projects, for proactive improvements, and for capital projects that will support future growth.

Unacceptable Time to Value

The pinnacle of success for an IT organization is to be seen by the rest of the business as an enabler rather than a hindrance. The goal of an IT Director should be to establish the department as a problem solver – as a catalyst that drives business for the rest of the company. One of the main reasons this has been backwards in the not-too-distant past – with IT seen as a cost center and a roadblock – is a lack of agility.

Time to Value in the infrastructure space has been growing longer and longer as systems become more complex, security requirements become more stringent, and integration with other systems becomes more necessary. It’s not uncommon in larger organizations for the time from project approval to resources becoming available to span months. In a world that is changing by the moment, months of lost time can limit the success of a project.

With the reduced complexity of a hyperconverged infrastructure model, Time to Value for projects requiring additional resources can shrink dramatically. That added agility in the data center, in turn, allows the business unit requesting resources to execute the project quickly and find success in the marketplace.

If an IT organization takes an honest look at where solutions might be over-engineered, or where three products could be replaced with one, it may find that hyperconvergence is, at the very least, a valuable tool. Decision makers should take the hard road and aggressively seek out and remove complexity, because “…it’s worth it in the end because once you get there, you can move mountains.”

David Davis
david@actualtechmedia.com

David Davis is a partner at ActualTech Media. With over 20 years in enterprise technology, he has served as an IT Manager and has authored hundreds of papers, ebooks, and video training courses. He’s a 6x vExpert, VCP, VCAP, and CCIE #9369.

1 Comment
  • ST
    Posted at 04:55h, 12 July

    David,

    Great article. That’s exactly what we’re working on: simplifying. Complex and elaborate PowerShell scripts allow us to save tens of thousands of dollars thanks to extreme automation. The problem is, when they break due to environmental factors, it costs dozens of expensive hours to fix those same scripts; and let’s not talk about the learning curve for our junior admins, who are not both virtualization sysadmins and OS programmers at the same time. So this highlights the fact that the right balance has to be achieved… Great and timely article!

    Looking for your next MegaCast,

    ST.
