How the Cloud is Changing IT
The cloud is changing IT in two broad ways. First, business units and IT departments can acquire services simply by using a credit card. Second, major cloud service providers such as Google and Facebook are changing expectations of how a data center should operate, because their massive environments are nothing like legacy data centers. Even though most organizations don’t need anything of that scale, the best architectural design elements from these clouds have been brought to the hyperconverged world and packaged for affordability. For hyperconverged infrastructure, only the second trend is really pertinent, and it’s what this chapter is about.
Scaling & Shared Resources
The hallmarks of Google’s and Facebook’s environments are, among other things, sheer scalability and reasonable economics. Many of these cloud principles have been adapted for use in smaller environments and packaged in hyperconverged products that any company can buy.
As we saw in the article on the software-defined data center, software overtaking hardware in the data center has the potential to lead to very good things. Companies such as Google discovered this potential years ago and tamed their hardware beasts by wrapping them inside software layers. A data file inside Google is managed by the company’s massively distributed, software-based global file system. This file system doesn’t care about the underlying hardware. It simply abides by the rules built into the software layer, which ensure that the file is saved with the right data protection levels. Even as Google’s infrastructure expands, administrators don’t need to worry about where that file resides.
Economies of Scale
In a legacy data center environment, growing the environment can be expensive due to the proprietary nature of each individual piece of hardware. The more diverse the environment, the more difficult it is to maintain.
Companies such as Google and Facebook scale their environments without relying on expensive proprietary components. Instead, they leverage commodity hardware.
To some people, the word commodity, when associated with the data center environment, is a synonym for cheap or unreliable. You know what? To a point, they’re right.
When you consider the role of commodity hardware in a hyperconverged environment, however, keep in mind that the hardware takes a back seat to the software. In this environment, the software layer is built with the understanding that hardware can — and ultimately will — fail. The software-based architecture is designed to anticipate and handle any hardware failure that takes place.
Commodity hardware isn’t cheap, but it’s much less expensive than proprietary hardware, and it’s interchangeable with other components. A hyperconvergence vendor can switch its hardware platform without recoding the entire system. By leveraging commodity hardware, hyperconvergence vendors keep costs down for their customers and make hardware changes quick and non-disruptive.
Think about how you procure your data center technology now, particularly when it comes to storage and other non-server equipment. For the expected life cycle of that equipment, you probably buy as much horsepower and capacity as you think you’ll need, with a little extra “just in case” capacity.
How long will it take you to use all that pre-purchased capacity? You may never use it. What a waste! On the other hand, you may find it necessary to expand your environment sooner than anticipated. Cloud companies don’t create complex infrastructure update plans each time they expand; they simply add more standardized units of infrastructure to the environment. This is their scaling model: stepping to the next level of infrastructure in small increments, as needed.
Some converged-architecture options have extremely large building blocks, requiring massive leaps in resources with each step and resulting in economics that are hard for many organizations to justify.
Hyperconverged infrastructure takes a bite-sized approach to data center scalability. Customers no longer need to expand just one component or hardware rack at a time; they simply add another appliance-based node to a homogenous environment. The entire environment is one huge virtualized resource pool. As needs dictate, customers can expand this pool quickly and easily, in a way that makes economic sense.
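As a thought experiment, the pay-as-you-grow model can be sketched in a few lines of code. Everything here (the node specs and the 40 TB demand figure) is an illustrative assumption, not any vendor's actual sizing:

```python
# Hypothetical sketch: scaling a hyperconverged cluster one node at a time.
# Node specs and demand figures are illustrative, not from any vendor.
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    cpu_cores: int = 24
    ram_gb: int = 256
    storage_tb: float = 12.0


class Cluster:
    def __init__(self):
        self.nodes: list[Node] = []

    def add_node(self, node: Node = Node()) -> None:
        # Expanding the pool is just "add another standardized unit."
        self.nodes.append(node)

    @property
    def storage_tb(self) -> float:
        # The whole environment is one pooled resource.
        return sum(n.storage_tb for n in self.nodes)

    def grow_until(self, needed_tb: float) -> int:
        # Add identical nodes only until demand is met: pay as you grow.
        added = 0
        while self.storage_tb < needed_tb:
            self.add_node()
            added += 1
        return added


cluster = Cluster()
nodes_added = cluster.grow_until(40.0)
print(nodes_added, cluster.storage_tb)
```

The point of the sketch is that capacity planning collapses to a single loop: because every node is identical, "expand the environment" never requires a redesign, only another unit.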
Do you think that the typical cloud provider spends an inordinate amount of time rebuilding individual policies and processes each time it makes changes or adds equipment to its data center? Of course not. In the cloud, change is constant, so ensuring that changes are made without disruption is critical. In the world of enterprise IT, things should work the same way. A change in data center hardware shouldn’t necessitate reconfiguration of all your virtual machines and policies.
Abstracting Policy from Architecture
Since hardware isn’t the focus in the software-defined data center, why would you write policies that target specific hardware devices? Further, because enterprise workloads leverage virtual machines (VMs) as their basic constructs, why should a VM be beholden to policies tied to underlying infrastructure components?
Consider a scenario in which you define policies that move workloads between specific logical unit numbers (LUNs) for replication purposes. Now multiply that policy by 1,000. When it comes time to swap out an affected LUN, you end up with LUN-to-LUN policy spaghetti. You need to find each individual policy and reconfigure it to point to new hardware.
Policies should be far more general in nature, allowing the infrastructure to make the granular decisions. Instead of getting specific in a LUN policy, for example, policies should be as simple as, “Replicate VM to San Francisco.”
Why is this good? With such a generalized policy, you can perform a complete technology refresh in San Francisco without migrating any data or reconfiguring any policies. Since policy is defined at the data center level, all inbound and outbound policies stay in place. Beautiful.
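To make the contrast concrete, here's a minimal, hypothetical sketch; the policy fields, site names, and node names are all invented for illustration:

```python
# Hypothetical sketch of policy abstraction. All names are illustrative.

# Hardware-tied policy: breaks whenever the LUN is swapped out.
lun_policy = {"replicate_from": "LUN-0042", "replicate_to": "LUN-7781"}

# Abstract policy: targets a site, not a device.
vm_policy = {"vm": "erp-db-01", "replicate_to": "San Francisco"}

# The platform resolves the site to whatever hardware backs it today.
site_backing = {"San Francisco": ["node-sf-1", "node-sf-2"]}


def resolve(policy: dict, backing: dict) -> list:
    # Granular placement is the platform's job, not the policy author's.
    return backing[policy["replicate_to"]]


print(resolve(vm_policy, site_backing))

# A full hardware refresh in San Francisco only changes the backing map;
# the policy itself never needs to be touched.
site_backing["San Francisco"] = ["node-sf-3", "node-sf-4"]
print(resolve(vm_policy, site_backing))
```

The design choice this illustrates: the policy stores an intent ("San Francisco"), and only the platform's internal map knows which devices currently satisfy it, so a thousand policies survive a refresh untouched.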
Taking a VM-Centric Approach
The workload takes center stage in the cloud. In the case of enterprise IT, these workloads are individual VMs. When it comes to policies in cloud-based environments, the VM is the center of the world. It’s all about applying policies to VMs — not to LUNs, shares, datastores, or any other constructs. Bear in mind the VM administrator, whose world revolves around VMs. Why wouldn’t that administrator assign backup, quality-of-service, and replication policies directly to a VM?
The need to apply policies across individual resource domains creates fundamental operational issues in IT. In the cloud and in hyperconvergence, policies are simply called policies. There are no LUN policies, caching policies, replication policies, and so on — just policies.
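A toy sketch of that idea, with invented class and field names, might look like this:

```python
# Hypothetical sketch: every policy attaches to the VM, and every policy
# is handled the same way. There are no LUN, datastore, or caching
# policy types -- just policies.
from dataclasses import dataclass, field


@dataclass
class Policy:
    kind: str       # "backup", "qos", "replication" -- all just policies
    settings: dict


@dataclass
class VirtualMachine:
    name: str
    policies: list = field(default_factory=list)

    def apply(self, policy: Policy) -> None:
        # One attachment point, regardless of what the policy does.
        self.policies.append(policy)


vm = VirtualMachine("web-01")
vm.apply(Policy("backup", {"schedule": "hourly"}))
vm.apply(Policy("qos", {"iops_limit": 5000}))
vm.apply(Policy("replication", {"target": "San Francisco"}))
print([p.kind for p in vm.policies])
```

Because the VM is the only policy target, the administrator never has to translate intent into storage-layer constructs before applying it.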
Understanding Cloud Economics
Cloud providers and enterprise IT organizations operate their environments using very different economic models. Chief Information Officers (CIOs) expect enterprise IT infrastructure to last many years, so they buy enough capacity and performance to last that long. In many cases, however, the full potential of the infrastructure buy is never realized. CIOs often overbuy to ensure that capacity lasts the full life cycle.
Sometimes, by design or by mistake, CIOs underbuy infrastructure instead. The organization then needs to buy individual resources as they begin to run low, which means constantly watching those resources, reacting when necessary, and hoping that the existing product hasn’t reached end-of-life status.
Now consider cloud vendors, which don’t make one huge buy every five years. Doing so would be insane in a few ways: a lot of hardware would have to be purchased up front, and accurately planning three to five years’ worth of resource needs in these kinds of environments may be impossible. Instead, cloud organizations pay as they grow. Operational scale and homogeneity are core parts of cloud providers’ DNA, so they simply add more standard resources as needed.
The public cloud is highly appealing to the enterprise. The instant-on service is elastic and costs only a few cents an hour. What could be better? But it’s not for all applications, and it presents major challenges for many, particularly when it comes to cost predictability. The true cost of the public cloud rises dramatically once you add predictable storage performance, high availability, backup, disaster recovery, private networking, and more. IT ends up paying for a server that runs at 15 percent utilization, while the cloud provider benefits from packing those VMs onto a single host.
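A quick back-of-the-envelope calculation shows how utilization changes the picture. The hourly rate and utilization figure below are assumptions for illustration only, not real cloud pricing:

```python
# Back-of-the-envelope sketch of the utilization gap described above.
# All prices and figures are illustrative assumptions, not real quotes.

hourly_rate = 0.20        # assumed $/hour for a cloud VM
hours_per_month = 730     # a VM is billed around the clock
utilization = 0.15        # the VM does useful work 15% of the time

monthly_cost = hourly_rate * hours_per_month
effective_cost = monthly_cost / utilization   # cost per fully utilized month

print(f"billed per month:            ${monthly_cost:.2f}")
print(f"cost per utilized equivalent: ${effective_cost:.2f}")
```

Under these assumed numbers, the bill looks like pennies per hour, but the cost per unit of useful work is several times higher, which is exactly the predictability gap the passage describes.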
IT applications are designed expecting high-availability infrastructure, disaster recovery, backup and recovery, and other necessary services (this is why internal IT isn’t like Facebook and Google); they place a different set of demands on the infrastructure. Therefore, any hyperconverged infrastructure must deliver on these requirements.
Business units may not understand these nuances and may be compelled to buy cloud services without a grasp of the full picture. The rise of Shadow IT (non-IT units building their own systems) is real, and the cloud enables the trend. Shadow IT exists, however, either because IT isn’t able to provide the kinds of services business units demand or because it isn’t responsive enough. So these units turn to the cloud to meet individual needs, giving rise to fragmented services, potential security breaches, and overall data-quality issues.
Hyperconvergence brings cloud-type consumption-based infrastructure economics and flexibility to enterprise IT without compromising on performance, reliability, and availability. Rather than making huge buys every few years, IT simply adds building blocks of infrastructure to the data center as needed. This approach gives the business much faster time-to-value for the expanded environment.