7 Hybrid Cloud Lessons From 7 Leading Cloud Computing Vendors
Cloud computing has revolutionized the way businesses operate. It has transformed infrastructure, networking, storage, backup and recovery, performance analysis, troubleshooting, development and testing, and so much more. And today, there’s much more to cloud than what AWS and Azure have to offer. There’s an immense variety of cloud products for businesses of all shapes and sizes.
Cloud solutions can offer unique benefits, allowing organizations to scale on demand, improving their agility and elasticity while shifting focus and resources to application development instead of managing the infrastructure and keeping the lights on.
Cloud’s consumption-based cost models enable showback and chargeback capabilities, giving organizations more visibility into which resources are used by which business unit. In many cases, organizations can leverage advanced features built into cloud-based solutions, such as high availability, disaster recovery, and automatic updates, which otherwise would not have been available. Most importantly, the right cloud tool, implemented the right way, has the potential to make a business more competitive, improving efficiency and time to market while reducing the costs of on-the-ground operations and maintenance. And with rapid advancements in the cloud space, there’s always something new on the horizon.
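The showback model described above amounts to metering resource usage and attributing cost per business unit. Here is a minimal sketch in Python; the rates, metrics and usage records are entirely hypothetical examples, not any vendor's billing engine:

```python
# Hypothetical per-unit rates for metered cloud resources.
RATES = {"vcpu_hours": 0.05, "gb_storage": 0.02, "gb_egress": 0.09}

# Hypothetical raw usage records, tagged by business unit.
usage_records = [
    {"unit": "marketing", "vcpu_hours": 400, "gb_storage": 250, "gb_egress": 30},
    {"unit": "engineering", "vcpu_hours": 1200, "gb_storage": 900, "gb_egress": 120},
    {"unit": "marketing", "vcpu_hours": 100, "gb_storage": 50, "gb_egress": 10},
]

def showback(records, rates):
    """Return the total cost attributed to each business unit."""
    costs = {}
    for rec in records:
        # Price each metered metric, then accumulate per unit.
        cost = sum(rec[metric] * rate for metric, rate in rates.items())
        costs[rec["unit"]] = costs.get(rec["unit"], 0.0) + cost
    return costs

print(showback(usage_records, RATES))
```

Chargeback is the same calculation with the resulting figures actually billed back to each unit rather than merely reported.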
NOTE: ActualTech Media’s inaugural 2016 Cloud MegaCast, moderated by seven-time VMware vExpert and industry veteran David Davis, featured presentations from seven cloud computing vendors offering products that solve a wide range of business problems. Each vendor’s presentation held a valuable lesson for cloud users; those lessons are explored below through in-depth recaps of the presentations.
Lesson 1: Deep Hybrid Cloud Visibility Is Possible With The Right Approach
Ixia focuses on providing network testing, high-performance visibility and security across physical and virtual infrastructures. Ixia’s solution handles data aggregation, filtering, load balancing, SSL decryption, NetFlow generation, geolocation and more, sending the information to the data analysis tool of your choice.
As Christophe Olivier, Senior Product Manager for Ixia, points out, monitoring traffic in the physical world is much different from monitoring VM to VM traffic. It presents unique challenges and blind spots, which can compromise application performance and security.
How do you get to the traffic between two VMs on the same host? Or when those VMs are located on different servers? How do you keep track of VMs when they move from one host to another?
These are just some of the problems that Ixia hopes to solve for virtualized, private and public cloud environments.
Ixia’s CloudLens solution utilizes virtual and cloud taps that can access data from multiple hypervisors, like Hyper-V, ESXi and KVM, and public cloud providers, like AWS and Azure. It’s capable of combining physical and virtual data, accessible from a single unified interface. Once the monitoring data is collected, businesses have a choice of where to put their data for further analysis, including physical and virtual analysis products and existing tools like Splunk, FireEye and many more.
Another challenge to monitoring virtual traffic is the sheer volume of data that needs to be monitored. According to Olivier, in most situations it’s not feasible to capture full packets as it would be a huge burden on not only the network, but also the monitoring system and the tools analyzing the data. Thus, Ixia’s approach is to limit data collection to 10 percent or less by carefully selecting what information needs to be sent. However, when deeper investigation is necessary, Ixia’s solution is capable of sending additional information, including the full packet stream.
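The select-then-send idea can be illustrated with a simple flow-level filter that forwards traffic on ports of interest in full and only samples the rest. This is a generic sketch of filtered plus sampled capture, not Ixia's actual selection logic; the ports and sample rate are arbitrary examples:

```python
import random

def select_for_monitoring(packets, interesting_ports=frozenset({443, 3306}),
                          sample_rate=0.1):
    """Forward packets on ports of interest; sample ~10% of the rest.

    Each 'packet' here is just a metadata dict, not a full payload --
    mirroring the idea of sending metadata rather than full streams.
    """
    selected = []
    for pkt in packets:
        if pkt["dst_port"] in interesting_ports:
            selected.append(pkt)                # always keep flagged traffic
        elif random.random() < sample_rate:     # probabilistic sample of the rest
            selected.append(pkt)
    return selected

traffic = [{"src": "10.0.0.1", "dst_port": p} for p in (443, 80, 80, 3306, 8080)]
print(len(select_for_monitoring(traffic, sample_rate=0)))  # → 2 (only flagged ports)
```

When deeper investigation is needed, raising the sample rate (or widening the port set) is the knob that turns this back into fuller capture.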
In short, Ixia’s goal is to provide businesses with “the right data, in the right format, at the right time,” crossing the physical, data center, private, hybrid and public cloud barriers. To accommodate scaling needs, Ixia offers self-service deployments and cloud portability options for organizations utilizing multiple cloud providers.
Lesson 2: Intelligent Operations Is The Next Frontier Of Hybrid Cloud
As more businesses trend towards increasing automation and implementing more intelligent operations strategies with the goal of becoming DevOps-ready, VMware has been developing its Cloud Management Platform (CMP). As Hicham Mourad, Senior Technical Marketing Manager for VMware CMP, explained, the CMP is meant to function as the control plane for managing hybrid cloud environments with end-user computing, applications, compute, network, storage, public cloud, as well as third-party extensions and other pieces of the software-defined data center (SDDC).
VMware’s CMP, which is made available through the vRealize suite of products, focuses on three main components:
- Automation – allows IT to achieve higher levels of automation and orchestration, as well as managing workloads throughout their lifecycle
- Business – allows businesses to get better transparency and visibility into the costs associated with different components of the SDDC
- IT Operations – allows IT to manage the entire stack with visibility into performance and availability
The vRealize suite, part of VMware’s vCloud suite, aims to give IT the tools to run and manage infrastructure and applications across their on-prem data centers and public clouds, enabling more intelligent operations strategies.
But let’s back up a bit: what do we mean by intelligent operations? During his presentation, Mourad demonstrated a number of different intelligent operations capabilities found in the vRealize suite, such as performance, availability and workload balancing across the environment; proactive remediation through actionable alerts; troubleshooting using log data analysis; capacity planning, modeling, forecasting and rightsizing; storage and network management and extensibility; cost transparency across private and public clouds; and more.
In vRealize under the Business Management tab, you can see an overview of your private, hybrid and public infrastructure costs, consumption metrics including showback, reports based on data sets, as well as planning tools with cloud comparison and data center optimization capabilities. You can also drill down deeper into the individual areas to get more insight and granular information. And as Mourad pointed out, you’re able to see the true costs because the tool is querying the APIs and capturing the actual cost information.
For workload balancing, vRealize Operations Manager works in conjunction with the vSphere Distributed Resource Scheduler (DRS), allowing you to work across clusters in your vSphere environment. Operations Manager also allows you to proactively optimize your environment, alerting you to possible issues before they become a problem, along with suggestions for remediation. What’s more, the tool also allows you to automate responses to common alerts.
There’s a lot more to VMware’s powerful Cloud Management Platform and for organizations utilizing vSphere in a hybrid cloud environment, it’s definitely worth taking a closer look.
Lesson 3: An AWS-Like On-Prem Cloud Is Easier To Achieve Than Ever Before
There are a couple of common misconceptions when it comes to the cloud, according to John Mao, Director of Business Development at Stratoscale: Cloud is not a location and it is not virtualization. As Mao explained, cloud resources can exist in multiple locations, and the cloud really represents a process in which IT can provide services in a self-service manner. Virtualization, on the other hand, is not a process but a technology; however, it is often a necessary component for creating an on-demand infrastructure.
Mao advocates that we reshape the conversation around cloud in order to dismiss some of these common misconceptions, providing several examples of what cloud computing is today.
So how do you build your own cloud? That’s where Stratoscale comes in. The company enables organizations to deploy a public-like cloud on-premises using a software-only approach that is hardware agnostic and doesn’t require any additional third-party software. Customers can run their Stratoscale clouds on traditional rackmount servers, hyper-converged platforms and even traditional converged infrastructure. Mao explained that Symphony, Stratoscale’s cloud solution, targets service providers and large “techy” enterprises that aim to offer a production-ready, AWS-like experience to their customers. Symphony pools your compute, network and storage resources to effectively deliver a cloud experience.
Symphony’s key benefit is that it’s extremely easy to deploy and manage, requiring no professional services. The solution is built to scale, both from a performance and capacity perspective, to hundreds of nodes. Symphony includes real-time monitoring and analytics of physical and virtual resources, automation capabilities, a policy-based self-service portal, along with a number of enterprise features, like multi-site data protection and replication, and authentication through LDAP. It’s also multi-tenant ready and compatible with OpenStack APIs.
Stratoscale’s approach utilizes a single-stack architecture – a single software image that you install on each of the servers that you want to add to the cluster. The solution includes everything you need to deploy—operating system, compute and network virtualization, storage manager, along with the cloud management services. It’s a single install that’s completely self-contained, according to Mao.
Stratoscale positions itself as the best of both worlds when it comes to private and public cloud options available today, being able to offer the pros of both public and private clouds (Amazon-like experience, on-prem deployment and complete IT control).
To learn more about Stratoscale’s Symphony version 2 visit www.stratoscale.com where you can take the Symphony dashboard for a test drive and get information about a proof of concept evaluation. And check out Mao’s demo of the solution presented in the Cloud MegaCast.
Lesson 4: Object Storage Is The Answer To Huge Storage Demands In The Cloud
NetApp offers a number of products that aim to solve hybrid cloud implementation, data management and storage challenges, specifically through object storage, working with partners like AWS, Azure, IBM SoftLayer and others. The reason object storage is so relevant is that we continue to face data management challenges driven by major shifts in technology, explained Ingo Fuchs, Senior Manager of Hybrid Cloud Solutions at NetApp. Take, for instance, the Internet of Things and new applications that use rich content; this is where object storage, like Amazon’s S3 service, can be the answer.
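The object model that S3-style services expose is a flat key space where each object carries its data plus user-defined metadata, rather than a hierarchy of files and blocks. A toy in-memory illustration of those semantics (not Amazon's or NetApp's actual API; the keys and metadata fields are made up):

```python
class ObjectStore:
    """Toy in-memory object store illustrating S3-style semantics:
    a flat bucket of keys, each holding a blob plus metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data: bytes, **metadata):
        # Objects are written whole; metadata travels with the object.
        self._objects[key] = {"data": data, "metadata": metadata, "size": len(data)}

    def get(self, key):
        return self._objects[key]["data"]

    def head(self, key):
        # Like an S3 HEAD request: metadata and size without the payload.
        obj = self._objects[key]
        return {"size": obj["size"], **obj["metadata"]}

store = ObjectStore()
store.put("scans/patient-042.dcm", b"...imaging data...",
          modality="MRI", department="radiology")
print(store.head("scans/patient-042.dcm"))
```

Rich, queryable metadata attached directly to billions of flat-namespace objects is what makes this model a fit for IoT and rich-content workloads.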
Organizations working with IoT have to deal with hundreds or even millions of devices, driving not only the need for more storage but also decentralized data creation, consumption and analytics, according to Fuchs. The challenge becomes keeping data readily available to analytics tools and systems while accommodating all of the applications driving data into your decentralized infrastructure and, where needed, retaining that data for long periods of time. Beyond IoT, new forms of applications that use rich content are impacting data management, since they require a different approach to scaling performance and capacity, Fuchs explained, citing healthcare as the perfect example of this shift.
Fuchs went on to offer a real-world example of a web 2.0 company utilizing object storage for their photo repository with NetApp’s StorageGRID Webscale solution. The company decided to make the switch to object storage as their primary storage to enable cloud scale, allowing them to handle increased demands during peak times and continue to grow at 2.5PB per year or more with minimal IT intervention. The company’s media repository, an active archive of huge files, is just one use case for object storage. Fuchs explained that object storage is still heavily used for archival purposes as well as for web data repositories that accommodate rich content, metadata, high transaction loads and billions of objects.
However, NetApp’s StorageGRID Webscale solution is just one option for organizations with huge storage demands. The company also offers a product called AltaVault for on-premises backup and archive purposes where you can move on-prem data to the cloud. AltaVault is able to be transparently integrated into the customers’ current backup products, according to Fuchs. Backup and archival data can be placed into a single repository that’s built on top of StorageGRID (the two support the same APIs). AltaVault will take care of integrating into the backup application, compressing, deduplicating and encrypting the data and then storing it in StorageGRID. The data is then stored securely across many locations and can expand into the public cloud.
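The deduplicate-and-compress step described for AltaVault can be illustrated with a content-addressed chunk store: identical chunks are detected by hash and stored only once, compressed. This is a simplified sketch under stated assumptions (fixed-size chunks, no encryption), not NetApp's actual pipeline:

```python
import hashlib
import zlib

class DedupStore:
    """Toy content-addressed store: identical chunks kept once, compressed."""
    CHUNK = 4096  # fixed-size chunking; real products often use variable-size

    def __init__(self):
        self.chunks = {}  # sha256 digest -> compressed chunk bytes

    def write(self, data: bytes):
        """Store data; return its recipe (ordered list of chunk digests)."""
        recipe = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:            # dedupe: store each chunk once
                self.chunks[digest] = zlib.compress(chunk)
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Rebuild the original data from its recipe."""
        return b"".join(zlib.decompress(self.chunks[d]) for d in recipe)

store = DedupStore()
backup = b"A" * 8192 + b"B" * 4096    # two identical "A" chunks plus one "B" chunk
recipe = store.write(backup)
assert store.read(recipe) == backup
print(len(recipe), len(store.chunks))  # → 3 2 (3 chunks referenced, 2 stored)
```

In a real product the compressed chunks would also be encrypted before leaving the appliance and replicated across locations.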
AltaVault can be deployed in as little as 30 minutes, said Fuchs, with another hour to set up StorageGRID, which offers customers a great reduction in TCO. The solution is ready for an impressive 100 billion objects across 16 data centers in a hybrid environment. NetApp’s StorageGRID allows customers to easily move data across geographies and storage tiers, optimizing for efficiency and performance over the entire data life cycle. Deployment options allow customers to mix and match VMs, containers and appliances, with support for third-party block storage arrays.
Lesson 5: Traditional Backup & DR Is Dead. Cloud-Enabled Continuity Is The Future
Legacy backup and disaster recovery solutions are just not cutting it these days. As much as the digital world has changed, much of the backup and recovery business has remained stagnant. Feedback from industry customers shows that traditional backup and business continuity approaches are no longer sufficient in today’s environments. As a result, many organizations are looking to improve their processes and systems around backup and DR, with surveys showing that this is one of the top 10 priorities for IT organizations in the next year.
Unitrends focuses on getting around some of the challenges of traditional backup, especially the manual processes still in place at many businesses, as well as the common struggles of disaster recovery, like confirming defined RTOs/RPOs. LeClair, who presented for Unitrends, advocates that IT organizations look at backup from a continuity standpoint: shifting from seeing backup as a task-based, functional, low-interest, IT-heavy burden to a solution-focused business benefit that extends to lines of business. That’s where cloud-empowered backup comes in.
Unitrends’ Connected Continuity Platform offers a suite of business continuity services that protect from data loss, downtime and disaster. The suite includes planning services, disaster recovery as a service (DRaaS), cloud migration and cloud bursting tools, with the core being Unitrends’ backup appliances.
The backup appliances are available as both virtual and physical appliances and can be used for local backups as well as gateways to the cloud, both public and Unitrends’ purpose-built cloud offering. The final piece is Unitrends’ recovery assurance technology, which guarantees that you will always be able to recover from your backups and that your DR environment is always operating the way you expect it to.
As LeClair explained, the Connected Continuity Platform is built with the goal of protecting all of the data that an organization deems important to maintaining business operations, with support for over 200 operating systems and hypervisors, on-prem physical and virtual systems, and cloud workloads, working across different cloud options (including on-prem data centers and private clouds, hybrid and public clouds, and managed service providers). Unitrends’ guarantee around its recovery and continuity is the key part of the solution. DR testing has been streamlined and automated so IT can frequently and easily test and validate RPO/RTO objectives. Finally, Unitrends’ solution is delivered as a single platform with ease of use as the top priority, according to LeClair. It’s intuitive and doesn’t require a backup expert to implement, run and manage.
While customers have plenty of cloud options, Unitrends’ Forever Cloud was purpose-built for providing business continuity. The unique part of this solution is that you only pay for the data that you’re protecting; you don’t pay for the amount of storage that’s consumed in the cloud. And with Forever Cloud you can use Unitrends’ DRaaS, which offers a guaranteed one hour recovery SLA for your entire environment, regardless of how many physical and virtual systems you have running.
Unitrends offers another solution called Boomerang, which allows you to replicate VMware VMs in AWS or Azure for backup, DR, migration and cloud bursting—and then move the workloads back on-prem when the need arises.
Lesson 6: Freedom From Cloud Vendor Lock-In Is Possible With The Right Solution
The cloud era is upon us, enabling business and IT benefits like continuous innovation, faster time to market, a cost-effective consumption-based payment model, instant delivery, simplicity and scale. But as Brian Suhr, Senior Technical Marketing Engineer for Nutanix, points out, the cloud should also offer organizations frictionless control with options to rent, buy and own the infrastructure, maintain good data governance and SLAs tailored to individual apps, as well as freedom from vendor lock-in—both in the hypervisor that you choose to run and the cloud that you want to deploy and consume.
Nutanix has been innovating in this area since 2011, today offering faster performance, better features, data services and cloud-based functions through its Enterprise Cloud Platform. Nutanix’s solution starts with its hyperconverged infrastructure and scale-out model, which has built-in virtualization and management and integrated compute and storage.
Nutanix’s scale-out distributed architecture is node-based and focused on seamless application and data mobility, with features like Sizer (which helps customers size their workloads depending on the applications that are running). The One-click Hypervisor Conversion and cross-hypervisor backup and DR features are unique to Nutanix, allowing customers running different hypervisors across their clusters to back up and recover from one platform to another. So, for example, if you’re running VMware in one cluster, you can back it up to a cluster running Hyper-V, and also recover it. That feature also extends to the public cloud with AWS support. Nutanix also prides itself on its consumer-grade management console that’s intuitive and insightful.
Prism is the company’s management and visibility interface for the entire Nutanix platform, offering IT admins a simplified way to manage virtual environments by streamlining common workflows. Prism Central can manage distributed environments with hundreds of clusters and thousands of VMs, and it works with multiple hypervisors, maintaining the same look and feel whether you’re using just one or three hypervisors. Finally, Acropolis handles the data fabric for storage, compute and virtualization, enabling high availability, live migration and other common management capabilities.
Nutanix continues to evolve its platform, recently adding capacity planning, which allows customers to predict the growth of their clusters and get a better understanding of how resources are being consumed. The company is also focused on giving IT organizations a variety of control options—“Control without limits,” as Suhr called it—enabling the Nutanix command line, PowerShell, REST APIs and OpenStack drivers for additional options and added control of your data center.
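REST-based control of this kind generally means authenticated HTTPS calls against the cluster's management endpoint, scriptable from any language. A generic sketch using only the Python standard library; the `/api/vms` path below is a hypothetical illustration, not a documented Nutanix endpoint:

```python
import base64
import urllib.request

def build_api_request(cluster, path, user, password):
    """Build an authenticated REST request against a cluster endpoint.

    Generic sketch of REST-based data center control. The port and
    URL path are assumptions for illustration, not a vendor's
    documented API surface.
    """
    url = f"https://{cluster}:9440/api/{path}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",   # HTTP basic auth header
        "Accept": "application/json",        # ask for a JSON response
    })

req = build_api_request("cluster.example.com", "vms", "admin", "secret")
print(req.full_url)  # → https://cluster.example.com:9440/api/vms
```

In practice the request would be sent with `urllib.request.urlopen(req)` (or a client like `requests`) and the JSON response parsed; the same endpoints back PowerShell cmdlets and OpenStack drivers.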
The next step for Nutanix is a self-service portal that will allow different business units within your organization to self-provision workloads based on granular access controls and quotas set by IT, further improving frictionless service delivery. Nutanix is also integrating with VMware’s vRealize Automation platform, allowing customers to provision intelligent workflows (see VMware’s section), which is planned for release later this year. Finally, Nutanix has also been working closely with Microsoft to deliver the company’s Cloud Platform System (Windows Azure, System Center and Windows Server) natively on a Nutanix appliance, shipping with Hyper-V already installed on the nodes.
Lesson 7: Unified Data Protection In The Cloud Doesn’t Have To Mean A Lack Of Choices
Arcserve has been around for over 20 years, but in the last couple of years the company has evolved from providing backup to offering an award-winning unified data protection solution that aims to simplify data protection and availability.
Available as software and hardware-based options, Arcserve can back up data to your primary site, remote and branch offices, MSPs, colos and DR sites, as well as private, hybrid and public clouds. The solution caters to organizations with limited bandwidth by using WAN optimization, which reduces bandwidth usage, and by scheduling replications during off hours. And all of this is accomplished through a single web-based unified management interface that handles all of the functions, including backup and replication.
As Mark Johnson, Sr. Principal Consultant, explained, Arcserve is capable of handling multi-point replication—many to one, and one to many. Replication is built into the core product; data can be replicated to multiple RPS servers, and it supports compressed, encrypted and de-duplicated backups. Today, Arcserve supports agentless backups for VMware and Hyper-V host environments (integrating through APIs), allowing customers to run low-impact backups and easily recover individual files and folders from within each VM, according to Johnson.
Arcserve can create virtual standby copies of VMs and update them periodically, with manual or automated failover scenarios. Instant VM and BMR recovery is also available to customers, giving IT the option to recover failed VMs from a previous recovery point in as little as two minutes. The feature supports recovery from both agent-based and agentless backups as well as virtual to physical restores.
Arcserve’s differentiator is its deduplication technology, which can significantly impact the amount of storage customers require to protect their production environments. Some of the overall data reduction rates that Johnson noted from Arcserve’s customers go as high as 92 percent, averaging in the 50 percent range.
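It is worth translating those reduction rates into storage ratios, since the relationship is nonlinear: a 50 percent reduction halves the storage needed, while 92 percent cuts it by a factor of 12.5. A quick illustration of the arithmetic:

```python
def reduction_to_ratio(reduction_pct):
    """Convert a data reduction percentage into an N:1 storage ratio.

    E.g. 92% reduction means only 8% of the original bytes are stored,
    i.e. a 1/0.08 = 12.5:1 ratio.
    """
    return 1 / (1 - reduction_pct / 100)

for pct in (50, 92):
    print(f"{pct}% reduction = {reduction_to_ratio(pct):.1f}:1")
# → 50% reduction = 2.0:1
# → 92% reduction = 12.5:1
```

Put concretely, at 92 percent reduction, 1 TB of production data would need only about 80 GB of backup storage.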
The interesting part comes with what Arcserve can do in the cloud. Arcserve has its own cloud offering but can also work with private, hybrid and public clouds. The solution offers file copy and file archive to the cloud, including advanced file and folder filters and scheduling options. Optimized replication is possible, even when backing up data to the cloud. And global deduplication ensures that data transfer costs are low when restoring from the cloud because more recovery points are retained on-prem.
High availability and failover are offered on complete systems, application availability or application health, giving IT a lot of flexibility and configuration options. And testing recovery and replication can be done without impacting the production environment.
Arcserve’s branded cloud functions as a single integrated solution for backup, recovery and archival purposes. It offers remote virtual standby for emergency application failover and failback, and is a good option for long-term backup, as an alternative to local disk or tape, for compliance requirements.
To learn more about Arcserve visit http://arcserve.com.