
TSO Logic Aims to Bring Order to the Chaos of Cloud Pricing

The public cloud is different from the on-premises infrastructure that IT departments and their organizations' accounting departments know well from long experience. To start with, the technical differences are substantial. Applications need to be architected or configured for scaling out rather than scaling up. Apps can be configured to demand resources as they need them, rather than requiring hardware to be provisioned ahead of anticipated demand. Administrators must also become accustomed to living in a world where their virtual machines (VMs) are invisible: there's not even the security blanket of feeling the heat, seeing the blinking lights or hearing the steady hum of a physical data center where those VMs run.
But those technical matters are only half the difference with the public cloud. A major reason to endure the technical hurdles, training requirements and internal reorganizations necessary to transition workloads to public cloud infrastructure is the promise of big savings on Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP). Those providers claw savings out of their world-class infrastructure by running at nearly unimaginable scale, wringing volume concessions out of hardware vendors, reaping management and automation efficiencies, and employing top data center talent.
Yet many organizations are frustrated in their moves to the public cloud, with the expected cost savings turning out to be illusory, or even undermined by unanticipated usage spikes. Still others do find savings, but the process is a nail-biter as they wait for confirmation in their public cloud bills. For almost everyone, even those who do save money, there’s always the nagging suspicion that if they’d picked a different cloud provider, or chosen different options, they could be getting more cloud service for less money.
Those are all scenarios that a cloud planning platform can address, senior executives from TSO Logic told ActualTech Media in a recent briefing. The TSO Logic Platform combines baselining technologies, deep knowledge of the options and pricing of the public cloud offerings, detailed information on Microsoft licensing options and predictive analytics to help customers find the highest performing and most cost-effective cloud configuration.
The first challenge, TSO Logic executives find, is that most customers don’t have a particularly solid handle on how much they’re spending in their current environment. Most of the incentives for IT involve uptime, redundancy and performance, with cost as an afterthought that the accounting department occasionally weighs in on.
The beginning of an engagement with the TSO Logic Platform involves a lot of hand-holding from the vendor. The company sets up the customer in its Platform-as-a-Service (PaaS) model and configures collection and reporting for the customer's environment. The TSO Logic Platform agentlessly reads data from the customer's virtualization stack, other infrastructure, and any of the popular configuration management databases or monitoring tools the customer might be using; it takes about a week for the platform to model the customer's environment correctly. Time is also built in to allow for some back and forth with the customer over any cost-related data kept in company spreadsheets or other flat files.
Once the correct model is set up, the platform collects data for about two to three weeks and becomes the customer's UI for running "what-if" scenarios around various cloud options. The data from this modeling and baselining phase can be eye-opening for customers.
As the TSO Logic Platform creates a model of the customer's infrastructure in the graphic- and chart-rich UI, various truths about the infrastructure come to light. The tool separates out different areas of infrastructure, such as production, disaster recovery, staging, development and testing. Customers can see what each of those broad areas is costing them currently, and also drill into their specific applications in each of those areas. For example, an automated billing system application might cost $600,000 a year across production, development, staging, DR and test, but the production implementation may make up only $200,000 of that. TSO Logic finds that organizations often start making decisions about how they're utilizing their IT budget based solely on the data they get back from this initial phase.
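The kind of per-environment breakdown described above can be sketched in a few lines of code. This is purely illustrative: the data, the `cost_by_environment` and `app_total` helpers, and the flat record layout are hypothetical, not TSO Logic's actual data model.

```python
# Illustrative only: a toy per-environment cost breakdown of the kind the
# article describes. The records and figures below are hypothetical.
from collections import defaultdict

# Hypothetical annual costs as (application, environment, dollars) records.
annual_costs = [
    ("billing", "production", 200_000),
    ("billing", "development", 150_000),
    ("billing", "staging", 100_000),
    ("billing", "disaster-recovery", 100_000),
    ("billing", "test", 50_000),
]

def cost_by_environment(costs):
    """Sum annual cost per environment across all records."""
    totals = defaultdict(int)
    for _app, env, cost in costs:
        totals[env] += cost
    return dict(totals)

def app_total(costs, app_name):
    """Total annual spend for one application across all environments."""
    return sum(cost for app, _env, cost in costs if app == app_name)

print(app_total(annual_costs, "billing"))               # 600000
print(cost_by_environment(annual_costs)["production"])  # 200000
```

Even this trivial grouping reproduces the article's example: a $600,000-a-year application of which production is only $200,000.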
Of course, the real intent of the tool is not in looking at existing infrastructure spending, but in looking forward to what that spending could look like in the cloud.
That cost modeler functionality of the TSO Logic Platform works for Azure, AWS, and GCP. TSO Logic also intends to add support for private cloud hardware upgrades into the cost modeler later this year.
Using algorithms, machine learning, and detailed information about each platform, TSO Logic makes recommendations about the optimal way to deploy the infrastructure in the cloud. A recommendation might include a basic lift-and-shift of, say, the 215 VMs in the current infrastructure to 215 VMs on AWS. The secret sauce lies in recommendations for configuring those VMs differently to optimize them for the cloud. Customers can then run their own what-if scenarios to further customize and refine what the tool recommends.
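A minimal sketch of such a what-if comparison, under stated assumptions: the instance sizes, counts and hourly rates below are hypothetical placeholders, not real AWS pricing, and the comparison is far simpler than anything TSO Logic's analytics actually do.

```python
# Illustrative only: comparing a one-for-one lift-and-shift of 215 VMs
# against a rightsized mix. All rates and instance names are hypothetical.
HOURS_PER_MONTH = 730

def monthly_cost(vm_counts, hourly_rates):
    """Monthly on-demand cost for a mix of instance types."""
    return sum(count * hourly_rates[itype] * HOURS_PER_MONTH
               for itype, count in vm_counts.items())

rates = {"large": 0.10, "xlarge": 0.20}  # hypothetical $/hour

# Lift and shift: map all 215 VMs one-for-one onto the larger size.
lift_and_shift = {"xlarge": 215}

# Rightsized: suppose utilization data shows most VMs fit a smaller size.
rightsized = {"large": 180, "xlarge": 35}

baseline = monthly_cost(lift_and_shift, rates)
optimized = monthly_cost(rightsized, rates)
print(f"savings: {(baseline - optimized) / baseline:.0%}")
```

With these made-up numbers the rightsized mix comes in well under the straight lift-and-shift, which is exactly the kind of gap a what-if scenario is meant to surface before the first cloud bill arrives.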
Another way TSO Logic recently expanded the value of its platform was with an upgrade that brought in detailed costing scenarios involving Microsoft licensing. It takes into account Enterprise Agreement details, MSDN subscriptions for dev/test workloads, the Azure Hybrid Use Benefit, the effect of Software Assurance on SQL Server license portability, Azure Reserved Instances, and other Microsoft licensing factors.
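To see why a bring-your-own-license option such as the Azure Hybrid Use Benefit changes the math, consider a toy calculation. The rates here are hypothetical placeholders, not real Azure pricing, and real licensing rules are considerably more involved.

```python
# Illustrative only: monthly cost of a Windows VM with and without a
# bring-your-own-license benefit. Rates below are hypothetical.
HOURS_PER_MONTH = 730

def vm_monthly_cost(compute_rate, windows_license_rate, byol=False):
    """Monthly cost: compute plus the Windows license, unless BYOL applies."""
    license_rate = 0.0 if byol else windows_license_rate
    return (compute_rate + license_rate) * HOURS_PER_MONTH

pay_as_you_go = vm_monthly_cost(0.20, 0.09, byol=False)
with_benefit = vm_monthly_cost(0.20, 0.09, byol=True)
print(round(pay_as_you_go - with_benefit, 2))  # monthly license savings per VM
```

Multiplied across hundreds of VMs and several years, even a modest per-VM license saving like this one becomes the sort of line item that makes or breaks a migration business case.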
While it may at first glance seem like a one-time tool, TSO Logic is banking on the platform's value as a subscription service. Options, services and offers are constantly changing in the cloud, making cost modeling an ongoing value, the executives say. Meanwhile, customers typically roll out their cloud migrations in multi-year waves: Year One might be an on-demand scenario with optimization, Year Two might bring Reserved Instances, and in Year Three a customer might start putting buckets of applications in different clouds.
The bottom line is that the public cloud is a tricky place. The more detail you can build into your cost projections, the more likely you are to hit your cloud savings goals. You can request a demo from the TSO Logic website.