
PanelCast Recap: The Enterprise IT Landscape in 2019

Those working in the world of enterprise IT maintain a delicate balance between what is and what will be. Most CIOs have one foot in the data center and one in the future, and that can be tricky when it comes to anticipating not only the new services and technologies to implement, but also the climate of the organization they serve.

Our inaugural PanelCast event addresses these issues in a discussion moderated by our very own Scott Lowe, along with four industry experts:

  • Jeff Ready (Scale Computing)
  • Mike Wronski (Nutanix)
  • Theresa Miller (Cohesity)
  • Sirish Raghuram (Platform9)

Through a series of eight questions sourced from the audience, the panel and I discuss key trends expected to impact your company in 2019, the challenges of implementing some of these new technologies, and how you can help your company prepare for (and take advantage of) the changing face of IT next year.

The event is available to watch here, but if reading is more your style, what follows is a recap of the important points that were discussed among the panel’s participants.

Question 1: The speed at which innovation is expected is driving the need for containers, serverless, etc. What does this mean for how organizations view the role of IT?

The pace of change is steadily increasing, and it’s not going to slow down in the foreseeable future. As technology continues to evolve, it’s become increasingly clear that the de facto architecture will be a hybrid cloud model. We also expect to see greater adoption of edge computing. Security and data protection are increasingly becoming boardroom issues. We’ll see more widespread adoption of containers. And, perhaps most importantly for many people, we expect a continued shift for IT professionals into more of a hybrid role of tech/business.

Raghuram acknowledges that many new applications are going to be designed to leverage containers, cloud technologies, or serverless platforms. The trick is supporting these new technologies without leaving the old ones behind. “These platforms need to be supported in a hybrid context,” says Raghuram. Ultimately, the responsibility for managing these platforms in a consistent manner lies with IT, and how well this is done will be a key measure of success in 2019 and beyond.

“What’s interesting,” says Wronski, pointing out the technology-first backgrounds of most IT professionals, “is we sometimes forget the business process side of this, and there is a lot of adaption a business has to go through to even allow IT to help manage these two distinct groups.”

Miller acknowledges that it’s not an easy thing. “IT is leaner and becoming more generalist, yet there’s still these specialized areas that people need to support, so it’s going to be tricky.” Communication is the key here, and figuring out how IT can integrate with the business.

Ultimately, it comes down to what the organizations will expect, and according to Ready, this will be flexibility. “If there’s one thing I’m hearing out there, it’s that IT needs to be much more flexible.”

Scott's Take

It wasn’t that long ago that IT was managing servers and was called “computer services” or the “management information systems” department. There was a relatively fixed context in which those early manifestations of IT operated. The walls of those fixed constructs have been demolished, and technology is spilling out into the entire organization in ways that were just dreams a few years ago. Between new architectural opportunities – containers, cloud, serverless technologies, and more – it can be tough for IT pros and CIOs to get their eyes off the infrastructure ball. However, as the business continues or undertakes digital transformation efforts, the view of what IT is about will change dramatically. In fact, it’s already happening, and it’s going to get more pervasive in the coming years.

The role of IT and of the CIO, in the eyes of business leaders, has always centered on the business, but IT hasn’t always had the luxury of that view. The result has been divides, with IT seen as somewhat separate from the business, sharing only a Venn-diagram-like area of overlap. The reality for CIOs is that the circles of that Venn diagram continue to merge. Eventually, there will be but one circle, with IT and the rest of the business finally working in lockstep to ensure success in transformation efforts and a full-court press on meeting the needs of customers and their experience – both digital and analog – with the business.

Question 2: Edge computing appears to be the next big area for growth. How do you think it will play a role in enterprise architecture in 2019?

Loosely speaking, edge computing refers to any non-cloud IT system that exists outside the four walls of the data center. “Ultimately, when you talk about flexibility, it’s all about running the application in the way that the application makes sense and where the application makes sense,” says panelist Jeff Ready (Scale Computing). Putting compute closer to the data opens up a world of new opportunities, but this new model comes with its fair share of challenges as well. New systems demand new people to run them, and most organizations are not sending a bunch of staff out to these edge locations.

Mike Wronski (Nutanix) adds that, in addition to making sense, it needs to be simple. “When you don’t have skilled IT in the field, and you have thousands of retail locations, you really want that to be as simple as possible for a non-skilled worker to do something as basic as reset, power on, power off.”

Security is a factor, too, and it’s an area where edge computing might offer an advantage. “From a security perspective, there is value in keeping that stuff at the edge and not bringing it back to the data center,” says Theresa Miller (Cohesity).

“It’s about pushing new experiences to customers at new speeds,” says Sirish Raghuram (Platform9), who agrees that creating simplicity around the complexity of the edge is key to its success. “People want a way to give developers the freedom to be able to innovate without getting locked into the specifics of edge locations vs. data center locations vs. public cloud locations.”

Scott's Take

On its own, the “edge” is almost the antithesis of the cloud. It’s a highly distributed physical presence, but, like cloud, it has incredible potential. We used to talk about “the cloud” and “on-premises.” On-premises could, technically, include edge devices, but the “edge” label is more commonly applied to systems that sit outside the formal data center. Edge services can encompass the various Internet of Things (IoT) devices strewn about the enterprise, but the term more accurately describes some level of formal computing services and data generation.

Edge computing has become a significant force for a number of reasons. First and foremost is the need to generate and use data without introducing significant latency, particularly as organizations produce growing volumes of data in real time. As those volumes grow, it becomes less palatable to add latency by shipping the data to a central data center or to the cloud for processing.

There are a number of use cases that demonstrate this, but I’ll focus on just two here.

First, consider a retail environment with hundreds or thousands of locations. Every day, a massive volume of data is generated at each of these locations. For a period of time, that data needs to be close to the user, so it makes sense for it to be handled locally. After a transaction is complete, however, it makes sense to push that data to the central data center for long-term analytics and archival needs. These kinds of edge environments don’t always have stable, redundant network connectivity, generally lack dedicated IT personnel at each site, and rarely include a hardened data center. Operating a locally robust edge stack that can serve the site’s needs for a period of time and then transfer data as needed to a more robust HQ environment or to the cloud makes perfect operational sense.
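
To make this pattern concrete, here is a minimal store-and-forward sketch in Python. It is purely illustrative: the SQLite buffer, the hq.example.com ingest endpoint, and the sync trigger are all assumptions made for the sake of the example, not features of any particular vendor’s edge stack.

```python
# Store-and-forward at the edge: serve transactions locally, then push
# them to HQ when connectivity allows. Endpoint and schema are
# illustrative assumptions, not any vendor's actual API.
import json
import sqlite3
import urllib.request

HQ_ENDPOINT = "https://hq.example.com/transactions"  # hypothetical URL

db = sqlite3.connect("edge_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

def record_transaction(payload: dict) -> None:
    """Handle the sale locally first; queue it for later upload to HQ."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(payload),))
    db.commit()

def sync_to_hq() -> None:
    """Drain the outbox to HQ, leaving the queue intact if the link is down."""
    for row_id, payload in db.execute("SELECT id, payload FROM outbox").fetchall():
        request = urllib.request.Request(
            HQ_ENDPOINT,
            data=payload.encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            urllib.request.urlopen(request, timeout=5)
        except OSError:
            return  # connectivity lost; retry the remaining rows later
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()

record_transaction({"sku": "A-100", "qty": 2, "total": 19.98})
sync_to_hq()  # e.g., run on a timer or a connectivity event
```

The design choice is the point: the sale completes locally no matter what, and the upload to HQ is a best-effort background task that tolerates a flaky link.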

The second use case revolves around mobility, but not in the devices sense. Think autonomous vehicles. The computing needs in such vehicles are intense, as all kinds of recognition systems are in use to help the vehicle differentiate between a tree, a traffic light, and a human. These vehicles generate massive amounts of data that have to be processed instantly so that an appropriate decision can be made. Can you imagine what would happen if the vehicle could make a decision only after uploading gigabytes’ worth of data to the cloud to be analyzed there? At the same time, though, data centers and cloud environments are critical components of an autonomous vehicle’s network. What each vehicle “sees” and learns is essential information for other autonomous vehicles. As such, the combination of edge and cloud-centric operating environments is crucial to the success and safety of these fleets.

Question 3: Hyperconvergence has become a force unto itself. What comes after HCI? Is it extensions that turn HCI into a broader platform or is it a whole new architecture altogether?

Taking into account the evolution of technologies, HCI seems to be at the forefront for now, but what is that going to look like in 2019 and beyond?

Consider the ultimate goal of homogenizing the experience across all services and platforms; that is what HCI is likely to become. According to panelist Mike Wronski (Nutanix), it’s going to be “the true version of software-defined everything in a full stack.”

Jeff Ready (Scale Computing) believes that the reality of software-defined everything will drive hyperconvergence. “As you start to move to everything to be software defined, that opens up the opportunity to do a lot more orchestration and automation.”

While hyperconvergence is clearly the future, it’s still a ways off for some organizations, says Theresa Miller (Cohesity). “Not everyone has the budget to install it today.” Either way, it’s good that the conversation has started now, so that companies will be prepared once they are ready to take the leap.

Scott's Take

Really, we’re already seeing the beginnings of what a post-HCI world looks like. In the beginning of HCI, software-defined conglomerations of the hypervisor, compute, and storage layers sparked something of a transformation in how people thought about their data center environments. Today, I see three “blurry” paths for hyperconvergence. I say “blurry” because these are not mutually exclusive options.

First, I see HCI expanding beyond the confines of the central data center to consume the edge and, to a degree, the public cloud. However, the public cloud component is typically handled through some kind of overlay or as an extension to on-premises HCI environments. This edge-centric vision of HCI makes a lot of sense for a number of reasons. As we look at the common retail edge use case, it’s clear that individual stores can’t be staffed with hordes of IT support staff, and they often barely have room for inventory, let alone a rack of servers. With a hyperconverged solution, some of which can operate with as few as two nodes, each retail location can enjoy a robust hardware stack that ensures the site’s operations aren’t disrupted the way they would be with hardware built without availability in mind.

Second, early incarnations of HCI were touted as “linearly scalable,” which is a good thing. However, the challenge some organizations saw was that they were essentially forced to buy more hypervisor licenses each time they needed more storage. Or they had to buy more compute even if they just needed storage. Today, we’re seeing interest in a segment of the HCI market that takes a disaggregated resource approach. You might be tempted to think of these as composable systems, but they’re not quite there in all cases. Regardless, the benefit of this disaggregation is that organizations can granularly size and scale individual resources to meet the demands of their applications. If they need more storage, they simply add a storage block. The HCI part comes in at the management layer, where the resources are still managed in a singular way. There is debate in the industry around whether these should be called HCI, but my personal take is that the terms we use to describe things need to evolve with the technology, so I’m comfortable putting these solutions into the HCI bucket.

I hinted at composable infrastructure in the previous paragraph. This is the third path HCI has taken, and it’s sometimes considered the next evolution of HCI, but with some key differences. In general, composable systems can support bare metal, virtualized, and containerized workloads, making them suitable for any application. HCI doesn’t usually support bare metal, since the hypervisor is so integral to those environments. Composable infrastructure, as a term, has gotten off to a slow start, but the outcomes – namely, the ability to weave together any combination of resources on the fly to support an application – are powerful.

Question 4: Security incidents seem to be the daily news these days. As multi-cloud environments take center stage, how can enterprises ensure a standard security architecture across both the private and public cloud?

As security breaches become more frequent with everything we digitize, the issue of data security is likely to take center stage in boardrooms everywhere. But the technology is not always the source of the problem. Panelist Jeff Ready (Scale Computing) points out that “when you really dig into a lot of these security breaches, a huge portion of them are a result of human error as opposed to IT dropping the ball.”

Edge computing only compounds the issue through the increased risk of physical security exposure. Flexibility is important, but so too is the need to control which users have access to the data.

Then there is the question of AI and how it might mitigate some of the risk. Ready sees AI as a potential boon to threat detection, but he’s not convinced it will be easy to develop.

Sirish Raghuram (Platform9) attributes the problem to security being treated as an afterthought. “Security needs to be brought further up in the design and development process.”

Automation may be one way to do this reliably, Mike Wronski (Nutanix) points out. “As we get to more API-driven, programmable data centers, now the security component gets to be somewhere where we can tie that in.”

Scott's Take

Security is absolutely a boardroom issue now, and in 2019 it’s going to continue to rise in importance. As we see infrastructure and data explode out of the safe(ish) confines of the data center, there is more opportunity for malfeasance, and the attack surface of organizations grows exponentially.

Over time, a number of trends will have to come together to help organizations and, frankly, society, deal with security:

  • A security-first mindset. This is not to say that every product and service has to be a security product, but every product and service needs to be developed with security as a core part of the design and that goes for any organization undertaking multi-cloud projects.
  • Increasing levels of awareness. Employee training programs and a focus in the boardroom are critical.  No one can afford to lose sight of security in today’s world.  The resulting impact is simply far too severe.
  • AI-powered tools. It’s impossible for humans to keep up with every potential breach. Only with automated tools that keep a constant vigil over every aspect of an organization’s data infrastructure will companies be able to reduce their risk of exposure. I believe we’re going to see entire security markets upended by AI-centric tools. These robots won’t replace humans but will augment them. The AI tools will sift through volumes of information in seconds and alert humans to anomalies; a minimal sketch of this pattern follows this list. We’re starting to see this today, but it’s a market still in its infancy.
  • An expanded definition of security. When we think of security in the context of IT, we usually think of breaches of some kind. But, as a society, we’ll also need to grapple with security in the context of autonomous vehicles and the like. 2019 and beyond will provide a plethora of opportunities for those with an interest in security to expand their horizons!
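
To illustrate the “constant vigil” idea from the AI-powered tools bullet above, here is a toy sketch that flags statistically unusual values in a metric stream and defers the judgment call to a human. The metric, the data, and the three-sigma threshold are illustrative assumptions only; production security tooling is far more sophisticated than this.

```python
# Toy anomaly flagging over a metric stream (failed logins per minute).
# The data and three-sigma threshold are made-up for illustration.
from statistics import mean, stdev

def find_anomalies(samples: list, z_threshold: float = 3.0) -> list:
    """Return indices of samples more than z_threshold std devs from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > z_threshold]

failed_logins = [4, 6, 5, 3, 7, 5, 4, 6, 5, 98, 5, 4]  # fabricated sample data
for i in find_anomalies(failed_logins):
    # This is the hand-off point: the tool surfaces the anomaly,
    # and a human analyst decides what it means.
    print(f"minute {i}: {failed_logins[i]} failed logins looks anomalous")
```

The division of labor is the takeaway: the machine sifts every minute of data without fatigue, and the human is pulled in only for the handful of moments that look wrong.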

Question 5: The economy can’t be ignored. We’ve enjoyed a decade-long rally that has propelled startup fortunes and enabled enterprise customers to make new investments. Do you think that will change in 2019 as we see increased market volatility and, if so, what do you think the outcomes look like?

As a rule, market volatility impacts underlying budgets, and it can have an effect on the way organizations make investments. Panelist Theresa Miller (Cohesity) hopes that businesses have been smart about the high earnings they’ve enjoyed during this rally, but she doesn’t think a downturn will slow development or growth significantly.

More than anything, it’s likely to lead to a higher awareness of ROI. “If and when we do get into a slowdown or even a recession, I think that you might see a change in how people behave in terms of being more cost conscious and having more hard dollar conversations,” predicts Sirish Raghuram (Platform9).

The good news is that IT has considered the possibility of a downturn, and is preparing for it in the form of tools that provide better visibility into the ROI and TCO of applications.

Scott's Take

As the economy continues to cool, it’s inevitable that we’ll see slowdowns in the enterprise IT world.  Potential customers will hold back spending and that will create a slowdown among the vendor community.  Some that are on the bubble may not make it.

More importantly, customers will push what they have to its limit. Budgets will get tight again and, as noted above, there will be a stronger interest in ensuring ROI and TCO. One thing I still see quite often is an underappreciation of the TCO of some of the solutions on the market. A slowing economy will force customers to ensure maximum ROI on what they buy, but I also think it will push them to wring every possible benefit from those purchases in an effort to lower TCO as well.

Question 6: Are interoperability and integration between product manufacturers increasing or decreasing?

There is no question that product manufacturers are beginning to see the value of collaboration. The trend is API-driven, but it may not go far without standardization. “The intention is there for it to be increasing and for these platforms to be able to talk with each other through their APIs, but I also think that without the standardization, it can be very challenging at times for enterprise to pull into one standard, central platform,” says panelist Theresa Miller (Cohesity).  “The great thing about standards,” jokes Wronski, “is that there’s so many of them to choose from!”

Sirish Raghuram (Platform9) sees an upward trend in API-driven interoperability, too, and believes that open source in particular will drive more integration.

It may end up being enterprise demand that drives greater interoperability among product manufacturers. Wronski sees this as an inevitability. “I definitely see a desire by our customers at the enterprise to have this, and they’re the ones that are going to push us as software and solution vendors into this.”

Scott's Take

Interoperability and integration are really two different factors that have similar intended outcomes.  Interoperability describes the capabilities for one solution to work with another. Integration is, to me, a step deeper.  Rather than just working together, integration means that two or more services are deeply intertwined with one another.

It’s interesting that we’re living in an increasingly standards-based world that, at the same time, seems ever more fractured. The one real commonality we’re seeing in a lot of products today is the inclusion of APIs that can increase integration while also enabling advanced automation and orchestration capabilities.

That doesn’t mean these APIs are always easy to leverage. In fact, I believe previously infrastructure-only IT pros who learn some level of coding will be linchpins to success in the not-too-distant future. In 2019, IT pros who learn to code in order to improve IT and business efficiency will be making an incredible investment in their future. The market isn’t going to see the level of interoperability and integration it wants without some effort from IT, and APIs exploited by forward-thinking IT pros will be at the heart of that effort.
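
As a hedged illustration of what that coding investment might look like, here is a short sketch that provisions VMs through a REST API rather than a management GUI. The endpoint, token, and payload are hypothetical stand-ins, not any real platform’s API; consult your own vendor’s documentation for the actual calls.

```python
# Provisioning through a REST API instead of a management GUI. The
# endpoint, token, and payload below are hypothetical stand-ins.
import json
import urllib.request

API_BASE = "https://infra.example.com/api/v1"  # hypothetical management API
API_TOKEN = "changeme"                          # issued by your platform

def create_vm(name: str, cpus: int, memory_gb: int) -> dict:
    """Ask the platform's API for a new VM and return its description."""
    body = json.dumps({"name": name, "cpus": cpus, "memory_gb": memory_gb})
    request = urllib.request.Request(
        f"{API_BASE}/vms",
        data=body.encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Once provisioning is a function call, scripting a fleet is trivial:
for i in range(3):
    create_vm(name=f"web-{i:02d}", cpus=2, memory_gb=8)
```

The loop at the bottom is the payoff: what used to be thirty minutes of clicking becomes three lines of repeatable, reviewable code.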

Question 7: Presently, it seems that cloud run costs/subscriptions can easily exceed on-premises costs for general workloads beyond small systems. Do you see this changing moving forward, and how soon?

Operating in the cloud is expensive, and that’s not likely to change in the foreseeable future.

On-prem costs vs. off-prem costs are inevitably tied to operating costs, an area that panelist Sirish Raghuram (Platform9) sees gaining more attention in 2019. “I do think there will be a renewed focus on the operating cost [over opportunity cost], and I think there’s very little debate that, generally speaking, operating costs on the cloud can easily exceed the on-premises cost.”

Ultimately, it depends on whether or not companies are willing to change what they’re doing now, because without that change, the cost is not going to go down. “But as customers start doing a true on-prem refactoring of how they build out and architect their data centers,” says Wronski, “that’s where that parity or a true competitive cost nature is going to come from.”

Jeff Ready (Scale Computing) is not optimistic. “I think we’re a ways away from seeing the cloud become cheaper,” he says.

Scott's Take

“The cloud” has had one of the most visible journeys along the Gartner Hype Cycle in recent memory. At inception, people quaked in fear that the cloud would end their jobs as they listened to the pure hype generated about this technology. That, of course, didn’t happen. But a lot of companies still made huge bets and started dumping everything there in the hope that it would reduce cost and complexity.

In some ways, this happened.  Organizations that wanted to ramp quickly leveraged cloud as their starting point, giving birth to the phrase “born in the cloud.”  This is a trend that is continuing, but, at the same time, organizations have realized that public cloud is not the only way.  As such, we’re on a direct path to a world dominated by hybrid and multi-cloud architectures.

There are a lot of reasons that the public cloud isn’t a simple panacea.  First and foremost is cost.  Those early pioneers that made their big jumps into cloud quickly discovered that there was an inflection point at which the cost of public cloud began to far outweigh the cost of on-premises deployments.  Second, people quickly realized that some applications simply aren’t meant for cloud and that so-called “lift and shift” operations just didn’t provide the hoped-for results or cost benefits.
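
A quick back-of-the-envelope sketch shows how such an inflection point emerges. Every figure below is a made-up assumption for illustration, not a benchmark: a steady monthly cloud bill eventually overtakes an up-front on-premises investment plus its smaller monthly operating cost.

```python
# Back-of-the-envelope breakeven between cloud and on-premises spend.
# Every figure here is a made-up assumption for illustration only.
ONPREM_CAPEX = 120_000   # up-front hardware purchase
ONPREM_OPEX = 2_000      # monthly power, space, and support
CLOUD_MONTHLY = 7_500    # monthly bill for comparable capacity

def breakeven_month(horizon_months: int = 60):
    """Return the first month cumulative cloud spend exceeds on-prem spend."""
    for month in range(1, horizon_months + 1):
        if CLOUD_MONTHLY * month > ONPREM_CAPEX + ONPREM_OPEX * month:
            return month
    return None  # cloud stays cheaper over the whole horizon

print(f"Cloud overtakes on-prem in month {breakeven_month()}")  # month 22 here
```

With these illustrative inputs, the lines cross in under two years; the real exercise is plugging in your own numbers before, not after, the migration.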

But the public cloud is here to stay, and with good reason. It has enabled organizations to move far more quickly than ever before, and it provides an incredibly powerful, widely distributed home for business-critical workloads. It has also enabled the innovation of software-as-a-service tools that have begun to replace some stubborn on-premises tools.

Moreover, there exist today any number of tools that can help organizations better predict their ongoing cloud spend, although there is still a need for constant invoice diligence. Further, as we see continued innovation in software development – such as the use of microservices and containers – we’ll see a rise in adoption of both public cloud services and on-premises infrastructure in hybrid cloud architectures.

Question 8: IT needs to meet all the needs of the business, but the business isn’t always interested in buying more stuff – tools, software, platforms, hardware, storage, networking, etc. How does the business forge ahead without IT just continuing to buy?

The interests of IT have not always been in line with the interests of the business, since the two have traditionally focused on different priorities. But with greater collaboration in the future, we should begin to see more alignment. “The best case scenario is when your CIO is fully integrated with the business and working together with them,” says panelist Theresa Miller (Cohesity).

It also has to do with shifting the IT group’s focus to business needs, so it can better solve problems, identify solutions that require less manpower, and avoid repeating mistakes.

The best solution for cost savings, however, goes back to consolidation. The market has become inundated with “tools and operational sprawl,” says Sirish Raghuram (Platform9), and by focusing on consolidation, companies can begin to realize a true unified experience across operations and development.

Scott's Take

When things get hard, it’s easy to fall back into old tropes like “throwing hardware at problems” but that doesn’t generally get the business to where it really needs to go.  Instead, CIOs and other decision-makers need to take a more revolutionary approach, particularly as the rest of the business seeks to focus squarely on the customer experience and other critical items through the implementation of digital transformation initiatives.

And, as we enter a period in which we’re almost certainly going to see a slowdown in the economy, IT will be asked to tighten up and focus on maximizing current investments. That’s going to mean thinking differently about infrastructure and its role in supporting applications. Looking for ways for IT to automate infrastructure operations so teams can focus on digital initiatives will be paramount.

For CIOs, every year is a make-or-break year; it’s just for a different reason each time. In 2019, modern CIOs will need to prove their mettle by demonstrating a deep understanding of, and appreciation for, truly linking technology and the business, but not in the ways we’ve undertaken before. While a great number of CIOs have made this leap, the fact remains that a great many more haven’t and are rooted in historical operations.

In 2019, for an internal IT operations focus, CIOs should center their efforts on driving as much efficiency as possible into the infrastructure environment. Make the routine things happen, well, routinely. If you’ve already managed to wring 100% efficiency from your infrastructure, the next step is to get into the rest of the business more deeply and push hard on initiatives that can drive revenue and outcomes.

Summary

Our first PanelCast event was fantastic, and you should keep an eye out for the next one. If you’re a vendor looking to sit on a future panel, check out ActualTech Media’s current event schedule.