Presented by Deloitte Consulting LLP


Cloud computing took off in 2007, and the migration was a gold rush, with companies racing to adopt silver-bullet solutions they hoped would transform their businesses. Fast forward to now. Cloud computing projects are in their second or third generation, and instead of delivering ROI, some are racking up sky-high operational costs, impeding efficiency, and stifling productivity. That’s on top of the system outages, security breaches, and customer-facing stoppages and delays that keep developers and architects up at night.

The problem? Cloud complexity — and for some companies, it’s become a disaster.

In its Cloud Complexity Management Survey, Deloitte Consulting LLP found that most large enterprises are running into more complexity than expected — and 47% of companies surveyed saw it as the biggest risk to their ROI. Some are even experiencing negative value from their cloud investments.

“People call up and say, ‘We’re not getting the value from cloud that we thought we would,’” says David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting LLP. “99 times out of 100 you can map it down to the complexity issue.”

The complexity issue comes down to three main factors:

  • Several different cloud providers to choose from, each with its own databases, development environments, security systems, governance systems, etc.
  • Companies not retiring on-premise systems as quickly as they planned, so IT departments have to support increasingly complex new systems while still maintaining the old ones
  • The rise of multicloud over the past five years

“There’s a sweet spot between innovation and complexity,” says Dave Knight, VP of Alliance Relationships and Deloitte’s IBM Alliance Global Cloud Lead. “That’s the challenge, identifying when you’re at that tipping point, and reining it in or increasing your operations cost to offset the complexity. But most organizations, even the global 2000, are just starting to recognize this as a problem.”

Can you regain your cloud ROI?

“Of course, it’s never too soon to do good architectural planning to deal with complexity issues proactively,” Linthicum says, “but it’s not too late for companies that find themselves in a tight spot.”

“The first thing is admitting that there’s an issue, which is a tough thing to do,” Linthicum acknowledges. “It essentially requires creating an ad hoc organization to get things back on track and simplified, whether that’s hiring outside specialists, or doing it internally.

“The good thing about that is typically you can get 10 times ROI over a two-year period if you spend the time on reducing complexity,” he says.

Even with that incentive, reducing complexity involves a cultural change: shifting to a proactive, innovative, and more thoughtful culture, which many organizations struggle to move toward, he warns. The most effective way to do that is retraining, replacing, or revamping.

“That’s going to be a difficult thing for most organizations,” Linthicum says. “I’ve worked with existing companies that had issues like this, and I find it was the hardest problem to solve. But it’s something that has to be solved before we can get to the proactivity, before we can get to using technology as a force multiplier, before we can get to the points of innovation.”

Solving for complexity

Linthicum estimates that the fix typically costs about 20% of the current cloud computing budget, over and above existing IT spend. In other words, if you spent $4 million on cloud services, fixing a critical complexity issue could cost roughly $800,000 on top of your normal IT budget.

“It can be a fairly significant chunk of money,” Linthicum says, “but keep this front and center: you’re likely to get that money back 10 times over a two-year period.”
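As a back-of-the-envelope illustration only, the arithmetic behind those two figures looks like the sketch below. The $4 million spend, the 20% remediation cost, and the tenfold two-year return all come from Linthicum’s example; real costs and returns will vary by organization.

```python
# Illustrative arithmetic only, using the figures from Linthicum's example.
# Actual remediation costs and returns will differ for every organization.
cloud_spend = 4_000_000        # example spend on cloud services ($)
remediation_rate = 0.20        # "about 20% of the current cloud computing budget"
roi_multiple = 10              # "get that money back 10 times over a two-year period"

remediation_cost = cloud_spend * remediation_rate
projected_two_year_return = remediation_cost * roi_multiple

print(f"Cost to fix complexity: ${remediation_cost:,.0f}")            # $800,000
print(f"Projected two-year return: ${projected_two_year_return:,.0f}")  # $8,000,000
```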

To course correct, organizations should break apart the complex architecture into individual domains, grouping them by type: for instance, all cognitive pieces together, all services, all processes, on-premise systems, infrastructure-related components, and so on. Within each of those buckets, you can create an abstraction layer with containerization software, such as Red Hat’s OpenShift. It works to simplify both data and services by consolidating them into one central resource, whether the data lives in the cloud or not.
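As a rough sketch of what that abstraction layer can look like in practice, the snippet below uses the Python Kubernetes client (OpenShift is built on Kubernetes, so the same APIs apply) to push a single containerized service definition to clusters running in different clouds and on-premise. The context names, namespace, and image are hypothetical placeholders, not anything prescribed by Deloitte or Red Hat, and a real OpenShift rollout would more likely lean on GitOps or OpenShift’s own tooling than a hand-rolled script.

```python
from kubernetes import client, config

# Kubeconfig context names are assumed placeholders for clusters in different clouds.
CLUSTERS = ["aws-prod", "azure-prod", "on-prem-dc1"]


def order_service_deployment() -> client.V1Deployment:
    """One containerized service, described once for every environment."""
    container = client.V1Container(
        name="order-service",
        image="registry.example.com/order-service:1.4",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "order-service"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "order-service"}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="order-service"),
        spec=spec,
    )


for context in CLUSTERS:
    # The same deployment call, regardless of which provider hosts the cluster.
    api = client.AppsV1Api(config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="services", body=order_service_deployment())
    print(f"order-service deployed to {context}")
```

The point is the shape of the thing: the workload is described once, and the per-cloud differences live below the abstraction rather than in every team’s runbooks.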

“Instead of requiring that people manage three different services, each with its own identity access management and a proprietary directory service, we can leverage a tool that’s able to work across the different heterogeneous cloud environments,” Linthicum explains.

Using a container-based platform like OpenShift, you can abstract the sharing of information between directories to support common applications such as security systems, and automate how security information is shared proactively within a single console.

From there you can automate the use of data so that it’s accessible across various functions and applications without increasing the complexity as more data sources are added. Another benefit is that when something goes down, you can quickly spin it back up. And it’s all under a single management layer with the same interface, no matter where the workload is running.
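Continuing the same hypothetical sketch (same assumed context names and namespace as above), the “spin it back up” benefit amounts to one short routine run against every cluster through the same interface, whether the workload lives in a public cloud or on-premise:

```python
from kubernetes import client, config

# Reuses the hypothetical contexts, namespace, and service name from the earlier sketch.
DESIRED_REPLICAS = 2

for context in ["aws-prod", "azure-prod", "on-prem-dc1"]:
    api = client.AppsV1Api(config.new_client_from_config(context=context))
    dep = api.read_namespaced_deployment(name="order-service", namespace="services")
    ready = dep.status.ready_replicas or 0
    if ready < DESIRED_REPLICAS:
        # Same management call, no matter where the cluster actually runs.
        api.patch_namespaced_deployment_scale(
            name="order-service",
            namespace="services",
            body={"spec": {"replicas": DESIRED_REPLICAS}},
        )
        print(f"{context}: scaled order-service back up to {DESIRED_REPLICAS} replicas")
```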

Taken together, the abstraction layer hides the complexity from the people leveraging that domain. You can worry less about where things are running and how they get there, and focus instead on creating value.

“Leveraging something like OpenShift is the epitome of that abstraction layer for containerized workloads,” Knight says. “From an architect’s perspective, that’s having backup across clouds. From a business perspective, it’s having that application always available and serving my customers.”

The ROI of best practices

“In the end, it comes down to valuing efficiency and innovation,” Linthicum says. “Managing complexity means returning to a streamlined, productive state. You can spend less on your cloud bill, less on development, less on everything having to do with IT; reduce your security vulnerability; and regain the tremendous benefits cloud computing offers. Plus, you’ll have applications that hold their value over a longer period of time.”

“The most successful companies,” Knight adds, “are the forward-looking ones.” Game changers and innovators in this space treat innovation as core and guard it jealously. They don’t allow complexity or other architectural issues to creep in.

“If a customer wants to talk about yesterday’s challenges and how to address them today, they’re more than likely behind the complexity curve,” he says. “If I find myself talking to a customer that wants to talk about tomorrow’s challenges, they’re the ones who will ultimately be more successful.”

Please see www.deloitte.com/us/about for a detailed description of Deloitte’s legal structure.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
