What is cloud repatriation and what are the leading causes?
Cloud repatriation is the practice of moving applications out of the public cloud to a different location, most often on premises, typically for financial, performance or regulatory reasons.
Disillusionment with public cloud over common complaints such as unexpected costs and the potential for vendor lock-in is not the only reason organizations move applications back on premises. Cloud repatriation also results from organizations taking a more prescriptive approach to workload deployment, against a technological backdrop where they can enjoy the main benefits of public cloud -- especially scalability, elasticity and consumption-based pricing -- in a location of their choosing. Repatriation is becoming less reactive and more of an integral part of enterprise IT strategies.
Evolution of public cloud providers
The beginning of the modern cloud computing era can be dated to the launch of Amazon Web Services (AWS) in 2006. Today it is difficult to imagine how simple, albeit useful, AWS' menu of offerings was at the outset: compute, storage and networking capacity oriented around virtual machines.
Application hosting existed long before AWS, but the platform provided critical and fundamental improvements. AWS offered scalability and elasticity when a customer needed them, eliminating much of the guesswork associated with rightsizing traditional single-tenant architectures, in which an environment is dedicated to a single customer.
Over time, AWS has added thousands of services to the platform, enabling customers to not only move their on-premises applications to the cloud but also augment and improve them with services -- as well as to build entirely new ones with the sophisticated development and operations tools available in the AWS ecosystem. Competitors soon followed AWS' lead and developed public clouds of their own.
After some years of heated competition, the market for the biggest, "hyperscale" public clouds is dominated by AWS, Microsoft and Google, with Oracle still trailing the three by some distance but experiencing rapid growth in recent quarters. The public cloud market has also seen smaller newcomers emerge that, in a sense, are coming full circle: they mostly offer a baseline of compute, network and storage, albeit often at lower prices than their larger competitors.
There are also a growing number of on-premises cloud platforms. Those offered by hyperscalers, such as AWS Outposts and Microsoft Azure Stack, tend to be closely aligned with their public clouds, giving customers the ability to move workloads back and forth as they choose. Traditional original equipment manufacturers, such as HPE and Dell, offer private cloud platforms that serve a more general purpose, although they are increasingly integrated with public cloud platforms.
All of this means customers invested in the cloud computing model have more choices, as well as more decisions to make, about exactly where certain workloads should run and why.
Common reasons for cloud repatriation
Dropbox, Adobe and Basecamp creator 37signals are three oft-cited examples of companies that performed major cloud repatriations and realized significant benefits. Enterprises considering their own repatriation efforts can learn from these companies' particular experiences, but there are many common reasons for cloud repatriation:
- Cost. The public cloud offers easy consumption and scaling of resources, but waste and inefficiency can creep in if an application isn't recalibrated to take full advantage of the underlying infrastructure and native tools. Constant pricing changes -- even though they typically benefit customer bottom lines -- require careful review and diligent oversight of long-term contract agreements.
- Control. Public cloud platforms make it easy for individuals or small teams to spin up resources on an ad hoc basis, which provides flexibility but can lead to governance and cost issues without strong policies and oversight. The emerging practice of FinOps, in which finance and operations teams collaborate, is meant to mitigate this risk. In addition, a company might find that government regulations compel it to move workloads out of a public cloud, a fact underscored by the sovereign cloud movement.
- Storage. The cost of public cloud storage continues to fall over time, but egress fees -- the charges cloud providers levy when customers pull data out of the platform -- are another critical consideration when storing massive amounts of information in a public cloud. Of late, the issue has been addressed to a degree, one example being the Bandwidth Alliance, a consortium of providers that offer shared customers waived or discounted egress fees. In addition, in 2024 Google, Microsoft and AWS all waived egress fees for customers who wanted to leave their platforms.
- Misconfiguration. The cloud industry has endured seemingly countless stories about data leaks and breaches caused by misconfigured storage buckets. This data security risk is tied to the shared responsibility model, where cloud providers are in charge of the underlying infrastructure but customers must configure and lock down systems on their own.
- Performance. While hyperscale public clouds are constantly expanding their global infrastructure footprint, some applications will still run better in a private environment due to lower latency.
- Vendor lock-in. While a growing number of enterprises are pursuing a multi-cloud or hybrid cloud strategy -- which can mean the use of more than one public cloud in the former case, or a combination of public and private cloud infrastructure in the latter -- those placing the bulk of their IT investment in one provider might find themselves second-guessing that decision.
- Skills gaps. An enterprise that moves existing applications to a public cloud environment with few or no changes -- a practice known as lift and shift -- might not have the internal abilities to refactor these applications to take full advantage of a provider's underlying services. It could also lack the skill sets needed to create new applications on a public cloud using modern, microservices-based architectures and the ever-increasing number of services available. This can lead to expensive consulting bills or inertia when trying to effect substantial change and progress in IT strategies.
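The cost and egress concerns above lend themselves to a quick back-of-the-envelope calculation. The sketch below is a minimal cost model -- the `vcpu_rate`, `storage_rate` and `egress_rate` values are illustrative assumptions, not any provider's actual pricing -- showing how egress charges can come to dominate the bill for a data-heavy workload:

```python
# Rough monthly cloud cost model. All rates are illustrative
# assumptions for the sake of the example, not real pricing.

def monthly_cloud_cost(
    vcpu_hours: float,
    storage_gb: float,
    egress_gb: float,
    vcpu_rate: float = 0.04,      # assumed $/vCPU-hour
    storage_rate: float = 0.023,  # assumed $/GB-month
    egress_rate: float = 0.09,    # assumed $/GB transferred out
) -> float:
    """Return an estimated monthly bill in dollars."""
    compute = vcpu_hours * vcpu_rate
    storage = storage_gb * storage_rate
    egress = egress_gb * egress_rate
    return round(compute + storage + egress, 2)

# A data-heavy workload: modest compute (8 vCPUs around the clock),
# 50 TB stored, 20 TB pulled out per month.
print(monthly_cloud_cost(vcpu_hours=8 * 730, storage_gb=50_000, egress_gb=20_000))
```

Under these assumed rates, egress alone accounts for more than half of the estimated bill despite modest compute usage, which is why waived or discounted egress fees figure so prominently in repatriation decisions.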
What are the challenges of cloud repatriation?
Companies that repatriate public cloud workloads are seeking to learn from their initial mistakes or adapt to new realities regarding costs, regulations and other factors. It is critical for them to develop a thoughtful approach to cloud repatriation to maximize its benefits and avoid the following challenges:
- Determining what to repatriate. It could be, for example, that a lift-and-shift of SAP applications doesn't perform any better -- or performs worse -- than when the software ran on traditional hardware inside the customer's data center. Some critical workloads could be run more economically on premises and provide a greater sense of control. Any workload targeted for migration should be subject to a rigorous cost-benefit analysis, however.
- Choosing the right landing spot. A repatriated workload might end up running best on commodity hardware, but enterprises should also consider whether to invest in bespoke platforms that provide a public cloud-like software layer. Portability and flexibility are key as well. Seek out software platforms, such as VMware Cloud Foundation, that are certified to run across many environments.
- Collaborating effectively. Both IT and business teams should be in the conversation about whether to repatriate a cloud workload. Each stakeholder can provide valuable input on factors such as cost savings, performance, privacy and compliance concerns.
- Making the most of a repatriation. Just as companies moved workloads to the public cloud in search of cost savings, convenience and efficiency, a move back should seek the same benefits. View cloud repatriation as an evolution of the overall IT strategy, rather than a wholesale rejection of past practices. However, companies that undertook significant overhauls of a given application when migrating it to the cloud must consider what is required to ensure it runs properly in the new environment.
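The rigorous cost-benefit analysis recommended above can be sketched as a simple break-even calculation: how many months of cloud savings does it take to recoup the upfront hardware and migration spend? All figures below are hypothetical assumptions for illustration only:

```python
# Break-even sketch for a repatriation decision. The dollar figures
# in the example call are hypothetical, not benchmarks.

def breakeven_months(cloud_monthly: float,
                     onprem_capex: float,
                     onprem_monthly: float):
    """Months until cumulative on-prem cost drops below staying in the cloud.

    cloud_monthly  -- current monthly public cloud spend
    onprem_capex   -- upfront hardware and migration cost
    onprem_monthly -- ongoing power, space, staff and support cost

    Returns None if on-prem running costs meet or exceed the cloud
    bill, in which case repatriation never breaks even on cost alone.
    """
    savings = cloud_monthly - onprem_monthly
    if savings <= 0:
        return None
    return onprem_capex / savings

# Assumed: $60k/month cloud bill, $500k upfront, $20k/month to run.
print(breakeven_months(60_000, 500_000, 20_000))  # 12.5 months
```

In this assumed scenario the upfront investment pays for itself in 12.5 months; a result of None would signal that the move cannot be justified on cost alone and must rest on control, performance or compliance factors instead.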