It's The End Of Infrastructure-As-A-Service As We Know It: Here's What's Next
Containerization and serverless computing represent the future of compute.
This article was written by Tolga Tarhan, CTO of Rackspace Technology, and was originally published on Forbes.
A decade ago, running virtual machines (VMs) in the cloud was state-of-the-art. It made cloud migration relatively simple: Companies could just shift the VMs they were already running on their on-premises servers to the servers of an infrastructure-as-a-service (IaaS) vendor. Freed from the burden of physical server maintenance, companies gained flexibility and cut costs.
But today, no one is building new applications based on VMs. Instead, they're turning to two models that are more cost-effective, low-maintenance and scalable than VMs can ever hope to be: containerization and serverless computing. These two models, not VMs, represent the future of compute.
Virtual Machines Aren't the Endpoint of IT Infrastructure Evolution
The recent history of IT infrastructure is the story of abstraction. Since the 1990s — when we ran applications on hardware in physical racks (imagine that!) — every new development has abstracted applications from hardware further and further, so the slice of infrastructure that companies had to manage grew thinner and thinner.
But the IaaS model of running VMs in the cloud is hardly the last step in that progression. VMs have some significant downsides:
- Each VM runs its own full operating system, which inevitably creates inefficiencies. Even when VMs are scaled and sized properly (which isn't a given), they leave a lot of unused capacity on servers.
- VMs still leave companies responsible for painful ops exercises like disaster recovery, high availability and scaling, as well as patching and security.
- VMs aren't very flexible and work differently on each hyperscaler, so a VM built on Microsoft Azure can't easily be migrated to AWS or Google Cloud.
Companies still trying to migrate to the cloud by migrating their VMs should think twice. Committing to an inefficient model now will hold back progress in the future. Instead, companies should look to containerization or a serverless model — even if it requires significant changes to their processes.
Containers and Serverless Reduce Ops Burden and Increase Efficiency
Containers are the next step in the abstraction trend. Multiple containers can run on a single OS kernel, which means they use resources more efficiently than VMs. In fact, on the infrastructure required for one VM, you could run a dozen containers.
However, containers do have their downsides. While they're more space efficient than VMs, they still take up infrastructure capacity when idle, running up unnecessary costs. To reduce these costs to the absolute minimum, companies have another choice: Go serverless.
The serverless model works best with event-driven applications: applications where a discrete event, like a user accessing a web app, triggers the need for compute. With serverless, the company never pays for idle time, only for the milliseconds of compute used to process each request. This makes serverless very inexpensive when a company is starting out at small volume, while also reducing operational overhead as applications grow in scale.
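The event-driven model described above can be sketched as a simple function in the style of AWS Lambda's Python runtime. The event shape and field names here are illustrative assumptions, not a specific production API; the point is that code runs only when a request arrives, so nothing is billed between events.

```python
import json

def handler(event, context=None):
    """Invoked only when an event arrives; no server sits idle between calls."""
    # Pull a parameter from the triggering request, with a default fallback.
    name = event.get("queryStringParameters", {}).get("name", "world")
    # Billing is metered per invocation and per millisecond of execution,
    # so this function costs nothing while no events are flowing.
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```

Because the function is stateless and self-contained, the platform can spin up as many copies as traffic demands and tear them all down when traffic stops, which is where the cost and scaling advantages come from.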
A Few Tips for Evolving Your IT Infrastructure
Transitioning to containerization or a serverless model requires major changes to your IT teams' processes and structure, as well as thoughtful choices about how to carry out the transition itself.
Some tips for managing a successful shift to modern IT infrastructure:
Know which model works best for your use case
If you can go serverless, you should. It's the most cost-efficient and forward-looking IT infrastructure model today. However, serverless represents an entirely new programming paradigm. Implementing it is usually only feasible when your team is coding something new from scratch.
By contrast, containers are the most convenient solution if you're refactoring or replatforming an application. The leading container framework, Kubernetes, is also universally accepted across hyperscalers, which makes containers ideal for maintaining cross-cloud portability or hybrid portability — running the same applications on-premises as in the cloud.
Adopt a cloud-native mindset
The transition to modern IT infrastructure is a people and process transformation as much as it is a technology transformation. Traditional IT infrastructure management relies heavily on manual point-and-click solutions. By contrast, managing containerized or serverless infrastructure is more like software engineering — IT teams use code to describe the end result they want, and automated systems spin it up.
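The idea of "using code to describe the end result" can be illustrated with a toy reconciler. This is not any real tool's API; it is a minimal sketch of the declarative pattern that tools like Kubernetes controllers or infrastructure-as-code systems follow: the team declares the desired state, and an automated loop computes the actions needed to close the gap.

```python
def reconcile(desired, running):
    """Return the actions needed to make `running` match `desired`.

    Both arguments map service names to replica counts. The service names
    and action tuples here are hypothetical, for illustration only.
    """
    actions = []
    for svc, count in desired.items():
        have = running.get(svc, 0)
        if have < count:
            actions.append(("scale_up", svc, count - have))
        elif have > count:
            actions.append(("scale_down", svc, have - count))
    # Anything running that is no longer declared gets removed.
    for svc, have in running.items():
        if svc not in desired:
            actions.append(("remove", svc, have))
    return actions
```

The team edits only the desired-state declaration (typically kept in version control); no one manually clicks through consoles to create or delete servers, which is the core of the DevOps shift described above.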
To take full advantage of the flexibility and efficiency afforded by modern infrastructure, IT teams must shift to what's known as a DevOps orientation, which brings agile software development practices to infrastructure management. For example, traditional enterprise IT teams tend to be siloed by function, but DevOps takes a more agile approach where one team owns an application from end to end. Adjusting to this new way of working is necessary for the success of your infrastructure transition.
Avoid proprietary third-party solutions
There are many third-party software stacks out there that claim to make the transition to containers or serverless easier. However, in the end, these "leaky abstractions" can just add extra steps and additional costs. While they may simplify the initial transition, as you grow more sophisticated, you'll eventually have needs they likely can't handle. Instead, you should cut out the middleman from the beginning and use open-source solutions with active communities, like Kubernetes, or adopt solutions from the hyperscalers. There may be a steeper learning curve, but it'll save your IT teams time down the line.
Don't transition all at once
Transitioning to containers or serverless in one go is extremely difficult, and you don't have to. Instead, move some services to containers while leaving the rest of your application unchanged, then transition more services over time until the application is fully containerized. This incremental adoption strategy is known as the "strangler method," since the new code slowly strangles the old. The slow-and-steady approach also gives your IT teams time to adjust to new ways of working.
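The strangler method above boils down to a routing decision at the front door of the application. The sketch below is a minimal, hypothetical illustration (the endpoint names and backend labels are invented): migrated paths go to the new containerized services, everything else still hits the legacy application, and the migrated set grows over time.

```python
# Endpoints already moved to the new containerized stack; this set grows
# with each migration step until it covers the whole application.
MIGRATED = {"/billing", "/search"}

def route(path):
    """Dispatch each request to the new stack or the legacy monolith."""
    if path in MIGRATED:
        return f"container-service:{path}"   # new, containerized code path
    return f"legacy-monolith:{path}"         # unchanged old application
```

In practice this routing usually lives in a load balancer or API gateway rather than application code, but the principle is the same: each added path shrinks what the legacy system handles, with no big-bang cutover.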
The Future of Compute
The market for VMs probably won't fall off overnight. There are too many legacy systems already running on that infrastructure in the cloud. However, that doesn't change the fact that containers and serverless are more flexible, lower maintenance, easier to automate and more cost-efficient. They're the future of compute, and companies should get on board.
About the Author
As CTO of Rackspace Technology, Tolga Tarhan leads the vision, innovation and strategy for our technology offerings. With more than two decades of experience leading product and engineering teams, and as a hands-on technologist at heart, he brings unique insights to customers undertaking the journey to the cloud. As an early pioneer of cloud native thinking, Tolga has driven our technical approach and transformed our customers into cloud native thinkers. He continues to show thought leadership in the field through extensive speaking engagements at AWS events, industry conferences and educational groups. Tolga previously served as CTO of Onica, which was acquired by Rackspace Technology. Before that, he co-founded Sturdy Networks and served as its CEO through the acquisition by Onica. Tolga holds an M.B.A. from the Graziadio Business School at Pepperdine University.