Get Faster, More Affordable Cloud Applications With OS Virtualization Containers

The concept of virtualization makes cloud computing possible, and plenty of people are familiar with machine-level virtualization hypervisors such as Xen, KVM and VMware. Machine-level virtualization is an intuitive abstraction for a lot of enterprise computing workloads because it makes one server look like many servers. Yet there’s a growing interest in operating system-level virtualization, and new projects are emerging to take advantage of the unique properties of this technology for cloud computing.

OS-level virtualization allows a single kernel to run multiple isolated user-space instances, and each instance is called a “container.” And just as there are many machine-level virtualization solutions, there are many OS-level virtualization solutions, such as LXC, FreeBSD Jails, AIX Workload Partitions and Solaris Containers.

But there haven’t been that many good uses for these systems. They’re impractical for general hosting because they require all users to run the same operating system—which is kind of like asking all of your customers to wear the same underwear.

And security is a major concern when working with OS virtualization. Operating systems have a lot of vulnerabilities, and if an attacker compromises the kernel, he or she gets all of the containers that run on top of it. Not what you want from, say, cloud computing.

But a new set of needs is driving people to reconsider containers and develop new uses for them. Here are a few possible uses to consider:

Save Money On Cloud Computing

Cloud computing costs are based on buying machine-level virtualization instances. That’s great for most workloads, but there are some applications that you might want to break into smaller parts. Say you wanted to have 30 applications running at the same time. You could spin up 30 individual micro cloud servers and run one application per server. Or you could start three big cloud servers and split up the OS on each into 10 containers. Could be cheaper. Could be a lot cheaper. Especially when it comes to managing and accounting for all those cloud servers.
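
If you want to put rough numbers on it, here’s a minimal back-of-the-envelope sketch in Python. The hourly prices are made-up placeholders, not actual cloud pricing; the point is the shape of the math, not the exact figures.

    # Hypothetical hourly prices -- illustrative only, not real cloud pricing.
    MICRO_SERVER_PER_HOUR = 0.02   # one small cloud server running one application
    BIG_SERVER_PER_HOUR = 0.12     # one large cloud server carved into containers

    apps = 30
    containers_per_big_server = 10

    # Option 1: one micro cloud server per application.
    micro_cost = apps * MICRO_SERVER_PER_HOUR

    # Option 2: a few big cloud servers, each split into ten containers.
    big_servers = apps // containers_per_big_server
    container_cost = big_servers * BIG_SERVER_PER_HOUR

    print(f"30 micro servers:         ${micro_cost:.2f}/hour")
    print(f"3 servers, 10 containers: ${container_cost:.2f}/hour")

With those placeholder prices, the three-big-server option comes out ahead before you even count the overhead of managing 30 separate machines.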

Running containers in the cloud is a matryoshka model of computing with virtualization in virtualization.

There’s a lot of efficiency to be squeezed out of this model, too. It’s not unusual for the operating system to require more resources than the actual applications that run on top of it. When you run five applications on five virtual machines, you also have to run five operating systems along with their associated binaries and libraries. But when you use containers, you might be able to squeeze six or seven applications onto a single operating system because there are no redundant operating system processes running. You don’t spend cycles powering multiple CentOS or Ubuntu installations. In effect, you get more application power for less computing power. In the cloud, that works out to smaller monthly bills.
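
Here’s an equally rough sketch of where the savings come from. The per-OS and per-application memory figures below are assumptions chosen purely for illustration:

    # Made-up resource figures, purely for illustration.
    OS_OVERHEAD_MB = 400    # assumed footprint of each guest operating system
    APP_FOOTPRINT_MB = 600  # assumed footprint of each application
    HOST_RAM_MB = 5000      # assumed capacity of the host

    # Virtual machines: every application pays the OS overhead again.
    apps_as_vms = HOST_RAM_MB // (OS_OVERHEAD_MB + APP_FOOTPRINT_MB)

    # Containers: one shared operating system, then pack in applications.
    apps_as_containers = (HOST_RAM_MB - OS_OVERHEAD_MB) // APP_FOOTPRINT_MB

    print(f"Applications as full VMs:   {apps_as_vms}")         # 5
    print(f"Applications as containers: {apps_as_containers}")  # 7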

Distribute Packaged Software Efficiently, Move Workloads Easily

Engineers make assumptions about how and where their products will be used. BMW assumes that people will drive on roads. Boeing assumes that its products will be used in the air. Neither cars nor airplanes are designed to work underwater (unless, of course, they are underwater cars or underwater airplanes). The same holds true for software engineers. They make assumptions about what operating systems their products will run on and what dependencies will be involved with those particular systems.

Software-as-a-Service gets around the operating system dependency problem by delivering applications via the Internet. You don’t worry about working with different operating systems, just different browsers. But not every software package makes sense to deliver as a service.

Containers work well as a way of accounting for and encapsulating the operating system dependencies. Instead of selling a stand-alone application, a company can sell a virtualized operating system and software as a package that works well together and can be tested exhaustively before it ever ships.

Containers also work well when moving applications from development to testing to production because they ensure a consistent operating environment. Same goes for swapping workloads from dedicated machines to private clouds to public cloud computing. Containers make it easy to move things around without getting tripped up in dependencies.
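
As a sketch of what that workflow can look like with Docker (the image name, and the idea of driving the docker command line from Python, are illustrative assumptions, not a prescribed setup), you build the application and its dependencies into one image and then run that same image everywhere:

    import subprocess

    # Hypothetical image name; the Dockerfile behind it would bundle the
    # application together with its libraries and binaries.
    IMAGE = "example/myapp:1.0"

    # Build the image once, from a directory that contains a Dockerfile.
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

    # The very same image runs in development, testing and production --
    # the container carries its operating environment along with it.
    for env in ("dev", "test", "prod"):
        subprocess.run(["docker", "run", "-d", "--name", f"myapp-{env}", IMAGE],
                       check=True)

The image is built and tested once; dev, test and production differ only in where it runs.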

Scale Faster

You can fire up containers faster than virtual machines. There’s just less to build. Instead of a new instance with a new operating system and new binaries, you get just the new application you need. Cloud-based virtual machines take an average of five to 10 minutes to spin up—depending on the configuration. Containers take five to 15 seconds.
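
You can get a feel for the difference with a quick, unscientific timing sketch. The image name is a placeholder, and the exact numbers will depend on your host and image:

    import subprocess
    import time

    IMAGE = "ubuntu"  # any small image you already have pulled locally

    start = time.time()
    # Start a detached container that just sleeps; startup time is what we measure.
    subprocess.run(["docker", "run", "-d", IMAGE, "sleep", "60"], check=True)
    print(f"Container started in {time.time() - start:.2f} seconds")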

This performance difference is important when you’re dealing with explosive growth of Internet services—it’s what cloud bursting and autoscaling are all about. With containers, you get a faster, more responsive cloud service that can give you a lot more compute power in a hurry. The caveat is that your applications all have to be pretty similar—or at least not require dramatically different operating systems or dependencies. But many webscale apps work just this way.
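
A bare-bones bursting loop might look something like the sketch below. The load-checking function, the image name and the thresholds are hypothetical stand-ins for whatever monitoring and application you actually run:

    import subprocess
    import time

    IMAGE = "example/webapp"   # hypothetical application image
    LOAD_THRESHOLD = 0.8       # add capacity when utilization crosses this
    MAX_EXTRA_CONTAINERS = 5   # keep the sketch bounded

    def current_load():
        """Hypothetical stand-in: return utilization between 0.0 and 1.0
        from whatever monitoring system you actually use."""
        return 0.9

    started = 0
    while started < MAX_EXTRA_CONTAINERS:
        if current_load() > LOAD_THRESHOLD:
            # Containers come up in seconds, so the burst capacity is there
            # almost as soon as the load spike is detected.
            subprocess.run(["docker", "run", "-d", IMAGE], check=True)
            started += 1
        time.sleep(10)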

Get Started

There are lots of paths into OS-level virtualization, but one of the most exciting is Docker, a new open source project that streamlines the creation and deployment of containerized applications. We’re using Docker internally for some of our needs and getting good use out of it. Check out this presentation from our friends at dotCloud for more information.
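
If you want to kick the tires yourself, a first container is only a couple of commands away. This sketch assumes Docker is already installed on a Linux host and simply drives the docker command line from Python; typing the same commands in a shell works just as well:

    import subprocess

    # Pull a stock image and run a throwaway container that prints a greeting.
    subprocess.run(["docker", "pull", "ubuntu"], check=True)
    subprocess.run(["docker", "run", "--rm", "ubuntu", "echo", "Hello from a container"],
                   check=True)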

About the Author

This is a post written and contributed by Ken Wronkiewicz.

Ken works to make the cloud better through new services and features. He’s worked on cloud monitoring as a service and is in the process of developing exciting new features for the Rackspace Open Cloud. He’s also “the bike guy,” responsible for making Rackspace’s San Francisco office an award-winning bicycle-friendly workplace.


3 Comments

Interesting that OpenVZ is left out of your enumeration of existing container solutions. You could have had all this five to ten years ago already.

Hein Bloed on August 28, 2013

Does Rackspace plan to offer Docker (or any other service that uses LXC) in the future as a supported option?

Dave on August 28, 2013

Really enjoyed the clarity of this article. I think it’s the first I’ve come across that builds upon the concept of machine virtualization to explain what LXC and Docker are about.

Suhr Spamalot on December 3, 2013
