How Mailgun Uses Docker And Contributes Back

There are a lot of good reasons to be excited about containers, a form of operating system-level virtualization with many applications. At Mailgun, we’re excited about containers for four major reasons:

  1. Containers on our laptops are exactly the same as our staging and potentially production environments. This makes it easy to find and fix potential problems before we push to production and pulls together our dev and ops in a way that goes beyond buzzword compliance.
  2. We get more bang for our buck with containers instead of virtual machines. Because containers share the host operating system's kernel, there's no overhead from running multiple guest OSes side by side. It's great for getting big performance gains on dedicated boxes.
  3. Containers make it easy to rapidly change infrastructure from dedicated hosting or on-premise machines to public cloud and back as necessary. Once you standardize the operating system and configuration bundles, moving workloads to the best-fit computing option is a snap.
  4. Running containers makes the complexity of our system more manageable. We run five different databases across 50 different instances. It takes 72 custom Chef cookbooks and 36 roles to kick off a new instance. That's a lot of dependencies and interconnections to worry about. Running a fixed container structure allows us to lock in our OS and run the same configuration on every instance, every time. No more worrying that we've got the wrong mix of resources, though orchestration still remains a challenge.

Container technology has been around for some time, but is easier than ever to use thanks to Docker.io, an open source project for implementing lightweight LXC containers. We’re very excited about what this technology enables us to do and have launched an open source project to make it even better.

Start working with Docker today and you'll quickly find yourself doing some of the same things over and over again. Basic tasks such as starting and stopping groups of containers, making sure that a particular container is running, restarting a container if it stops, cleaning up stale containers, flattening images to reduce layers: individually, these tasks take very little time, but doing them at scale means doing them a lot.
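
To make the repetition concrete, here is a rough sketch, and only a sketch rather than our actual scripts, of what that housekeeping looks like when you drive the docker CLI from Python. It assumes a docker binary on the PATH that supports inspect --format and ps --filter, and the container name is made up for illustration:

    import subprocess

    def docker(*args):
        """Run a docker CLI command and return its stdout as text."""
        return subprocess.check_output(("docker",) + args).decode()

    def is_running(name):
        # docker inspect --format prints "true" or "false" for the container state.
        return docker("inspect", "--format", "{{.State.Running}}", name).strip() == "true"

    def ensure_running(name):
        # Start the container again if it has stopped.
        if not is_running(name):
            docker("start", name)

    def remove_stale():
        # Remove exited containers so they don't pile up on the host.
        for cid in docker("ps", "-a", "-q", "--filter", "status=exited").split():
            docker("rm", cid)

    if __name__ == "__main__":
        ensure_running("mailgun-worker-1")  # hypothetical container name
        remove_stale()

None of this is hard on its own, but multiply it by dozens of hosts and hundreds of containers and the glue code grows quickly.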

At first we wrote shell scripts to automate some of these repetitive tasks. That worked, but we weren’t completely happy with it. We wanted something more powerful, something more closely integrated with Docker. We wanted something we could use as a command line interface. We looked at the Fabric orchestration library for Python with envious eyes. Why couldn’t we have something similar for Docker?
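
For context, here is a minimal sketch of the Fabric 1.x pattern we had in mind: tasks are plain Python functions, executed in parallel over SSH across a list of hosts, and the fab tool turns them into a command line interface for free. The host and service names below are placeholders, not our real inventory:

    from fabric.api import env, parallel, sudo, task

    # Hosts to operate on; fab can also take these with -H on the command line.
    env.hosts = ["app1.example.com", "app2.example.com"]

    @task
    @parallel
    def restart_app():
        # Fabric runs this concurrently on every host in env.hosts.
        sudo("service myapp restart")

Running "fab restart_app" then executes the task on every host at once. We wanted that kind of ergonomics, but aimed at Docker containers instead of SSH sessions.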

The truth is that traditional orchestration focuses on bringing up physical resources, virtual machines and applications with the appropriate dependencies, linking the tiers of a multi-tier application in sequence so that they work together.

Running containers with Docker means sharing not only the infrastructure, but the operating system as well and, depending on how you set things up, many of the primary application resources. We quickly found that the last generation of orchestration engines didn't work particularly well for us: too much legacy overhead, and too many assumptions built for a world of machine-level virtualization.

We set out to solve these problems ourselves, and decided to build a Fabric-like orchestration library for Docker and to develop it in the open. We call it Shipper. It supports parallel execution and can generate a command line interface, two key features for the type of environment we’re running. Shipper makes it easy to move containers from a developer environment to a staging environment and to spin up 10 boxes in staging at the same time while shutting down 10 others in a developer environment. When you do a lot with Docker, you quickly appreciate this seemingly simple functionality.
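
To give a feel for why parallel execution matters at that point, here is a standalone sketch, and explicitly not Shipper itself or its API, of starting containers on ten staging hosts while stopping them on ten developer hosts at the same time, using threads around the docker CLI. Every host name and container name is hypothetical, and it assumes remote Docker daemons reachable over TCP via docker -H:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    STAGING = ["tcp://staging%02d.example.com:2375" % i for i in range(1, 11)]
    DEV = ["tcp://dev%02d.example.com:2375" % i for i in range(1, 11)]

    def docker(host, *args):
        # -H points the docker CLI at a remote daemon instead of the local one.
        subprocess.check_call(("docker", "-H", host) + args)

    def start_app(host):
        docker(host, "start", "app")  # hypothetical container name

    def stop_app(host):
        docker(host, "stop", "app")

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=20) as pool:
            # Spin up staging and tear down dev at the same time rather than
            # walking the hosts one by one.
            futures = [pool.submit(start_app, h) for h in STAGING]
            futures += [pool.submit(stop_app, h) for h in DEV]
            for f in futures:
                f.result()  # surface any failures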

As much as we’re proud of what we built, Shipper has limitations. Some of them are the inevitable product of a small team working to solve its own problems. We’re actively seeking contributions to Shipper from the Docker community to make it a stronger and more integral part of the container ecosystem. But there are also limitations that are the product of the current state of the Docker protocol. That’s one of the reasons we’re so excited for the next major protocol release that will help smooth these issues over.

We’re very excited about the advantages containers give us and we invite you to contribute to the development of the Docker open source project and to the Shipper project we’ve created for orchestration. Please see this video for a more detailed talk about these topics:

Want to read more about containers? Check out how Rackspace customer and partner Pantheon uses containers for faster scaling and lower costs, and how containers deliver disruption from the bottom of the stack.

About the Author

This is a post written and contributed by Sasha Klizhentas.

Sasha loves Emacs tricks, Linux, homemade compilers, parsers and loud cats. He makes sure Mailgun's distributed mail queue stays fast and simple yet reliable: that's the way he likes it.


1 Comment

Staging for an email system is a big deal. I like http://mailtrap.io, it's like docker (a container) for your email system ;)

A.B on October 14, 2013
