Containers: Conversations Today Are No Longer If, But How
I recently asked an organization to describe its container strategy and was told they didn’t have one yet, “but it doesn’t matter, because the competition is light years behind on this technology.”
I hope this incorrect assumption isn’t a common one, because it could be costly for those who hold it. As a director of app transformation at Rackspace, my team is working with many customers — some in the very same vertical as the presumptuous speaker — in various stages of container adoption.
The truth is, container adoption is firmly underway; Gartner estimates that 75 percent of global companies will be using containers in production in the next two years. And why wouldn’t they?
Containers enforce modularization and necessitate automation to a degree where adding a feature or fixing a typo is as easy as committing the code to source control and watching the automated pipeline run the build, complete with static code analysis, unit tests, automated acceptance tests and deployment to production.
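As a sketch of what that pipeline can look like, here is a hypothetical workflow in a GitHub Actions-style syntax; the job names, `make` targets, image registry and deployment names are all placeholders, not a prescribed setup:

```yaml
# Hypothetical CI/CD pipeline; commands and names are illustrative only.
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Static code analysis
        run: make lint          # a linter or SAST tool of your choice
      - name: Unit tests
        run: make test
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Automated acceptance tests
        run: make acceptance
      - name: Deploy to production
        run: kubectl set image deployment/myapp myapp=registry.example.com/myapp:${{ github.sha }}
```

Every stage gates the next, so a failing lint check or unit test stops the change before it ever reaches production.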
Containers enable multi- and hybrid cloud solutions
The other major benefit, of course, is that containers enable the increasingly popular hybrid cloud approach CIOs are embracing. Hybrid's compound annual growth rate of 27 percent reflects its ability to deliver greater flexibility and efficiency while accommodating security preferences, compliance requirements and data sovereignty needs.
With that mix of platforms, however, deployment can be cumbersome, as scripts geared toward a particular public cloud won't work on a dedicated footprint. With containers, that deployment is far more streamlined. After all, a Kubernetes API endpoint on a public cloud looks exactly like a Kubernetes API endpoint on a private cloud. (While there are many options for container orchestrators, the clear frontrunner is Kubernetes, so I'll use that technology in the rest of this article.) Once you've got Kubernetes deployed on top of the physical or cloud infrastructure, it really delivers on the potential of multi-cloud app deployments: increased portability, flexibility, agility and speed.
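For instance, a minimal Deployment manifest like the one below (the app and image names are placeholders) applies unchanged, via `kubectl apply -f deployment.yaml`, against any conformant cluster, whether it runs on a public or a private cloud:

```yaml
# Illustrative manifest; "myapp" and the registry URL are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 8080
```

The only thing that changes between environments is the kubeconfig context you point `kubectl` at, not the manifest itself.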
Today's container questions focus on the how
Because so many customers understand these benefits, most of the conversations my team and I are having today are around the nuts and bolts of using containers. I've compiled answers to some of the most common questions we hear:
How do I train my team on containers?
There’s no one right answer to this, but there are a few leading approaches. Consider bringing in a consultant or professional trainer. While most developers are capable of learning on their own using YouTube and guides like “Kubernetes the Hard Way,” it can be difficult to get away from the daily escalations and P1 bug fixes to focus on something new. Having a formal training plan in place with a fixed schedule can help the team focus on training and gives them a tool to push back on escalations while they learn. Consider dedicating part of the team to training while the rest keeps the legacy workloads going, and cycle people through on a schedule if possible.
My monolithic app has too much technical debt to go to containers
This is a common concern tied to the waterfall method of software development, and it can be overcome with some creative coding and architecture. Microservices don’t have to arrive all at once in one massive deployment; they can be carved out one by one and deployed as they’re ready. It may require building a software bridge to get there. For example, if the legacy app uses a central database as both its data store and its messaging layer, start attacking that first: design and build a data access layer and start refactoring to use a message queue like RabbitMQ or Amazon SQS instead of the database. You’re going to need that DAL and MQ once you get to Kubernetes anyway. Getting to containers may require a multi-phased plan; determine what steps you need to take along the road and start creating your roadmap.
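A minimal sketch of that bridge might look like the following. All names here are hypothetical, and an in-memory `queue.Queue` stands in for a real broker like RabbitMQ or Amazon SQS; the point is that writes publish events to a queue instead of signaling other components through the shared database.

```python
import json
import queue

class OrderDAL:
    """Hypothetical data access layer that publishes change events
    to a message queue rather than relying on database polling."""

    def __init__(self, store, events):
        self._store = store    # a dict stands in for the database table
        self._events = events  # swap in a real broker client later

    def save_order(self, order_id, payload):
        self._store[order_id] = payload
        # Downstream services now react to queue messages,
        # not to changes they notice in the shared database.
        self._events.put(json.dumps({"event": "order_saved", "id": order_id}))

    def get_order(self, order_id):
        return self._store.get(order_id)

store, events = {}, queue.Queue()
dal = OrderDAL(store, events)
dal.save_order("42", {"sku": "ABC", "qty": 2})
print(events.get_nowait())  # the message a downstream consumer would receive
```

Once every caller goes through the DAL, the database behind it, and eventually the monolith around it, can be replaced piece by piece.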
Security & Compliance won’t let my team deploy more than once a week; how do I change their minds?
One of the disadvantages of faster, easier deployments is that it’s easier to deploy a vulnerability, too. Engage with your security team early and partner with them to devise a strategy and implementation that makes them comfortable with a more agile approach to deployments. A “shift left” approach to testing and security is absolutely required when using containers. Static code analysis with tools like Veracode, image scans with tools like Clair, and CI/CD security integration with tools like Twistlock or StackRox can help mitigate your risk. Have a plan in place for how your team will patch the master and worker nodes in the event of a zero-day exploit; Twistlock, Aqua Security and StackRox all have built-in mechanisms that can provide compensating controls until you can get everything patched. Building in these tools means your business, data and workloads are more secure.
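Wiring a scan into the pipeline as a gate might look like this hypothetical job, again in a GitHub Actions-style syntax. Note that `image-scan` and its flags are placeholders, not a real tool; consult the Clair, Twistlock or StackRox documentation for the actual invocation:

```yaml
# Hypothetical gate: fails the build when the image has high-severity CVEs.
# `image-scan` and its flags are placeholders for your scanner's real CLI.
image-security-scan:
  runs-on: ubuntu-latest
  steps:
    - name: Scan container image for known CVEs
      run: image-scan --fail-on high registry.example.com/myapp:${{ github.sha }}
```

The design point is that a failed scan blocks the deploy automatically, which is exactly the control that makes security teams comfortable with frequent releases.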
The development team is in final testing and the apps are deployed to Kubernetes. Now what? How can I tell if it’s working?
Forward-thinking developers can sometimes make technology decisions that leave Operations scratching their heads trying to figure out how to support the new runtime. Kubernetes is a relatively new technology, and the ecosystem around it is dynamic and growing at a very fast rate, which leads to a greater-than-average number of zero-day exploits and a growing list of vulnerabilities. Knowing how to monitor and secure the runtime is important, and there is a vast array of companies and products to choose from. Monitoring tools like Prometheus require some investment from the developers; the old-school approach of IT Ops setting a few disk space and CPU utilization thresholds in Nagios and calling it a day is no longer sufficient. The time invested in full-stack monitoring pays off: the Operations and Dev teams get a more in-depth view and can respond to issues faster.
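The investment Prometheus asks of developers is instrumenting the app itself: each service exposes a `/metrics` endpoint in the Prometheus text exposition format, and the Prometheus server scrapes it. A real service would use the official `prometheus_client` library; this hand-rolled toy (all names are made up) just illustrates the pull model and the format:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # a trivial counter the app would increment per request

def render_metrics():
    # HELP/TYPE comments plus one sample, per the text exposition format.
    return (
        "# HELP app_requests_total Total requests handled.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUEST_COUNT}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind an ephemeral port and serve in the background, as a demo scrape target.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

REQUEST_COUNT = 3  # pretend the app handled some traffic
url = f"http://127.0.0.1:{server.server_port}/metrics"
print(urllib.request.urlopen(url).read().decode())
server.shutdown()
```

Because the server pulls from the app rather than the app pushing, Operations can add alerting and dashboards on top without touching application code again.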
My apps are containerized, my workloads are secure and everything is automated. Can I sit back and relax now?
Maybe, for a minute. The truth is, you can’t stop even once you’ve surmounted the challenges listed above. Kubernetes is an incredible technology enabler, but it’s still new and growing, and as I touched on above, it comes with a considerable skills gap. Your best bet is to build training plans into your overall strategy and resource planning. For instance, have a plan ready for your next new hire to learn Kubernetes, and, more importantly, your company’s container stack and toolchain, during onboarding.
Have interview questions that test a candidate’s capacity to learn new technology quickly, because odds are you’re not going to find (or be able to afford) a seasoned Kubernetes developer in the wild. Be open to remote resources if you do decide you need that seasoned Kubernetes expert, because there may not be anyone available in your local area. You may also want to consider retention incentives to keep employees who upskill on Kubernetes and your specific container stack.
And remember, Kubernetes is a developing ecosystem; your team will need R&D time built into their work week to stay current and experiment with new tools as they become available. Plan to review your container stack and toolchain strategy at least twice a year to make sure you’re keeping pace with the containers ecosystem.
Let's talk containers at KubeCon!
To learn more about how Rackspace experts can help your organization on its container strategy, and talk about how we can help solve the challenges above, visit us at KubeCon in booth #S60!