Filed in Cloud Industry Insights by Lew Moorman | June 28, 2012 2:30 pm
Cloud computing is driving a tectonic shift in application development, radically altering the way applications interact with the underlying infrastructure. The cloud isn’t just faster and nimbler than the dedicated-hardware model; it’s a very different approach to computing, offering huge advantages — along with one big underappreciated risk.
Modern applications are not run “on” the cloud, but rather are built into it. The cloud is now part of the application stack. Think of a cloud data center as a giant computer, with cloud software serving as its operating system. Applications are fused to that cloud software. This is a colossal difference compared to previous models where the infrastructure and applications were independent of each other. Now they’re welded together. They’re one.
This tight coupling between the apps and infrastructure is one of the most powerful benefits of the new computing paradigm. It’s what gives the cloud its agility and its elasticity. It enables lightning-fast access to services and lets small teams build massive, scalable apps. It unlocks an entirely new development stack that makes it easier to build, deploy and maintain applications.
But with these rewards come certain risks.
Because the infrastructure and the applications become dependent upon each other, vendor lock-in becomes a real danger. The deeper your applications become ingrained in the cloud, the more difficult it can be to get them out should you decide to switch providers. This is especially true when using proprietary cloud providers. For basic operations and simple applications, the risk of lock-in isn’t too big. But when more complex services and components are used, lock-in can become very costly and constraining. It’s something you might not realize is happening until it’s too late.
Lock-in doesn’t occur overnight; it creeps in. It starts when you pick a vendor’s platform. Once data is added to the platform, it becomes more ingrained. API integration complicates things further: apps are now directing the cloud and expecting certain responses. Finally, deeper integration brings the use of other proprietary features and services offered by the cloud provider, and at that point applications and the infrastructure become truly fused into one stack.
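One way to see how API coupling drives this kind of lock-in is with a sketch. Every class and method name below is invented for illustration; no real cloud SDK is used. The point is that application code calling a vendor's SDK directly becomes welded to that vendor's call shapes and response formats, while a thin adapter layer keeps the application logic provider-neutral:

```python
# Hypothetical sketch of API lock-in and one way to contain it.
# ProviderAClient and ProviderBClient stand in for two proprietary
# cloud SDKs with different, vendor-specific call shapes.

class ProviderAClient:
    """Stand-in for vendor A's proprietary SDK."""
    def spin_up_box(self, flavor_id: str) -> dict:
        return {"vendor": "A", "flavor": flavor_id, "state": "running"}

class ProviderBClient:
    """Stand-in for vendor B's SDK; note the different method and fields."""
    def launch_instance(self, size: str) -> dict:
        return {"vendor": "B", "size": size, "status": "active"}

class ComputeAdapter:
    """App-facing interface: the application codes against this, not the SDKs."""
    def create_server(self, size: str) -> dict:
        raise NotImplementedError

class ProviderAAdapter(ComputeAdapter):
    def __init__(self):
        self._client = ProviderAClient()
    def create_server(self, size: str) -> dict:
        raw = self._client.spin_up_box(flavor_id=size)
        # Normalize the vendor-specific response into a neutral shape.
        return {"size": raw["flavor"], "running": raw["state"] == "running"}

class ProviderBAdapter(ComputeAdapter):
    def __init__(self):
        self._client = ProviderBClient()
    def create_server(self, size: str) -> dict:
        raw = self._client.launch_instance(size=size)
        return {"size": raw["size"], "running": raw["status"] == "active"}

def deploy(adapter: ComputeAdapter) -> dict:
    # The application logic never names a vendor; switching providers
    # means swapping one adapter, not rewriting the app.
    return adapter.create_server("small")
```

If the application had called `spin_up_box` directly throughout its codebase, moving to vendor B would mean finding and rewriting every such call; with the adapter, only one class changes. The catch, of course, is that proprietary features with no equivalent elsewhere can't be hidden behind an adapter at all.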
This becomes a major issue if you want features your cloud provider doesn’t offer, if you want to do business in regions or countries that it doesn’t serve, or if you want to add a level of customization. In the proprietary cloud model, you’re not going anywhere.
This is one of the reasons we founded OpenStack, the open source cloud operating system. All of the OpenStack code, from API to source, is available to all, creating the possibility for real interoperability. OpenStack thwarts lock-in, making it easy to separate applications from the infrastructure and move from one cloud to another. OpenStack offers choice in how and where you deploy, what features you add and what partners you integrate while delivering a cloud that is open and scalable from end to end. You’ll be hearing a lot more about OpenStack in coming weeks. In August, OpenStack will become the engine of our public cloud. And the Open Cloud that we’re building at Rackspace is backed by our trademark Fanatical Support.
The cloud is generating incredible advancements and its power is undeniable. Be careful, though, not to let the characteristics that drew you to the cloud lock you into a single provider.
We hope you’ll join us in the Open Cloud.
You can learn more at OpenStack.com and at the website of Rackspace, the Open Cloud Company.
Source URL: http://www.rackspace.com/blog/what-makes-the-cloud-sweet-also-makes-it-sticky/
Copyright ©2013 The Official Rackspace Blog unless otherwise noted.