Getting started with OpenStack

OpenStack logo

So you’d like to get started with OpenStack, eh? (Sorry, that’s my Canadian coming through.) You may have been thinking about it for a while: evaluating the landscape and the maturity of the project, gauging your tolerance for new technology, and waiting for the right time to adopt. Being on the verge of adopting a new and disruptive technology is an intimidating place to be, and lots of questions arise:

- Where do I start? What should I evaluate?
- Who should I partner with? What gaps need addressing?
- What should the architecture look like? What is the project’s purpose?
- Is my staff well prepared? What training should we get?
- Is our infrastructure capable of supporting it?
- How do I handle long-term tasks like patching, scaling, or lifecycle management?

These aren’t easy questions to answer, and the answers usually come from experience, which you don’t have yet. So what do you do? This is the first in a series of posts addressing some of these questions and, hopefully, adding some clarity to OpenStack adoption. First, some of the basics.

What is OpenStack?

OpenStack is essentially the culmination of commoditized infrastructure and years of frustration with inflexible, closed, proprietary data center components. (That’s a bold statement; let me expand on it.) As hardware virtualization took off in the mid-to-late 2000s, we began to see the potential, and reap the benefit, of increased application deployment speed. No longer being tied to the physical server procurement process opened the industry’s eyes to server flexibility and utilization. Virtualization was born from the idea of vertical density, i.e., how many OSes can we cram onto one server to maximize its use? As we became accustomed to “spinning up a VM”, these tasks started to feel trivial, frequent, and burdensome. In true sysadmin fashion, we looked to automate them, but we didn’t get very far. Existing virtualization tools were designed with a focus on performance, high availability, and ease of management, but not automated management. As our virtual environments became more utilized and integrated, we started to see delays in the process of “spinning up a VM”. Many checks and balances were added, and soon the time to provision virtual resources crept closer and closer to that of their physical cousins. Sound familiar? (I also wrote a post on this last year.) It seemed like we couldn’t even buy an adequate API, let alone integrate one with something else.

At the same time, applications started to adapt to the new concept of commoditized hardware. Server infrastructure began to be treated more like a consumable. Scaling out became as important as scaling up, and the need to do it quickly was paramount. With these two challenges in mind, the idea of programmable infrastructure was born, along with the idiom “infrastructure as code”. The OpenStack project is essentially that – programmable infrastructure. The project is made up of services native to a datacenter, and each is designed with a fully featured API.
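To make “fully featured API” concrete, here is a minimal sketch of what that looks like in practice. Booting a VM is one authenticated POST to the Compute (Nova) endpoint; the request shape below follows Nova’s documented server-create call, but the name and IDs are made-up placeholders, and the actual HTTP call is left out.

```python
import json

def server_create_body(name, image_id, flavor_id, network_id):
    """Build the JSON body for Nova's POST /v2.1/servers (server create)."""
    return {
        "server": {
            "name": name,
            "imageRef": image_id,
            "flavorRef": flavor_id,
            "networks": [{"uuid": network_id}],
        }
    }

# Hypothetical IDs -- in a real cloud you'd look these up via the
# Image and Network APIs first.
body = server_create_body(
    name="web-01",
    image_id="70a599e0-31e7-49b7-b260-868f441e862b",
    flavor_id="1",
    network_id="a87cc70a-3e15-4acf-8205-9b711a3531b7",
)
print(json.dumps(body, indent=2))
```

The point isn’t the payload itself; it’s that every datacenter-native service in the project (compute, networking, storage, images, identity) is driven this same way, so the whole stack is scriptable.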

Why should I care?

The drive behind virtualization was cost optimization and increased utilization of existing resources. The drive behind OpenStack is also cost optimization, but not of the traditional capital expenditure kind. OpenStack serves to reduce the amount of time your IT staff waits for data center resources. It also increases the flexibility and agility of your datacenter resources so that they may quickly be created and re-purposed in quite literally a matter of seconds. It enables your business to adjust incredibly quickly to the constant change we see in our industry. And lastly, it enables you to take advantage of new cloud capabilities in application and service architecture. Quickly scale up, scale down, integrate automation tooling, or create applications with higher resilience and additional capability, so that you are ready to adjust to the next business priority or opportunity.

What’s the difference between virtualization and OpenStack?

Others have described the difference better than I could, so I’ll link their work. But I’ll add this: traditionally, hardware was assigned to an application. Then hardware was assigned to virtual datacenters or clusters. If you wanted to re-purpose hypervisors, add storage, or provision networks, you incurred a lengthy migration, balancing, or configuration task. And none of these services came with a capable API. OpenStack takes the virtualization concept of a software-defined OS and blows it out of the water. Now every piece of infrastructure is software defined, so you can “spin up” much more than just a VM.
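For instance, a network (Neutron) and a block-storage volume (Cinder) are created with the same style of API call as a server. The request bodies below follow the documented Neutron and Cinder formats; the names, CIDR, and size are example values I’ve made up.

```python
# Neutron: POST /v2.0/networks, then POST /v2.0/subnets
network_body = {"network": {"name": "app-net", "admin_state_up": True}}

subnet_body = {
    "subnet": {
        "network_id": "<id returned by the network create call>",
        "ip_version": 4,
        "cidr": "10.0.0.0/24",
    }
}

# Cinder: POST /v3/{project_id}/volumes (size is in GiB)
volume_body = {"volume": {"name": "app-data", "size": 10}}

for resource in (network_body, subnet_body, volume_body):
    print(resource)
```

Networks, routers, volumes, images, load balancers – each is just another software-defined resource behind an API, which is exactly what the virtualization tools of the last decade never gave us.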

How do I get started?

I thought you’d never ask! 😉 This is a loaded question with many correct answers, but we’ll start by sorting out a couple of basics. First, decide the workload intent of your OpenStack environment: what type of application workloads do you expect to put there, and how many instances or VMs do you want in the environment? Second, decide the performance and redundancy you expect out of those workloads. Settling these two things will give you a good idea of where to start for physical node count, physical node resources, and the type or role each physical node should have. It will also help you decide which OpenStack installer to use. Since environment architecture is a wildly variable conversation, I’m going to link some helpful resources to assist you in your sizing decisions. Depending on your intent for the environment, this decision is also worth engaging an OpenStack provider on. Some folks choose a small environment with a flagship app to deploy on it; others deploy general-use OpenStack resources for mass consumption.
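As a rough first pass at the node-count question, you can work backwards from VM count and per-VM resources, factoring in overcommit. The sketch below assumes Nova’s commonly used default allocation ratios (16:1 vCPU, 1.5:1 RAM); the workload numbers are made-up inputs, and the helper function is mine, not part of any installer.

```python
import math

def compute_nodes_needed(vm_count, vcpus_per_vm, ram_gb_per_vm,
                         cores_per_node, ram_gb_per_node,
                         cpu_overcommit=16.0, ram_overcommit=1.5):
    """Return node count sized for whichever is tighter: CPU or RAM."""
    vcpu_capacity = cores_per_node * cpu_overcommit   # vCPUs one node can offer
    ram_capacity = ram_gb_per_node * ram_overcommit   # GB one node can offer
    by_cpu = math.ceil(vm_count * vcpus_per_vm / vcpu_capacity)
    by_ram = math.ceil(vm_count * ram_gb_per_vm / ram_capacity)
    return max(by_cpu, by_ram)

# Example: 200 small VMs (2 vCPU / 4 GB each) on 32-core, 256 GB compute nodes.
nodes = compute_nodes_needed(200, 2, 4, 32, 256)
print(nodes)  # RAM is the constraint here, not CPU
```

Treat the result as a floor: add headroom for node failure (N+1 at minimum), and remember that controller and storage nodes are sized separately from compute.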


In my next post, I’ll outline some of the options you have for installers and why you might pick one over another, and then I’ll choose an environment size and walk through an example deployment. Have I missed anything important in the above topics? Got anything to add? I’d love to hear about your experience.