OpenStack in a day for under $20

One of the hardest things with OpenStack can be just getting started, especially in the enterprise. Acquiring hardware and getting the right networking in place can require a monumental effort. And, while you can use DevStack and virtual machines (VMs), the result isn’t really useful for a proof of concept or demo environment. SoftLayer, an IBM company, solves this challenge.

Offering both virtual and bare metal servers priced by the hour or month and provisioned in real time, SoftLayer provides an ideal place to both get started with and scale OpenStack (attested by 1,500 servers and 75,000 VMs). For the IBM Demo Theater slot at the OpenStack Atlanta Summit, I talked through deploying OpenStack on SoftLayer to provide a proof of concept (PoC) or demo environment. The result was “Build an OpenStack Cluster Before Lunch, Scale Globally by Supper with IBM and SoftLayer” (charts). As part of this presentation, I promised to write a blog post that provided more details on how to get started.

The first decision I faced was the installation method. My primary goal was an environment that represented what a production install would look like: in other words, a package-based install. From past experience, I knew to head over to the OpenStack website and click on Documentation for a step-by-step, package-based install guide. I explicitly avoided automation tools like Chef and Puppet here because their automation content can be hit or miss, and help can be even harder to come by. Likewise, I chose Ubuntu because it is the most common operating system in the OpenStack ecosystem, so I expected the guide to be accurate and help easy to find. The other deployment guides should work equally well.

The next decision was the example architecture to follow. With Neutron now the default networking component, I chose the “Three-node architecture with OpenStack Networking (Neutron)”:


Continuing to step through the documentation, the next actionable item I came across was the Before you begin section, which details the hardware. Pricing out the guide’s minimum hardware requirements, listed as able to “support several minimal CirrOS instances,” in the SoftLayer ordering tool came to $0.344 an hour (~$8.25/day). However, as I said, I wanted something useful for a PoC or demo. Here is what I provisioned:


The controller node and network node are both virtual servers and the compute node is a bare metal server. I provisioned all servers on an hourly basis, but if you plan on keeping it around for a while, monthly is slightly cheaper.

Moving on, next up is the networking section. In the OpenStack Networking (Neutron) section, three networks are defined: Management, Instance Tunnels and External. The Management network is used for OpenStack components to talk to each other. The Instance Tunnels network is used for hypervisor-to-hypervisor communications to allow the VMs to talk to each other. The External network is how VMs get to the outside world (outside the OpenStack install).

SoftLayer also offers three networks on every device: public network (metered access to public Internet), private network (isolated VLAN providing unmetered access to other SoftLayer devices in the account) and management network (out-of-band management accessible only through VPN connection). Mapping them together, we target the OpenStack Management and Instance Tunnels networks at the SoftLayer private network, because none of this traffic needs to leave SoftLayer and the private network is unmetered.

For the External network, we have two options. If we want to access our OpenStack VMs using public IP addresses, we need to provision a Portable Public Subnet. If we only need to access them over SoftLayer’s private network, we need to provision a Portable Private Subnet. Note that the console on the VM will be available through the OpenStack Horizon Dashboard over the public network whether you chose to make your external network public or private.

At this point we have all the resources we need from SoftLayer: a controller node, a networking node, a compute node and the subnet to use for the external network. When the servers are provisioned, we need to fix up the hosts file so that DNS names resolve. Modify /etc/hosts so that the entry for the host uses the private IP address (the 10.x address) and add an entry for each of the other provisioned hosts (include both host name and fully qualified domain name).
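For example, the hosts file on the controller might end up looking like the following sketch. The host names and 10.x addresses here are placeholders; substitute the private IPs and names of your own provisioned servers:

```
# /etc/hosts (illustrative; use your SoftLayer private IPs and host names)
127.0.0.1    localhost
10.0.0.11    controller.example.com controller
10.0.0.21    network.example.com network
10.0.0.31    compute1.example.com compute1
```

Note that the entry for the local host itself points at its private 10.x address, not 127.0.1.1, so that services that resolve their own host name bind to the private interface.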

From here, follow the guide to deploy OpenStack. It took me around three hours of copying and pasting from the guide to complete the install.

General items to keep in mind:

• Bind all the services to the private interface (MySQL, RabbitMQ, OpenStack).

• When building URLs, use the fully qualified domain name (FQDN); this applies to both configuration file entries and Keystone endpoints.

• I registered all Keystone endpoints using the FQDN that resolved to the private interface. In theory you could use the public IP address for the public URL, but I did not try this.
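As a concrete illustration of the first point, binding MySQL and RabbitMQ to the private interface looks roughly like this (the 10.x address is a placeholder for your controller’s private IP):

```
# /etc/mysql/my.cnf (illustrative; substitute your controller's private IP)
[mysqld]
bind-address = 10.0.0.11

# /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=10.0.0.11
```

This keeps all service traffic on SoftLayer’s unmetered private network and avoids exposing the services on the public interface.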

My additions to the “Install the Image Service” section:

• To simplify the install, I used SoftLayer’s Swift-based Object Storage service as the Glance store. There are a number of benefits here, including reducing the local storage needed in the box and eliminating the need to manage space for the store. Using the SoftLayer portal, simply order an Object Storage account to get started.

• Use SoftLayer’s web-based Object Storage management tool to create a “glance” container in the nearest cluster.

• In /etc/glance/glance-api.conf, set default_store to “swift”, swift_store_auth_version to “1”, and each of swift_store_auth_address, swift_store_user and swift_store_key to the values from the Account Credentials in the Object Storage section of the SoftLayer portal (using the private URL for swift_store_auth_address).
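Put together, the relevant glance-api.conf settings look roughly like this. All credential values are placeholders for what you copy out of the SoftLayer portal’s Account Credentials page:

```
# /etc/glance/glance-api.conf (illustrative; credentials come from the
# Object Storage section of the SoftLayer portal)
[DEFAULT]
default_store = swift
swift_store_auth_version = 1
swift_store_auth_address = <private_auth_url>
swift_store_user = <object_storage_username>
swift_store_key = <object_storage_api_key>
swift_store_container = glance
```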

My additions to the “Install Compute controller services” section:

• When configuring the compute node, use the public IP address of the controller node in the novncproxy_base_url setting to get console access to VMs from the public Internet.
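As a sketch, that setting in the compute node’s nova.conf looks like the following, where the address is a placeholder for the controller’s public IP:

```
# /etc/nova/nova.conf on the compute node (illustrative placeholder)
[DEFAULT]
novncproxy_base_url = http://<controller_public_ip>:6080/vnc_auto.html
```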

My additions to the “Add a networking service” section:

• To set up the external bridge (br-ex) when configuring the Network Node in the Add a networking service section, be sure to use the INTERFACE_NAME that matches the portable subnet you provisioned. If you used a Portable Public Subnet, use the interface with the public IP. If you used a Portable Private Subnet, use the interface with the private IP.

• After creating the br-ex bridge, you will notice connectivity to that interface is broken. To restore it, move the IP address from the eth device to the bridge and create the route (the gateway IP address can be found in the Network section of the device details in the SoftLayer portal):

ip addr del <ip_address>/<prefix> dev eth0
ip addr add <ip_address>/<prefix> dev br-ex

If the public IP was moved:

route add default gw <gateway_ip> br-ex

If the private IP was moved:

route add -net <network>/<prefix> gw <gateway_ip> dev br-ex

• When creating the initial networks, the command will look like:
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=<first_ip>,end=<last_ip> --disable-dhcp --gateway <gateway_ip> <network_cidr>
The start and end IP addresses are the first and last non-reserved IPs from the Subnet Details in the SoftLayer portal. The details will also give the gateway and the CIDR-based network notation to use.
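For completeness, the subnet-create step is preceded by creating the external network itself, per the install guide. A hedged sketch with placeholder values drawn from a hypothetical Portable Public Subnet:

```
# Create the external network, then its subnet (all values illustrative)
neutron net-create ext-net --shared --router:external=True
neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=203.0.113.98,end=203.0.113.110 \
  --disable-dhcp --gateway 203.0.113.97 203.0.113.96/28
```

The allocation pool, gateway and CIDR come straight from the Subnet Details page of the subnet you provisioned.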

Some specific notes for if you do a multi-region install:

• Don’t forget to add the region to each of the keystone endpoint-create calls.

• Enable VLAN spanning in the SoftLayer portal so all devices can communicate over the private network (this also applies if you end up with devices on different VLANs in the same data center).

• The machine running Horizon needs host file entries for all machines in the cluster (both local and remote data centers).
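As an illustration of the first point, registering an endpoint in a second region might look like the following. The region name, service ID and URLs are all placeholders:

```
# Illustrative only; substitute your own region name, service ID and FQDNs
keystone endpoint-create --region RegionTwo \
  --service-id <service_id> \
  --publicurl http://controller2.example.com:8774/v2/%\(tenant_id\)s \
  --internalurl http://controller2.example.com:8774/v2/%\(tenant_id\)s \
  --adminurl http://controller2.example.com:8774/v2/%\(tenant_id\)s
```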

In an upcoming blog post, I will show how you can use the 90-day trial of IBM Cloud Manager with OpenStack and SoftLayer’s $500-off-your-first-month offer to install OpenStack in hours at no charge!

In the meantime, try this out and connect with me on Twitter @mjfork for any questions, comments or feedback.

Director and Distinguished Engineer, IBM Cloud Infrastructure
