Manual installation of Diablo release of OpenStack cloud controller, compute, and volume with keystone authentication for Ubuntu 11.10

Updated 4/23/15, 9:14 PM by Salman Baset

This page provides instructions for manually installing the OpenStack Diablo release with keystone authentication and dashboard. Specifically, the instructions describe how to install the cloud controller and compute on one machine, and how to add another compute machine. If you want an automated installation on a single node, you should use devstack instead.



These instructions are not meant for production use and no warranty is provided. If you have any questions about the documentation or if you find any bugs in the documentation, please send an email to:



Salman Baset <sabaset _at_ us dot ibm dot com>

Michael Fork <mjfork _at_ us dot ibm dot com>

1. Introduction

2. Preparing for installation

2.1 Setting up machine and installing OS

2.2 Setting up repos and post OS installation checks

3. Install cloud controller + compute on one machine

3.1 Install the python dependencies and python connector for mysql

3.2 Install the RabbitMQ server

3.3 Install nova-api

3.4 Install nova-scheduler

3.5 Install nova-network

3.6 Install nova-compute

3.7 Install nova-vncproxy

3.8 Install the nova database

3.9 Installing keystone

3.10 Configuring nova-api to work with keystone

3.11 Install dashboard

3.12 Install glance

3.13 Configuring compute

3.14 Checking if services are running

4. Adding another compute node

5. Installing and configuring nova-volume, attaching it to an instance

Helpful documentation link:

http://docs.openstack.org/diablo/openstack-compute/admin/content/installing-the-cloud-controller.html

1. Introduction

We will manually deploy the Cloud Controller and Compute from the OpenStack Diablo release on a single machine running Ubuntu 11.10. Setting up swift is not part of these instructions. The machine will use FlatDHCP networking mode. We will then add another compute machine that will run its own nova-network. We will use the Diablo final release from cloud builders instead of the ppa. The following packages will be installed:



Cloud Controller packages: glance (image service), keystone (authentication), nova-api, nova-network, nova-vncproxy, nova-scheduler, rabbitmq-server, mysql-server (for nova and dashboard database), openstack-dashboard, openstackx, nova-volume



Compute package: nova-compute



All packages except for keystone are installed from the repo that will be setup in the next section.



Dependencies: python-software-properties, python-greenlet, python-mysqldb, gitk, unzip, chkconfig



Dependencies (for keystone manual installation): python-pip, gcc, python-lxml, libxml2, python-greenlet-dbg, python-dev, libsqlite3-dev, libldap2-dev, libssl-dev, libxml2-dev, libxslt1-dev, libsasl2-dev, python-httplib2

Dependencies (for nova-volume): iscsitarget, iscsitarget-source, iscsitarget-dkms



2. Preparing for installation



We will use the following conventions during installation:



hostname: openstack1

user name: nova

password: password



Assume the machine has only one Ethernet interface, which will be set up as eth0. Replace the items in bold with your network configuration during installation. If your machine obtains its IP address through DHCP, note all the items below from the output of ifconfig.



IP: openstack1_IP

netmask: openstack1_IP_netmask

gateway: openstack1_gateway

DNS: openstack1_DNS
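If the machine uses DHCP, the address can be read out of ifconfig. A minimal sketch, run here against saved ifconfig output (the file name and addresses are made up for illustration):

```shell
# Sketch: pull the inet address out of saved ifconfig output.
# Ubuntu 11.10's ifconfig prints the old-style "inet addr:" format.
cat > /tmp/ifconfig.txt <<'EOF'
eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
EOF
sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p' /tmp/ifconfig.txt
```

On the real machine, pipe `ifconfig eth0` into the same sed expression instead of the saved file.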

If you want to add more compute machines, you can use a similar naming convention, i.e., openstack2, openstack3, and so on.

2.1 Setting up machine and installing OS

(1) Reboot the machine into setup

(2) Create three partitions on this machine

  • root partition /
  • swap disk (size of RAM or less)
  • a logical volume (not a logical disk), named nova-volumes. It can be used for setting up volumes. If you are not planning to set up nova-volume, you can skip the creation of nova-volumes, but we recommend leaving some unpartitioned disk space in case you decide to set it up later. If you are installing OpenStack for the first time, we recommend skipping installation of nova-volume.
  • Example: 250 GB disk, 4GB of RAM

    • 150 GB root partition /
    • swap disk 4 GB
    • 96 GB (or the rest) for volume
  • Here is a screenshot of the partitioning step with nova-volumes created.



(3) Install Ubuntu 11.10 64-bit server



Select 'OpenSSH Server' and 'Virtual Machine Host' from the list of packages. See the snapshot below.







 

2.2 Setting up repos and post OS installation checks

Once the machine reboots, we are ready to setup repos and perform checks before starting to install cloud controller and compute.

Setup OpenStack repos

We will install OpenStack packages from Rackspace cloud builders' packages and not from OpenStack's ppa. Installing packages from the ppa has proved problematic in our experience.

Add the following line to the top of /etc/apt/sources.list

deb http://ops.rcb.me/packages oneiric diablo-final

# sudo apt-get update

It prompts about a missing GPG key. Ignore it. Here is how the warning may look:

W: GPG error: http://ops.rcb.me oneiric InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 765C5E49F87CBDE0

Install gitk, unzip, and chkconfig

# sudo apt-get install gitk unzip chkconfig

Network forwarding

Check if network forwarding is enabled

# sudo sysctl net.ipv4.ip_forward

If the output is 1, it is enabled. Otherwise, enable it permanently by updating the flag in /etc/sysctl.conf
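The permanent change amounts to uncommenting/setting one line in /etc/sysctl.conf. A sketch, done here on a scratch copy (the commented-out default shown is typical of a stock install):

```shell
# Sketch: uncomment/set the forwarding flag. Done on a scratch copy here;
# point sed at /etc/sysctl.conf on the real machine, then run `sudo sysctl -p`.
printf '#net.ipv4.ip_forward=1\n' > /tmp/sysctl.conf.test
sed -i 's/^#\{0,1\}net\.ipv4\.ip_forward=.*/net.ipv4.ip_forward=1/' /tmp/sysctl.conf.test
grep '^net.ipv4.ip_forward' /tmp/sysctl.conf.test
```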

Check which ports are open and which processes have them opened.

# sudo netstat -tulpn

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name 
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      937/dnsmasq     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      18387/sshd      
tcp6       0      0 :::22                   :::*                    LISTEN      18387/sshd      
udp        0      0 0.0.0.0:67              0.0.0.0:*                           18989/dnsmasq   
udp        0      0 0.0.0.0:67              0.0.0.0:*                           937/dnsmasq

As you can see, only the ssh (tcp 22), DNS (tcp 53), and DHCP (udp 67) ports are open.

(If you are not planning to set up nova-volume, you can skip the steps below.)



Make sure that you created the logical volume earlier during OS installation. If you did create one, you can check its status by:

# sudo vgdisplay

Otherwise, create a volume in the free disk space. Assume /dev/sda5 contains free space

# sudo pvcreate /dev/sda5

# sudo vgcreate nova-volumes /dev/sda5

3. Install cloud controller and compute on one machine

Now install and configure the packages.

3.1 Install the python dependencies and python connector for mysql

# sudo apt-get install python-software-properties

# sudo apt-get install -y python-greenlet python-mysqldb

3.2 Install the RabbitMQ server

Install the Rabbit server which runs AMQP protocol. Installation automatically restarts the server.



# sudo apt-get install -y rabbitmq-server



The ports in bold are opened after installing rabbit. 4369 is the epmd listening port, while 52127 is an ephemeral listening port which varies across installations.

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:52127           0.0.0.0:*               LISTEN      4475/beam.smp   
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      4434/epmd       
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      937/dnsmasq     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      18387/sshd      
tcp6       0      0 :::22                   :::*                    LISTEN      18387/sshd      
udp        0      0 192.168.122.1:53        0.0.0.0:*                           937/dnsmasq     
udp        0      0 0.0.0.0:67              0.0.0.0:*                           937/dnsmasq

Check the list of exchanges, queues, and bindings opened by rabbitmq using the following commands. For details on rabbit, exchanges, queues, and bindings, please see rabbit documentation.

# sudo rabbitmqctl list_exchanges

# sudo rabbitmqctl list_queues

# sudo rabbitmqctl list_bindings

The output of the first command is:

Listing exchanges ...
amq.direct	direct
amq.topic	topic
amq.rabbitmq.trace	topic
amq.rabbitmq.log	topic
amq.fanout	fanout
amq.headers	headers
	direct
amq.match	headers
...done.
    

3.3 Install nova-api

# sudo apt-get install nova-api

It will prompt for key verification. Ignore it. The service listens on ports 8773 (EC2 API) and 8774 (OpenStack API). Towards the end of installation, the service will show an error saying that it cannot find /etc/nova/nova-compute.conf. This file is installed once nova-compute is installed, so this error can be safely ignored. Here is how this error will look:

[Errno 2] No such file or directory: '/etc/nova/nova-compute.conf'

ERROR:: Unable to open flagfile: /etc/nova/nova-compute.conf

Check if the nova-api service is running by typing:

# ps -e | grep nova-api

If the service is not running, check the log.

# sudo cat /var/log/nova/nova-api.log

3.4 Install nova-scheduler

# sudo apt-get install nova-scheduler

The service does not open any listening tcp or udp ports. Check if the nova-scheduler service is running by typing:

# ps -e | grep nova-scheduler

If the service is not running, check the log

# sudo cat /var/log/nova/nova-scheduler.log

If you are an advanced user, you may want to see which exchanges, queues, and bindings are created in rabbit after you install nova-scheduler. Here is the output from running the following commands. Upon installation of nova-scheduler, a nova and a scheduler_fanout exchange are created. The type of the nova exchange is 'topic' and the type of the scheduler_fanout exchange is 'fanout' (see the rabbit description). Three queues are created, namely, scheduler, scheduler.openstack1, and scheduler_fanout_03b4... . The first two queues are attached to the nova exchange, whereas the third queue is attached to the scheduler_fanout exchange.

# sudo rabbitmqctl list_exchanges

# sudo rabbitmqctl list_queues

# sudo rabbitmqctl list_bindings

Listing exchanges ...
amq.direct	direct
amq.topic	topic
amq.rabbitmq.trace	topic
amq.rabbitmq.log	topic
amq.fanout	fanout
amq.headers	headers
nova	topic
scheduler_fanout	fanout
	direct
amq.match	headers
...done.

Listing queues ...
scheduler_fanout_03b4b94f6681424f974c199b6ce0c92f	0
scheduler.psilver1	0
scheduler	0
...done.

Listing bindings ...
	exchange	scheduler	queue	scheduler	[]
	exchange	scheduler.psilver1	queue	scheduler.psilver1	[]
	exchange	scheduler_fanout_03b4b94f6681424f974c199b6ce0c92f	queue	scheduler_fanout_03b4b94f6681424f974c199b6ce0c92f	[]
nova	exchange	scheduler	queue	scheduler	[]
nova	exchange	scheduler.psilver1	queue	scheduler.psilver1	[]
scheduler_fanout	exchange	scheduler_fanout_03b4b94f6681424f974c199b6ce0c92f	queue	scheduler	[]
...done.

 

3.5 Install nova-network

Install the nova-network.

# sudo apt-get install nova-network

The service does not create any listening tcp or udp ports. Check if the nova-network service is running by typing:

# ps -e | grep nova-network

If the service is not running, check the log

# sudo cat /var/log/nova/nova-network.log

Similar to scheduler, a network_fanout exchange is created. In addition three queues are created, namely, network, network.openstack1, and network_fanout_* (where * denotes any number).

3.6 Install nova-compute

Install nova-compute. The command below also automatically installs and starts an apache web server and iSCSI.

# sudo apt-get install nova-compute

Stop the apache server. If installing compute on another node, make sure that apache does not restart automatically on reboot (use chkconfig).

# sudo /etc/init.d/apache2 stop

Check if the nova-compute service is running by typing:

# ps -e | grep nova-compute

If the service is not running, check the log

# sudo cat /var/log/nova/nova-compute.log

You will have to configure nova.conf in /etc/nova. You should do that after setting up keystone, glance, and dashboard. See Configuring Compute section.

Similar to scheduler and network, a compute_fanout exchange is created. In addition three queues are created, namely, compute, compute.openstack1, and compute_fanout_* (where * denotes any number).

3.7 Install nova-vncproxy

# sudo apt-get install nova-vncproxy

It will also prompt for key verification. Ignore it. The service listens by default on tcp ports 6080 and 843.

Check if the nova-vncproxy service is running by typing:

# ps -e | grep nova-vncproxy

If the service is not running, check the log

# sudo cat /var/log/nova/nova-vncproxy.log

Be sure to add /var/lib/nova/noVNC directory to your --vncproxy_wwwroot variable in /etc/nova/nova.conf after you install all the nova components.

Similar to scheduler, network, and compute, a vncproxy_fanout exchange is created. In addition three queues are created, namely, vncproxy, vncproxy.openstack1, and vncproxy_fanout_* (where * denotes any number).

3.8 Install nova database

Install the nova database. MySQL is used as the database. Use the following passwords for MySQL and nova database that will be created:



MySQL password: nova

NOVA password: notnova




Setup MySQL configuration.




# sudo bash

MYSQL_PASS=nova


NOVA_PASS=notnova


cat <<MYSQL_PRESEED | debconf-set-selections

mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS

mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS


mysql-server-5.1 mysql-server/start_on_boot boolean true

MYSQL_PRESEED

Install the MySQL server.

# sudo apt-get install -y mysql-server

The MySQL service starts listening on tcp port 3306. Make sure that MySQL listens on 'any' address.

# sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

# sudo service mysql restart

Create nova database.

# sudo mysql -u root -p$MYSQL_PASS -e 'CREATE DATABASE nova;'

# sudo mysql -u root -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;"

# sudo mysql -u root -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$NOVA_PASS');"
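The three statements above can equivalently be batched into a single SQL file and fed to mysql in one shot. A sketch (the file name is arbitrary):

```shell
# Sketch: write the nova database setup statements to one file.
# On the real machine: sudo mysql -u root -p$MYSQL_PASS < /tmp/nova-db.sql
NOVA_PASS=notnova
cat > /tmp/nova-db.sql <<EOF
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;
SET PASSWORD FOR 'nova'@'%' = PASSWORD('$NOVA_PASS');
EOF
grep -c ';' /tmp/nova-db.sql   # three statements written
```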

Sync database. Resyncing may be needed after configuring all nova components.

# sudo nova-manage db sync

Check that database connectivity is successful on the machine on which mysql is installed.

# mysql -u root -pnova -N nova

Also check if tables have been created by typing 'show tables;' at the mysql prompt above. If not, resyncing is needed after nova.conf has been set up.

And also from a different machine

# mysql -u root -pnova -h [hostname] -N nova

where hostname is the IP address or hostname of openstack1, i.e., the machine where the database is installed.

If you get a prompt, then installation is successful.

3.9 Install keystone

Keystone is the identity service. The keystone documentation at openstack is helpful for understanding its key concepts.

We will show installation instructions both from repo, and stable/diablo.

Install from repo:

# sudo apt-get install keystone

Now copy the keystone.conf sample file into the /etc/keystone/ directory, making a backup of the original keystone.conf file there. Run this script to populate the keystone database, replacing bin/keystone with keystone in the script. Then, don't start keystone as a service, because permissions are sometimes a problem. Instead, start keystone as:

# sudo keystone

This will start keystone as a standalone process.

Install from stable/diablo:

Assume you are in /home/nova directory.

# wget https://github.com/openstack/keystone/zipball/stable/diablo

# mv diablo diablo-keystone.zip

# unzip diablo-keystone.zip

# cd openstack-key*

# sudo apt-get install -y python-pip gcc python-lxml libxml2 python-greenlet-dbg python-dev libsqlite3-dev libldap2-dev libssl-dev libxml2-dev libxslt1-dev libsasl2-dev python-httplib2

# sudo python setup.py install

# sudo pip install -r tools/pip-requires

There are several key concepts in keystone, i.e., tenant, user, role, service, service token, etc. Please refer to the keystone documentation for details. A tenant is equivalent to a project. We will create two tenants, adminTenant and demoTenant. If you do not want to execute all the commands below, you can run this script in the keystone directory you unzipped. Assume you are in /home/nova/openstack-keystone*. All the commands below, including the script, should be run in this directory.



Create a tenant adminTenant, add a user adminUser to this tenant, add an Admin role to the adminUser, and grant the Admin role to adminUser in adminTenant.

# sudo bin/keystone-manage tenant add adminTenant

# sudo bin/keystone-manage user add adminUser password

# sudo bin/keystone-manage role add Admin

# sudo bin/keystone-manage role grant Admin adminUser

# sudo bin/keystone-manage role grant Admin adminUser adminTenant

Create a tenant demoTenant, add a user demoUser to this tenant, add a Member role, grant the Member role to demoUser, and then grant the Member role to demoUser in demoTenant.

# sudo bin/keystone-manage tenant add demoTenant

# sudo bin/keystone-manage user add demoUser password

# sudo bin/keystone-manage role add Member

# sudo bin/keystone-manage role grant Member demoUser

# sudo bin/keystone-manage role grant Member demoUser demoTenant

All the users and configurations are created in keystone.db.

Now configure keystone admin and service admin roles. Check for these strings in keystone.conf

keystone-admin-role = Admin

keystone-service-admin-role = KeystoneServiceAdmin

The Admin role has already been created. Now create the KeystoneServiceAdmin role and assign it to the adminUser of the admin tenant.

# sudo bin/keystone-manage role add KeystoneServiceAdmin

# sudo bin/keystone-manage role grant KeystoneServiceAdmin adminUser

# sudo bin/keystone-manage role grant KeystoneServiceAdmin adminUser adminTenant

Create the service token, and add it to the admin user and admin tenant. The token expires on February 5th, 2015. The service token is passed to glance and nova-api.

# sudo bin/keystone-manage token add 999888777666 adminUser adminTenant 2015-02-05T00:00

Define service catalog for nova, glance, and identity service.

# sudo bin/keystone-manage service add nova compute "OpenStack Compute Service"

# sudo bin/keystone-manage service add glance image "OpenStack Image Service"

# sudo bin/keystone-manage service add identity identity "OpenStack Identity Service"

Once the service catalog is defined, we create endpoints for the services. Each service has three relevant URLs associated with it that are used in the command:

  • the public API URL

  • an administrative API URL

  • an internal URL

Endpoint for nova (compute). Note that we only create an endpoint for the OpenStack API (notice the port 8774) for the admin user belonging to the admin tenant. Be sure to replace openstack1_IP with the IP address of the machine where you are installing keystone.



# sudo bin/keystone-manage endpointTemplates add RegionOne nova http://openstack1_IP:8774/v1.1/%tenant_id% http://openstack1_IP:8774/v1.1/%tenant_id% http://openstack1_IP:8774/v1.1/%tenant_id% 1 1

Endpoint for glance (image). Since glance has not been installed yet, make sure that the glance port later matches the one specified in this service endpoint.

# sudo bin/keystone-manage endpointTemplates add RegionOne glance http://openstack1_IP:9292/v1 http://openstack1_IP:9292/v1 http://openstack1_IP:9292/v1 1 1

Endpoint for keystone (identity). The ports are defined in /etc/keystone/keystone.conf. By default these ports should match the ones below, but it is good to verify during setup.

# sudo bin/keystone-manage endpointTemplates add RegionOne identity http://openstack1_IP:5000/v2.0 http://openstack1_IP:35357/v2.0 http://openstack1_IP:5000/v2.0 1 1

Start keystone (assuming cwd is where you unzipped keystone)

# sudo bin/keystone

You should see the following output. Pay attention to the opened ports:

Starting the RAX-KEY extension

Starting the Legacy Authentication component

Service API listening on 0.0.0.0:5000

Admin API listening on 0.0.0.0:35357

3.9.1 Testing if keystone is working

# curl -d '{"auth": {"tenantName": "adminTenant", "passwordCredentials":{"username": "adminUser", "password": "password"}}}' -H "Content-type: application/json" http://openstack1_IP:35357/v2.0/tokens | python -mjson.tool

Replace openstack1_IP with the IP address of the machine where you are running keystone. If there is an output, keystone is running.
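The reply is JSON containing an access token. As a sketch of pulling the token id out of such a reply (the JSON below is a trimmed, made-up example of the v2.0 shape, not real output):

```shell
# Sketch: extract the token id from a saved keystone v2.0 tokens reply.
cat > /tmp/keystone-reply.json <<'EOF'
{"access": {"token": {"id": "999888777666", "expires": "2015-02-05T00:00"}}}
EOF
sed -n 's/.*"id": "\([^"]*\)".*/\1/p' /tmp/keystone-reply.json
```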

 

3.10 Configuring nova-api to work with keystone

Update the api-paste.ini file in /etc/nova so that nova-api can work with keystone. Copy the sample api-paste.ini file in /etc/nova and replace the openstack1_IP with the IP address configured on eth0 (assuming only single eth). It is a good practice to keep a backup of any configuration file you change just in case things break.

Restart nova-api.

# sudo restart nova-api

3.11 Install dashboard

The command below installs openstack-dashboard and openstackx, and also installs apache. See the instructions in the openstack documentation for more details.

# sudo apt-get install openstack-dashboard openstackx

Enable the rewrite module for apache.

# sudo a2enmod rewrite



Enable the 'dash' site (a2ensite creates a symbolic link). We will later create a corresponding dash database in MySQL.

# sudo a2ensite dash

Disable the default site so that dashboard comes up by default.



# sudo a2dissite default



Recall the MySQL password, which is nova. Assume it is set in MYSQL_PASS environment variable. Then create a dash database with password notdash and grant it privileges.

# sudo mysql -u root -p$MYSQL_PASS -e 'CREATE DATABASE dash'

# sudo mysql -u root -p$MYSQL_PASS -e "GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'notdash'"




In /var/lib/dash/local/, modify the local_settings.py file. Alternatively, you can copy this sample file to /var/lib/dash/local/ directory and change the openstack1_IP address to the IP address of the machine where you installed keystone.



# sudo cp local_settings.py.example local_settings.py



Make the following changes to local_settings.py:



(1) Replace the DATABASE = { ... } with the following:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'notdash',
        'HOST': 'localhost',
        'default-character-set': 'utf8'
    },
}
 

(2) Replace the lines below:



OPENSTACK_KEYSTONE_URL = "http://localhost:5000/v2.0/"

OPENSTACK_KEYSTONE_ADMIN_URL = "http://localhost:35357/v2.0"




with the following. Be sure to replace openstack1_IP with the IP address of the machine where you are installing dashboard.

OPENSTACK_KEYSTONE_URL = "http://openstack1_IP:5000/v2.0/"

OPENSTACK_KEYSTONE_ADMIN_URL = "http://openstack1_IP:35357/v2.0"



(3) Disable Quantum because we will be using OpenStack's 'native' networking:



QUANTUM_ENABLED = False
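The URL replacements in step (2) are mechanical enough to script. A sketch on a scratch copy of the two lines (10.0.0.1 stands in for openstack1_IP):

```shell
# Sketch: swap localhost for the keystone host in a copy of local_settings.py.
cat > /tmp/local_settings.test <<'EOF'
OPENSTACK_KEYSTONE_URL = "http://localhost:5000/v2.0/"
OPENSTACK_KEYSTONE_ADMIN_URL = "http://localhost:35357/v2.0"
EOF
sed -i 's/localhost/10.0.0.1/g' /tmp/local_settings.test
cat /tmp/local_settings.test
```

On the real machine, run the sed against /var/lib/dash/local/local_settings.py after backing it up.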



Now, run the syncdb command to initialize the database.

# cd /var/lib/dash
# sudo PYTHONPATH=/var/lib/dash/ python dashboard/manage.py syncdb


You should see the following results at the end:

Installing custom SQL ...
Installing indexes ...
DEBUG:django.db.backends:(0.008) CREATE INDEX `django_session_c25c2c28` ON `django_session` (`expire_date`);; args=()
No fixtures found.

If you want to avoid a warning when restarting apache2, create a blackhole directory in the dashboard directory like so:

# sudo mkdir -p /var/lib/dash/.blackhole

Restart Apache and reload the module to pick up the dash site and symbolic link settings.

# sudo /etc/init.d/apache2 restart
# sudo service apache2 reload

The dashboard can be accessed at http://openstack1_IP. Try logging into the dashboard using adminUser for adminTenant created in keystone configuration. The password is 'password'. Here is a screen shot:







If you see a service catalog error after you have logged in, restart nova-api.

3.12 Install glance

Install glance.

# sudo apt-get install glance



Check if the glance service is running by typing:

# ps -e | grep glance

The output should include glance-api and glance-registry. If the services are not running, check the logs:



# sudo cat /var/log/glance/api.log

# sudo cat /var/log/glance/registry.log




Now configure glance to work with keystone and with the appropriate IP address and port numbers. The installation automatically starts glance-registry and glance-api. Stop these services so that we can appropriately set keystone authentication.



# sudo stop glance-api

# sudo stop glance-registry




Make a backup of /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf



# sudo cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

# sudo cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.orig


# sudo vi /etc/glance/glance-api.conf




Set debug to True in glance-api.conf if you want the debug output to show up.



debug = True



Set the token auth appropriately in [pipeline:glance-api]:



#pipeline = versionnegotiation context apiv1app

# NOTE: use the following pipeline for keystone

pipeline = versionnegotiation authtoken context apiv1app




Update the hostnames to the host where the glance service is running, and the auth port to 35357.



[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

service_protocol = http

service_host = openstack1_IP

service_port = 5000

auth_host = openstack1_IP

auth_port = 35357

auth_protocol = http

auth_uri = http://openstack1_IP:5000/

admin_token = 999888777666




Similarly, make these changes in glance-registry.conf



# sudo vi /etc/glance/glance-registry.conf



Set debug to True in glance-registry.conf if you want the debug output to show up.



debug = True



Set the token auth appropriately



[pipeline:glance-registry]

#pipeline = context registryapp

# NOTE: use the following pipeline for keystone

pipeline = authtoken keystone_shim context registryapp




Update the hostnames to the host where the glance service is running, and the auth port to 35357.



[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

service_protocol = http

service_host = openstack1_IP

service_port = 5000

auth_host = openstack1_IP

auth_port = 35357

auth_protocol = http

auth_uri = http://openstack1_IP:5000/

admin_token = 999888777666

Now start glance-api, and glance-registry.

# sudo start glance-registry

# sudo start glance-api


3.12.1 Add images to glance

We are ready to add an image to glance.



Download a ttylinux image. In /home/nova directory:



# mkdir ttylinux; cd ttylinux;

# wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz

# tar -xzvf ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz




The glance-registry must be running for the following three commands to succeed.



# sudo glance add -A 999888777666 name="ttylinux-uec-amd64-12.1_2.6.35-22_1" is_public=true container_format=aki disk_format=aki < ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz



Note the id returned. This is your kernel id. Enter it in the last line replacing KERNEL_ID.



# sudo glance add -A 999888777666 name="ttylinux-uec-amd64-12.1_2.6.35-22_1" is_public=true container_format=ari disk_format=ari < ttylinux-uec-amd64-12.1_2.6.35-22_1-loader



Note the id returned. This is your ramdisk id. Enter it in the last line, replacing RAMDISK_ID.



Now add the VM image.



# sudo glance add -A 999888777666 name="ttylinux-uec-amd64-12.1_2.6.35-22_1" is_public=true container_format=ami disk_format=ami kernel_id=KERNEL_ID ramdisk_id=RAMDISK_ID < ttylinux-uec-amd64-12.1_2.6.35-22_1.img
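If you prefer to script the three glance add steps, the ids can be captured from the command output instead of copied by hand. A hedged sketch, assuming glance prints a line like "Added new image with ID: N" on success (a stub stands in for the real command here):

```shell
# Sketch: capture the id printed by `glance add` so later commands can use it.
glance_add_kernel() { echo "Added new image with ID: 7"; }   # stub for the real glance add
KERNEL_ID=$(glance_add_kernel | awk '/Added new image/ {print $NF}')
echo "KERNEL_ID=$KERNEL_ID"
```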



Log into dashboard and click on images in the ‘System Panel’. Here is a screen shot.









Click on the 'Edit' button of the image with id 3. You may have to manually add the kernel and ramdisk ids through the GUI. Be sure to click update.



Similarly, you can download an ubuntu image and set it up.



# mkdir ubuntu; cd ubuntu;

# wget http://c0179148.cdn1.cloudfiles.rackspacecloud.com/ubuntu1010-UEC-localuser-image.tar.gz

3.13 Configuring compute

We are now going to configure the network and nova-compute. Recall that we are doing a single machine install with only one NIC.

3.13.1 Configuring the interface

The ethernet interface should already be configured with the IP address etc. Here is a sample interfaces file. Be sure to replace openstack1_IP with the IP address of your machine.



# interfaces



cat /etc/network/interfaces



# The loopback network interface



auto lo

iface lo inet loopback




# The primary network interface



auto eth0

iface eth0 inet static

address openstack1_IP

netmask openstack1_IP_netmask

network openstack1_IP_net

broadcast openstack1_IP_broadcast

gateway openstack1_gw



Check if network forwarding is enabled



# sudo sysctl net.ipv4.ip_forward



If the output is 1, it is enabled. Otherwise, enable it.



Restart networking



# sudo /etc/init.d/networking restart

3.13.2 Update nova.conf

We are going to configure FlatDHCP network.



# cd /etc/nova/

# sudo cp nova.conf nova.orig




Replace the existing nova.conf file with the file here. Be sure to replace openstack1_IP with the IP address of the machine where you are running glance, sql, rabbit, and vncproxy (in our case, all services are running on one machine).

Now sync the nova database.



# sudo nova-manage db sync

Restart all services.



# sudo restart libvirt-bin; sudo restart nova-network; sudo restart nova-compute; sudo restart nova-api; sudo restart nova-scheduler; sudo restart nova-vncproxy

3.13.3 Create the nova-network

Check the DNS server by typing



# cat /etc/resolv.conf



Assume the DNS server IP address is DNS_IP. We will create a fixed range on 172.16.0.0/24. Although we are doing a single node install, we still set the multi_host flag to true. That way, we can easily add new nodes that run their own nova-network.



# sudo nova-manage network create --fixed_range_v4=172.16.0.0/24 --num_networks=1 --network_size=255 --bridge=br100 --bridge_interface=eth0 --dns1=DNS_IP --label=private --multi_host=T



Make sure that the owner of /var/lib/nova is correct



# sudo chown -R nova.nova /var/lib/nova



Now you can start creating instances through dashboard and access them through web vnc. Here are some screenshots.



Launch an instance.



Instance launched.



VNC console.

 

3.14 Checking if services are running

# sudo nova-manage service list



The output should be like:

Binary           Host                                 Zone             Status     State Updated_At
nova-network     psilver2                             nova             enabled    :-)   2012-01-29 23:53:34
nova-compute     psilver2                             nova             enabled    :-)   2012-01-29 23:53:36
nova-scheduler   psilver2                             nova             enabled    :-)   2012-01-29 23:53:34
nova-vncproxy    psilver2                             nova             enabled    :-)   2012-01-29 23:53:41

 

4. Installing nova-compute on another node

If you want to install nova-compute on another node, be sure that it sits on the same network as openstack1. You can assign the new machines hostnames with a monotonically increasing number embedded in the name.

Assume, we have one other machine with only one network interface. Let's call this machine openstack2. This machine will run its own nova-network.

Setup the openstack repository using the instructions described here, and then install nova-compute using these instructions. Since we created a multihost network (multi_host=T in nova.conf), also install nova-network.

Copy the nova.conf to /etc/nova/nova.conf and update the IP addresses of your cloud controller. Restart nova-compute, and try logging in the dashboard. You should see an output similar to the figure below. Note that in this figure, our machines are called psilver2 and psilver3.

 

5. Installing and configuring nova-volume, attaching to an instance

Assume that we had created a nova-volumes logical volume group during installation on openstack2. Below, we describe instructions on how to create logical volumes using nova client and attach them to instances. The volumes are created over iSCSI. These instructions do not rely on euca-* tools which do not work with keystone.

Install nova-volume package.

# sudo apt-get install nova-volume

Install the iSCSI modules. If we do not install them, we get a FATAL error saying that iscsi_trgt was not found.

# sudo aptitude install iscsitarget iscsitarget-source iscsitarget-dkms

Now configure the iSCSI service. Set ISCSITARGET_ENABLE=true in /etc/default/iscsitarget.

Add the following two lines to the sudoers file (edit it with visudo):

nova ALL = (root) NOPASSWD: /bin/dd

nova ALL = (root) NOPASSWD: /usr/sbin/ietadm

Restart iscsitarget service.

# sudo /etc/init.d/iscsitarget restart

Add --iscsi_ip_prefix=openstack2_IP to /etc/nova/nova.conf. We use openstack2_IP because nova-volume is running on openstack2. Restart nova-compute after updating nova.conf.

# sudo restart nova-compute

Now we will use the python nova client to create a volume. Note that we will use the adminUser created in keystone. We will also specify that we are using nova api version 1.1. Volume management support in the nova client is only enabled through API v1.1, not API v1.0. Be sure to replace openstack1_IP with the IP address of the machine where you installed keystone. The last parameter in the command below is the size of the volume in GB (1 GB).

# nova --username=adminUser --apikey=password --projectid=adminTenant --url=http://openstack1_IP:5000/v2.0/ --region_name=RegionOne --version=1.1 volume-create --display_name=vol1 --display_description=volume1 1

Check if the volume has been created.

# nova --username=adminUser --apikey=password --projectid=adminTenant --url=http://openstack1_IP:5000/v2.0/ --region_name=RegionOne --version=1.1 volume-list

The output should look like:

+----+----------+--------------+------+-------------+
| ID |  Status  | Display Name | Size | Attached to |
+----+----------+--------------+------+-------------+
| 1  | available| vol1         | 1    | None        |
+----+----------+--------------+------+-------------+
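If you want to check the status from a script rather than by eye, the table can be parsed with awk. A sketch against a saved copy of the output above:

```shell
# Sketch: print the Status column for volume ID 1 from saved volume-list output.
cat > /tmp/volume-list.txt <<'EOF'
+----+----------+--------------+------+-------------+
| ID |  Status  | Display Name | Size | Attached to |
+----+----------+--------------+------+-------------+
| 1  | available| vol1         | 1    | None        |
+----+----------+--------------+------+-------------+
EOF
awk -F'|' '$2+0 == 1 {gsub(/ /, "", $3); print $3}' /tmp/volume-list.txt
```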

Assume we have already started an instance with an ID of 4. We will attach the volume with ID 1 to the instance with ID 4.

# nova --username=adminUser --apikey=password --projectid=adminTenant --url=http://openstack1_IP:5000/v2.0/ --region_name=RegionOne --version=1.1 volume-attach 4 1 /dev/vdb

Check the output by running volume-list command. The output should be like:

+----+----------+--------------+------+-------------+
| ID |  Status  | Display Name | Size | Attached to |
+----+----------+--------------+------+-------------+
| 1  | in-use   | vol1         | 1    | 4           |
+----+----------+--------------+------+-------------+

Now log into the instance and type fdisk -l. The last part of the output should look like the following, indicating the presence of a new partition.

Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/vdb1               1        2080     1048288+  0 Empty

Now, make a filesystem on this partition, mount it, and start using.

# mkfs.ext3 /dev/vdb1
# mkdir /iscsipartition
# mount /dev/vdb1 /iscsipartition

The partition is mounted and ready to use. Try creating new volumes, and attaching them to instances running on different compute nodes.