SUSECloud Part 2 - Install The OpenStack Services And Ceph On The Multi Node Environment (Command Line Style)


Prerequisites

Have a cluster installed as described in: SUSECloud Part 1 - Install The Multi Node OpenStack/Ceph Environment

Getting OpenStack Installed

Turn on the VMs that will act as hosts which SUSECloud can provision.

cd crowbar-virtualbox
./start_cluster.sh

admin:~  for I in `crowbar machines list |grep dc0`; do crowbar machines allocate $I; done
Executed allocate for dc0-ff-ee-00-01-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-02-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-03-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-04-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-05-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-06-01.suse-testbed.de

The nodes will begin the allocation, which includes:

  • Initial Chef run
  • Reboot
  • Install of base system via PXE & autoyast
  • Reboot into newly installed system (login prompt)

You can watch the process with an RDP client:
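For example with rdesktop (just a sketch; the RDP port is an assumption taken from the VirtualBox setup of Part 1, so adjust host and port to whatever VRDE ports your VMs actually expose):

# hypothetical example: attach to the console of a booting node via VirtualBox VRDE
rdesktop localhost:3389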

Wait for the VMs to get into the Ready state.

admin:~  crowbar node_state status
dc0-ff-ee-00-04-01   Installing
dc0-ff-ee-00-05-01   Installing
dc0-ff-ee-00-01-01   Installing
admin   Ready
dc0-ff-ee-00-06-01   Installing
dc0-ff-ee-00-02-01   Installing
dc0-ff-ee-00-03-01   Installing
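
If you do not want to poll by hand, a small loop like this will block until every node reports Ready (just a sketch; it relies on the --no-ready filter used later in this guide printing nothing once all nodes are ready):

# wait until `crowbar node_state status --no-ready` produces no output
while [ -n "$(crowbar node_state status --no-ready)" ]; do
  sleep 30
done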

After the nodes become ready, we edit them to hint at their intended role.

admin:~  crowbar node_state status
dc0-ff-ee-00-06-01   Ready
dc0-ff-ee-00-03-01   Ready
dc0-ff-ee-00-04-01   Ready
admin   Ready
dc0-ff-ee-00-01-01   Ready
dc0-ff-ee-00-05-01   Ready
dc0-ff-ee-00-02-01   Ready

This is useful for the later steps, as it auto-populates the hostname values in the corresponding fields.

We have Chef at our disposal, so let's use a Ruby script for that:

cat > set_intended_role_and_zone.rb << CODE
nodes.all do |node|

  puts "updating Node #{node.name}"
  puts "current intended_role: #{node["crowbar_wall"]["intended_role"]}"
  puts "current az #{node["crowbar_wall"]["openstack"]["availability_zone"]}" if node["crowbar_wall"]["openstack"]
  case node.name
    when /dc0-ff-ee-00-01-01/
      node["crowbar_wall"]["intended_role"] = "controller"
    when /dc0-ff-ee-00-02-01/
      node["crowbar_wall"]["intended_role"] = "compute"
      node["crowbar_wall"]["openstack"] = {"availability_zone" => "mz"}
    when /dc0-ff-ee-00-03-01/
      node["crowbar_wall"]["intended_role"] = "compute"
      node["crowbar_wall"]["openstack"] = {"availability_zone" => "sec"}
    when /dc0-ff-ee-00-0[4-6]-01/
      node["crowbar_wall"]["intended_role"] = "storage"
  end
  node.save
end
CODE

knife exec set_intended_role_and_zone.rb

This script also sets the OpenStack availability_zone for the compute nodes to different values.
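
As a quick check you can query the attributes we just wrote with knife (assuming knife is configured on the admin node, which it is if the knife exec above worked):

# verify the role/zone hints on one of the compute nodes
knife node show dc0-ff-ee-00-02-01.suse-testbed.de -a crowbar_wall.intended_role
knife node show dc0-ff-ee-00-02-01.suse-testbed.de -a crowbar_wall.openstack.availability_zone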

OpenStack Installation Order

Order matters when provisioning the OpenStack pieces on the various host nodes. The proper order is already predefined in the Barclamps -> OpenStack list.

We have to follow that order from top to bottom.

Database

Create a new proposal for the database barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar database proposal create default
Created default
admin:~  crowbar database proposal show default > database.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~  grep -A4 elements database.json
      "elements": {
        "database-server": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

We then can save our settings back to the server

admin:~ crowbar database proposal edit default --file database.json
Edited default

and commit the change

admin:~ crowbar database proposal commit default
Committed default
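
This create → show → edit → commit cycle repeats for every barclamp below. The guide spells each step out explicitly, but if you prefer, a small helper function along these lines (purely hypothetical, built only from the crowbar subcommands used in this guide) saves some typing:

# hypothetical helper: run the standard proposal workflow for one barclamp
# usage: deploy_barclamp keystone
deploy_barclamp() {
  local bc=$1
  crowbar "$bc" proposal create default
  crowbar "$bc" proposal show default > "$bc.json"
  ${EDITOR:-vi} "$bc.json"   # adjust elements/attributes as needed
  crowbar "$bc" proposal edit default --file "$bc.json"
  crowbar "$bc" proposal commit default
}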

Keystone

Create a new proposal for the keystone barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar keystone proposal create default
Created default
admin:~  crowbar keystone proposal show default > keystone.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~ grep -A4 elements keystone.json
      "elements": {
        "keystone-server": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

We then can save our settings back to the server

admin:~  crowbar keystone proposal edit default --file keystone.json
Edited default

and commit the change

admin:~  crowbar keystone proposal commit default
Committed default

You can always check the current state of a node (in another terminal) with

admin:~  crowbar node_state status --no-ready
dc0-ff-ee-00-01-01   Applying

RabbitMQ

Create a new proposal for the rabbitmq barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar rabbitmq proposal create default
Created default
admin:~  crowbar rabbitmq proposal show default > rabbitmq.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~  grep -A4 elements rabbitmq.json
      "elements": {
        "rabbitmq-server": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

We then can save our settings back to the server

admin:~  crowbar rabbitmq proposal edit default --file rabbitmq.json
Edited default

and commit the change

admin:~  crowbar rabbitmq proposal commit default
Committed default

Ceph

Create a new proposal for the ceph barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar ceph proposal create default
Created default
admin:~  crowbar ceph proposal show default > ceph.json

Within the JSON we can alter attribute values. For now we are interested in which nodes the Ceph roles get deployed on; thanks to the intended_role, the ceph-osd list already contains our three storage nodes.

admin:~  grep -A11 elements ceph.json
      "elements": {
        "ceph-osd": [
          "dc0-ff-ee-00-06-01.suse-testbed.de",
          "dc0-ff-ee-00-04-01.suse-testbed.de",
          "dc0-ff-ee-00-05-01.suse-testbed.de"
        ],
        "ceph-mon": [
          "dc0-ff-ee-00-01-01.suse-testbed.de",
          "dc0-ff-ee-00-06-01.suse-testbed.de",
          "dc0-ff-ee-00-04-01.suse-testbed.de"
        ]
      },

Here we see that our control node got into the ceph-mon list. We want to keep the Ceph functionality on the dedicated Ceph nodes, so edit the file and change the ceph-mon list.

admin:~  sed -i 's/dc0-ff-ee-00-01-01.suse-testbed.de/dc0-ff-ee-00-05-01.suse-testbed.de/' ceph.json
admin:~  grep -A11 elements ceph.json
      "elements": {
        "ceph-osd": [
          "dc0-ff-ee-00-06-01.suse-testbed.de",
          "dc0-ff-ee-00-04-01.suse-testbed.de",
          "dc0-ff-ee-00-05-01.suse-testbed.de"
        ],
        "ceph-mon": [
          "dc0-ff-ee-00-05-01.suse-testbed.de",
          "dc0-ff-ee-00-06-01.suse-testbed.de",
          "dc0-ff-ee-00-04-01.suse-testbed.de"
        ]
      },

We then can save our settings back to the server

admin:~  crowbar ceph proposal edit default --file ceph.json
Edited default

and commit the change

admin:~  crowbar ceph proposal commit default
Committed default

Swift

We leave out Swift in this deployment.

Glance

Create a new proposal for the glance barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar glance proposal create default
Created default
admin:~  crowbar glance proposal show default > glance.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~ grep -A4 elements glance.json
      "elements": {
        "glance-server": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

We want to use the Ceph RBD backend, so we have to alter the JSON:

1sed -i 's/"default_store": "file"/"default_store": "rbd"/' glance.json

We then can save our settings back to the server

admin:~  crowbar glance proposal edit default --file glance.json
Edited default

and commit the change

admin:~  crowbar glance proposal commit default
Committed default

Cinder

Create a new proposal for the cinder barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar cinder proposal create default
Created default
admin:~  crowbar cinder proposal show default > cinder.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~  grep -A7 elements cinder.json
      "elements": {
        "cinder-volume": [
          "dc0-ff-ee-00-06-01.suse-testbed.de"
        ],
        "cinder-controller": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

We do not want to have the cinder-volume role on one of the Ceph nodes, so we move it to the control node:

admin:~  sed -i 's/dc0-ff-ee-00-06-01.suse-testbed.de/dc0-ff-ee-00-01-01.suse-testbed.de/' cinder.json
admin:~  grep -A7 elements cinder.json
      "elements": {
        "cinder-volume": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ],
        "cinder-controller": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

We want to use the Ceph RADOS backend, so we have to alter the JSON:

1admin:~  sed -i 's/"volume_type": "raw"/"volume_type": "rbd"/' cinder.json

We then can save our settings back to the server

admin:~  crowbar cinder proposal edit default --file cinder.json
Edited default

and commit the change

admin:~  crowbar cinder proposal commit default
Committed default

Neutron

Create a new proposal for the neutron barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar neutron proposal create default
Created default
admin:~  crowbar neutron proposal show default > neutron.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~  grep -A4 elements neutron.json
      "elements": {
        "neutron-server": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

Here we want to make several customisations. First of all, we want linuxbridge and VLAN mode:

1admin:~  sed -i 's/"networking_mode": "gre"/"networking_mode": "vlan"/' neutron.json
2admin:~  sed -i 's/"networking_plugin": "openvswitch"/"networking_plugin": "linuxbridge"/' neutron.json

Second, we want all our public/floating traffic on a separate physical interface. Unfortunately, SUSECloud does not support this out of the box yet, so we have to alter the Chef recipes and templates.

This is somewhat hackish at the moment, as it is really tied to this VirtualBox setup where we know that the public interface is going to be eth1 and VLAN 300.

admin:~  sed -i 's/network_vlan_ranges = physnet1:/network_vlan_ranges = physnet2:300:300,physnet1:/' /opt/dell/chef/cookbooks/neutron/templates/default/ml2_conf.ini.erb
admin:~  sed -i 's/network_vlan_ranges = physnet1:/network_vlan_ranges = physnet2:300:300,physnet1:/' /opt/dell/chef/cookbooks/neutron/templates/default/linuxbridge_conf.ini.erb
admin:~  sed -i 's/physical_interface_mappings = physnet1:/physical_interface_mappings = physnet2:eth1,physnet1:/' /opt/dell/chef/cookbooks/neutron/templates/default/linuxbridge_conf.ini.erb
admin:~  sed -i 's/public_net\["vlan"\]} --provider:physical_network physnet1/public_net["vlan"]} --provider:physical_network physnet2/' /opt/dell/chef/cookbooks/neutron/recipes/post_install_conf.rb

Then upload the changed cookbook to the Chef server:

admin:~  knife cookbook upload neutron -o /opt/dell/chef/cookbooks/
Uploading neutron        [1.0.0]
Uploaded 1 cookbook.

We then can save our settings back to the server

admin:~  crowbar neutron proposal edit default --file neutron.json
Edited default

and commit the change

admin:~  crowbar neutron proposal commit default
Committed default

Nova

Create a new proposal for the nova barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar nova proposal create default
Created default
admin:~  crowbar nova proposal show default > nova.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~  grep -A17 elements nova.json
      "elements": {
        "nova-multi-controller": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ],
        "nova-multi-compute-xen": [
          "dc0-ff-ee-00-03-01.suse-testbed.de",
          "dc0-ff-ee-00-02-01.suse-testbed.de"
        ],
        "nova-multi-compute-kvm": [

        ],
        "nova-multi-compute-qemu": [

        ],
        "nova-multi-compute-hyperv": [

        ]
      }

We are running Nova with QEMU, as VirtualBox does not allow us to use KVM inside KVM. So we have to move our compute nodes to nova-multi-compute-qemu.

DOMAIN=`hostname -d`

crowbar nova proposal show default > nova.json

cat > nova-elements.json <<JSON
{  "nova-multi-compute-hyperv": [],
   "nova-multi-controller": [
      "dc0-ff-ee-00-01-01.$DOMAIN"
    ],
   "nova-multi-compute-qemu": [
      "dc0-ff-ee-00-02-01.$DOMAIN",
      "dc0-ff-ee-00-03-01.$DOMAIN"
    ],
    "nova-multi-compute-xen":[],
    "nova-multi-compute-kvm": []
}
JSON

json-edit -r -a deployment.nova.elements -v "`cat nova-elements.json`" nova.json

And check if it worked:

admin:~ grep -A17 elements nova.json
      "elements": {
        "nova-multi-controller": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ],
        "nova-multi-compute-xen": [

        ],
        "nova-multi-compute-kvm": [

        ],
        "nova-multi-compute-hyperv": [

        ],
        "nova-multi-compute-qemu": [
          "dc0-ff-ee-00-02-01.suse-testbed.de",
          "dc0-ff-ee-00-03-01.suse-testbed.de"
        ]
      },

Next we have to set the libvirt_type

1admin:~  sed -i 's/"libvirt_type": "kvm"/"libvirt_type": "qemu"/' nova.json

And finally we have to change another part of the scripting. QEMU is not supported in SUSECloud 3, so there is an issue to tackle: using Ceph/RBD with QEMU needs a code change in the recipes.

1grep -ir -e "libvirt_type" /opt/dell/chef/cookbooks/* |grep rbd
2/opt/dell/chef/cookbooks/nova/recipes/config.rb:  if cinder_server[:cinder][:volume][:volume_type] == "rbd" and node[:nova][:libvirt_type] == "kvm"

We have to change this line to:

if cinder_server[:cinder][:volume][:volume_type] == "rbd" and ["kvm","qemu"].include?(node[:nova][:libvirt_type])

Change it with:

admin:~  sed -i 's/and node\[:nova\]\[:libvirt_type\] == "kvm"/and ["kvm","qemu"].include?(node[:nova][:libvirt_type])/' /opt/dell/chef/cookbooks/nova/recipes/config.rb

and upload it to the chef server

admin:~  knife cookbook upload nova -o /opt/dell/chef/cookbooks/
Uploading nova           [0.3.0]
Uploaded 1 cookbook.

We then can save our settings back to the server

admin:~  crowbar nova proposal edit default --file nova.json
Edited default

and commit the change

admin:~  crowbar nova proposal commit default
Committed default

There is another command which helps us watch the progress.
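It is essentially a watch around the two node status helpers on the admin node; the invocation below is reconstructed from the header of the output that follows:

admin:~  watch "/opt/dell/bin/crowbar_node_state status --no-ready ; echo ' ' ; /opt/dell/bin/crowbar_node_status"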

Every 2.0s: /opt/dell/bin/crowbar_node_state status --no-ready ; echo ' ' ; /opt/dell/bin/crowbar_node_status                                       Fri Apr 12 14:47:12 2014

dc0-ff-ee-00-01-01   Applying
dc0-ff-ee-00-02-01   Applying
dc0-ff-ee-00-03-01   Applying

Host   OK  WARN  CRIT  UNKNOWN  PENDING

Horizon

Create a new proposal for the nova_dashboard barclamp.

ATTENTION: the proposal name MUST be 'default' in SUSECloud!

admin:~  crowbar nova_dashboard proposal create default
Created default
admin:~  crowbar nova_dashboard proposal show default > nova_dashboard.json

Within the JSON we can alter attribute values. For now we only care that the service gets deployed on our control node. Since we set the intended_role, the setting should already match our wishes.

admin:~  grep -A4 elements nova_dashboard.json
      "elements": {
        "nova_dashboard-server": [
          "dc0-ff-ee-00-01-01.suse-testbed.de"
        ]
      },

We then can save our settings back to the server

admin:~  crowbar nova_dashboard proposal edit default --file nova_dashboard.json
Edited default

and commit the change

admin:~  crowbar nova_dashboard proposal commit default
Committed default

Finished Installation

We leave out Ceilometer and Heat in this deployment. So if everything went well, we should now see a lot of green on the OpenStack barclamp list:

To get to the OpenStack dashboard we can have a look at the control node:

The details page of the control node has two links built in: one for the admin-net dashboard and one for the public-net dashboard.

OpenStack Dashboard Login

To access the dashboard, open the browser at http://192.168.124.81

  • Username: crowbar
  • Password: crowbar

In the dashboard, the System Info pages (Services, Network Agents, Hypervisors) show whether all OpenStack services, network agents, and hypervisors are up.
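
The same information is also available from the command line on the control node. Something along these lines should work (the path of the credentials file is an assumption; use whatever admin credentials your Keystone proposal set up):

# run on the control node; the credentials file location is an assumption
source ~/.openrc                # or export OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL manually
nova service-list               # matches System Info - Services
neutron agent-list              # matches System Info - Network Agents
nova hypervisor-list            # matches System Info - Hypervisors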