OpenStack Mitaka is here! In this post, Jan Klare explains step by step how to use Chef to deploy an OpenStack cluster running Mitaka.
Jan Klare is the co-founder of the cloudbau GmbH and a big fan of automation and bleeding-edge technology. He is a core reviewer in the official OpenStack-Chef project and was the project team lead for the Mitaka cycle. In addition to automating the deployment of OpenStack, he loves to play with new and fancy automation and orchestration technologies.
To learn more about cloudbau visit their website: http://www.cloudbau.de
Hi everybody and welcome to the OpenStack-Chef kitchen. Today we are going to cook ourselves some nice and tasty OpenStack Mitaka. And since we are not any ordinary, boring one-node-only cooking show, we will cook a whole cluster of it to satisfy the cravings of all your developers at once. Let's get started then!
To get started with this, your kitchen should contain at least 16GB of RAM, 4 cores and some hard disk to play on. The kitchen I am cooking in today is a simple MacBook Pro with 16GB of RAM and a 3.1 GHz i7 from early 2015. If your kitchen lacks some of this equipment, you can still try, but there is no guarantee that you will be able to cook the same cluster we are going for without running out of resources.
In addition to the kitchen itself we will need a whole bunch of cookbooks, but luckily you can get all of them at a one-stop berkshelf directly from the openstack-chef-repo. Just pull the repo to your favorite git location.
Since we will be using a lot of the standard tools of the Chef business, you should either have all the needed gems already bundled up somewhere, or you should download and install the fitting ChefDK in version 0.9.0. I recommend going with ChefDK 0.9.0, since we used it too and it worked. In case you wonder why we are not using the most recent version: we tested with 0.9.0 during the whole development cycle, it still works, and somebody once said "never change a winning team".
If you have prepared all the things mentioned above and already some appetite for a nice and tasty OpenStack, you should go ahead and continue with the next steps.
To get started you should cd to the openstack-chef-repo you pulled before and have a quick look at some of the core documents we will use during the actual cooking.
The first and most mysterious thing here is the Rakefile, which contains a lot of useful methods and has them already bundled in tasks to deploy and test different scenarios of OpenStack. Of course you can go ahead and read the whole file (which might even be a good idea in case you want to continue working with this chef-repo), but for today we will just use three of these tasks:
berks_vendor: As stated in the description, this task will pull in all of the needed cookbooks for today’s cooking session. It will read the Berksfile, use berkshelf to resolve all the dependencies and download all cookbooks to the cookbooks folder inside the openstack-chef-repo.
multi_node: This task will automate all the cooking for you; the only thing you have to do is watch how fast it builds a local three-node OpenStack cluster. It will read the vagrant_linux.rb and pull in the needed vagrant box. This can be either ubuntu 14.04 or centos 7.2, which is completely up to you and switchable via the environment variable 'REPO_OS'. In addition to that, it will also use the multi-node.rb, which specifies the exact configuration of all virtual machines (controller_config and compute_config) and how these should be created with chef-provisioning. It will create one controller node with the role 'multi-node-controller' and two compute nodes with the role 'multi-node-compute'. So after running this, we will have a total of three nodes: controller, compute1 and compute2.
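In chef-provisioning terms, the node layout described above boils down to something like the following. This is a simplified sketch, not the literal contents of multi-node.rb (the real file also passes machine options from controller_config and compute_config):

```ruby
# Simplified sketch of what multi-node.rb sets up with chef-provisioning:
# one controller and two computes, each converged with its multi-node role.
machine 'controller' do
  role 'multi-node-controller'
end

%w(compute1 compute2).each do |name|
  machine name do
    role 'multi-node-compute'
  end
end
```

Compare this with the actual multi-node.rb in the repo if you want to see the full machine options.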
clean: This task is pretty straightforward and well described with “Blow everything away”. You can and should use it in case you get bored after you have seen the whole cluster work or in case you get stuck somewhere and want to start fresh. Since it says “Blow everything away”, you really will need to start at the beginning by vendoring your cookbooks (1).
As you might have seen already in the multi-node.rb, we will also use a distribution-specific environment to get some additional flavor into our cluster. Since the two environment files are very similar, we will just look at the ubuntu one for now; you should be able to walk yourself through the centos one if you need it.
Starting from the top of the environment file, we will reset a few default attributes that would otherwise interfere with our configuration. Additionally we will configure apt to run a full update of its sources during compile time, so we can install the newest packages right from the beginning (and even during the compile phase of the chef run).
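In the environment file this is just an attribute override for the apt community cookbook. The snippet below is an illustrative sketch of the relevant part; double-check the attribute name against the actual environment file:

```ruby
# Illustrative sketch of the apt override in the ubuntu environment file.
# 'compile_time_update' tells the apt cookbook to run 'apt-get update'
# during the compile phase of the chef run.
override_attributes(
  'apt' => {
    'compile_time_update' => true
  }
)
```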
Since we want to deploy OpenStack with the openstack-chef-cookbooks, we will need to pass some attributes to them to align them with our expectations for a three-node cluster.
The first thing we want to do here, is to allow all nodes to forward network traffic. This is needed, since we want to run our routers and dhcp namespaces on the controller and connect them via openvswitch (ovs) bridges to the instances running on the two compute nodes.
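Enabling forwarding is again just an attribute override. The attribute path below is illustrative (it follows the common sysctl cookbook convention), so treat it as a sketch and compare it with the real environment file:

```ruby
# Illustrative: enable IPv4 forwarding on all nodes so traffic can flow
# between the router/dhcp namespaces on the controller and the instances
# on the compute nodes.
override_attributes(
  'sysctl' => {
    'values' => {
      'net.ipv4.ip_forward' => 1
    }
  }
)
```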
To actually allow all the OpenStack services to talk to each other, either via the message queue or directly via the APIs, we need to define the endpoints we want to use. Since all of the APIs and the message queue (mq) will be running on our controller node, we will configure one of its IP-addresses (‘192.168.101.60’) as the ‘host’ attribute for the endpoints and the mq. With this configuration, all of the OpenStack service APIs will be reachable via their default ports (e.g. 9696 for neutron) on the address ‘192.168.101.60’ (e.g. ‘192.168.101.60:9696’ for neutron).
Right below the endpoint setting, we see a whole block that looks quite similar to the endpoint one, but is called 'bind_service'. In addition to the endpoints where a service will be reachable, we also need to define where the actual service should be listening. You might think that this is the exact same thing, but it's not. In most production environments you will need additional proxies like 'haproxy' or 'apache' right in front of your APIs for security, filtering, threading and HA. That said, the endpoint where your API is reachable might in fact be an 'apache' or 'haproxy' listening on a completely different IP and port than your actual OpenStack service. During the design of the cookbooks we decided to bind all of the services to '127.0.0.1' by default, so we have some security out of the box and do not make them world accessible. In our scenario today, however, we need them to be accessible by our compute nodes and from outside the vagrant boxes (since we may want to test the cli tools on our local machine against the APIs), and will therefore bind them to '0.0.0.0' to avoid a more complex configuration. This will make them accessible on their default ports via all IP addresses assigned to the controller node.
The next important setting in our environment is the detailed, attribute-driven configuration of the networking service neutron.
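Put together, the endpoint and bind settings look roughly like this in the environment. This is a sketch with illustrative attribute paths; the real file spells out the individual services, so compare it with the actual keys:

```ruby
# Illustrative sketch: APIs and the message queue are reachable on the
# controller address, while the services themselves bind to all interfaces.
override_attributes(
  'openstack' => {
    'endpoints' => {
      'host' => '192.168.101.60'   # where clients reach the APIs
    },
    'bind_service' => {
      'all' => {
        'host' => '0.0.0.0'        # where the services actually listen
      }
    },
    'mq' => {
      'host' => '192.168.101.60'   # rabbitmq lives on the controller
    }
  }
)
```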
In the first section we configure the ml2 plugin we want to use for our virtual networks. In this case we want to go with the default ml2 plugin using vxlan as the overlay to separate tenant networks.
To allow the actual traffic to flow between instances and router/dhcp namespaces, we need to additionally specify the interface we want to create our overlay vxlan ovs bridge on. For our scenario this will be the ‘eth1’ interface on the controller and compute nodes. The actual ovs bridge configuration will be done with the example recipe from the network cookbook.
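The ml2/vxlan choice and the tunnel interface are again driven by attributes. The paths below are illustrative assumptions based on the cookbook's conf-style attribute layout, not a verbatim quote of the environment file:

```ruby
# Illustrative sketch: vxlan tenant networks, with the overlay built on eth1.
override_attributes(
  'openstack' => {
    'network' => {
      'plugins' => {
        'ml2' => {
          'conf' => {
            'ml2' => {
              'tenant_network_types' => 'vxlan'
            }
          }
        },
        'openvswitch' => {
          'conf' => {
            'agent' => {
              'tunnel_types' => 'vxlan'   # overlay traffic between nodes
            }
          }
        }
      }
    }
  }
)
```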
In the following and last networking section, we are setting some configuration parameters that will directly go into the neutron.conf. In our scenario we want to use the neutron-l3-agent and therefore need to enable the service_plugin ‘router’. We also need to specify where the neutron-openvswitch-agent running on our compute nodes can find the mq to talk to the other Neutron agents. And since we want to use vxlan as the default for our tenant networks, we need to specify this as well.
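Sketched as attributes, the neutron.conf bits described above look roughly like this (illustrative paths, following the cookbook's conf-style layout):

```ruby
# Illustrative sketch: enable the router service plugin and point the
# neutron agents on the compute nodes at the rabbitmq on the controller.
override_attributes(
  'openstack' => {
    'network' => {
      'conf' => {
        'DEFAULT' => {
          'service_plugins' => 'router'
        },
        'oslo_messaging_rabbit' => {
          'rabbit_host' => '192.168.101.60'
        }
      }
    }
  }
)
```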
To be able to instantly start some instances after we have the cluster up and running, we are also enabling the image upload of a simple cirros image.
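The image upload is also just an attribute; the key below is an illustrative assumption, so check the environment file for the exact name:

```ruby
# Illustrative: have the image cookbook upload a cirros image once
# glance is up, so we can boot an instance right away.
override_attributes(
  'openstack' => {
    'image' => {
      'upload_images' => ['cirros']
    }
  }
)
```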
The last section in our environment is dedicated to configuring Nova. Since we will be running our cluster on top of virtualization, we do not want to use the default 'virt_type' kvm, but rather go with qemu. The last option, in the 'oslo_messaging_rabbit' section, enables nova-compute to talk to the mq and all of the other nova services (same as for the neutron-openvswitch-agent).
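As a sketch, the Nova part of the environment looks roughly like this (illustrative attribute paths in the cookbook's conf-style layout):

```ruby
# Illustrative sketch: qemu instead of kvm, because the cluster itself runs
# inside vagrant boxes, plus the rabbitmq host for nova-compute.
override_attributes(
  'openstack' => {
    'compute' => {
      'conf' => {
        'libvirt' => {
          'virt_type' => 'qemu'
        },
        'oslo_messaging_rabbit' => {
          'rabbit_host' => '192.168.101.60'
        }
      }
    }
  }
)
```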
I guess we have now spent enough time on our mise en place and should start the actual cooking; people are getting hungry.
As all of you chefs might know, if you have done a good mise en place, cooking becomes a breeze. The only thing we need to do now to get things started is to fetch all our cookbooks with the 'berks_vendor' task (1) and run the 'multi_node' task (2) from the Rakefile mentioned above. If you are using chefdk you can do this by running:
chef exec rake berks_vendor
chef exec rake multi_node
If your first chef run fails while installing the package "cinder-common", you are probably on a mac and there seems to be a strange issue with handing over the locales during a chef run. Just start the run again with:
chef exec rake multi_node
You should now get yourself a coffee and maybe even some fresh air, since this will take a while.
After around 15-20 minutes, depending on your kitchen hardware, you will have a full OpenStack Mitaka ready for consumption. Now let's dig into it!
At the time of writing, there is a rather unpleasant bug in the startup of libvirt-bin: the default logging service virtlogd is started but instantly crashes. The bug is documented on launchpad and can simply be fixed by starting the virtlogd service manually or by restarting the whole compute nodes. To start the service manually, ssh to compute1 and compute2 and run:
sudo service virtlogd start
After that you should be good to go.
Most people like to start with the good looking stuff, so we will go ahead and navigate to the dashboard, which should be accessible on https://localhost:9443. You can log in as the ‘admin’ user with the password ‘mypass’.
If you enjoyed the dashboard, you maybe want to dig a bit deeper and try to work with the command line clients directly from the controller. To do so, you should go back to your openstack-chef-repo and navigate to the subfolder ‘vms’. Inside of that folder you can use vagrant to directly ssh to the controller or one of the compute nodes like this:
# ssh to controller
vagrant ssh controller
# ssh to the first compute node
vagrant ssh compute1
Once you are on the controller, you should become ‘root’ and load the environment variables from the provided openrc file in /root/openrc like this:
# become root
sudo -i
# load openrc environment variables
. /root/openrc
Since all of the python clients you need to talk to the OpenStack APIs were already installed during the deployment, you can now go ahead and use them to either do the same things you did on the dashboard above (create networks, routers and launch some instances) or try something new and play a little with heat, since chefs usually love hot cooking.
As soon as you decide that you had enough OpenStack for today, you can exit the controller node, navigate back to the openstack-chef-repo root directory and clean up your whole kitchen with the ‘clean’ task (3) from the Rakefile mentioned at the beginning.
chef exec rake clean
I think that's it for today. If you have any questions regarding this setup, come and find me and the other OpenStack-Chef core reviewers in the #openstack-chef IRC channel on freenode.