Introduction
This is the fourth post in a series of blogs about configuring and deploying a baremetal controller using Ironic. At this stage you should have Docker installed and running (see part 3 for details). Congratulations! Your system is ready for the Kolla setup.
Kolla setup
Once you have the controller node operating system ready, you can begin to install the Ironic controller using Kolla.
Kolla is a production OpenStack* deployment tool that provides production-ready containers and deployment tooling for operating OpenStack clouds [1]. Please take a moment to review Kolla's documentation before you continue.
The first step is to install Kolla on the system. This procedure uses the Mitaka [2] release:
git clone -b stable/mitaka https://git.openstack.org/openstack/kolla ~/kolla
pip install -U ~/kolla
rm -rf /tmp/kolla
This step is required if you are installing Kolla from scratch. These commands will create a generic configuration file:
cd ~/kolla
tox -e genconfig
cp -r ~/kolla/etc/kolla /etc/
The Kolla community builds and publishes tested images for each tagged release of Kolla, but if you are running from the master branch, it is recommended to build images locally.
Configure Docker
The Docker daemon is needed for this task. To allow Docker to reach the Internet and download images, configure the daemon's proxy and DNS settings in the configuration file /etc/default/docker. The “DOCKER_OPTS” variable holds the parameters passed to the daemon; the DNS servers listed there are written into each container's configuration (specifically, into /etc/resolv.conf). For more information, refer to the Docker daemon documentation [3].
cat << EOF | sudo tee /etc/default/docker
DOCKER_OPTS="--dns <company DNS> --dns <company DNS2> \
--dns-search <company search domain> --insecure-registry 127.0.0.1:4000"
export http_proxy="<company proxy>"
EOF
To load the new configuration, restart the service. The Docker daemon must be running before you start the build steps below, or the build process will fail:
service docker restart
Kolla build configuration
Add the distro version, install type, and proxy configuration by modifying the kolla-build script's configuration file, /etc/kolla/kolla-build.conf:
[DEFAULT]
# The base distro to use when building
base = ubuntu
# The method of the OpenStack install
# In Liberty, Ubuntu/source is the only possible combination
install_type = source
# The base distro image (OS version) and tag (compilation)
base_tag = trusty
tag = latest
# Proxy configuration.
include_header = /repo/inject-header.docker
include_footer = /repo/inject-footer.docker
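Before building, it can be worth double-checking that the build configuration contains the values you expect. Below is a minimal sketch that reads keys out of an INI-style file with awk; the scratch file and its contents are stand-ins for the real /etc/kolla/kolla-build.conf:

```shell
# Stand-in for /etc/kolla/kolla-build.conf (illustration only).
conf="$(mktemp)"
cat > "$conf" << 'EOF'
[DEFAULT]
base = ubuntu
install_type = source
tag = latest
EOF

# Read a single key from an INI-style file with awk.
get_opt() {
  awk -F' *= *' -v key="$1" '$1 == key { print $2 }' "$2"
}

base_val="$(get_opt base "$conf")"
install_val="$(get_opt install_type "$conf")"
echo "base=$base_val install_type=$install_val"   # base=ubuntu install_type=source
rm -f "$conf"
```

This is only a quick sanity check; kolla-build itself parses the file with oslo.config.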
Ironic Config for kolla
Some configuration specific to the Ironic project is needed in our environment. The driver that implements the Intelligent Platform Management Interface (IPMI) in the Ironic project is called “agent_ipmitool”. To configure it, create a file called /etc/kolla/config/ironic.conf with the following content:
[DEFAULT]
enabled_drivers = agent_ipmitool
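For reference, the file can be created with the same heredoc pattern used earlier for /etc/default/docker. The sketch below writes to a scratch directory instead of /etc/kolla/config (which would need sudo) so it can run unprivileged:

```shell
# Scratch stand-in for /etc/kolla/config; in a real deployment use
# sudo and the actual /etc/kolla/config path.
conf_dir="$(mktemp -d)"

cat << 'EOF' > "$conf_dir/ironic.conf"
[DEFAULT]
enabled_drivers = agent_ipmitool
EOF

cat "$conf_dir/ironic.conf"
```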
The last config file you need to modify is the one related to the Nova project [4], the resource controller. These changes switch the compute manager from the default virtual machine manager to the baremetal manager. In /etc/kolla/config/nova.conf, modify the following parameters:
# Full class name for the Manager for compute (string value)
#compute_manager=nova.compute.manager.ComputeManager
compute_manager=ironic.nova.compute.manager.ClusteredComputeManager
# Flag to decide whether to use baremetal_scheduler_default_filters or not.
# (boolean value)
#scheduler_use_baremetal_filters=False
scheduler_use_baremetal_filters=True
# Determines if the Scheduler tracks changes to instances to help with
# its filtering decisions (boolean value)
#scheduler_tracks_instance_changes=True
scheduler_tracks_instance_changes=False
Start the Docker Registry
Now everything is ready to build the container images. Before executing the build command, manually start the Docker registry [5], an application that stores and distributes Docker images.
docker run -d -p 4000:5000 --restart=always --name registry registry:2
~/kolla/tools/build.py --registry 127.0.0.1:4000 --push
Deploying Kolla
The Kolla community provides two example deployment methods: all-in-one and multinode. The "all-in-one" deployment is similar to a devstack deployment, installing all OpenStack services on a single host. In the "multinode" deployment, OpenStack services can be spread across specific hosts. This documentation only covers the "all-in-one" method, the simplest approach.
All variables for the environment can be specified in two files: /etc/kolla/globals.yml and /etc/kolla/passwords.yml.
In /etc/kolla/globals.yml, edit the “kolla_base_distro” and “kolla_install_type” parameters. This deployment uses Ubuntu as the reference image. In a production environment the installation type would usually be “binary” to save space; here we choose “source”, which is better suited to debugging and development:
# Valid options are [ centos, fedora, oraclelinux, ubuntu ]
kolla_base_distro: "ubuntu"
# Valid options are [ binary, source ]
kolla_install_type: "source"
To ensure that a container gets the original configuration every time it starts, set the “config_strategy” variable to “COPY_ALWAYS”:
# Valid options are [ COPY_ONCE, COPY_ALWAYS ]
config_strategy: "COPY_ALWAYS"
The two Kolla VIP addresses can be the same. In a multinode, high-availability environment, this should be a virtual IP address: an unused address on the network that floats between the hosts running Keepalived [6]. In an “all-in-one” deployment, this IP should be the one used to access the host. The external address would normally be a name registered in DNS, but in our case we simply reuse the internal address:
kolla_internal_vip_address: "<controller node first NIC IP address - management>"
kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
If the environment doesn't have a free IP address available for VIP configuration, the host's IP address may be used here by disabling HAProxy:
enable_haproxy: "no"
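Before settling on a VIP, it helps to confirm that the candidate address is actually unused on the management network. A rough sketch follows; 192.0.2.10 is a documentation placeholder, not a recommendation, and on a local segment arping is often a more reliable probe than ping:

```shell
VIP="192.0.2.10"  # placeholder address (TEST-NET-1); substitute your candidate VIP

# If anything answers a ping, the address is already taken.
if ping -c 1 -W 1 "$VIP" > /dev/null 2>&1; then
  vip_status="in use"
else
  vip_status="free"
fi
echo "$VIP is $vip_status"
```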
By default, the playbooks pull the latest image available; if a newer image is found, they pull it and recreate the containers. Since we don't want this behavior, set the “docker_pull_policy” variable to “missing”:
# Valid options are [ always, missing ]
docker_pull_policy: "missing"
The “network_interface” variable is the interface to which Kolla binds API services. For example, when starting up, MariaDB [7] will bind to the IP address on the interface listed in the “network_interface” variable.
network_interface: "<name of management interface, first NIC>"
The “neutron_external_interface” variable is the interface used for the external bridge in Neutron. Without this bridge, instance traffic in the deployment cannot reach the rest of the Internet. On a machine with a single interface, a veth pair may be used: one end of the pair is listed here and the other end sits in a bridge on the system.
neutron_external_interface: "<name of Neutron controller interface, second NIC>"
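The veth-pair arrangement described above can be sketched with iproute2. The commands below run inside a throwaway network namespace (via unshare -rn) so they need no root and touch nothing real; the interface names veth-ext, veth-int, and br-ex are made up for the illustration:

```shell
# Build the pair and a bridge inside a disposable network namespace;
# requires a Linux kernel with unprivileged user namespaces enabled.
out="$(unshare -rn sh -c '
  ip link add veth-ext type veth peer name veth-int  # the veth pair
  ip link add br-ex type bridge                      # stand-in external bridge
  ip link set veth-int master br-ex                  # one end into the bridge
  ip link set veth-ext up
  ip link set veth-int up
  ip link set br-ex up
  ip -o link show veth-ext
')"
echo "$out"
```

In a real single-NIC deployment the free end of the pair (veth-ext here) is what you would list as “neutron_external_interface”.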
The containers you built are tagged with the name set in /etc/kolla/kolla-build.conf (by default, “latest”); set “openstack_release” in globals.yml to match:
# The Docker tag
tag = latest
openstack_release: "latest"
Enable Ironic
Finally, the last configuration step is to enable optional projects; in this case, Ironic:
####################
# OpenStack options
####################
# OpenStack services can be enabled or disabled with these options
enable_ironic: "yes"
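With all of the globals.yml edits above in place, a quick grep-based sanity check can catch typos before running the playbooks. A sketch follows, using a scratch copy of the file with the values from this walkthrough (the real path is /etc/kolla/globals.yml):

```shell
# Scratch stand-in for /etc/kolla/globals.yml with the values set above.
globals="$(mktemp)"
cat > "$globals" << 'EOF'
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
config_strategy: "COPY_ALWAYS"
docker_pull_policy: "missing"
openstack_release: "latest"
enable_ironic: "yes"
EOF

# Verify each expected key/value pair; report the first mismatch.
check() {
  grep -q "^$1: \"$2\"" "$globals" || { echo "missing: $1"; return 1; }
}

status=ok
check kolla_base_distro ubuntu || status=bad
check kolla_install_type source || status=bad
check config_strategy COPY_ALWAYS || status=bad
check enable_ironic yes || status=bad
echo "globals.yml check: $status"
```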
Start the containers
Execute the prechecks and make sure none of them fail. Once everything passes, deploy the container images. When the last command finishes, you'll have the OpenStack services up and running in containers:
~/kolla/tools/kolla-ansible prechecks -vvvv -i ~/kolla/ansible/inventory/all-in-one
~/kolla/tools/kolla-ansible deploy -vvvv -i ~/kolla/ansible/inventory/all-in-one
References
[4] Nova project
[6] Keepalived
[7] MariaDB