CORD : Internals of the CORD Build Process

This document tracks where the custom code, artifacts, and images used by the CORD installation process come from, and how they're combined and installed when setting up CORD. It currently focuses on the "platform install" stage (after MaaS runs), and uses the cord-pod service profile.

Think of it as a map for finding and swapping out any specific part of the code used to build CORD during development.

Repository Organization

The CORD reference implementation is an integration of many software components, with the repository (and its mirror) spanning multiple projects:

  • cord: uber repository...
  • maas: install base OS...
  • platform-install: install XOS, OpenStack, ONOS
  • service-profile: install service graph...

The other projects in the repository correspond to services that can be configured into CORD; see Service Inventory.

Talk about other repositories...  e.g., Java artifacts are in the public repository:

Talk about the role of gradle...

Base OS Install

Summarize the role played by MaaS; talk about Ubuntu 14.04 LTS; talk about the switch OS; talk about Vagrant; talk about Maven, Docker...

Talk about the single-node case, where Ubuntu is booted manually and then you jump to "Platform Install"

Talk about linkage to the next stage (platform-install)...

Platform Install

Once the base OS is installed on all the hardware components, the next stage installs and configures the core CORD platform components, including OpenStack, ONOS, and XOS. This is under the control of Ansible playbooks in platform-install.

OpenStack Setup

OpenStack is installed using Juju, driven by the platform-install Ansible playbooks. There are playbooks for the various supported configurations, all of which use a common set of Ansible roles.
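As a rough sketch of what running one of these playbooks looks like (the playbook and inventory file names here are hypothetical; the actual names are defined in platform-install):

```shell
# Hypothetical invocation; real playbook and inventory names
# come from the platform-install repository.
ansible-playbook -i inventory cord-pod-playbook.yml
```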

For the single-node pod testing environment, there is a script that performs and tests the installation.

Juju charms used to build OpenStack

There are a few modifications to the Juju charms used to build OpenStack.

The specific charm versions are supplied as Ansible variables; see charm_versions in vars/cord_defaults.yml.
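As an illustrative sketch of the shape of that variable (the charm URIs and revision numbers below are hypothetical; only the authoritative file matters):

```yaml
# Hypothetical sketch of charm_versions in vars/cord_defaults.yml;
# real charm URIs and revisions vary by release.
charm_versions:
  neutron-api: "cs:~cord/trusty/neutron-api-3"
  nova-compute: "cs:~cord/trusty/nova-compute-2"
```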

There are custom Juju charms for neutron-api and nova-compute:

CORD specific charm changes (as of 2016-06-28):

Upstream repo:

ONOS Setup

Prebuilt ONOS Containers are on DockerHub:

ONOS is deployed using the onos-vm-install role, which has a Dockerfile that modifies the one provided on Docker Hub to install the OpenStack SSL certificates that Juju creates. This is done in a manner similar to the methods described in these docs.
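A minimal sketch of that kind of Dockerfile, assuming the stock onosproject/onos image and an illustrative certificate file name (the real role's file and paths may differ):

```dockerfile
# Sketch only: extend the stock ONOS image so the JVM trusts the
# SSL certificate that Juju generated for the OpenStack APIs.
FROM onosproject/onos
COPY keystone_juju_ca_cert.crt /tmp/
RUN keytool -importcert -noprompt -trustcacerts \
      -alias juju-openstack -file /tmp/keystone_juju_ca_cert.crt \
      -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
```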

XOS Setup

XOS has various service profiles (aka "configurations"), which describe how to set up a system. The current build method for a service profile uses a Makefile with several targets. For the cord-pod profile, the current make target invocation process is described in the setup_openstack function of the script.

These also rely on Docker containers, which are built using Dockerfiles in the XOS containers directory; prebuilt images are available from here:

The xosproject/xos-base container isn't run directly, but is used as a base for building the other containers and contains all the prerequisite software needed by XOS. By default it's pulled from Docker Hub, but you can rebuild xos-base from scratch with make local_containers.
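In practice that choice reduces to one of two commands (a sketch; run from the XOS containers directory):

```shell
# Default: use the prebuilt base image from Docker Hub
docker pull xosproject/xos-base

# Alternative: rebuild xos-base (and the other XOS images) from scratch
make local_containers
```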

Talk about linkage to service profile...

Glance Images

In the default installation, two VM images are loaded into OpenStack's Glance. These are downloaded either when the cord-pod service profile Makefile is run, or during the build by the xos-vm-install role, as specified in xos_images in the cord.yml config.

The two images are:

  • trusty-server-multi-nic.img (need docs on how this was created)

  • vsg-1.1.img, which is created with the imagebuilder tools.
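Putting the two images together, the xos_images entry might look like the following sketch (field names are illustrative and download URLs are omitted; the authoritative list is in the cord.yml config):

```yaml
# Illustrative only: shape of the xos_images variable in cord.yml.
xos_images:
  - name: trusty-server-multi-nic
    file: trusty-server-multi-nic.img
  - name: vsg-1.1
    file: vsg-1.1.img
```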

Service Profiles

Service profiles are the XOS mechanism that is used to bring up a stack of CORD services. For additional details, see XOS Dynamic On-Boarding and Service Profiles.

TODO: Link to on-boarding tutorial as appropriate... Replicate material here as appropriate...