This document tracks where the custom code, artifacts, and images used by the CORD installation process come from, and how they are combined and installed when setting up CORD. It currently focuses on the "platform install" stage (after MaaS runs) and uses the cord-pod service profile.
Think of it as a map showing where to find, and how to swap out, any specific piece of the code used to build CORD during development.
The CORD reference implementation is an integration of many software components, with the repository at https://gerrit.opencord.org (and mirrored at https://github.com/opencord) spanning multiple projects:
| Project | Purpose |
| --- | --- |
| maas | install base OS... |
| platform-install | install XOS, OpenStack, ONOS |
| service-profile | install service graph... |
The other projects in the repository correspond to services that can be configured into CORD; see Service Inventory.
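Each of these projects is a separate git tree on the Gerrit server. Assuming Gerrit's usual anonymous-HTTP clone layout (project name appended to the server URL — an assumption here, not something this document states), the clone commands for the main projects can be sketched as:

```shell
#!/bin/sh
# Sketch: print git clone commands for the main CORD projects.
# The URL layout (https://gerrit.opencord.org/<project>) is assumed from
# Gerrit's standard anonymous-HTTP clone scheme.
GERRIT=https://gerrit.opencord.org

clone_commands() {
    for project in maas platform-install service-profile; do
        echo "git clone ${GERRIT}/${project}"
    done
}

clone_commands
```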
TODO: Talk about other repositories... e.g., Java artifacts are published in the public Sonatype repository: https://oss.sonatype.org/content/groups/public/org/opencord/
TODO: Talk about the role of Gradle...
Base OS Install
TODO: Summarize the role played by MaaS; talk about Ubuntu 14.04 LTS; talk about the switch OS; talk about Vagrant, Maven, Docker...
TODO: Talk about the single-node case, where Ubuntu is booted manually and you then jump to "Platform Install".
TODO: Talk about linkage to the next stage (platform-install)...
Once the base OS is installed on all the hardware components, the next stage installs and configures the core CORD platform components, including OpenStack, ONOS, and XOS. This is under the control of Ansible playbooks in platform-install.
OpenStack is installed with Juju, driven by the platform-install Ansible playbooks. There is a playbook for each supported configuration, and all of them share a common set of Ansible roles.
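The overall shape of such a playbook can be sketched as follows. This is illustrative only: the host-group and role names below are assumptions, not the actual contents of platform-install.

```yaml
# Illustrative sketch only -- group and role names are placeholders,
# not the real platform-install playbooks.
- hosts: head
  become: yes
  roles:
    - common-prep      # shared by all configurations
    - juju-setup       # drives the Juju/OpenStack install
    - onos-vm-install  # brings up the ONOS VM

- hosts: compute
  become: yes
  roles:
    - common-prep
    - compute-prep
```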
For the single-node pod testing environment, there is a single-node-pod.sh script that performs and then tests the installation.
Juju charms used to build OpenStack
A few of the Juju charms used to build OpenStack carry CORD-specific modifications.
The specific charm versions are supplied as Ansible variables; see charm_versions in vars/cord_defaults.yml.
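For example, the variable might look like the following. The pinned revision numbers here are invented for illustration; consult vars/cord_defaults.yml for the real values.

```yaml
# Illustrative values only -- see vars/cord_defaults.yml for the
# versions actually pinned by platform-install.
charm_versions:
  neutron-api: "cs:~cordteam/trusty/neutron-api-3"
  nova-compute: "cs:~cordteam/trusty/nova-compute-2"
```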
There are custom Juju charms for neutron-api and nova-compute: https://code.launchpad.net/~cordteam
CORD-specific charm changes (as of 2016-06-28):
Prebuilt ONOS containers are available on Docker Hub: https://hub.docker.com/r/onosproject/
ONOS is deployed using the onos-vm-install role, which has a Dockerfile that modifies the image provided on Docker Hub to install the OpenStack SSL certificates that Juju creates. This is done in a manner similar to the methods described in these docs.
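In outline, such a customization is a small Dockerfile layered on top of the upstream image. The sketch below is illustrative: the image tag and certificate file name are assumptions, not taken from the role itself.

```dockerfile
# Sketch only: layer the Juju-generated OpenStack SSL certificate into
# the stock ONOS image. The tag and certificate name are assumptions.
FROM onosproject/onos:latest

# Copy in the CA certificate that Juju created for the OpenStack services
COPY keystone_juju_ca_cert.crt /usr/local/share/ca-certificates/

# Register it with the container's system trust store
RUN update-ca-certificates
```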
XOS has various service profiles (aka "configurations"), which describe how to set up a system. A service profile is currently built with a Makefile that has several targets. For the cord-pod profile, the sequence of make target invocations is described in the setup_openstack function of the single-node-pod.sh script.
The xosproject/xos-base container isn't run directly; it is used to build the other containers and contains all the prerequisite software needed by XOS. By default it is simply pulled from Docker Hub, but xos-base can also be rebuilt from scratch.
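The exact rebuild command isn't spelled out here. Assuming xos-base is built from a Dockerfile in an XOS checkout (the checkout path and Dockerfile name below are guesses), the invocation would look roughly like this sketch, which prints the command rather than running it so it works without a Docker daemon:

```shell
#!/bin/sh
# Sketch only: the Dockerfile location and checkout path are
# assumptions, not taken from this document.
XOS_TREE=./xos                 # hypothetical checkout location
TAG=xosproject/xos-base

build_cmd() {
    echo "docker build -t ${TAG} -f ${XOS_TREE}/Dockerfile.base ${XOS_TREE}"
}

# Print the command instead of executing it.
build_cmd
```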
TODO: Talk about linkage to the service profile...
In the default installation, two VM images are loaded into OpenStack's Glance. These are downloaded either when the cord-pod service profile Makefile is run, or during the build by the xos-vm-install role, as specified in xos_images in the cord.yml config.
The two images are:
- trusty-server-multi-nic.img (TODO: document how this image was created)
- vsg-1.1.img, which is created with the imagebuilder tools
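Put together, the xos_images entry for these two images might look like the following. The field names and URLs are illustrative assumptions; see the real xos_images variable in the cord.yml config.

```yaml
# Illustrative structure only -- field names and URLs are assumptions.
xos_images:
  - name: trusty-server-multi-nic
    url: "http://example.com/images/trusty-server-multi-nic.img"
  - name: vsg-1.1
    url: "http://example.com/images/vsg-1.1.img"
```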
Service profiles are the XOS mechanism that is used to bring up a stack of CORD services. For additional details, see XOS Dynamic On-Boarding and Service Profiles.
TODO: Link to on-boarding tutorial as appropriate... Replicate material here as appropriate...