The master repository for setting up the CORD platform is platform-install, which takes systems from a "blank" OS-only state (either installed manually in the single-node scenario, or configured with MAAS in a multi-node setup) and installs the platform prerequisites.

This is done using Ansible playbooks, so familiarity with the Ansible toolset will be helpful during debugging.  If you're seeing errors during the installation process, please see Troubleshooting a Build.

While oriented toward the single-node-pod configuration, much of this guide is applicable to any CORD installation.

Obtaining diagnostic information from a running CORD pod

In addition to being able to install the CORD platform, platform-install also contains a diagnostic-gathering playbook, cord-diag-playbook.yml. Every time this playbook is run, it creates a directory named diag-<timestamp> in the home directory of the user that runs Ansible (usually ubuntu on the current installation) on the head node of the CORD pod; this directory contains the collected output.

cord-diag-playbook.yml can be run while other playbooks are executing (in another terminal session) to quickly determine the current status of the installation.

If you are having issues with your CORD installation, you are highly encouraged to run cord-diag-playbook.yml: its diagnostic output is referenced throughout this guide, and you may be asked for portions of the data it collects when requesting help from others.
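A minimal sketch of gathering diagnostics, assuming a platform-install checkout at ~/platform-install and an inventory file named inventory on the head node (both names are assumptions; your checkout location and inventory may differ):

    # Run the diagnostic playbook from the platform-install checkout.
    # The inventory file name here is illustrative, not prescriptive.
    cd ~/platform-install
    ansible-playbook -i inventory cord-diag-playbook.yml

    # The collected output lands in a timestamped directory in the
    # home directory of the user running Ansible (usually ubuntu).
    ls -d ~/diag-*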

How the CORD platform is structured

A CORD installation consists of a head node, one or more compute nodes (which may be colocated on the head node in the single-node configuration), and attached whitebox switches.

The steps for bringing up the single-node POD are defined in cord-single-playbook.yml, which is run by Ansible.

As a way to start quickly, there's a convenience script, single-node-pod.sh, which handles installing Ansible and other prerequisites and running the playbook. It can also be used to run post-install tests on the POD. See this page for more information on how to run the script.
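As a rough sketch, the two paths look like this; the checkout location and inventory file name below are assumptions, and the page referenced above documents the script's actual options:

    # Quick path: run the convenience script, which installs Ansible
    # and other prerequisites before running the playbook.
    bash single-node-pod.sh

    # Direct path: run the playbook by hand with Ansible from a
    # platform-install checkout (the inventory name is illustrative).
    cd ~/platform-install
    ansible-playbook -i inventory cord-single-playbook.yml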

Note that this playbook takes quite a while to run (over an hour, and possibly 2-3 hours or more depending on your connection speed), as it downloads and builds the entire environment.

Following along in this playbook, there are a few overarching steps:

  • Setting up native platform services such as DNS and package caching, and configuring systems
  • Creating virtual machines to encapsulate XOS, ONOS, and OpenStack systems (a way to check these is sketched after this list)
  • Installing software on those virtual machines
    • XOS, deployed in Docker containers on xos-1
    • ONOS, deployed in Docker containers on onos-cord-1 and onos-fabric-1
    • OpenStack and supporting services, configured with Juju, in VMs named: juju-1, ceilometer-1, glance-1, keystone-1, percona-cluster-1, nagios-1, neutron-api-1, nova-cloud-controller-1, openstack-dashboard-1, rabbitmq-server-1, and optionally nova-compute-1 (only in the single-node case)
  • Bootstrapping an XOS service-profile configuration, in this case cord-pod-ansible
  • Loading OpenCORD Apps into ONOS
  • Performing testing tasks
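As one way to sanity-check the VM-creation and software-installation steps above, the virtual machines and their containers can be inspected from the head node. This sketch assumes the VMs are managed by libvirt on the head node and are reachable by name using the ubuntu account (both are assumptions about the current installation):

    # List the virtual machines created on the head node; the names
    # should match those listed above (xos-1, onos-cord-1,
    # onos-fabric-1, juju-1, and so on).
    sudo virsh list --all

    # Check the Docker containers running inside one of the VMs
    # (assumes the VM resolves by name and accepts the ubuntu user).
    ssh ubuntu@xos-1 "docker ps"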