Introduction

Cord-in-the-Cloud (CiTC) is a deployment of CORD in which the entire framework, comprising OpenStack, ONOS and XOS, is brought up on a cloud infrastructure in a distributed setup. CiTC allows on-demand test-bed creation and one-click provisioning of the CORD solution. All components of CORD are brought up independently on multiple virtual machines, and the environment can be used for learning, development and test needs.

Click here for a quick introduction video of the solution.

Pre-requisites

CiTC is a self-contained environment for users to experiment with the CORD project. To get started, please request a user account for the ON.Lab SDCloud Enterprise from Larry Peterson, ON.Lab. Once the account is set up, you need only internet access to start using the solution. The virtual machines hosting the various components of the CORD solution are created automatically, and all dependencies are taken care of while bringing up the solution on the cloud. For most basic learning and development needs, no additional investment is needed.

Custom Deployments

While the basic version of CORD is available to ON.Lab users on pre-configured virtual server environments, the solution is flexible, customizable and scalable, allowing provisioning and deployment on different types of server configurations including bare metal.

If you are looking for CORD deployments using CiTC on specific hardware configurations, including connectivity to hardware in lab environments, write to cord-info@criterionnetworks.com. The CORD solution sandbox is available on a pay-per-use basis on SDCloud from Criterion Networks (separate registration required).

Getting started with CORD

Provisioning CiTC can be accomplished through an intuitive user interface on the ON.Lab SDCloud Enterprise Portal. Users are encouraged to review the architecture, the getting-started video, and the sample use cases before moving on to complex use cases.

Cord-in-the-Cloud Architecture

Cord-in-the-Cloud brings up the CORD architecture on six virtual machines. Once provisioned, the nodes are available with the following host names and roles. CiTC is based on Open CORD 2.0, the stable community distribution; support for CORD 3.0 is planned.

  1. openstack-xos-ctrl1 - OpenStack controller running the Kilo release

  2. xos1 - XOS orchestrator

  3. onos-xos-ctrl1 - ONOS controller running ONOS v1.8

  4. service-xos-lb1 - hosts OpenStack Horizon and HAProxy; this node also hosts the LXC containers used to simulate subscribers for the example R-CORD use case

  5. openstack-xos-cp1 & openstack-xos-cp2 - compute nodes where VNFs are provisioned

Below is the high-level architecture of CiTC. HAProxy can additionally load-balance across multiple SDN controllers for more complex deployments (not part of the community release).

 

cord_1_node.png

Network Connection Details

The picture below provides a more detailed view of the network connectivity across different nodes in CiTC.

cord_node_detail_single.png

  1. All the nodes in the cluster are connected through a single ‘eth0’ interface.

  2. The two compute nodes ‘openstack-xos-cp1’ and ‘openstack-xos-cp2’ each have two OVS bridges, ‘br-int’ and ‘br-ex’.

  3. The ‘service-lb1’ node has a ‘subbr’ bridge to provide external connectivity for VNFs and to connect the emulated subscribers (LXC container clients).

  4. The ‘vxlan+42’ interface on the ‘service-lb1’ node is connected to ‘vxlan0’ on the ‘openstack-xos-cp1’ node to provide private networking.

  5. The ‘vxlan+43’ interface on the ‘service-lb1’ node is connected to ‘vxlan0’ on the ‘openstack-xos-cp2’ node to provide private networking.

  6. Internet access for all the nodes is available via the ‘eth0’ interface (a short verification sketch follows this list).
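The bridge and tunnel layout above can be checked from a shell on the nodes themselves. The following is a minimal, illustrative sketch that simply wraps the standard ovs-vsctl and ip utilities (assumed to be installed on the compute nodes); it is not part of the CiTC tooling.

```python
# verify_dataplane.py - illustrative sketch: inspect the OVS bridges and the
# VXLAN endpoint on a compute node (openstack-xos-cp1 or openstack-xos-cp2).
# Assumes the standard 'ovs-vsctl' and 'ip' command-line tools are installed.
import subprocess

def run(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.check_output(cmd, universal_newlines=True)

if __name__ == "__main__":
    # Expect 'br-int' and 'br-ex' on a compute node.
    print(run(["ovs-vsctl", "list-br"]))
    # Detailed link information for the VXLAN endpoint toward service-lb1.
    print(run(["ip", "-d", "link", "show", "vxlan0"]))
```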

OpenStack Cluster Details

This section describes the OpenStack cluster and covers the following services:

Nova

Nova provides the compute service in OpenStack and is used for hosting and managing cloud computing instances. Nova's messaging architecture allows its components to run on several servers and to communicate through a message queue; deferred objects are used to avoid blocking while a component waits for a response. Nova and its associated components share a centralized SQL database.

The following diagram describes the Nova OpenStack Cluster configuration.

 

nova-xos.jpg


Connection details

 

Service Name    Node name             Frontend port    Backend port
nova-api        openstack-xos-ctrl1   8775             18775
nova-compute    openstack-xos-ctrl1   8774             18774

 


  • Nova requests are initiated by XOS and routed through HAProxy on service-lb1

  • Nova can be accessed through ‘service-lb1’, which acts as the load-balancer front end, and then through an Apache server in a reverse-proxy configuration

  • Nova has various components, such as nova-scheduler, nova-conductor, nova-cert and nova-consoleauth. These components communicate with each other through the ZeroMQ messaging server. Nova maintains its database in MySQL

  • Nova compute is deployed on the ‘openstack-xos-cp1’ and ‘openstack-xos-cp2’ nodes and is responsible for bringing up user VMs with the help of the QEMU hypervisor

  • Nova services can be accessed directly through the Horizon UI running on service-lb1 or through the Python client on the openstack-xos-ctrl1 node (a minimal client sketch follows this list)
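To illustrate the last point, the minimal sketch below uses the python-novaclient library to list hypervisors and instances. The credentials, tenant name and Keystone URL are placeholders rather than CiTC defaults; in practice the client would be pointed at the HAProxy front end on service-lb1 or run directly on openstack-xos-ctrl1.

```python
# nova_check.py - minimal sketch: query Nova with python-novaclient.
# Username, password, tenant and auth URL below are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client(
    "2",                                # compute API version
    "admin",                            # username (placeholder)
    "admin_password",                   # password (placeholder)
    "admin",                            # tenant/project (placeholder)
    "http://service-lb1:5000/v2.0")     # Keystone URL via the HAProxy front end

# Compute hosts known to Nova (openstack-xos-cp1/cp2) and running instances.
for hypervisor in nova.hypervisors.list():
    print("hypervisor:", hypervisor.hypervisor_hostname)
for server in nova.servers.list():
    print("instance:", server.name, server.status)
```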

Neutron

Neutron provides the networking service in OpenStack, offering connectivity between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).

The following diagram describes the Neutron OpenStack Cluster configuration.

 

neutron-xos.jpg

Connection details

 

Service Name      Node name             Frontend port    Backend port
neutron-server    openstack-xos-ctrl1   9696             19696

 

  • Neutron requests are initiated by XOS and routed through HAProxy on service-lb1

  • Neutron can be accessed from the service-lb1 node, which acts as the load-balancer front end, and then through an Apache server in a reverse-proxy configuration (a minimal client sketch follows this list)

  • Neutron has various components, such as the metadata agent, the DHCP agent and the Neutron server, which communicate through the RabbitMQ messaging server. Neutron maintains its database in a MySQL server

  • ONOS communicates with the Neutron server through a service plugin called ‘networking_onos’
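As a concrete example of reaching Neutron through the same front end, the sketch below lists networks and ports with python-neutronclient. Credentials and the auth URL are placeholders, not CiTC defaults.

```python
# neutron_check.py - minimal sketch: list Neutron networks and ports.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(
    username="admin",                         # placeholder
    password="admin_password",                # placeholder
    tenant_name="admin",                      # placeholder
    auth_url="http://service-lb1:5000/v2.0")  # Keystone via the HAProxy front end

# These are the networks and ports that ONOS sees via the networking_onos plugin.
for network in neutron.list_networks()["networks"]:
    print("network:", network["name"], network["id"])
for port in neutron.list_ports()["ports"]:
    print("port:", port["id"], port["status"])
```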

Keystone

Keystone is the identity service used by OpenStack for authentication and high-level authorization. It provides API client authentication, service discovery, and distributed multi-tenant authorization. Several OpenStack services, such as Glance, Cinder and Nova, are authenticated by Keystone.

The following diagram describes the Keystone OpenStack Cluster configuration.

 

keystone-xos.jpg

 

Connection details

 

Service Name        Node name             Frontend port    Backend port
Keystone (public)   openstack-xos-ctrl1   5000             15000
Keystone (admin)    openstack-xos-ctrl1   35357            135357

 

  • Keystone is an OpenStack service running on the openstack-xos-ctrl1 node; it can be accessed from XOS through HAProxy on the service-lb1 node, which acts as the load-balancer front end, and then through an Apache server in a reverse-proxy configuration (a minimal client sketch follows this list)

  • The Keystone database is maintained in a MySQL server
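The sketch below shows what a direct Keystone query might look like, listing the registered services and endpoints. Credentials and the admin URL are placeholders, not CiTC defaults.

```python
# keystone_check.py - minimal sketch: authenticate and read the service catalogue.
from keystoneclient.v2_0 import client as keystone_client

keystone = keystone_client.Client(
    username="admin",                           # placeholder
    password="admin_password",                  # placeholder
    tenant_name="admin",                        # placeholder
    auth_url="http://service-lb1:35357/v2.0")   # admin endpoint via the HAProxy front end

# Expect entries for nova, neutron, glance, cinder, etc.
for service in keystone.services.list():
    print(service.name, service.type)
for endpoint in keystone.endpoints.list():
    print(endpoint.publicurl)
```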

Glance

Glance provides the image and image-metadata services in OpenStack. Its image services include discovering, registering, and retrieving virtual machine images. Glance exposes a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.

 

The following diagram describes the Glance OpenStack Cluster configuration.

 

glance-xos.jpg

Connection details

 

Service Name      Node name             Frontend port    Backend port
glance-api        openstack-xos-ctrl1   9292             19292
glance-registry   openstack-xos-ctrl1   9191             19191

 

  • The components of Glance, glance-api and glance-registry, communicate with each other through the ZeroMQ messaging server

  • Glance can be accessed from XOS through HAProxy on the service-lb1 node, which acts as the load-balancer front end, and then through an Apache server in a reverse-proxy configuration (a minimal client sketch follows this list)

  • Glance runs on the openstack-xos-ctrl1 node

  • The Glance database is maintained in a MySQL server
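The sketch below lists the images registered with Glance by first authenticating against Keystone and then resolving the image endpoint from the service catalogue. Credentials and the auth URL are placeholders, not CiTC defaults.

```python
# glance_check.py - minimal sketch: list Glance images.
import glanceclient
from keystoneclient.v2_0 import client as keystone_client

# Authenticate once, then look up the Glance endpoint from the catalogue
# rather than hard-coding the 9292/19292 ports from the table above.
keystone = keystone_client.Client(
    username="admin", password="admin_password", tenant_name="admin",
    auth_url="http://service-lb1:5000/v2.0")      # placeholders
glance_url = keystone.service_catalog.url_for(service_type="image")

glance = glanceclient.Client("2", endpoint=glance_url,
                             token=keystone.auth_token)

# The vSG and other CORD service images should appear here once loaded.
for image in glance.images.list():
    print(image.id, image.name, image.status)
```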

Cinder

Cinder is the block storage service for OpenStack. It virtualizes block storage devices and presents the storage resources to end users through the reference implementation (LVM); the resulting volumes are consumed by Nova.

The following diagram describes the Cinder OpenStack Cluster configuration.

 

cinder-xos.jpg

 

Connection details

 

Service Name    Node name             Frontend port    Backend port
cinder-api      openstack-xos-ctrl1   8776             18776

 

  • Cinder is deployed on the openstack-xos-ctrl1 node

  • Cinder can be accessed from XOS through HAProxy on the ‘service-lb1’ node, which acts as the load-balancer front end, and then through an Apache server in a reverse-proxy configuration (a minimal client sketch follows this list)

  • In CiTC, users are provided with a 20 GB LVM volume as block storage, which resides on openstack-xos-ctrl1.
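The sketch below lists the Cinder volumes carved out of that LVM backend, using python-cinderclient. Credentials and the auth URL are placeholders, not CiTC defaults.

```python
# cinder_check.py - minimal sketch: list block-storage volumes.
from cinderclient import client as cinder_client

cinder = cinder_client.Client(
    "2",                                # volume API version
    "admin",                            # username (placeholder)
    "admin_password",                   # password (placeholder)
    "admin",                            # tenant/project (placeholder)
    auth_url="http://service-lb1:5000/v2.0")

# Volumes are backed by the LVM storage on openstack-xos-ctrl1.
for volume in cinder.volumes.list():
    print(volume.id, volume.name, "%d GB" % volume.size, volume.status)
```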

ONOS

ONOS is the SDN controller responsible for providing networking in this version of OpenCORD. OpenStack communicates with ONOS through the networking_onos service plugin.

The following diagram describes the ONOS cluster configuration.

 

cord_onos_detail_single.png

Connection details

 

Service Name     Node name         Frontend port    Backend port
ONOS-GUI         onos-xos-ctrl1    8181             8181
ONOS-openflow    onos-xos-ctrl1    6633, 6653       6633, 6653

 

  • ONOS manages the OVS switches on the compute nodes through the OVSDB protocol and configures OpenFlow rules as needed (a REST query sketch follows this list)

  • All network calls, such as network creation and port updates, are generated by the Neutron service on the openstack-xos-ctrl1 node
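The GUI port (8181) also exposes the ONOS northbound REST API, which is a convenient way to inspect what the controller sees. The sketch below queries devices and flow rules with the requests library; the 'onos'/'rocks' credentials are the usual ONOS defaults and may differ in a given deployment.

```python
# onos_check.py - minimal sketch: query ONOS over its northbound REST API.
import requests

ONOS_URL = "http://onos-xos-ctrl1:8181/onos/v1"
AUTH = ("onos", "rocks")   # common ONOS defaults; may differ in CiTC

# Devices: the OVS instances on the compute nodes managed via OVSDB/OpenFlow.
devices = requests.get(ONOS_URL + "/devices", auth=AUTH).json()
for device in devices["devices"]:
    print(device["id"], "available:", device.get("available"))

# Flow rules installed by the CORD applications.
flows = requests.get(ONOS_URL + "/flows", auth=AUTH).json()
print("flow rules:", len(flows["flows"]))
```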

Sample use case

The section below provides an overview of how a common CORD use case (i.e., R-CORD) can be provisioned on CiTC. To facilitate learning and experimentation for first-time users, a test subscriber is provisioned during bring-up, and a vSG service comes up for it as part of cluster creation. Subscriber clients are simulated using LXC containers to test the use case.

R-CORD is a complex use case, and we recommend that you understand the components of the solution at http://opencord.org/wp-content/uploads/2016/10/BBWF-CORD.pdf and the other information at http://opencord.org/collateral/ before experimenting with the solution. More detailed learning modules on R-CORD and other applications of CORD are available at http://sdcloud.criterionnetworks.com.

Datapath for vSG

A datapath is a collection of functional units that perform data-processing operations. The following diagram describes the vSG datapath network configuration in OpenStack.

 

cord_datapath_vsg.png

 

  • As indicated, a vSG (virtual Subscriber Gateway) VM is deployed on the ‘openstack-xos-cp1’ and ‘openstack-xos-cp2’ nodes

  • External connectivity for subscribers is provided by the vSG VM through its DHCP server and a vRouter

  • Each vSG VM has an associated VLAN and is mapped to a specific cTAG and sTAG

  • Subscribers, simulated as Linux containers, carry a specific cTAG and sTAG in order to reach the appropriate vSG VM

  • For every new sTAG on the vOLT device a vSG VM is created, and for every cTAG on the vOLT device a Docker container running a DHCP server is created inside the vSG VM

  • A subscriber with the matching sTAG and cTAG can reach the vSG VM to obtain an IP address and can then connect to the internet through the vRouter in the vSG VM

  • Subscribers simulated as Linux containers are deployed on the service-lb1 node over the ‘subbr’ bridge. Each container reaches the vSG VM on the compute nodes with a specific VLAN ID through a subscriber VXLAN tunnel; based on the VLAN ID, the corresponding vSG VM responds and the subscriber container obtains an IP address (a minimal tagging sketch follows this list).
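To make the sTAG/cTAG plumbing concrete, the sketch below shows one way an emulated subscriber's interface could be stacked with an outer and an inner VLAN before requesting an address over DHCP. The tag values (222/111) and the interface name are purely illustrative, not the tags actually provisioned in a given CiTC cluster.

```python
# subscriber_vlan.py - illustrative sketch: stack an sTAG/cTAG pair on an
# emulated subscriber's interface (inside the LXC container) and run DHCP.
# Tag values and the interface name are hypothetical.
import subprocess

STAG, CTAG, IFACE = 222, 111, "eth0"   # illustrative values only

def sh(cmd):
    subprocess.check_call(cmd.split())

# Outer (sTAG) VLAN on the subscriber interface, then the inner (cTAG) VLAN
# stacked on top of it, mirroring the double tagging the vSG expects.
sh("ip link add link {0} name {0}.{1} type vlan id {1}".format(IFACE, STAG))
sh("ip link add link {0}.{1} name {0}.{1}.{2} type vlan id {2}".format(IFACE, STAG, CTAG))
sh("ip link set {0}.{1} up".format(IFACE, STAG))
sh("ip link set {0}.{1}.{2} up".format(IFACE, STAG, CTAG))

# Request an address from the DHCP server inside the matching vSG container.
sh("dhclient {0}.{1}.{2}".format(IFACE, STAG, CTAG))
```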