Introduction
This page describes how to set up a working development environment for developing with VTN. It includes manual build, configuration, and installation of ONOS, OpenStack, and XOS, which may help in understanding how VTN interacts with the other software components. For those who just want to try VTN out, it is recommended to try CORD-in-a-Box. Note that these instructions assume you are familiar with ONOS and OpenStack; they do not cover installation or troubleshooting of those services. If you need such a guide, please see the ONOS (http://wiki.onosproject.org) and OpenStack (http://docs.openstack.org) documentation, respectively.
You will need:
- Ubuntu machines for the ONOS cluster
- Ubuntu machines for the OpenStack controller (at least 4 GB RAM is recommended) and compute nodes (at least 8 GB RAM is recommended)
- Ubuntu machine for XOS
Installation Steps
Pre-requisites on the OpenStack compute nodes
1. Upgrade OVS version to 2.3.0 or later. This guide works very well for me (don't forget to change the version in the guide to 2.3.0 or later).
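You can confirm the installed version afterwards:

$ ovs-vsctl --version     # the reported Open vSwitch version should be 2.3.0 or later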
2. Set OVSDB passive mode.
Set OVSDB to passive mode on the compute nodes by running the following command.
$ ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:[host_ip]
Or you can make the setting permanent by adding the following line to /usr/share/openvswitch/scripts/ovs-ctl, right after the set ovsdb-server "$DB_FILE" line. After modifying the script, restart the openvswitch-switch service.
set "$@" --remote=ptcp:6640
Check that ovsdb-server is listening on TCP port 6640.
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:6640            0.0.0.0:*               LISTEN
3. Set up SSH keys for VTN to access the compute nodes.

VTN logs in to the compute nodes as root over SSH. Make sure the ONOS node's public key is registered in root's authorized_keys on each compute node, keep a copy of the private key as ~/.ssh/node_key, and check that the login succeeds.

sdn@onos:~$ cp ~/.ssh/id_rsa ~/.ssh/node_key
sdn@onos:~$ ssh -i ~/.ssh/node_key root@[compute-01]
(check that the login is successful)
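If the sdn user on the ONOS node does not already have a key pair registered on the compute nodes, here is a minimal sketch of creating and distributing it (assuming root login on the compute node is possible for the initial copy):

sdn@onos:~$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # only if no key pair exists yet
sdn@onos:~$ ssh-copy-id root@[compute-01]                # appends the public key to root's authorized_keys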
Installing ONOS and VTN
1. Download, build, and install ONOS.
Please refer to https://wiki.onosproject.org/display/ONOS/Tutorials and https://wiki.onosproject.org/display/ONOS/Developer+Guide.
2. Activate the following ONOS applications (the version numbers in the example may differ).
onos> apps -a -s
*  10 org.onosproject.optical-model 1.10.0.SNAPSHOT Optical information model
*  42 org.onosproject.drivers       1.10.0.SNAPSHOT Default device drivers
*  51 org.onosproject.openflow-base 1.10.0.SNAPSHOT OpenFlow Provider
Other ONOS applications, especially any kind of host provider, can conflict with VTN, so be careful when activating additional applications.
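If any of the applications above are not yet active, they can be activated from the ONOS CLI; a minimal sketch using the application names listed above:

onos> app activate org.onosproject.drivers
onos> app activate org.onosproject.openflow-base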
3. Download and build cord-config and vtn.
$ git clone https://gerrit.opencord.org/config
$ cd config && mci && cd ..
$ git clone https://gerrit.opencord.org/vtn
$ cd vtn && mci && cd ..

(mci is the ONOS developer alias for mvn clean install; returning to the parent directory keeps the relative paths in the next step valid)
4. Run the commands below from the ONOS build machine to install cord-config and VTN on the running ONOS instance.
$ onos-app $OC1 install! config/target/cord-config-1.2-SNAPSHOT.oar
$ onos-app $OC1 install! vtn/target/vtn-1.2-SNAPSHOT.oar
Check that all of the following applications are now active.

onos> apps -a -s
*  10 org.onosproject.optical-model 1.10.0.SNAPSHOT Optical information model
*  21 org.onosproject.ovsdb-base    1.10.0.SNAPSHOT OVSDB Provider
*  24 org.onosproject.drivers.ovsdb 1.10.0.SNAPSHOT OVSDB Device Drivers
*  42 org.onosproject.drivers       1.10.0.SNAPSHOT Default device drivers
*  51 org.onosproject.openflow-base 1.10.0.SNAPSHOT OpenFlow Provider
*  98 org.opencord.config           1.2.SNAPSHOT    CORD configuration meta application
*  99 org.opencord.vtn              1.2.SNAPSHOT    VTN App
Installing OpenStack with DevStack
1. Download and install ONOS ML2 mechanism driver.
$ mkdir -p /opt/stack && cd /opt/stack
$ git clone https://github.com/openstack/networking-onos.git
$ sudo pip install ./networking-onos
2. Configure the ONOS ML2 mechanism driver by editing ml2_conf_onos.ini (under /opt/stack/networking-onos/etc/neutron/plugins/ml2, the path referenced by the DevStack local.conf below).
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://[onos_ip]:8181/onos/cordvtn
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks
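It is worth verifying that the OpenStack controller can reach the ONOS REST interface with these credentials before running DevStack; a quick check against ONOS's standard REST API (onos/rocks are the defaults used above):

$ curl -u onos:rocks http://[onos_ip]:8181/onos/v1/applications
# a JSON list of applications that includes org.opencord.vtn indicates the URL and credentials are correct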
3. Download DevStack.
$ git clone https://git.openstack.org/openstack-dev/devstack -b stable/mitaka
4. Create local.conf for each OpenStack node.
Here is a sample local.conf for the controller node, which runs the Keystone, Nova, Neutron, and Glance services.
[[local|localrc]]
HOST_IP=10.90.0.58
SERVICE_HOST=10.90.0.58
RABBIT_HOST=10.90.0.58
DATABASE_HOST=10.90.0.58
ADMIN_PASSWORD=[admin_password]
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
FORCE_CONFIG_DRIVE=True
USE_SSL=True

# Networks
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_PLUGIN_EXTRA_CONF_PATH=/opt/stack/networking-onos/etc/neutron/plugins/ml2
Q_PLUGIN_EXTRA_CONF_FILES=(ml2_conf_onos.ini)
NEUTRON_CREATE_INITIAL_NETWORKS=False

# Services
enable_service q-svc
disable_service n-net
disable_service n-cpu
disable_service tempest
disable_service c-sch
disable_service c-api
disable_service c-vol

# Branches
GLANCE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
Here is a sample local.conf for a compute node, which runs the Nova compute agent.
[[local|localrc]]
HOST_IP=10.90.0.64        # local IP
SERVICE_HOST=162.243.x.x  # controller IP, must be reachable from your test browser for console access from Horizon
RABBIT_HOST=10.90.0.58
DATABASE_HOST=10.90.0.58
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql

NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
LIBVIRT_TYPE=kvm

# Services
ENABLED_SERVICES=n-cpu,neutron

# Branches
NOVA_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
For those who install OpenStack with other deployment tools, here are the Nova and Neutron configurations needed to use VTN as an ML2 mechanism driver. Also, make sure to enable SSL for all services.
# nova.conf
[DEFAULT]
force_config_drive = True
network_api_class = nova.network.neutronv2.api.API

# neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# ML2 plugin configuration (e.g. ml2_conf.ini)
[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2
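After updating these files, restart the affected services so the new driver and options take effect; a sketch for Ubuntu package-based installs (service names may differ with your deployment tool):

controller$ sudo service nova-api restart
controller$ sudo service neutron-server restart
compute-01$ sudo service nova-compute restart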
5. Run DevStack.
$ ./stack.sh
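Once stack.sh completes on all nodes, a quick sanity check from the controller confirms that the APIs are up and the compute nodes have registered (assuming DevStack was cloned under /opt/stack and the admin password from local.conf):

$ source /opt/stack/devstack/openrc admin admin
$ neutron net-list       # should return without error (no networks yet, since NEUTRON_CREATE_INITIAL_NETWORKS=False)
$ nova service-list      # nova-compute should be listed and enabled for each compute node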
Installing and Running XOS and R-CORD profile
1. On the XOS machine, download and run the cord-bootstrap.sh script under /opt.
$ curl -o ~/cord-bootstrap.sh https://raw.githubusercontent.com/opencord/platform-install/master/scripts/cord-bootstrap.sh
$ bash cord-bootstrap.sh
2. Edit the rcord inventory file to add the compute nodes. The hosts file (/etc/hosts) should contain the IP addresses of the compute nodes as well.
; rcord configuration

[all:vars]
cord_profile=rcord

[config]
localhost ansible_connection=local

[head]
localhost ansible_connection=local

[build]
localhost ansible_connection=local

[compute]
compute-01
compute-02
3. Edit vtn-service.yaml.j2 to set the ONOS rest_port to 8181, ovsdbPort to 6640, controllerPort to onos-cord:6653, and sshKeyFile to the path of the node_key created earlier.
topology_template:
  node_templates:

    service#ONOS_CORD:
      type: tosca.nodes.ONOSService
      requirements:
      properties:
          kind: onos
          view_url: /admin/onos/onosservice/$id$/
          no_container: true
          rest_hostname: onos-cord
          rest_port: 8181                          // FIX THIS VALUE
          replaces: service_ONOS_CORD

    service#vtn:
      type: tosca.nodes.VTNService
      properties:
          view_url: /admin/vtn/vtnservice/$id$/
          privateGatewayMac: 00:00:00:00:00:01
          localManagementIp: {{ management_network_ip }}
          ovsdbPort: 6640                          // FIX THIS VALUE
          sshUser: root
          sshKeyFile: ~/.ssh/node_key              // FIX THIS VALUE
          sshPort: 22
          xosEndpoint: http://xos:{{ xos_ui_port }}/
          xosUser: {{ xos_admin_user }}
          xosPassword: {{ xos_admin_pass }}
          replaces: service_vtn
          vtnAPIVersion: 2
          controllerPort: onos-cord:6653           // FIX THIS VALUE
4. Now we need to manually create or edit several files that MAAS handles for us in a CiaB install. They are:
- /etc/hosts, extra_hosts
- /root/.ssh/id_rsa, /root/.ssh/id_rsa.pub, /root/node_key
- /root/openstack-compute.yaml
- /root/openstack-compute-vtn.yaml
- /root/cord/build/platform-install/profile_manifests/local_vars.yml
/etc/hosts, extra_hosts
First, edit the hosts file and add the following entries.
127.0.0.1        localhost xos.cord.lab xos xos-gui xos-ws xos-chameleon
COMPUTE_01_IP    compute-01
COMPUTE_02_IP    compute-02
OPENSTACK_IP     openstack keystone.cord.lab
ONOS_CORD_IP     onos-cord
You also need to edit docker-compose.yml.j2 to support the extra_hosts option.
  xos_ui:
    image: {{ deploy_docker_registry }}xosproject/xos-ui:{{ deploy_docker_tag }}
    networks:
{% for network in xos_docker_networks %}
      - {{ network }}
{% endfor %}
{% if extra_hosts %}                               // ADD THIS BLOCK - START
    extra_hosts:
{% for extra_host in extra_hosts %}
      - {{ extra_host }}
{% endfor %}
{% endif %}                                        // ADD THIS BLOCK - END

{% if svc.synchronizer is not defined or svc.synchronizer %}
  {{ svc.name }}-synchronizer:
{% if extra_hosts %}                               // ADD THIS BLOCK - START
    extra_hosts:
{% for extra_host in extra_hosts %}
      - {{ extra_host }}
{% endfor %}
{% endif %}                                        // ADD THIS BLOCK - END
    image: {{ deploy_docker_registry }}xosproject/{{ svc.name }}-synchronizer:{{ deploy_docker_tag }}
    networks:
{% for network in xos_docker_networks %}
      - {{ network }}
{% endfor %}
/root/.ssh/id_rsa, /root/.ssh/id_rsa.pub, /root/node_key
Copy the SSH keys created in prerequisites step 3 from the ONOS node to the Ansible user's .ssh directory, /root/.ssh in this example. Copy id_rsa to /root/node_key as well.
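A minimal sketch of the copy, assuming the XOS machine can reach the ONOS node as the sdn user from the prerequisites:

xos:~# scp sdn@onos:.ssh/id_rsa /root/.ssh/id_rsa
xos:~# scp sdn@onos:.ssh/id_rsa.pub /root/.ssh/id_rsa.pub
xos:~# cp /root/.ssh/id_rsa /root/node_key
xos:~# chmod 600 /root/.ssh/id_rsa /root/node_key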
/root/openstack-compute.yaml
Create the /root/openstack-compute.yaml file. Here is a sample with two compute nodes, compute-01 and compute-02. You only need to fix the hostnames according to your setup. Note that each hostname must match the hostname registered with the OpenStack service; you can check it with the "nova host-list" command.
tosca_definitions_version: tosca_simple_yaml_1_0

imports:
   - custom_types/xos.yaml

description: Adds OpenStack compute nodes

topology_template:
  node_templates:

    # Site/Deployment, fully defined in deployment.yaml
    mysite:
      type: tosca.nodes.Site
      properties:
        no-delete: true
        no-create: true
        no-update: true

    MyDeployment:
      type: tosca.nodes.Deployment
      properties:
        no-delete: true
        no-create: true
        no-update: true

    # OpenStack compute nodes
    compute-01:                                    // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Node
      requirements:
        - site:
            node: mysite
            relationship: tosca.relationships.MemberOfSite
        - deployment:
            node: MyDeployment
            relationship: tosca.relationships.MemberOfDeployment

    compute-02:                                    // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Node
      requirements:
        - site:
            node: mysite
            relationship: tosca.relationships.MemberOfSite
        - deployment:
            node: MyDeployment
            relationship: tosca.relationships.MemberOfDeployment
/root/openstack-compute-vtn.yaml
Create the openstack-compute-vtn.yaml file and place it under /root/. Here is a sample for the two compute nodes. You will likely need to fix most of the fields, including the hostname, dataPlaneIntf, and dataPlaneIp. Refer to the VTN Configuration Guide for a description of these fields.
tosca_definitions_version: tosca_simple_yaml_1_0

imports:
   - custom_types/xos.yaml

description: Configures VTN networking for OpenStack compute nodes

topology_template:
  node_templates:

    # VTN ONOS app, fully defined in vtn-service.yaml
    service#ONOS_CORD:
      type: tosca.nodes.ONOSService
      properties:
        no-delete: true
        no-create: true
        no-update: true

    # VTN networking for OpenStack Compute Nodes

    # Compute node, fully defined in compute-nodes.yaml
    compute-01:                                    // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Node
      properties:
        no-delete: true
        no-create: true
        no-update: true

    # VTN bridgeId field for node compute-01
    compute-01_bridgeId_tag:                       // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Tag
      properties:
        name: bridgeId
        value: of:0000000000000001                 // FIX THE BRIDGE ID IF NECESSARY
      requirements:
        - target:
            node: compute-01                       // FIX THE HOSTNAME IF NECESSARY
            relationship: tosca.relationships.TagsObject
        - service:
            node: service#ONOS_CORD
            relationship: tosca.relationships.MemberOfService

    # VTN dataPlaneIntf field for node compute-01
    compute-01_dataPlaneIntf_tag:                  // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Tag
      properties:
        name: dataPlaneIntf
        value: veth1                               // FIX THE DATA PLANE INTERFACE
      requirements:
        - target:
            node: compute-01                       // FIX THE HOSTNAME IF NECESSARY
            relationship: tosca.relationships.TagsObject
        - service:
            node: service#ONOS_CORD
            relationship: tosca.relationships.MemberOfService

    # VTN dataPlaneIp field for node compute-01
    compute-01_dataPlaneIp_tag:                    // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Tag
      properties:
        name: dataPlaneIp
        value: 10.2.2.28/24                        // FIX THE DATA PLANE IP
      requirements:
        - target:
            node: compute-01                       // FIX THE HOSTNAME IF NECESSARY
            relationship: tosca.relationships.TagsObject
        - service:
            node: service#ONOS_CORD
            relationship: tosca.relationships.MemberOfService

    # Compute node, fully defined in compute-nodes.yaml
    compute-02:                                    // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Node
      properties:
        no-delete: true
        no-create: true
        no-update: true

    # VTN bridgeId field for node compute-02
    compute-02_bridgeId_tag:                       // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Tag
      properties:
        name: bridgeId
        value: of:0000000000000002                 // FIX THE BRIDGE ID IF NECESSARY
      requirements:
        - target:
            node: compute-02                       // FIX THE HOSTNAME IF NECESSARY
            relationship: tosca.relationships.TagsObject
        - service:
            node: service#ONOS_CORD
            relationship: tosca.relationships.MemberOfService

    # VTN dataPlaneIntf field for node compute-02
    compute-02_dataPlaneIntf_tag:                  // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Tag
      properties:
        name: dataPlaneIntf
        value: veth1                               // FIX THE DATA PLANE INTERFACE
      requirements:
        - target:
            node: compute-02                       // FIX THE HOSTNAME IF NECESSARY
            relationship: tosca.relationships.TagsObject
        - service:
            node: service#ONOS_CORD
            relationship: tosca.relationships.MemberOfService

    # VTN dataPlaneIp field for node compute-02
    compute-02_dataPlaneIp_tag:                    // FIX THE HOSTNAME IF NECESSARY
      type: tosca.nodes.Tag
      properties:
        name: dataPlaneIp
        value: 10.2.2.29/24                        // FIX THE DATA PLANE IP
      requirements:
        - target:
            node: compute-02                       // FIX THE HOSTNAME IF NECESSARY
            relationship: tosca.relationships.TagsObject
        - service:
            node: service#ONOS_CORD
            relationship: tosca.relationships.MemberOfService
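If br-int already exists on a compute node, its datapath ID and the data plane address can be read directly; a sketch (br-int and veth1 follow the examples above and may differ in your setup):

compute-01$ sudo ovs-vsctl get bridge br-int datapath_id   # bridgeId is "of:" followed by this 16-digit hex value
compute-01$ ip addr show veth1                              # dataPlaneIp is the address configured on the data plane interface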
/root/cord/build/platform-install/profile_manifests/local_vars.yml
Edit local_vars.yml for extra configurations.
# local_custom_vars.yaml
# Put any local customizations to variables in this file.

extra_hosts: [ "onos-cord:ONOS_IP",
               "compute-01:COMPUTE_01_IP",
               "compute-02:COMPUTE_02_IP",
               "keystone.cord.lab:OPENSTACK_CTRL_IP",
               "xos-core.cord.lab:172.18.0.1",
               "cordloghost:172.18.0.1" ]

head_cord_profile_dir: "{{ ansible_user_dir + '/cord_profile' }}"
head_cord_dir: "{{ ansible_user_dir + '/cord' }}"

keystone_admin_password: OPENSTACK_ADMIN_PASSWD
5. Lastly, you need to install ElasticSearch.
$ cd ~/cord/build/platform-install/
$ ansible-playbook -i inventory/head-localhost deploy-elasticstack-playbook.yml
6. All configurations are ready. Deploy XOS with the R-CORD profile using the "xos-deploy" alias below. You can also tear down the R-CORD profile with the "xos-teardown" alias.
alias xos-teardown="rm -rf /opt/credentials; pushd /root/cord/build/platform-install; ansible-playbook -i inventory/rcord teardown-playbook.yml;"

alias xos-deploy="mkdir /root/cord_profile; cp /root/openstack-*.yaml /root/cord_profile/; pushd /root/cord/build/platform-install; ansible-playbook -i inventory/rcord deploy-xos-playbook.yml; mkdir /opt/credentials; cp /root/cord/build/platform-install/credentials/* /opt/credentials/;"
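With the aliases defined in the root user's shell (for example in ~/.bashrc, which is just one place to keep them), deployment and teardown become single commands:

$ xos-deploy
$ xos-teardown    # only when you want to remove the profile again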
7. Configure VTN by pushing the network configurations created above.
$ docker exec rcord_xos_ui_1 python tosca/run.py xosadmin@opencord.org /opt/cord_profile/vtn-service.yaml
$ docker exec rcord_xos_ui_1 python tosca/run.py xosadmin@opencord.org /opt/cord_profile/openstack-compute.yaml
$ docker exec rcord_xos_ui_1 python tosca/run.py xosadmin@opencord.org /opt/cord_profile/openstack-compute-vtn.yaml
8. Check that all compute nodes are in COMPLETE state and that all necessary networks have been created.
$ ssh -p 8101 karaf@onos-cord

onos> cordvtn-nodes
Hostname      Management IP    Data IP        Data Iface   Br-int                State
compute-01    10.1.1.122/24    10.2.2.28/24   veth1        of:0000000000000001   COMPLETE
compute-02    10.1.1.126/24    10.2.2.29/24   veth1        of:0000000000000002   COMPLETE
Total 2 nodes

onos> cordvtn-networks
ID                                     Name                 Type               VNI    Subnet          Service IP
5302bef6-a070-4fb9-a6b9-bea721abdcba   management           MANAGEMENT_LOCAL   1073   172.27.0.0/24   172.27.0.1
bfa7366e-2622-416a-bed0-9e310d76530e   mysite_vsg-access    VSG                1029   10.0.2.0/24     10.0.2.1
d04edff5-9666-4613-915e-f6e42aa7cd94   public               PUBLIC             1009   10.6.1.192/26   10.6.1.193
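If a node is stuck in a state other than COMPLETE, the cordvtn app provides a CLI command to re-run the node bootstrap; a sketch from the same ONOS CLI session (the hostname follows the example above):

onos> cordvtn-node-init compute-01
onos> cordvtn-nodes        # the state should eventually change to COMPLETE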
9. Run pod-test-playbook.yml to bring up a test VSG tenant and on-board the exampleservice. Note that you'll need to disable the "maas-test-client-install" role for the test to succeed.
Comment out the following lines in pod-test-playbook.yml:

#- name: Create test client
#  hosts: head
#  become: yes
#  roles:
#    - maas-test-client-install
$ cd /root/cord/build/platform-install; ansible-playbook -i inventory/rcord pod-test-playbook.yml
10. Check that the VSG instance is up and running. You should be able to log in to the VSG VM from the compute node using its management IP (172.27.0.X) and the private key /root/cord_profile/key_import/vsg_rsa.
compute-01$ ssh -i vsg_rsa ubuntu@172.27.0.2
References
[1] CORD platform install: https://github.com/opencord/platform-install
[2] CORD CiaB: https://github.com/opencord/cord/blob/master/docs/quickstart.md