This tutorial uses ExampleService to illustrate how to write and on-board a service in CORD. ExampleService is a multi-tenant service that instantiates a VM instance on behalf of each tenant, and runs an Apache web server in that VM. This web server is then configured to serve a tenant-specified message (a string), where the tenant is able to set this message using CORD's administrative interface. From a service modeling perspective, ExampleService extends the base Service model with two fields:

- `service_message`, a string that contains a message to display for the service as a whole (i.e., to all tenants of the service).
- `tenant_message`, a string that is displayed for a specific Tenant.
Supplemental Information
ExampleService: The snippets of code used throughout this page are available on GitHub.
Summary
The result of preparing ExampleService for on-boarding is the following set of files, all located in the xos directory of the ExampleService repository. (There are other helper files, as described throughout this page.)
| Component | Source Code (in https://github.com/opencord/exampleservice/) |
| --- | --- |
| Data Model | xos/exampleservice.xproto |
| TOSCA API | xos/tosca/resources/exampleservice.py |
| Synchronizer | xos/synchronizer/steps/sync_exampletenant.py |
| On-Boarding Spec | xos/exampleservice-onboard.yaml |
Development Environment
For this tutorial we recommend using CORD-in-a-Box (CiaB) as your development environment. By default CiaB brings up OpenStack, ONOS, and XOS running the R-CORD collection of services. This tutorial demonstrates how to add a new customer-facing service to R-CORD.
CiaB includes a build machine, a head node, switches, and a compute node all running as VMs on a single host. Before proceeding you should familiarize yourself with the CiaB environment.
Supplemental Information
CORD-in-a-Box: Instructions on how to install a development environment.
Development Loop
Once you’ve prepared your CiaB, the development loop for changing/building/testing service code involves these steps:

1. Make changes to your service code and propagate them to your CiaB host. There are a number of ways to propagate changes to the host depending on developer preference, including using gerrit draft reviews, git branches, rsync, scp, etc.

2. Build XOS container images on the build machine (`corddev` VM) and publish them to the head node (`prod` VM). For this step, run the following commands in the `corddev` VM:

```shell
cd /cord/build
./gradlew -PdeployConfig=config/cord_in_a_box.yml PIprepPlatform
./gradlew :platform-install:buildImages
./gradlew -PdeployConfig=config/cord_in_a_box.yml :platform-install:publish
./gradlew -PdeployConfig=config/cord_in_a_box.yml :orchestration:xos:publish
```

3. Launch the new XOS containers on the head node (`prod` VM). For this step, run the following commands in the `prod` VM (after the aliases have been defined for the first time, it's only necessary to run line 4):

```shell
alias xos-teardown="pushd /opt/cord/build/platform-install; ansible-playbook -i inventory/head-localhost --extra-vars @/opt/cord/build/genconfig/config.yml teardown-playbook.yml; popd"
alias xos-launch="pushd /opt/cord/build/platform-install; ansible-playbook -i inventory/head-localhost --extra-vars @/opt/cord/build/genconfig/config.yml launch-xos-playbook.yml; popd"
alias compute-node-refresh="pushd /opt/cord/build/platform-install; ansible-playbook -i /etc/maas/ansible/pod-inventory --extra-vars=@/opt/cord/build/genconfig/config.yml compute-node-refresh-playbook.yml; popd"
xos-teardown; xos-launch; compute-node-refresh
```

4. Test and verify your changes.

5. Go back to step #1.
Supplemental Information
CiaB Development Workflow: A description of the XOS development workflow using CiaB.
Define a Model
Create a file named `exampleservice.xproto` in your service's `xos` directory. This file encodes the models in the service in a format called xproto, which is a combination of Google Protocol Buffers and some XOS-specific annotations that facilitate the generation of service components, such as the gRPC and REST APIs, security policies, and database models, among other things.
It consists of three parts:
- The Service model, which manages the service as a whole.
- The Tenant model, which manages tenant-specific (per-service-instance) state.
- Custom model extensions, which let you add arbitrary methods and properties to the generated Django models. Note that the preferred strategy for extending a model is to do so on top of the generated APIs (gRPC or REST); however, when placing such code with a model is unavoidable, a custom model extension can be used.
Defining the Service model
A Service model extends (inherits from) XOS' base Service model. At its head is a set of option declarations: the name of the service as a configuration string (`name`) and as a human-readable one (`verbose_name`). Then follows a set of field definitions.
```
message ExampleService (Service){
    option name = "exampleservice";
    option verbose_name = "Example Service";

    required string service_message = 1 [help_text = "Service Message to Display", max_length = 254, null = False, db_index = False, blank = False];
}
```
Defining the Tenant model
Your tenant model will extend the core TenantWithContainer class, which is a Tenant that creates a VM instance:
```
message ExampleTenant (TenantWithContainer){
    option name = "exampletenant";
    option verbose_name = "Example Tenant";

    required string tenant_message = 1 [help_text = "Tenant Message to Display", max_length = 254, null = False, db_index = False, blank = False];
}
```
This xproto field corresponds to the following generated Django field, which specifies the message displayed on a per-Tenant basis:

```python
tenant_message = models.CharField(max_length=254, help_text="Tenant Message to Display")
```
Think of this as a tenant-specific (per service instance) parameter.
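Once the service is on-boarded, the generated Django model can be manipulated like any other. A minimal sketch, assuming the import path used elsewhere on this page and an already-created tenant:

```python
# Sketch: set a tenant's message through the generated ORM model.
# The import path follows the convention used elsewhere on this page;
# the object lookup is illustrative.
from services.exampleservice.models import ExampleTenant

tenant = ExampleTenant.objects.first()   # an existing tenant, if any
tenant.tenant_message = "Hello, tenant!"
tenant.save()  # the synchronizer picks up this change on its next run
```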
Custom Code Insertions
The declarative xproto part of models is used to generate model definitions in the ORM (Django). It may sometimes be desirable to extend these ORM model definitions with arbitrary code, such as computed fields that process the stored fields in an object and provide a computed result. There are three parts to doing this. The first part is to add a "legacy" option to your xproto file:
```
option legacy = "True";
```
The second part is to generate an extension stub for yourself. To do so, run the following commands:
```shell
CORD_DIR=<location of cord directory>
XPROTO_DIR=$CORD_DIR/orchestration/xos/xos/genx
make -f $XPROTO_DIR/tool/Makefile.service service_extender PREFIX=$XPROTO_DIR
```
Dependency Errors
If you run into any dependency errors while generating the models.py stub, please install the following and retry:
```shell
$ sudo pip install git+https://github.com/sb98052/plyprotobuf@v1.2.0
$ sudo pip install jinja2
```
The above command generates a stub called models.py. The third part is to fill in this stub with your custom model extensions. Within this file, you will find definitions for each of your models as Python classes; add any additional functionality as methods on these classes. Refer to the Django documentation for the list of special methods, such as save() and delete().
```python
class ExampleService(ExampleService_del):
    class Meta:
        proxy = True

    def my_custom_method(self):
        # my custom code
        pass
```
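For instance, the computed fields mentioned above can be implemented as Python properties on the proxy class. A sketch, assuming the stub names the generated base class ExampleTenant_del following the pattern above; upper_message is a hypothetical computed field:

```python
class ExampleTenant(ExampleTenant_del):
    class Meta:
        proxy = True

    @property
    def upper_message(self):
        # A computed field: derives a result from the stored tenant_message
        return self.tenant_message.upper()
```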
The save() method is called when a model is saved. In the code below, it is overridden to call a "model policy" which in turn creates a container.
```python
def save(self, *args, **kwargs):
    super(ExampleTenant, self).save(*args, **kwargs)
    model_policy_exampletenant(self.pk)
```
Similarly, the delete() method is called to clean up this container.
```python
def delete(self, *args, **kwargs):
    self.cleanup_container()
    super(ExampleTenant, self).delete(*args, **kwargs)
```
Finally, the model policy invoked by save() is a module-level function rather than a method of the ExampleTenant class:
```python
from django.db import transaction  # needed by the model policy below

def model_policy_exampletenant(pk):
    with transaction.atomic():
        tenant = ExampleTenant.objects.select_for_update().filter(pk=pk)
        if not tenant:
            return
        tenant = tenant[0]
        tenant.manage_container()
```
Supplemental Information
Modeling Conventions: Conventions for creating models in Django.
Design of Synchronizers: Includes helpful information about data models and their relationship to Synchronizers.
Admin GUI
In CORD 3.0, a view in the GUI is auto-generated for your model. Once ExampleService has been on-boarded, you'll find a new view in the side navigation that references it.
REST API
In CORD 3.0, the REST API is auto-generated when your service is on-boarded.
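Once on-boarded, the generated endpoints can be exercised with any HTTP client. A minimal sketch, where the endpoint path follows the rest_service entry in the on-boarding recipe later on this page, and the host, port, and credentials are assumptions about your deployment:

```python
# Sketch: list ExampleService objects via the auto-generated REST API.
# The endpoint path, host, port, and credentials are assumptions;
# substitute the values for your deployment.
import requests

resp = requests.get(
    "http://head-node:9000/api/service/exampleservice/",
    auth=("xosadmin@opencord.org", "<password>"),
)
resp.raise_for_status()
for svc in resp.json():
    print(svc["id"], svc.get("service_message"))
```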
Define a TOSCA API
The TOSCA API will also be auto-generated, but not until the next release. For Dangerous-Addition, it must still be coded by hand.
Creating a TOSCA resource for your service allows your service to be configured using TOSCA. The first step in creating a TOSCA API is to create any TOSCA custom types used by your service. ExampleService's custom types are in a file called xos/exampleservice.m4: (TODO: move this to tosca/custom_types)
```
tosca_definitions_version: tosca_simple_yaml_1_0

# compile this with "m4 exampleservice.m4 > exampleservice.yaml"

# include macros
include(macros.m4)

node_types:
    tosca.nodes.ExampleService:
        derived_from: tosca.nodes.Root
        description: >
            Example Service
        capabilities:
            xos_base_service_caps
        properties:
            xos_base_props
            xos_base_service_props
            service_message:
                type: string
                required: false

    tosca.nodes.ExampleTenant:
        derived_from: tosca.nodes.Root
        description: >
            A Tenant of the example service
        properties:
            xos_base_tenant_props
            tenant_message:
                type: string
                required: false
```
The above is written in the m4 macro language and uses a couple of macros (xos_base_props, xos_base_service_props). Remember to run the m4 tool on it whenever you edit it: `m4 exampleservice.m4 > exampleservice.yaml`.
You will typically create a Python file for each important model of your service; that file specifies how to translate between the TOSCA and REST representations. Here is ExampleService's TOSCA resource for the ExampleService object, located in tosca/resources/exampleservice.py:
```python
import os
import pdb
import sys
import tempfile

sys.path.append("/opt/tosca")
from translator.toscalib.tosca_template import ToscaTemplate

from core.models import Service, User, CoarseTenant
from services.exampleservice.models import ExampleService

from xosresource import XOSResource

class XOSExampleService(XOSResource):
    provides = "tosca.nodes.ExampleService"
    xos_model = ExampleService
    copyin_props = ["view_url", "icon_url", "enabled", "published", "public_key",
                    "private_key_fn", "versionNumber", "service_message"]

    def postprocess(self, obj):
        for provider_service_name in self.get_requirements("tosca.relationships.TenantOfService"):
            provider_service = self.get_xos_object(ExampleService, name=provider_service_name)

            existing_tenancy = CoarseTenant.get_tenant_objects().filter(
                provider_service=provider_service, subscriber_service=obj)
            if existing_tenancy:
                self.info("Tenancy relationship from %s to %s already exists" % (str(obj), str(provider_service)))
            else:
                tenancy = CoarseTenant(provider_service=provider_service,
                                       subscriber_service=obj)
                tenancy.save()
                self.info("Created Tenancy relationship from %s to %s" % (str(obj), str(provider_service)))

    def can_delete(self, obj):
        if obj.slices.exists():
            self.info("Service %s has active slices; skipping delete" % obj.name)
            return False
        return super(XOSExampleService, self).can_delete(obj)
```
Define a Synchronizer
Synchronizers are processes that run continuously, checking for changes to models. When a synchronizer detects a change, it will apply that change to the underlying system. For exampleservice, the Tenant model is the model we will want to synchronize, and the underlying system is an Instance. In this case, we’re using TenantWithContainer, which creates a Virtual Machine Instance for us.
XOS Synchronizers are typically located in the xos/synchronizer directory of your service.
Create a file named `model-deps` with the contents: `{}`.

NOTE: This is used to track model dependencies using `tools/dmdot`, but that tool currently isn't working.
Create a file named `exampleservice-synchronizer.py`:
```python
#!/usr/bin/env python

# Runs the standard XOS synchronizer

import importlib
import os
import sys

synchronizer_path = os.path.join(os.path.dirname(
    os.path.realpath(__file__)), "../../synchronizers/new_base")
sys.path.append(synchronizer_path)
mod = importlib.import_module("xos-synchronizer")
mod.main()
```
The above is boilerplate. It loads and runs the default xos-synchronizer module in its own Docker container.
To configure this module, create a file named `exampleservice_from_api_config`, which specifies various configuration and logging options:
```
# Sets options for the synchronizer
[observer]
name=exampleservice
dependency_graph=/opt/xos/synchronizers/exampleservice/model-deps
steps_dir=/opt/xos/synchronizers/exampleservice/steps
sys_dir=/opt/xos/synchronizers/exampleservice/sys
log_file=console
log_level=debug
pretend=False
backoff_disabled=True
save_ansible_output=True
proxy_ssh=True
proxy_ssh_key=/opt/cord_profile/node_key
proxy_ssh_user=root
enable_watchers=True
accessor_kind=api
accessor_password=@/opt/xos/services/exampleservice/credentials/xosadmin@opencord.org
required_models=ExampleService, ExampleTenant, ServiceDependency, ServiceMonitoringAgentInfo
```
NOTE: Historically, synchronizers were named “observers”, so read `s/observer/synchronizer/` when you come upon this term in the XOS code/docs.
Create a directory within your synchronizer directory named `steps`. In `steps`, create a file named `sync_exampletenant.py`:
```python
import os
import sys

from synchronizers.new_base.SyncInstanceUsingAnsible import SyncInstanceUsingAnsible
from synchronizers.new_base.modelaccessor import *
from xos.logger import Logger, logging

parentdir = os.path.join(os.path.dirname(__file__), "..")
sys.path.insert(0, parentdir)

logger = Logger(level=logging.INFO)
```
This brings in some basic prerequisites. It also includes the models created earlier, as well as SyncInstanceUsingAnsible, which will run the Ansible playbook in the Instance VM.
```python
class SyncExampleTenant(SyncInstanceUsingAnsible):

    provides = [ExampleTenant]

    observes = ExampleTenant

    requested_interval = 0

    template_name = "exampletenant_playbook.yaml"

    service_key_name = "/opt/xos/synchronizers/exampleservice/exampleservice_private_key"

    def __init__(self, *args, **kwargs):
        super(SyncExampleTenant, self).__init__(*args, **kwargs)

    def get_exampleservice(self, o):
        if not o.provider_service:
            return None

        exampleservice = ExampleService.objects.filter(id=o.provider_service.id)

        if not exampleservice:
            return None

        return exampleservice[0]

    # Gets the attributes that are used by the Ansible template but are not
    # part of the set of default attributes.
    def get_extra_attributes(self, o):
        fields = {}
        fields['tenant_message'] = o.tenant_message
        exampleservice = self.get_exampleservice(o)
        fields['service_message'] = exampleservice.service_message
        return fields

    def delete_record(self, port):
        # Nothing needs to be done to delete an exampleservice; it goes away
        # when the instance holding the exampleservice is deleted.
        pass
```
Two optional sections, "watches" and "handle_service_monitoringagentinfo_watch_notifications" can be added to tie the service in with the A-CORD monitoring service. If you don't need monitoring, then you may omit those sections. If you wish to use the monitoring service, then please consult the monitoring service documentation for information on how to create those sections of the sync step.
Next, create a `run-from-api.sh` file for your synchronizer:
```shell
export XOS_DIR=/opt/xos
python exampleservice-synchronizer.py -C $XOS_DIR/synchronizers/exampleservice/exampleservice_from_api_config
```
Finally, create a Dockerfile for your synchronizer, name it `Dockerfile.synchronizer`, and place it in the synchronizer directory with the other synchronizer files:
```dockerfile
FROM xosproject/xos-synchronizer-base:candidate

COPY . /opt/xos/synchronizers/exampleservice

ENTRYPOINT []

WORKDIR "/opt/xos/synchronizers/exampleservice"

# Label image
ARG org_label_schema_schema_version=1.0
ARG org_label_schema_name=exampleservice-synchronizer
ARG org_label_schema_version=unknown
ARG org_label_schema_vcs_url=unknown
ARG org_label_schema_vcs_ref=unknown
ARG org_label_schema_build_date=unknown
ARG org_opencord_vcs_commit_date=unknown

LABEL org.label-schema.schema-version=$org_label_schema_schema_version \
      org.label-schema.name=$org_label_schema_name \
      org.label-schema.version=$org_label_schema_version \
      org.label-schema.vcs-url=$org_label_schema_vcs_url \
      org.label-schema.vcs-ref=$org_label_schema_vcs_ref \
      org.label-schema.build-date=$org_label_schema_build_date \
      org.opencord.vcs-commit-date=$org_opencord_vcs_commit_date

CMD bash -c "cd /opt/xos/synchronizers/exampleservice; ./run-from-api.sh"
```
Create Ansible Playbooks
In the same `steps` directory, create an Ansible playbook named `exampletenant_playbook.yaml` (the name referenced by `template_name` in the sync step above), which is the “master playbook” for this set of plays:
```yaml
---
# exampletenant_playbook

- hosts: "{{ instance_name }}"
  connection: ssh
  user: ubuntu
  sudo: yes
  gather_facts: no
  vars:
    - tenant_message: "{{ tenant_message }}"
    - service_message: "{{ service_message }}"
```
This sets some basic configuration, specifies the host this Instance will run on, and the two variables that we’re passing to the playbook.
```yaml
  roles:
    - install_apache
    - create_index
```
This example uses Ansible’s Playbook Roles to organize steps, provide default variables, organize files and templates, and allow for code reuse. Roles are created by using a set directory structure.
In this case, there are two roles: one that installs Apache, and one that creates the `index.html` file from a Jinja2 template. (The resulting directory layout is sketched at the end of this subsection.)
Create a directory named `roles` inside `steps`, then create two directories named for your roles, `install_apache` and `create_index`.
Within `install_apache`, create a directory named `tasks`, then within that directory a file named `main.yml`. This will contain the set of plays for the `install_apache` role. To that file add the following:
```yaml
---
- name: Install apache using apt
  apt: name=apache2 update_cache=yes
```
This will use the Ansible apt module to install Apache.
Next, within `create_index`, create two directories, `tasks` and `templates`. In `templates`, create a file named `index.html.j2` with the contents:
```
ExampleService
Service Message: "{{ service_message }}"
Tenant Message: "{{ tenant_message }}"
```
These Jinja2 Expressions will be replaced with the values of the variables set in the master playbook.
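If you want to preview how the template renders without running the playbook, the jinja2 package (installed earlier) can do it directly. A minimal sketch, run from the `templates` directory with illustrative values:

```python
# Sketch: render index.html.j2 outside of Ansible to preview the output.
from jinja2 import Template

with open("index.html.j2") as f:
    template = Template(f.read())

print(template.render(service_message="Example Service Message",
                      tenant_message="Example Tenant Message"))
```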
In the `tasks` directory, create a file named `main.yml` with the contents:
```yaml
---
- name: Write index.html file to apache document root
  template: src=index.html.j2 dest=/var/www/html/index.html
```
This uses the Ansible template module to load and process the Jinja2 template, then put it in the `dest` location. Note that there is no path given for the `src` parameter; Ansible knows to look in the `templates` directory for templates used within a role.
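At this point the synchronizer's `steps` directory should look roughly like this (a sketch showing only the files created in this section):

```
steps/
├── sync_exampletenant.py
├── exampletenant_playbook.yaml
└── roles/
    ├── install_apache/
    │   └── tasks/
    │       └── main.yml
    └── create_index/
        ├── tasks/
        │   └── main.yml
        └── templates/
            └── index.html.j2
```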
As a final step, you can check your playbooks for best practices with ansible-lint if you have it available.
Supplemental Information
Design of Synchronizers: Introduction to the concepts underlying Synchronizers
Implementation of Synchronizers: Details about how to implement Synchronizers
Define an On-boarding Spec
Each service should have an on-boarding recipe. By convention, we use `<servicename>-onboard.yaml` and place it in the `xos` directory of the service.
The on-boarding recipe is a TOSCA specification that lists all of the resources for your service. It is basically a collection of everything that has been created above. For example, here is the on-boarding recipe for ExampleService:
```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

description: Onboard the exampleservice

imports:
   - custom_types/xos.yaml

topology_template:
  node_templates:
    exampleservice:
      type: tosca.nodes.ServiceController
      properties:
          base_url: file:///opt/xos_services/exampleservice/xos/
          # The following will concatenate with base_url automatically, if
          # base_url is non-null.
          xproto: ./
          admin: admin.py
          tosca_custom_types: exampleservice.yaml
          tosca_resource: tosca/resources/exampleservice.py, tosca/resources/exampletenant.py
          rest_service: api/service/exampleservice.py
          rest_tenant: api/tenant/exampletenant.py
          private_key: file:///opt/xos/key_import/exampleservice_rsa
          public_key: file:///opt/xos/key_import/exampleservice_rsa.pub
```
You will also need to modify the profile-manifest in platform-install to on-board your service. To do this, modify the xos_services and xos_service_sshkeys sections as shown below:
```yaml
xos_services:
  ... (lines omitted)

  - name: exampleservice
    path: orchestration/xos_services/exampleservice
    keypair: exampleservice_rsa
    synchronizer: true

xos_service_sshkeys:
  ... (lines omitted)

  - name: exampleservice_rsa
    source_path: "~/.ssh/id_rsa"
```
The above modifications to the profile manifest will cause platform-install to automatically install an ssh key for your service, and to onboard the service at build time.
Supplemental Information
Service On-boarding (Dangerous-Addition): Describes the internals of how services are on-boarded.
Test Your Service
This section still needs to be updated from 2.0.
Execute the makefile target you created in the previous step, for example, `make exampleservice`.
For example, to test exampleservice do the following:

In the Admin web UI, navigate to Slice -> <slicename> -> Instances, and find an IP address starting with `10.11.X.X` in the Addresses column (this address is on the “nat” network for the slice; the other address is for the “private” network).

Run `curl <10.11.X.X address>`, and you should see the display message you entered when creating the ExampleTenant.
```shell
user@ctl:~/xos/xos/configurations/devel$ curl 10.11.10.7
ExampleService
Service Message: "Example Service Message"
Tenant Message: "Example Tenant Message"
```
After verifying that the text is shown, change the message in the “Example Tenant” and “Example Service” sections of the Admin UI, wait a bit for the Synchronizer to run, and the message that `curl` returns should change.
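If you prefer to script this check, a small sketch like the following polls the instance until the new message appears (the IP address is the example NAT address from above, and the expected string is whatever you typed into the Admin UI):

```python
# Sketch: poll the tenant's web server until the synchronizer has
# applied the updated message. The IP address and expected string are
# illustrative; substitute your own values.
import time
import urllib.request

EXPECTED = "New Tenant Message"  # whatever you set in the Admin UI

while True:
    page = urllib.request.urlopen("http://10.11.10.7/").read().decode()
    if EXPECTED in page:
        print("Synchronizer applied the change:")
        print(page)
        break
    time.sleep(10)  # give the synchronizer time to run
```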
Debugging
This section still needs to be updated from 2.0.
XOS isn’t coming up after making changes
Verify that the docker containers for XOS are running with:

```shell
sudo docker ps
```
If you need to see log messages for a container:
```shell
sudo docker logs <docker_container>
```
There’s also a shortcut in the makefile to view logs for all the containers: `make showlogs`.
If you want to delete the containers, including the database, and start over, run:

```shell
make rm
```
“500 Internal Server Error” when navigating the admin webpages
This is most likely Django reporting a problem in `admin.py` or `models.py`.
Django’s debug log is located in `/var/log/django_debug.log` in the xos container, so run `make enter-xos` and then look at the end of that logfile.
“Ansible playbook failed” messages
The log messages from when the Synchronizer runs Ansible are located in `/opt/xos/synchronizers/<servicename>/sys` in its synchronizer container. There are multiple files for each Tenant instance, including the processed playbook and stdout/stderr files. You can run a shell in the docker container with this command to access those files:

```shell
sudo docker exec -it devel_xos_synchronizer_<servicename>_1 bash
```