This section provides a high-level strategy for testing CORD, along with details about the contributions made by collaborators and other participants in system testing.

CORD Testing Goals for 2017

  • Build a solid automated test framework foundation for the base platform. Set a baseline for initial end-to-end test coverage and work to improve that coverage each release.

  • Get the community working with a small, common set of test automation tools so that members can collaborate easily and avoid duplicating effort.

  • Build a set of automated tests to exercise the platform. These tests will include:

    • Functional regression tests - black-box tests to make sure base components have not regressed (a minimal sketch follows this goal list).

    • End-to-end CI/CD tests to make sure a system can be built from scratch, deployed, and can pass a baseline of tests for both control and data traffic.

    • Performance tests:

      • Analyze various performance tools, select one, and build a base performance automation framework around it

      • Create a few performance tests that can be used to set a baseline and then track performance over time. [For 2017 this is probably a small set of tests for very critical, performance-sensitive areas that are not changing rapidly from release to release.]

      • Performance tests will mostly target the R-CORD platform

      • M-CORD: performance testing depends on the availability of a stable M-CORD platform.

    • Scale tests - similar to performance tests, but exercising areas that are important for scale and not covered by the performance tests.

  • Begin building profile-specific automated tests where there is sufficient stability to do so. These tests build upon the base platform tests and exercise specific profiles supported in the community.

  • Develop automated unit tests to produce a stable CORD platform. Developers are responsible for unit test coverage and the unit testing framework.

  • Work with collaborators building Physical PODs (R-CORD) and use them for the following:

    • Building an automated POD [Jenkins]

    • Running basic end-to-end sanity tests

  • Extend the CORD automation framework to M-CORD/E-CORD (depending on the availability of resources)
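
To make the functional regression goal above concrete, here is a minimal sketch of a black-box test in the style of cord-tester, written in Python against the ONOS REST API. The controller address, credentials, and the device-availability assertion are illustrative assumptions, not the actual CORD test set.

```python
# Minimal sketch of a black-box functional regression test, assuming an ONOS
# controller reachable on localhost with default REST credentials. The
# endpoint and assertion are illustrative, not the actual CORD test set.
import base64
import json
import unittest
import urllib.request

ONOS_REST = "http://localhost:8181/onos/v1"  # assumed controller address
AUTH = ("onos", "rocks")                     # assumed default credentials

class DeviceRegressionTest(unittest.TestCase):
    """Black-box check that the controller still reports healthy devices."""

    def _get(self, path):
        req = urllib.request.Request(ONOS_REST + path)
        token = base64.b64encode(("%s:%s" % AUTH).encode()).decode()
        req.add_header("Authorization", "Basic " + token)
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read().decode())

    def test_devices_available(self):
        # Regression: every device known to the controller must be available.
        devices = self._get("/devices").get("devices", [])
        self.assertTrue(devices, "controller reports no devices")
        for dev in devices:
            self.assertTrue(dev.get("available"),
                            "device %s is unavailable" % dev.get("id"))

if __name__ == "__main__":
    unittest.main()
```

A Jenkins job would run tests like this after each deployment and fail the build on any regression.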

Planned test environments

  • Unit

    • Automated unit tests run on commit.

    • Automated sanity tests to check the stability of the various repos

  • CiaB (CORD-in-a-Box)

    • Automated nightly sanity test for the build (a driver sketch follows this environments list).

    • Run existing QA test cases

  • Hardware Pod

    • Automated nightly sanity test for the build, with a small set of regression tests.

    • Regression coverage includes the core components of CORD

    • Regression tests drawn mostly from the R-CORD test set

  • M-CORD Pod (not set up yet)

  • R-CORD Pod

    • Automated Jenkins job providing repeated test coverage via the R-CORD regression test set

    • Basic performance tests executed two or three times a week

  • E-CORD Pod (none for QA)
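
The nightly CiaB sanity run mentioned above can be driven by a small Jenkins-invoked script along the following lines. The build and test commands are placeholder names standing in for the real CiaB build script and cord-tester invocation, not the actual job definition.

```python
#!/usr/bin/env python
# Sketch of a nightly Jenkins driver: build CiaB from scratch, then run the
# sanity suite. Both commands below are assumed placeholders.
import subprocess
import sys
import time

STEPS = [
    # (description, command) - commands are illustrative placeholders
    ("build CiaB from scratch", ["bash", "cord-in-a-box.sh"]),
    ("run sanity test suite", ["python", "cord-tester.py", "--suite", "sanity"]),
]

def run(desc, cmd):
    print("[nightly] %s: %s" % (desc, " ".join(cmd)))
    start = time.time()
    rc = subprocess.call(cmd)
    print("[nightly] %s finished in %.0fs (rc=%d)" % (desc, time.time() - start, rc))
    return rc

def main():
    for desc, cmd in STEPS:
        if run(desc, cmd) != 0:
            sys.exit(1)  # fail fast so Jenkins marks the nightly run as failed

if __name__ == "__main__":
    main()
```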

Current Status

  • Framework development is currently focused on testing the core components of CORD.

  • Framework for R-CORD-related services => In Progress

  • ON.Lab has 2.5 dedicated engineers focused on the following:

    • Contributing to the automation framework

    • Leading the community test strategy

    • Providing baseline reference test environments

    • Writing initial system-level tests for the base platform and for the R-CORD-specific profile

    • Control plane API tests => Completed

    • Unit tests checking the stability of the various repos [Jenkins jobs triggered by commits]

    • End-to-End Tests on CORD-in-a-Box => In Progress.

    • Integration tests on CORD-in-a-Box → run nightly; the job builds CiaB, then runs targeted tests against various components (e.g., container-based tests, system-level tests) => In Progress

    • XOS regression test - API tests on CORD-in-a-Box → every commit that touches the xos or platform-install repo triggers a short-running test that brings up just XOS and checks the correctness of select API calls (a sketch of such a check follows this status list).

    • End-to-End Tests on Physical POD → none today; PODs have recently been provisioned at QCT

    • Integration of Control Plane with Data Plane

  • Ciena (4 engineers)

    • Focused on R-CORD only

    • Contributed the majority of the automation framework in cord-tester

    • Built the general cord-tester framework and developed the majority of the data plane framework

    • Mostly focused on component-based tests and data plane tests

    • End-to-end data plane tests and integration of individual components => In Progress

    • Currently, most of the focus is on CiaB

    • Developing automation for functional end-to-end tests of the AAA, IGMP, and DHCP CORD apps for VOLTHA.

    • End-to-End Tests on Physical POD → no setup available yet

  • Radisys (1 engineer)

    • Focused on R-CORD in 2017

    • Currently analyzing the cord-tester automation framework

    • Currently running tests for VOLTHA functionality

    • Also contributing to the VOLTHA automation framework

  • Spirent (1 engineer)

    • Performance Test Bed Setup (R-CORD and M-CORD)

    • Performance Test Executions

  • Ixia (2 engineers)

    • Involved in M-CORD

    • Working in collaboration with Intel and Netronome

    • Working on the performance benchmarking use case proposed by Intel
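
The per-commit XOS API check described above brings up just XOS and validates that select API calls still behave correctly. Below is a minimal sketch of that kind of check; the base URL and the users endpoint are assumed placeholders, not the job's actual target list.

```python
# Sketch of a short-running XOS API correctness check, assuming an XOS REST
# API reachable at the base URL below. Endpoint and fields are placeholders.
import json
import unittest
import urllib.request

XOS_API = "http://localhost:9000/api/core"  # assumed XOS REST base URL

class XosApiSmokeTest(unittest.TestCase):
    def _get_json(self, path):
        with urllib.request.urlopen(XOS_API + path, timeout=10) as resp:
            self.assertEqual(resp.status, 200)
            return json.loads(resp.read().decode())

    def test_users_endpoint_well_formed(self):
        # Correctness check: the collection exists and each entry carries
        # the fields the data model is expected to expose.
        users = self._get_json("/users/")
        self.assertIsInstance(users, list)
        for user in users:
            self.assertIn("email", user)

if __name__ == "__main__":
    unittest.main()
```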

QA PODs

    • PODs are being built at Calix for the ON.Lab test and development groups (ON.Lab resources)

    • Three PODs already built at the QCT lab
      • POD #1 (3 nodes + 4 switches) is used for nightly builds and a few patch tests
      • POD #2 (3 nodes, no fabric) is used by QA for installation and functional tests
      • POD #3 is used by development
      • Goal is to build/deploy the PODs periodically and run tests using Jenkins jobs.

    • Physical POD built at Flex CloudLabs, Milpitas
      • POD #1 (4 nodes + 4 switches) integrated with Jenkins for nightly builds
      • Goal is to build the POD periodically, integrate edge devices such as GPON, and run R-CORD test cases
    • PODs for Performance Tests

      • POD at ON.Lab (set up by Intel Labs) → used for performance benchmarking (Intel EPC)

      • PODs at Spirent (R-CORD and M-CORD) → used for performance benchmarks/tests for M-CORD and R-CORD (leveraging DPDK/OVS optimizations)

      • POD at the Netronome lab in Santa Clara → used for performance use cases from Intel (M-CORD related). Ixia and Netronome are working together with Intel to perform these measurements.

Roles

  • Developers - responsible for unit testing. Methods and goals for unit testing to be discussed at ONS.

  • ON.Lab QA

    • Align the community around a common framework so that work from community members can be additive

    • Select base platform tests and set the plan/strategy/priorities for base platform testing. Organize the community to make sure the base platform has the tests it needs.

    • Contribute tests as part of the community.

  • Partner/Integrator QA

    • Work to align with the community test strategy. Provide QA engineers to meet testing goals. Work upstream in the test community by developing plans and automated tests for all to use and benefit from.

    • Test to production-ready standards

  • Community (collaborators, individuals)

    • Contribute test scenarios

    • Help ON.Lab and partners/integrators build out test environments

Resources: where to go from here

  Resources are needed in the following areas:

    • Extending the existing framework to support M-CORD and E-CORD tests

    • Build stabilization (CI/CD tests)

    • Automation framework expansion/development

    • Functional tests

    • End-to-end tests

    • Performance scripts/tests

New Requests/Requirements

  • QA coverage to validate the stability of all repos

  • M-CORD Performance Tests

  • End-to-end tests on physical POD