Test automation for a multi-instance ONOS deployment: a cluster of one or more ONOS instances with CORD apps such as vrouter, AAA, and IGMP configured.
ID | Title | Function Name | Test Steps | Expected Result |
cluster_1 | Verify that a cluster exists with the provided number of ONOS instances | test_onos_cluster_formation_verify | Execute the ‘summary’ command on the ONOS CLI and grep the nodes value to verify cluster formation | If the cluster already exists the test should pass; otherwise it should fail |
cluster_2 | Verify adding additional ONOS instances to an existing cluster | test_onos_cluster_adding_members | 1. Verify if the cluster already exists 2. Add a few more ONOS instances to the cluster | A new cluster with the added ONOS instances should come up without restarting the existing cluster |
cluster_3 | Verify cluster status when the cluster master instance is removed | test_onos_cluster_remove_master | 1. Verify if the cluster already exists 2. Grep the current cluster master instance 3. Remove the master instance's ONOS container | Cluster should come up with one instance fewer and a new master elected |
cluster_4 | Verify cluster status when one member goes down | test_onos_cluster_remove_one_member | 1. Verify if the cluster exists 2. Grep one member ONOS instance 3. Kill that member's container | Cluster should come up with one member fewer and the same master |
cluster_5 | Verify cluster status when two members go down | test_onos_cluster_remove_two_members | 1. Verify if the cluster exists 2. Grep two member ONOS instances 3. Kill those member containers | Cluster should come up with two members fewer and the same master |
cluster_6 | Verify cluster status when N member instances go down | test_onos_cluster_remove_N_members | 1. Verify if the cluster exists 2. Grep and kill N member ONOS instances | Cluster should come up with N instances fewer and the same master |
cluster_7 | Verify the cluster when a few ONOS instances are added and then removed | test_onos_cluster_add_remove_members | 1. Verify if the cluster exists 2. Add a few instances to the cluster 3. Remove the added instances | Cluster should be stable before and after the addition and removal of member instances |
cluster_8 | Verify cluster status when a few instances are removed and then added back | test_onos_cluster_remove_add_member | 1. Verify if the cluster exists 2. Remove a few member instances 3. Add back the same instances | Cluster should be stable at each stage |
cluster_9 | Verify cluster status after an entire cluster restart | test_onos_cluster_restart | 1. Verify if the cluster exists 2. Restart the entire cluster | Cluster should come up as it was (the master may change) |
cluster_10 | Verify cluster status when the current master restarts | test_onos_cluster_master_restart | 1. Verify the cluster exists 2. Restart the current master instance container | The master restart should succeed and a cluster with the same number of instances should come up |
cluster_11 | Verify the master IP when the current master restarts | test_onos_cluster_master_ip_after_master_restart | 1. Verify if the cluster exists 2. Restart the current master ONOS instance | Cluster should come up with a new master |
cluster_12 | Verify cluster status when one member restarts | test_onos_cluster_one_member_restart | 1. Verify if the cluster exists 2. Grep one member ONOS instance 3. Restart that instance | Cluster should come up with the same master |
cluster_13 | Verify cluster status when two members restart | test_onos_cluster_two_members_restart | 1. Verify if the cluster exists 2. Grep two member instances 3. Restart those ONOS containers | Cluster should come up without changing the master |
cluster_14 | Verify cluster status when N members restart | test_onos_cluster_N_members_restart | 1. Verify if the cluster exists 2. Grep N ONOS instances and restart their containers | Cluster should come up with the same master |
cluster_15 | Verify that the cluster master can be changed | test_onos_cluster_master_change | 1. Verify if the cluster exists with a master 2. Using the ONOS CLI, change the cluster master to an instance other than the current one | The cluster master should change to the new master |
cluster_16 | Verify that the current cluster master can withdraw its mastership | test_onos_cluster_withdraw_cluster_current_mastership | 1. Verify if the cluster exists with a master 2. Using the ONOS CLI, set the current master's role on the device to none | The cluster master should change to a new master |
cluster_17 | Verify that routes pushed from Quagga to the cluster master are distributed to all cluster members | test_onos_cluster_vrouter_routes_in_cluster_members | 1. Verify if the cluster exists 2. Push a few routes from Quagga to the cluster master 3. Verify the master has the routes | All cluster members should receive the route information |
cluster_18 | Verify vrouter functionality works even if the cluster master goes down | test_onos_cluster_vrouter_master_down | 1. Verify if the cluster exists 2. Push a few routes from Quagga to the ONOS cluster 3. Send traffic to the above routes 4. Kill the cluster master instance container | Traffic should still be forwarded to the routes after the cluster master goes down |
cluster_19 | Verify vrouter functionality works even if the cluster master restarts | test_onos_cluster_vrouter_master_restarts | 1. Verify if the cluster exists 2. Push a few routes from Quagga to the ONOS cluster 3. Send traffic to the above routes 4. Restart the cluster master instance container | Traffic should still be forwarded to the routes after the cluster master restarts |
cluster_20 | Verify vrouter functionality when the vrouter app is deactivated on the cluster | test_onos_cluster_vrouter_app_deactivate | 1. Verify the cluster exists 2. Verify vrouter functionality 3. Deactivate the vrouter app on the ONOS cluster master instance | Traffic should not be received on the routes after the app is deactivated |
cluster_21 | Verify vrouter functionality when the vrouter app is deactivated and the cluster master goes down | test_onos_cluster_vrouter_app_deactivate_master_down | 1. Verify if the cluster exists 2. Verify vrouter works 3. Deactivate the vrouter app and kill the master ONOS instance container | Vrouter functionality should not work after the app is deactivated |
cluster_22 | Verify vrouter functionality works even if a cluster member goes down | test_onos_cluster_vrouter_member_down | 1. Verify if the cluster exists 2. Push a few routes from Quagga to the ONOS cluster 3. Send traffic to the above routes 4. Kill a cluster member instance container | Traffic should still be forwarded to the routes after the cluster member goes down |
cluster_23 | Verify vrouter functionality works even if a cluster member restarts | test_onos_cluster_vrouter_member_restart | 1. Verify if the cluster exists 2. Push a few routes from Quagga to the ONOS cluster 3. Send traffic to the above routes 4. Restart a cluster member instance container | Traffic should still be forwarded to the routes after the cluster member restarts |
cluster_24 | Verify vrouter functionality works even if the cluster restarts | test_onos_cluster_vrouter_cluster_restart | 1. Verify if the cluster exists 2. Push a few routes from Quagga to the ONOS cluster 3. Send traffic to the above routes 4. Restart the cluster | Traffic should still be forwarded to the routes after the cluster restarts |
cluster_25 | Verify flows work on the cluster even if the cluster master goes down | test_onos_cluster_flow_master_down_flow_udp_port | 1. Push a flow to the ONOS cluster master 2. Verify traffic is forwarded as per the flow 3. Kill the cluster master ONOS instance container | Flow traffic should be forwarded as per the added flow even after the master goes down |
cluster_26 | Verify flows work on the cluster even if the cluster master changes | test_cluster_flow_master_change_flow_ecn | 1. Push a flow to the ONOS cluster master 2. Verify traffic is forwarded as per the flow 3. Change the cluster master | Flow traffic should be forwarded as per the added flow even after the master changes |
cluster_27 | Verify flows work on the cluster even if the cluster master restarts | test_cluster_flow_master_restart_ipv6_extension_header | 1. Push a flow to the ONOS cluster master 2. Verify traffic is forwarded as per the flow 3. Restart the cluster master | Flow traffic should be forwarded as per the added flow even after the master restarts |
cluster_28 | Verify IGMP include and exclude modes across a cluster master restart | test_onos_cluster_igmp_include_exclude_modes_master_restart | 1. Verify if the cluster exists 2. Verify IGMP include and exclude modes work 3. Restart the cluster master | IGMP include and exclude modes should work properly before and after the cluster master restarts |
cluster_29 | Verify IGMP include and exclude modes when the cluster master goes down | test_onos_cluster_igmp_include_exclude_modes_master_down | 1. Verify if the cluster exists 2. Verify IGMP include and exclude modes work 3. Kill the ONOS cluster master instance container | IGMP include and exclude modes should work properly before and after the cluster master goes down |
cluster_30 | Verify IGMP data traffic recovery time when the master goes down | test_onos_cluster_igmp_include_calculate_traffic_recovery_time_after_master_down | 1. Verify if the cluster exists 2. Keep sending IGMP include reports and verify traffic 3. Kill the cluster master ONOS instance container | Measure the time taken to recover IGMP data traffic after the master goes down |
cluster_31 | Verify IGMP leave after a master change | test_onos_cluster_igmp_leave_group_after_master_change | 1. Verify if the cluster exists 2. Send an IGMP include mode report and verify traffic 3. Change the cluster master 4. Send an IGMP leave | The new master should process the IGMP leave, and traffic should not be received after the leave is sent |
cluster_32 | Verify IGMP join and traffic while the cluster master changes | test_onos_cluster_igmp_join_after_master_change_traffic_after_master_change_again | 1. Verify if the cluster exists 2. Send an IGMP include mode report 3. Change the cluster master 4. Send data traffic to the registered IGMP group | IGMP data traffic should be received by the client |
cluster_33 | Verify EAP-TLS authentication on the cluster setup | test_onos_cluster_eap_tls | 1. Verify if the cluster exists 2. Configure the RADIUS server IP on the ONOS cluster master 3. Initiate the EAP-TLS authentication process from the client side | The client should get authenticated with a valid certificate |
cluster_34 | Verify EAP-TLS authentication before and after a cluster master change | test_onos_cluster_eap_tls_before_and_after_master_change | 1. Verify if the cluster exists 2. Verify the EAP-TLS authentication process 3. Change the cluster master 4. Initiate the EAP-TLS authentication process again | Authentication should succeed before and after the cluster master change |
cluster_35 | Verify EAP-TLS authentication before and after the cluster master goes down | test_onos_cluster_eap_tls_before_and_after_master_down | 1. Verify if the cluster exists 2. Verify the EAP-TLS authentication process 3. Kill the cluster master ONOS instance container 4. Initiate the EAP-TLS authentication process again | Authentication should succeed before and after the cluster master goes down |
cluster_36 | Verify EAP-TLS authentication with no certificate before and after the master restarts | test_onos_cluster_eap_tls_with_no_cert_before_and_after_member_restart | 1. Verify if the cluster exists 2. Verify EAP-TLS authentication fails with no certificate 3. Restart the master and repeat step 2 | Authentication should fail before and after the cluster master restart |
cluster_37 | Verify proxy ARP functionality on the cluster before and after the app is deactivated | test_onos_cluster_proxyarp_master_change_and_app_deactivate | 1. Verify if the cluster exists 2. Verify proxy ARP functionality 3. Deactivate the app on the cluster master 4. Verify proxy ARP functionality again | Proxy ARP functionality should work before the app is deactivated and should not work after |
cluster_38 | Verify proxy ARP functionality on the cluster before and after one member goes down | test_onos_cluster_proxyarp_one_member_down | 1. Verify if the cluster exists 2. Verify proxy ARP works on the cluster setup 3. Kill one cluster member ONOS instance container 4. Verify proxy ARP functionality again | Proxy ARP functionality should work before and after the cluster member goes down |
cluster_39 | Verify proxy ARP functionality with concurrent requests on the cluster setup | test_onos_cluster_proxyarp_concurrent_requests_with_multiple_host_and_different_interfaces | 1. Verify if the cluster exists 2. Create multiple interfaces and hosts on OvS 3. Initiate multiple proxy ARP requests in parallel | The cluster should remain stable under multiple parallel ARP requests, and ARP replies should be received for all requests |
cluster_40 | Verify ACL rule addition and removal before and after a cluster master change | test_onos_cluster_add_acl_rule_before_master_change_remove_acl_rule_after_master_change | 1. Verify if the cluster exists 2. Add an ACL rule on the ONOS cluster master 3. Change the cluster master 4. Remove the ACL rule on the new cluster master | It should be possible to remove the ACL rule on the new cluster master |
cluster_41 | Verify ACL traffic works before and after cluster members go down | test_onos_cluster_acl_traffic_before_and_after_two_members_down | 1. Add an ACL rule 2. Send traffic matching the above rule 3. Kill two ONOS cluster instance containers 4. Send ACL traffic again | ACL traffic should be received on the interface before and after the cluster members go down |
cluster_42 | Verify DHCP relay release on the new cluster master | test_onos_cluster_dhcpRelay_release_dhcp_ip_after_master_change | 1. Verify if the cluster exists 2. Initiate DHCP discover and get an IP address from the server 3. Change the cluster master 4. Send a DHCP release to release the leased IP | The new master should process the DHCP release packet and forward it to the server |
cluster_43 | Verify the client gets the same DHCP IP after the cluster master goes down | test_onos_cluster_dhcpRelay_verify_dhcp_ip_after_master_down | 1. Verify if the cluster exists 2. Initiate the DHCP process and get an IP from the server 3. Kill the cluster master ONOS instance container 4. Send a DHCP request from the client to verify the same IP is assigned | The client should receive the same IP after the cluster master goes down |
cluster_44 | Verify simulated DHCP clients while changing the cluster master | test_onos_cluster_dhcpRelay_simulate_client_by_changing_master | 1. Verify if the cluster exists 2. Simulate DHCP client-1 3. Change the cluster master 4. Simulate client-2 5. Change the cluster master again 6. Simulate one more client, client-3 | All clients should get a valid IP from the cluster irrespective of master changes |
cluster_45 | Verify cord_subscriber functionality works before and after the cluster restarts | test_onos_cluster_cord_subscriber_join_next_before_and_after_cluster_restart | 1. Verify if the cluster exists 2. Verify cord_subscriber functionality works 3. Restart the cluster 4. Repeat step 2 | cord_subscriber should work properly before and after the cluster restarts |
cluster_46 | Verify cord_subscriber on 10 channels when a cluster member goes down | test_onos_cluster_cord_subscriber_join_recv_10channels_one_cluster_member_down | 1. Verify if the cluster exists 2. Verify cord_subscriber on 10 channels 3. Kill one cluster member ONOS instance container 4. Repeat step 2 | cord_subscriber functionality should work properly even after the cluster member goes down |
cluster_47 | Verify cord_subscriber on 10 channels when two cluster members go down | test_onos_cluster_cord_subscriber_join_next_10channels_two_cluster_members_down | 1. Verify if the cluster exists 2. Verify cord_subscriber on 10 channels 3. Kill two cluster member ONOS instance containers 4. Repeat step 2 | cord_subscriber functionality should work properly even after the cluster members go down |
cluster_48 | Verify multiple devices connected to the cluster setup | test_onos_cluster_multiple_ovs_switches | 1. Verify if the cluster exists 2. Connect multiple devices to the cluster setup | All devices should connect to the ONOS cluster and each device should have a master elected |
cluster_49 | Verify device information on all cluster instances | test_onos_cluster_verify_multiple_ovs_switches_in_cluster_instances | 1. Verify if the cluster exists 2. Connect multiple devices to the cluster setup | Every cluster member should have information about all devices connected to the cluster setup |
cluster_50 | Verify multiple switches connected to the cluster when a device's master restarts | test_onos_cluster_verify_multiple_ovs_switches_master_restart | 1. Verify if the cluster exists 2. Connect multiple devices to the cluster setup 3. Verify device information on cluster members 4. Restart the master of a device | When the master of a device restarts, a new master should be elected for that device |
cluster_51 | Verify multiple switches connected to the cluster when a device's master goes down | test_onos_cluster_verify_multiple_ovs_switches_one_master_down | 1. Verify if the cluster exists 2. Connect multiple devices to the cluster setup 3. Verify device information on cluster members 4. Kill the ONOS cluster master of a device | When the master of a device goes down, a new master should be elected for that device |
cluster_52 | Verify multiple switches connected to the cluster when the current master withdraws mastership | test_onos_cluster_verify_multiple_ovs_switches_current_master_withdraw_mastership | 1. Verify if the cluster exists 2. Connect multiple devices to the cluster setup 3. Verify device information on cluster members 4. Withdraw the ONOS mastership of a device | When the master of a device withdraws mastership, a new master should be elected for that device |
cluster_53 | Verify multiple switches connected to the cluster after a cluster restart | test_onos_cluster_verify_multiple_ovs_switches_cluster_restart | 1. Verify if the cluster exists 2. Connect multiple devices to the cluster setup 3. Verify device information on cluster members 4. Restart the entire cluster | All device information should appear in the ONOS instances after the cluster restart; masters may change |
cluster_54 | Verify cord_subscriber functionality when the cluster master withdraws its mastership | test_onos_cluster_cord_subscriber_join_next_before_and_after_cluster_mastership_withdraw | 1. Verify if the cluster exists 2. Verify cord_subscriber functionality 3. Withdraw the cluster master's mastership 4. Repeat step 2 | cord_subscriber functionality should work properly before and after the cluster master change |
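Many of the cases above begin with "verify if cluster exists". One way this check can be sketched is by counting READY nodes from the ONOS cluster REST endpoint (`/onos/v1/cluster`). This is a minimal illustration, not the actual test code: the sample payload, host, and credentials are assumptions, and in the real test the JSON would come from an HTTP GET against the controller.

```python
# Sketch of a cluster-formation check (as in test_onos_cluster_formation_verify),
# assuming the ONOS REST API's /onos/v1/cluster response shape: a "nodes" list
# whose entries carry a "status" field. Host, port, and node IPs are illustrative.
import json


def ready_node_count(cluster_json: str) -> int:
    """Count cluster members reporting READY status."""
    nodes = json.loads(cluster_json)["nodes"]
    return sum(1 for n in nodes if n.get("status") == "READY")


# In a live test this payload would come from something like
# requests.get("http://<onos-ip>:8181/onos/v1/cluster", auth=(user, password)).text
sample = json.dumps({
    "nodes": [
        {"id": "172.17.0.2", "ip": "172.17.0.2", "status": "READY"},
        {"id": "172.17.0.3", "ip": "172.17.0.3", "status": "READY"},
        {"id": "172.17.0.4", "ip": "172.17.0.4", "status": "READY"},
    ]
})

expected_instances = 3
assert ready_node_count(sample) == expected_instances
```

The same helper can assert "one instance fewer" after a member is killed by comparing counts before and after the container is removed.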
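For the master-change and mastership-withdraw cases (cluster_15, cluster_16, cluster_50 through cluster_52), ONOS exposes the `device-role <deviceId> <nodeId> master|standby|none` CLI command. The helper below is only a sketch of how a test might pick a new master candidate and build that command; the device ID, node IPs, and the choice of the first alternative node are illustrative assumptions.

```python
# Sketch for test_onos_cluster_master_change: choose a node other than the
# current master and build the ONOS CLI command that reassigns mastership.
# The command string format follows the real ONOS 'device-role' CLI command;
# the concrete IDs below are hypothetical.
def master_change_command(device_id: str, current_master: str, nodes: list) -> str:
    """Return a device-role command promoting some non-master node."""
    candidates = [n for n in nodes if n != current_master]
    if not candidates:
        raise ValueError("no alternative master available")
    # Arbitrarily take the first alternative; a real test might randomize.
    return f"device-role {device_id} {candidates[0]} master"


cmd = master_change_command(
    "of:0000000000000001",
    "172.17.0.2",
    ["172.17.0.2", "172.17.0.3", "172.17.0.4"],
)
assert cmd == "device-role of:0000000000000001 172.17.0.3 master"
```

Withdrawing mastership (cluster_16) would use the same command with the role `none` for the current master instead of promoting another node.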