CSIT Design\r
===========\r
\r
FD.io CSIT system design needs to meet continuously expanding
requirements of FD.io projects including VPP, related sub-systems (e.g.
plugin applications, DPDK drivers) and FD.io applications (e.g. DPDK
applications), as well as a growing number of compute platforms running
those applications. With CSIT project scope and charter including both
FD.io continuous testing AND performance trending/comparisons, those
evolving requirements further amplify the need for CSIT framework
modularity, flexibility and usability.
\r
Design Hierarchy\r
----------------\r
\r
CSIT follows a hierarchical system design with SUTs and DUTs at the
bottom level of the hierarchy, presentation level at the top level and a
number of functional layers in-between. The current CSIT system design
including the CSIT framework is depicted in the figure below.
\r
.. figure:: csit_design.png\r
:alt: FD.io CSIT system design\r
\r
A brief bottom-up description is provided here:\r
\r
#. SUTs, DUTs, TGs

   - SUTs - Systems Under Test;
   - DUTs - Devices Under Test;
   - TGs - Traffic Generators;
\r
#. Level-1 libraries - Robot and Python

   - Lowest level CSIT libraries abstracting underlying test environment,
     SUT, DUT and TG specifics;
   - Used commonly across multiple L2 KWs;
   - Performance and functional tests:

     - L1 KWs (KeyWords) are implemented as RF libraries and Python
       libraries;

   - Performance TG L1 KWs:

     - All L1 KWs are implemented as Python libraries:

       - Support for TRex only today;
       - CSIT IXIA drivers in progress;

   - Performance data plane traffic profiles:

     - TG-specific stream profiles provide full control of:

       - Packet definition - layers, MACs, IPs, ports, combinations
         thereof, e.g. IPs and UDP ports;
       - Stream definitions - different streams can run together,
         delayed, one after another;
       - Stream profiles are independent of the CSIT framework and can be
         used in any TRex setup, can be sent anywhere to repeat tests
         with exactly the same setup;
       - Easily extensible - one can create a new stream profile that
         meets test requirements;
       - Same stream profile can be used for different tests with the
         same traffic needs;

   - Functional data plane traffic scripts:

     - Scapy specific traffic scripts;
\r
#. Level-2 libraries - Robot resource files:

   - Higher level CSIT libraries abstracting required functions for
     executing tests;
   - L2 KWs are classified into the following functional categories:

     - Configuration, test, verification, state report;
     - Suite setup, suite teardown;
     - Test setup, test teardown;
\r
#. Tests - Robot:

   - Test suites with test cases;
   - Functional tests using VIRL environment:

     - VPP;
     - HoneyComb;
     - NSH_SFC;

   - Performance tests using physical testbed environment:

     - VPP;
     - DPDK-Testpmd;
     - DPDK-L3Fwd;

   - Tools:

     - Documentation generator;
     - Report generator;
     - Testbed environment setup Ansible playbooks;
     - Operational debugging scripts;
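
The stream-profile layer described above can be sketched with plain
Python data structures. This is an illustrative model only, assuming
nothing about the real TRex STL API; the class and field names
(``StreamProfile``, ``Stream``, ``PacketDef``) are invented for this
example.

```python
# Illustrative sketch only: models the "stream profile" concept with
# plain dataclasses. Real CSIT profiles use the TRex STL Python API;
# all names below are invented for this example.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PacketDef:
    """Packet definition - MACs/IPs/ports combinations (IPs and UDP
    ports shown here)."""
    src_ip: str
    dst_ip: str
    udp_sport: int
    udp_dport: int


@dataclass
class Stream:
    """Stream definition - a packet plus scheduling (rate, delay), so
    streams can run together, delayed, or one after another."""
    packet: PacketDef
    pps: int                    # packets per second
    start_delay_s: float = 0.0  # delay before the stream starts


@dataclass
class StreamProfile:
    """A named, framework-independent bundle of streams; the same
    profile can be reused by any test with the same traffic needs."""
    name: str
    streams: List[Stream] = field(default_factory=list)

    def add_stream(self, stream: Stream) -> "StreamProfile":
        self.streams.append(stream)
        return self

    def total_pps(self) -> int:
        return sum(s.pps for s in self.streams)


# Usage: two streams running together, the second delayed by 1 second.
profile = StreamProfile("ip4-udp-bidir")
profile.add_stream(
    Stream(PacketDef("10.0.0.1", "20.0.0.1", 1024, 1024), pps=1000))
profile.add_stream(
    Stream(PacketDef("20.0.0.1", "10.0.0.1", 1024, 1024), pps=1000,
           start_delay_s=1.0))
```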
\r
Test Lifecycle Abstraction\r
--------------------------\r
\r
A well coded test must follow a disciplined abstraction of the test
lifecycle that includes setup, configuration, test and verification. In
addition, to improve test execution efficiency, the common aspects of
test setup and configuration shared across multiple test cases should be
done only once. Translating these high-level guidelines into Robot
Framework, one arrives at the definition of well coded RF tests for
FD.io CSIT. Anatomy of Good Tests for CSIT:
\r
#. Suite Setup - Suite startup Configuration common to all Test Cases in
   suite: uses Configuration KWs, Verification KWs, StateReport KWs;
#. Test Setup - Test startup Configuration common to multiple Test
   Cases: uses Configuration KWs, StateReport KWs;
#. Test Case - uses L2 KWs with RF Gherkin style:

   - prefixed with {Given} - Verification of Test setup, reading state:
     uses Configuration KWs, Verification KWs, StateReport KWs;
   - prefixed with {When} - Test execution: uses Configuration KWs, Test
     KWs;
   - prefixed with {Then} - Verification of Test execution, reading
     state: uses Verification KWs, StateReport KWs;

#. Test Teardown - post Test teardown with Configuration cleanup and
   Verification common to multiple Test Cases: uses Configuration KWs,
   Verification KWs, StateReport KWs;
#. Suite Teardown - Suite post-test Configuration cleanup: uses
   Configuration KWs, Verification KWs, StateReport KWs;
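
Put together, the lifecycle above maps onto a Robot Framework suite
skeleton such as the following. This is an illustrative sketch only;
all keyword names are invented placeholders, not actual CSIT L2 KWs.

```robotframework
*** Settings ***
# Suite Setup/Teardown: configuration and cleanup common to all test
# cases in the suite (placeholder keyword names, not real CSIT L2 KWs).
Suite Setup       Set Up Suite
Suite Teardown    Tear Down Suite
# Test Setup/Teardown: configuration common to multiple test cases.
Test Setup        Set Up Test
Test Teardown     Tear Down Test

*** Test Cases ***
TC01: DUT forwards IPv4 traffic
    # {Given} - verification of test setup, reading state.
    Given Interfaces On All DUTs Are Up
    # {When} - test execution: Configuration KWs, Test KWs.
    When IPv4 Forwarding Is Configured On All DUTs
    # {Then} - verification of test execution, reading state.
    Then Traffic Should Pass With No Loss
```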
\r
RF Keywords Functional Classification
-------------------------------------

CSIT RF KWs are classified into functional categories matching the test
lifecycle events described earlier. All CSIT RF L2 and L1 KWs have been
grouped into the following functional categories:
\r
#. Configuration;
#. Test;
#. Verification;
#. StateReport;
#. SuiteSetup;
#. TestSetup;
#. SuiteTeardown;
#. TestTeardown;
\r
RF Keywords Naming Guidelines\r
-----------------------------\r
\r
Readability counts: *"...code is read much more often than it is
written."* Hence following a good and consistent grammar practice is
important when writing RF KeyWords and Tests. All CSIT test cases are
coded using Gherkin style and include only L2 KW references. L2 KWs are
coded using simple style and include L2 KW, L1 KW and L1 Python
references. To improve readability, the proposal is to use the same
grammar for both RF KW styles, and to formalize the grammar of English
sentences used for naming the RF KWs. RF KW names are short sentences
expressing a functional description of the command. They must follow
English sentence grammar in one of the following forms:
\r
#. **Imperative** - verb-object(s): *"Do something"*, verb in base form.
#. **Declarative** - subject-verb-object(s): *"Subject does something"*,
   verb in a third-person singular present tense form.
#. **Affirmative** - modal verb-verb-object(s): *"Subject should be
   something"*, *"Object should exist"*, verb in base form.
#. **Negative** - modal verb-Not-verb-object(s): *"Subject should not be
   something"*, *"Object should not exist"*, verb in base form.
\r
Passive form MUST NOT be used. However, usage of a past participle as an
adjective is okay. See usage examples provided in the Coding guidelines
section below. The following sections list the applicability of the
above grammar forms to different RF KW categories. Usage examples are
provided, both good and bad.
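
To make the grammar forms concrete, here is a toy Python helper that
guesses which form a KW name uses. It is not part of the CSIT framework;
the heuristics (modal verbs for the affirmative/negative forms, a
third-person ``-s`` ending for declarative) are invented for
illustration only.

```python
# Toy illustration of the KW naming grammar forms; not part of CSIT.
MODALS = ("should", "must", "may")


def kw_grammar_form(name: str) -> str:
    """Classify an RF keyword name into one of the grammar forms by
    simple surface heuristics (illustrative only)."""
    words = name.lower().split()
    # Affirmative/Negative: contain a modal verb; Negative adds "not".
    if any(m in words for m in MODALS):
        return "negative" if "not" in words else "affirmative"
    # Declarative: subject-verb-object, verb in third-person singular
    # present tense (heuristic: second word ends in "s").
    if len(words) >= 2 and words[1].endswith("s"):
        return "declarative"
    # Imperative: starts with a verb in base form.
    return "imperative"


# Examples of each form:
print(kw_grammar_form("Show VPP Version"))                    # imperative
print(kw_grammar_form("Honeycomb configures interface state"))# declarative
print(kw_grammar_form("Interface state should be Up"))        # affirmative
print(kw_grammar_form("Interface should not exist"))          # negative
```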
\r
Coding guidelines\r
-----------------\r