diff --git a/docs/report/introduction/test_environment_intro.rst b/docs/report/introduction/test_environment_intro.rst
index b02520b99d..3b7793f644 100644
--- a/docs/report/introduction/test_environment_intro.rst
+++ b/docs/report/introduction/test_environment_intro.rst
@@ -1,6 +1,47 @@
 Test Environment
 ================
 
+Environment Versioning
+----------------------
+
+In order to determine any benchmark anomalies (progressions,
+regressions) between releases of a specific data-plane DUT application
+(e.g. VPP, DPDK), the DUT needs to be tested in the same test
+environment, to avoid test environment changes impacting the results
+and clouding the picture.
+
+In order to enable test system evolution, a mirror scheme is required
+to determine benchmarking anomalies between releases of a specific test
+system like CSIT. This is achieved by testing the same DUT application
+version between releases of the CSIT test system.
+
+The CSIT test environment versioning scheme ensures integrity of all
+test system components, including their HW revisions, compiled SW code
+versions and SW source code, within a specific CSIT version. Components
+included in the CSIT environment versioning are:
+
+- Server host hardware firmware and BIOS (motherboard, processor,
+  NIC(s), accelerator card(s)).
+- Server host Linux operating system versions.
+- Server host Linux configuration.
+- TRex Traffic Generator version, drivers and configuration.
+- CSIT framework code.
+
+Following is the list of CSIT versions to date:
+
+- Ver. 1 associated with CSIT rls1908 git branch as of 2019-08-21.
+- Ver. 2 associated with CSIT rls2001 git branch as of 2020-03-27.
+- Ver. 3 interim, associated with master branch as of 2020-xx-xx.
+- Ver. 4 associated with CSIT rls2005 git branch as of 2020-06-24.
+
+To identify performance changes due to VPP code changes from v20.01.0
+to v20.05.0, both have been tested in CSIT environment ver. 4 and
+compared against each other. All substantial progressions have been
+marked up with RCA analysis. See Current vs Previous Release and Known Issues.
+
+CSIT environment ver. 4 has been evaluated against ver. 2 by
+benchmarking VPP v20.01.0 in both environment versions.
+
 Physical Testbeds
 -----------------
 
@@ -15,8 +56,8 @@ topology types are used:
   server as TG both connected in ring topology.
 
 Tested SUT servers are based on a range of processors including Intel
-Xeon Haswell-SP, Intel Xeon Skylake-SP, Intel Xeon Cascadelake-SP, Arm, Intel
-Atom. More detailed description is provided in
+Xeon Haswell-SP, Intel Xeon Skylake-SP, Intel Xeon Cascade Lake-SP, Arm,
+Intel Atom. A more detailed description is provided in
 :ref:`tested_physical_topologies`. Tested logical topologies are
 described in :ref:`tested_logical_topologies`.
 
@@ -25,33 +66,6 @@ Server Specifications
 
 Complete technical specifications of compute servers used in CSIT
 physical testbeds are maintained in FD.io CSIT repository:
-`FD.io CSIT testbeds - Xeon Cascadelake`_,
+`FD.io CSIT testbeds - Xeon Cascade Lake`_,
 `FD.io CSIT testbeds - Xeon Skylake, Arm, Atom`_ and
 `FD.io CSIT Testbeds - Xeon Haswell`_.
-
-Pre-Test Server Calibration
----------------------------
-
-Number of SUT server sub-system runtime parameters have been identified
-as impacting data plane performance tests. Calibrating those parameters
-is part of FD.io CSIT pre-test activities, and includes measuring and
-reporting following:
-
-#. System level core jitter – measure duration of core interrupts by
-   Linux in clock cycles and how often interrupts happen. Using
-   `CPU core jitter tool <https://git.fd.io/pma_tools/tree/jitter>`_.
-
-#. Memory bandwidth – measure bandwidth with `Intel MLC tool
-   <https://software.intel.com/en-us/articles/intelr-memory-latency-checker>`_.
-
-#. Memory latency – measure memory latency with Intel MLC tool.
-
-#. Cache latency at all levels (L1, L2, and Last Level Cache) – measure
-   cache latency with Intel MLC tool.
-
-Measured values of listed parameters are especially important for
-repeatable zero packet loss throughput measurements across multiple
-system instances. Generally they come useful as a background data for
-comparing data plane performance results across disparate servers.
-
-Following sections include measured calibration data for testbeds.
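
A note on the Environment Versioning section introduced by this patch:
the components it lists (firmware/BIOS, OS version and configuration,
TRex, CSIT code) can be fingerprinted on a Linux host with standard
tools. The console sketch below is illustrative only and not part of
CSIT code; the interface name ``enp24s0f0`` is a hypothetical example::

    $ uname -r                             # host kernel version
    $ grep PRETTY_NAME /etc/os-release     # Linux distribution release
    $ sudo dmidecode -s bios-version       # motherboard BIOS revision
    $ sudo dmidecode -s processor-version  # CPU model string
    $ ethtool -i enp24s0f0                 # NIC driver and firmware versions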
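
The removed Pre-Test Server Calibration section references the FD.io
pma_tools CPU core jitter tool. A minimal usage sketch, assuming the
tool is built from the pma_tools repository; the pinned core number and
iteration count here are illustrative, not CSIT's actual settings::

    $ git clone https://git.fd.io/pma_tools
    $ cd pma_tools/jitter && make
    $ # Pin the measurement to one isolated core; the tool reports
    $ # per-iteration execution time deltas in core clock cycles
    $ sudo taskset -c 3 ./jitter -i 20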
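
The memory bandwidth, memory latency and cache latency items all rely
on Intel MLC. A sketch of typical invocations; these flags exist in
MLC's documented CLI, but behavior and defaults vary between versions,
so treat the exact arguments as assumptions::

    $ sudo ./mlc --bandwidth_matrix   # memory bandwidth between sockets
    $ sudo ./mlc --latency_matrix     # idle memory latency between sockets
    $ sudo ./mlc --idle_latency       # idle load-to-use memory latency
    $ # Constraining the buffer size keeps accesses within a cache level,
    $ # e.g. a 20 KiB buffer fits in a 32 KiB L1 (illustrative sizing)
    $ sudo ./mlc --idle_latency -b20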