-.. _test_environment:
-
Test Environment
================
+Environment Versioning
+----------------------
+
+In order to determine any benchmark anomalies (progressions,
+regressions) between releases of a specific data-plane DUT application
+(e.g. VPP, DPDK), the DUT needs to be tested in the same test
+environment, so that test environment changes do not cloud the results.
+
+In order to enable test system evolution, a mirror scheme is needed to
+determine benchmarking anomalies between releases of a specific test
+system such as CSIT. This is achieved by testing the same DUT
+application version across releases of the CSIT test system.
+
+The CSIT test environment versioning scheme ensures the integrity of all
+test system components, including their HW revisions, compiled SW code
+versions and SW source code, within a specific CSIT version. Components
+covered by CSIT environment versioning include the following (an
+illustrative capture sketch follows the list):
+
+- Server host hardware firmware and BIOS (motherboard, processor,
+  NIC(s), accelerator card(s)).
+- Server host Linux operating system versions.
+- Server host Linux configuration.
+- TRex Traffic Generator version, drivers and configuration.
+- CSIT framework code.
+
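+The exact procedure for recording these component versions is part of
+the CSIT framework itself. The short Python sketch below is illustrative
+only: a minimal example of how such a fingerprint could be collected on
+a Linux host. It is not CSIT code; the NIC name ``eth0`` and the DMI
+sysfs paths are assumptions.
+
+.. code-block:: python
+
+    # Illustrative sketch only -- not part of the CSIT framework.
+    # Collects a small fingerprint of the components listed above
+    # (host OS version, BIOS/board revision, NIC driver and firmware)
+    # so two test runs can be compared for environment drift.
+    import json
+    import platform
+    import subprocess
+    from pathlib import Path
+
+    def read_dmi(name):
+        """Return a DMI attribute from sysfs, or None if unavailable."""
+        try:
+            return Path("/sys/class/dmi/id", name).read_text().strip()
+        except OSError:
+            return None
+
+    def capture_environment(nic="eth0"):
+        """Collect host SW/FW versions; ``nic`` is an assumed NIC name."""
+        return {
+            "kernel": platform.release(),
+            "bios_version": read_dmi("bios_version"),
+            "board_name": read_dmi("board_name"),
+            # ``ethtool -i`` reports NIC driver and firmware versions.
+            "nic_driver_info": subprocess.run(
+                ["ethtool", "-i", nic], capture_output=True, text=True
+            ).stdout,
+        }
+
+    if __name__ == "__main__":
+        print(json.dumps(capture_environment(), indent=2))
+
+Comparing two such fingerprints taken before different test runs is one
+simple way to confirm that only the DUT application, and not the test
+environment, changed between the runs.
+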
+Following is the list of CSIT test environment versions to date:
+
+- Ver. 1 associated with CSIT rls1908 git branch as of 2019-08-21.
+- Ver. 2 associated with CSIT rls2001 git branch as of 2020-03-27.
+- Ver. 3 interim associated with master branch as of 2020-xx-xx.
+- Ver. 4 associated with CSIT rls2005 git branch as of 2020-06-24.
+
+To identify performance changes due to VPP code changes from v20.01.0 to
+v20.05.0, both releases have been tested in CSIT environment ver. 4 and
+compared against each other. All substantial progressions have been
+marked up with RCA analysis. See Current vs Previous Release and Known
+Issues.
+
+CSIT environment ver. 4 has been evaluated against ver. 2 by
+benchmarking VPP v20.01.0 in both environment versions.
+
Physical Testbeds
-----------------

FD.io CSIT performance tests are executed in physical testbeds hosted by
-:abbr:`LF (Linux Foundation)` for FD.io project.
-
-Two physical testbed topology types are used:
+:abbr:`LF (Linux Foundation)` for FD.io project. Two physical testbed
+topology types are used:

- **3-Node Topology**: Consisting of two servers acting as SUTs
  (Systems Under Test) and one server as TG (Traffic Generator), all
  connected in ring topology.
- **2-Node Topology**: Consisting of one server acting as SUT and one
  server as TG, both connected in ring topology.

Tested SUT servers are based on a range of processors including Intel
-Xeon Haswell-SP, Intel Xeon Skylake-SP, Arm, Intel Atom. More detailed
-description is provided in
-:ref:`tested_physical_topologies`.
-
-Tested logical topologies are described in
-:ref:`tested_logical_topologies`.
+Xeon Haswell-SP, Intel Xeon Skylake-SP, Intel Xeon Cascade Lake-SP, Arm,
+Intel Atom. A more detailed description is provided in
+:ref:`tested_physical_topologies`. Tested logical topologies are
+described in :ref:`tested_logical_topologies`.

Server Specifications
---------------------

Complete technical specifications of compute servers used in CSIT
-physical testbeds are maintained on FD.io wiki pages: `CSIT/Testbeds:
-Xeon Hsw, VIRL
-<https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Hsw,_VIRL.#FD.io_CSIT_testbeds_-_Xeon_Haswell.2C_VIRL>`_
-and `CSIT Testbeds: Xeon Skx, Arm, Atom
-<https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Skx,_Arm,_Atom.#Server_Specification>`_.
-
-Pre-Test Server Calibration
----------------------------
-
-Number of SUT server sub-system runtime parameters have been identified
-as impacting data plane performance tests. Calibrating those parameters
-is part of FD.io CSIT pre-test activities, and includes measuring and
-reporting following:
-
-#. System level core jitter – measure duration of core interrupts by
- Linux in clock cycles and how often interrupts happen. Using
- `CPU core jitter tool <https://git.fd.io/pma_tools/tree/jitter>`_.
-
-#. Memory bandwidth – measure bandwidth with `Intel MLC tool
- <https://software.intel.com/en-us/articles/intelr-memory-latency-checker>`_.
-
-#. Memory latency – measure memory latency with Intel MLC tool.
-
-#. Cache latency at all levels (L1, L2, and Last Level Cache) – measure
- cache latency with Intel MLC tool.
-
-Measured values of listed parameters are especially important for
-repeatable zero packet loss throughput measurements across multiple
-system instances. Generally they come useful as a background data for
-comparing data plane performance results across disparate servers.
-
-Following sections include measured calibration data for Intel Xeon
-Haswell and Intel Xeon Skylake testbeds.
+physical testbeds are maintained in the FD.io CSIT repository:
+`FD.io CSIT testbeds - Xeon Cascade Lake`_,
+`FD.io CSIT testbeds - Xeon Skylake, Arm, Atom`_ and
+`FD.io CSIT Testbeds - Xeon Haswell`_.