FD.io CSIT performance tests are executed in physical testbeds hosted by
:abbr:`LF (Linux Foundation)` for the FD.io project.
Two physical testbed topology types are used:
- **3-Node Topology**: Consisting of two servers acting as SUTs
  (Systems Under Test) and one server as TG (Traffic Generator), all
  connected in a ring topology.
- **2-Node Topology**: Consisting of one server acting as SUT and one
  server as TG, both connected in a ring topology.
Tested SUT servers are based on a range of processors, including Intel
Xeon Haswell-SP, Intel Xeon Skylake-SP, Arm and Intel Atom. A more
detailed description is provided in
:ref:`tested_physical_topologies`.
Tested logical topologies are described in
:ref:`tested_logical_topologies`.
Complete technical specifications of compute servers used in CSIT
physical testbeds are maintained on FD.io wiki pages: `CSIT/Testbeds:
Xeon Hsw, VIRL
<https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Hsw,_VIRL.#FD.io_CSIT_testbeds_-_Xeon_Haswell.2C_VIRL>`_
and `CSIT Testbeds: Xeon Skx, Arm, Atom
<https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Skx,_Arm,_Atom.#Server_Specification>`_.
Pre-Test Server Calibration
---------------------------
A number of SUT server sub-system runtime parameters have been
identified as impacting data plane performance tests. Calibrating those
parameters is part of FD.io CSIT pre-test activities, and includes
measuring and reporting the following:
#. System level core jitter – measure the duration of core interrupts
   by Linux in clock cycles and how often interrupts happen, using the
   `CPU core jitter tool <https://git.fd.io/pma_tools/tree/jitter>`_.
#. Memory bandwidth – measure bandwidth with the `Intel MLC tool
   <https://software.intel.com/en-us/articles/intelr-memory-latency-checker>`_.

#. Memory latency – measure memory latency with the Intel MLC tool.

#. Cache latency at all levels (L1, L2, and Last Level Cache) – measure
   cache latency with the Intel MLC tool.
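The core-jitter idea in the first item can be illustrated with a minimal
Python sketch. The CSIT setup uses the linked pma_tools utility, which
reads the CPU time stamp counter directly in a tight loop; this sketch
only approximates that approach with a monotonic clock, and the function
name and sample count are illustrative, not part of CSIT:

```python
import time

def measure_core_jitter(samples: int = 200_000) -> dict:
    """Read a monotonic clock in a tight loop and record the gaps
    between consecutive readings. Unusually large gaps indicate the
    loop was interrupted (timer ticks, kernel housekeeping,
    preemption) rather than running undisturbed on the core."""
    gaps = []
    last = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        gaps.append(now - last)
        last = now
    return {
        "min_ns": min(gaps),              # best-case loop iteration
        "max_ns": max(gaps),              # worst-case (interrupted) iteration
        "jitter_ns": max(gaps) - min(gaps),
    }

print(measure_core_jitter())
```

On an isolated, calibrated core the gap distribution is narrow; a large
``jitter_ns`` suggests the core is still servicing interrupts or being
preempted, which is exactly what the pre-test calibration aims to detect.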
Measured values of the listed parameters are especially important for
repeatable zero packet loss throughput measurements across multiple
system instances. More generally, they are useful as background data for
comparing data plane performance results across disparate servers.
The following sections include measured calibration data for Intel Xeon
Haswell and Intel Xeon Skylake testbeds.