CSIT-1197: Add Comparison Across Testbeds to the Report
[csit.git] / docs / report / dpdk_performance_tests / overview.rst
index 6af7fe9..b38f959 100644 (file)
@@ -8,23 +8,23 @@ CSIT DPDK performance tests are executed on physical baremetal servers hosted
 by :abbr:`LF (Linux Foundation)` FD.io project. Testbed physical topology is
shown in the figure below::
 
-    +------------------------+           +------------------------+
-    |                        |           |                        |
-    |  +------------------+  |           |  +------------------+  |
-    |  |                  |  |           |  |                  |  |
-    |  |                  <----------------->                  |  |
-    |  |       DUT1       |  |           |  |       DUT2       |  |
-    |  +--^---------------+  |           |  +---------------^--+  |
-    |     |                  |           |                  |     |
-    |     |            SUT1  |           |  SUT2            |     |
-    +------------------------+           +------------------^-----+
-          |                                                 |
-          |                                                 |
-          |                  +-----------+                  |
-          |                  |           |                  |
-          +------------------>    TG     <------------------+
-                             |           |
-                             +-----------+
+        +------------------------+           +------------------------+
+        |                        |           |                        |
+        |  +------------------+  |           |  +------------------+  |
+        |  |                  |  |           |  |                  |  |
+        |  |                  <----------------->                  |  |
+        |  |       DUT1       |  |           |  |       DUT2       |  |
+        |  +--^---------------+  |           |  +---------------^--+  |
+        |     |                  |           |                  |     |
+        |     |            SUT1  |           |  SUT2            |     |
+        +------------------------+           +------------------^-----+
+              |                                                 |
+              |                                                 |
+              |                  +-----------+                  |
+              |                  |           |                  |
+              +------------------>    TG     <------------------+
+                                 |           |
+                                 +-----------+
 
 SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two
Intel XEON CPUs), and TG is a Traffic Generator (another Cisco UCS C240, with
@@ -87,7 +87,7 @@ Performance tests are split into two main categories:
   previously discovered throughput rate. These tests are currently done against
   0.9 of reference NDR, with reference rates updated periodically.
 
-CSIT |release| includes following performance test suites, listed per NIC type:
+|csit-release| includes the following performance test suites, listed per NIC type:
 
- 2port10GE X520-DA2 Intel

@@ -115,17 +115,16 @@ continuously.
 Performance Tests Naming
 ------------------------
 
-CSIT |release| follows a common structured naming convention for all performance
-and system functional tests, introduced in CSIT |release-1|.
+|csit-release| follows a common structured naming convention for all performance
+and system functional tests, introduced in CSIT-17.01.
 
The naming should be intuitive for the majority of tests. Complete description
-of CSIT test naming convention is provided on `CSIT test naming wiki
-<https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
+of the CSIT test naming convention is provided in :ref:`csit_test_naming`.
 
 Methodology: Multi-Core and Multi-Threading
-------------------------------------------

-**Intel Hyper-Threading** - CSIT |release| performance tests are executed with
+**Intel Hyper-Threading** - |csit-release| performance tests are executed with
 SUT servers' Intel XEON processors configured in Intel Hyper-Threading Disabled
 mode (BIOS setting). This is the simplest configuration used to establish
 baseline single-thread single-core application packet processing and forwarding
@@ -133,7 +132,7 @@ performance. Subsequent releases of CSIT will add performance tests with Intel
 Hyper-Threading Enabled (requires BIOS settings change and hard reboot of
 server).
 
-**Multi-core Tests** - CSIT |release| multi-core tests are executed in the
+**Multi-core Tests** - |csit-release| multi-core tests are executed in the
following DPDK thread and core configurations:
 
 #. 1t1c - 1 pmd worker thread on 1 CPU physical core.
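
A minimal sketch of how such a thread/core configuration could map onto a DPDK
application command line is shown below; the use of testpmd, the core numbering,
memory channel count and forwarding options are illustrative assumptions, not
the exact parameters used by CSIT tests::

  # Illustrative only: derive a testpmd command line from an "nTnC" label,
  # i.e. n pmd worker threads pinned to n physical cores (Hyper-Threading
  # disabled, so one thread per core). All option values are assumed.
  def testpmd_cmdline(label, main_core=0):
      workers = int(label.split("t")[0])               # e.g. "1t1c" -> 1
      cores = ",".join(str(c) for c in range(main_core, main_core + workers + 1))
      return ("testpmd -l {} -n 4 -- "
              "--nb-cores={} --rxq=1 --txq=1 --forward-mode=io"
              .format(cores, workers))

  print(testpmd_cmdline("1t1c"))
  # -> testpmd -l 0,1 -n 4 -- --nb-cores=1 --rxq=1 --txq=1 --forward-mode=io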
@@ -215,7 +214,7 @@ TRex is installed and run on the TG compute node. The typical procedure is:
 - TRex is started in the background mode
   ::
 
-  $ sh -c 'cd /opt/trex-core-2.25/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null
+  $ sh -c 'cd <t-rex-install-dir>/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /tmp/trex.log 2>&1 &' > /dev/null
 
 - There are traffic streams dynamically prepared for each test, based on traffic
  profiles. The traffic is sent and the statistics obtained using
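
A minimal sketch of driving TRex in stateless mode from Python is shown below,
assuming the trex_stl_lib API bundled with TRex; the server address, port
numbers, packet contents and transmit rate are placeholder values and do not
represent CSIT's actual traffic profiles::

  # Illustrative only: connect to an already running TRex server, send a
  # continuous UDP stream on port 0 and read per-port statistics.
  from trex_stl_lib.api import *   # TRex stateless API (includes scapy layers)

  client = STLClient(server="127.0.0.1")          # placeholder TG address
  client.connect()
  try:
      client.reset(ports=[0, 1])                  # acquire ports, clear state
      pkt = Ether() / IP(src="10.0.0.1", dst="20.0.0.1") / UDP(dport=1234)
      stream = STLStream(packet=STLPktBuilder(pkt=pkt),
                         mode=STLTXCont(pps=1000))  # placeholder rate
      client.add_streams(stream, ports=[0])
      client.start(ports=[0], duration=10)        # transmit for 10 seconds
      client.wait_on_traffic(ports=[0])
      stats = client.get_stats()
      print(stats[0]["opackets"], stats[1]["ipackets"])
  finally:
      client.disconnect()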