From 23fa2a8925d65759bb14177b997b22f8a418e9ef Mon Sep 17 00:00:00 2001
From: Viliam Luc
Date: Fri, 22 Oct 2021 12:17:38 +0200
Subject: [PATCH] docs: TRex static documentation

Signed-off-by: Viliam Luc
Change-Id: I41233f7044574505f32f7425fd897bb3176ac839
---
 .../introduction/test_environment_changes_tg.rst   | 14 ++++
 .../introduction/test_environment_changes_vpp.rst  | 32 +++++++++
 .../report/introduction/test_environment_intro.rst | 35 +---------
 .../trex_performance_tests/csit_release_notes.rst  | 18 ++++-
 .../logical-TRex-nic2nic.svg                       |  1 +
 docs/report/trex_performance_tests/overview.rst    | 80 ++++++++++++++++++++++
 .../trex_performance_tests/test_environment.rst    | 13 +++-
 .../trex_performance_tests/throughput_trending.rst | 10 +++
 .../vpp_performance_tests/test_environment.rst     |  2 +
 9 files changed, 169 insertions(+), 36 deletions(-)
 create mode 100644 docs/report/introduction/test_environment_changes_tg.rst
 create mode 100644 docs/report/introduction/test_environment_changes_vpp.rst
 create mode 100755 docs/report/trex_performance_tests/logical-TRex-nic2nic.svg

diff --git a/docs/report/introduction/test_environment_changes_tg.rst b/docs/report/introduction/test_environment_changes_tg.rst
new file mode 100644
index 0000000000..21175f2ba5
--- /dev/null
+++ b/docs/report/introduction/test_environment_changes_tg.rst
@@ -0,0 +1,14 @@
+To identify performance changes due to TRex code development between the
+previous and current TRex versions, both have been tested in the latest CSIT
+test environment version and compared against each other. All substantial
+progressions and regressions have been marked up with RCA. See :ref:`trex_known_issues`.
+
+Physical Testbeds
+-----------------
+
+FD.io CSIT performance tests are executed in physical testbeds hosted by
+:abbr:`LF (Linux Foundation)` for the FD.io project. The physical testbed
+topology used is:
+
+- **1-Node Topology**: Consisting of one server acting as TG, with one NIC
+  whose two ports are connected together to form a loopback connection.
diff --git a/docs/report/introduction/test_environment_changes_vpp.rst b/docs/report/introduction/test_environment_changes_vpp.rst
new file mode 100644
index 0000000000..83a9eedffb
--- /dev/null
+++ b/docs/report/introduction/test_environment_changes_vpp.rst
@@ -0,0 +1,32 @@
+To identify performance changes due to VPP code development between the
+previous and current VPP release versions, both have been tested in the latest
+CSIT test environment version and compared against each other. All substantial
+progressions and regressions have been marked up with RCA. See
+:ref:`vpp_throughput_comparisons` and :ref:`vpp_known_issues`.
+
+Physical Testbeds
+-----------------
+
+FD.io CSIT performance tests are executed in physical testbeds hosted by
+:abbr:`LF (Linux Foundation)` for the FD.io project. Two physical testbed
+topology types are used:
+
+- **3-Node Topology**: Consisting of two servers acting as SUTs
+  (Systems Under Test) and one server as TG (Traffic Generator), all
+  connected in a ring topology.
+- **2-Node Topology**: Consisting of one server acting as SUT and one
+  server as TG, both connected in a ring topology.
+
+Tested SUT servers are based on a range of processors including
+Intel Xeon Skylake-SP, Intel Xeon Cascade Lake-SP, Arm and
+Intel Atom. A more detailed description is provided in
+:ref:`tested_physical_topologies`. Tested logical topologies are
+described in :ref:`tested_logical_topologies`.
+
+Server Specifications
+---------------------
+
+Complete technical specifications of compute servers used in CSIT
+physical testbeds are maintained in FD.io CSIT repository:
+`FD.io CSIT testbeds - Xeon Cascade Lake`_,
+`FD.io CSIT testbeds - Xeon Skylake, Arm, Atom`_.
diff --git a/docs/report/introduction/test_environment_intro.rst b/docs/report/introduction/test_environment_intro.rst
index ddb14ce9a8..1999906669 100644
--- a/docs/report/introduction/test_environment_intro.rst
+++ b/docs/report/introduction/test_environment_intro.rst
@@ -10,7 +10,7 @@
 CSIT test environment versioning has been introduced to track
 modifications of the test environment.
 
 Any benchmark anomalies (progressions, regressions) between releases of
-a DUT application (e.g. VPP, DPDK), are determined by testing it in the
+a DUT application (e.g. VPP, DPDK, TRex), are determined by testing it in the
 same test environment, to avoid test environment changes clouding the
 picture.
@@ -99,36 +99,3 @@ Following is the list of CSIT versions to date:
   - Intel NIC 700/800 series firmware upgrade based on DPDK
     compatibility matrix: `depends on testbed type
     `_.
-
-To identify performance changes due to VPP code development between previous
-and current VPP release version, both have been tested in CSIT environment of
-latest version and compared against each other. All substantial progressions and
-regressions have been marked up with RCA analysis. See
-:ref:`vpp_throughput_comparisons` and :ref:`vpp_known_issues`.
-
-Physical Testbeds
------------------
-
-FD.io CSIT performance tests are executed in physical testbeds hosted by
-:abbr:`LF (Linux Foundation)` for FD.io project. Two physical testbed
-topology types are used:
-
-- **3-Node Topology**: Consisting of two servers acting as SUTs
-  (Systems Under Test) and one server as TG (Traffic Generator), all
-  connected in ring topology.
-- **2-Node Topology**: Consisting of one server acting as SUTs and one
-  server as TG both connected in ring topology.
-
-Tested SUT servers are based on a range of processors including Intel
-Intel Xeon Skylake-SP, Intel Xeon Cascade Lake-SP, Arm,
-Intel Atom. More detailed description is provided in
-:ref:`tested_physical_topologies`. Tested logical topologies are
-described in :ref:`tested_logical_topologies`.
-
-Server Specifications
----------------------
-
-Complete technical specifications of compute servers used in CSIT
-physical testbeds are maintained in FD.io CSIT repository:
-`FD.io CSIT testbeds - Xeon Cascade Lake`_,
-`FD.io CSIT testbeds - Xeon Skylake, Arm, Atom`_.
diff --git a/docs/report/trex_performance_tests/csit_release_notes.rst b/docs/report/trex_performance_tests/csit_release_notes.rst
index ec0175a81b..ac961cd759 100644
--- a/docs/report/trex_performance_tests/csit_release_notes.rst
+++ b/docs/report/trex_performance_tests/csit_release_notes.rst
@@ -4,9 +4,25 @@ Release Notes
 Changes in |csit-release|
 -------------------------
 
+#. TRex PERFORMANCE TESTS
+
+   - **Intel Skylake**: Added initial tests measuring latency between two
+     loopback-connected ports of one NIC on the TRex TG. Added tests:
+
+     - IP4Base
+     - IP4scale2m
+     - IP6Base
+     - IP6scale2m
+     - L2bscale1mmaclrn
+
 #. TEST FRAMEWORK
 
-#. TRex RELEASE VERSION CHANGE
+   - **CSIT test environment**: added support for running TRex NIC-to-NIC
+     loopback tests.
+
+#. TRex RELEASE VERSION
+
+   - **TRex version used: 2.88**
+
 .. _trex_known_issues:
diff --git a/docs/report/trex_performance_tests/logical-TRex-nic2nic.svg b/docs/report/trex_performance_tests/logical-TRex-nic2nic.svg
new file mode 100755
index 0000000000..f5ed028eff
--- /dev/null
+++ b/docs/report/trex_performance_tests/logical-TRex-nic2nic.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/report/trex_performance_tests/overview.rst b/docs/report/trex_performance_tests/overview.rst
index 802d341477..9740179df7 100644
--- a/docs/report/trex_performance_tests/overview.rst
+++ b/docs/report/trex_performance_tests/overview.rst
@@ -1,2 +1,82 @@
 Overview
 ========
+
+TRex performance test results are reported for a range of processors.
+For a description of the physical testbeds used for TRex performance tests,
+please refer to :ref:`tested_physical_topologies`.
+
+Logical Topology
+----------------
+
+CSIT TRex performance tests are executed on physical testbeds described
+in :ref:`tested_physical_topologies`. The logical topology uses one NIC
+with its two ports connected together in a loopback. See the figure below.
+
+.. only:: latex
+
+    .. raw:: latex
+
+        \begin{figure}[H]
+            \centering
+                \graphicspath{{../_tmp/src/trex_performance_tests/}}
+                \includegraphics[width=0.90\textwidth]{logical-TRex-nic2nic}
+                \label{fig:logical-TRex-nic2nic}
+        \end{figure}
+
+.. only:: html
+
+    .. figure:: logical-TRex-nic2nic.svg
+        :alt: logical-TRex-nic2nic
+        :align: center
+
+
+Performance Tests Coverage
+--------------------------
+
+Performance tests measure the following metrics for tested TRex
+topologies and configurations:
+
+- Packet Throughput: measured in accordance with :rfc:`2544`, using
+  FD.io CSIT Multiple Loss Ratio search (MLRsearch), an optimized binary
+  search algorithm, producing throughput at different Packet Loss Ratio
+  (PLR) values:
+
+  - Non Drop Rate (NDR): packet throughput at PLR=0%.
+  - Partial Drop Rate (PDR): packet throughput at PLR=0.5%.
+
+- Two-way Packet Latency: measured both east-west and west-east at different
+  offered packet loads:
+
+  - 90% of discovered PDR throughput.
+  - 50% of discovered PDR throughput.
+  - 10% of discovered PDR throughput.
+  - Minimal offered load.
+
+|csit-release| includes the following TRex data plane functionality,
+performance tested across a range of NIC drivers and NIC models:
+
++-----------------------+----------------------------------------------+
+| Functionality         | Description                                  |
++=======================+==============================================+
+| IPv4 Base             | IPv4 routing.                                |
++-----------------------+----------------------------------------------+
+| IPv4 Scale            | IPv4 routing with 2M entries.                |
++-----------------------+----------------------------------------------+
+| IPv6 Base             | IPv6 routing.                                |
++-----------------------+----------------------------------------------+
+| IPv6 Scale            | IPv6 routing with 2M entries.                |
++-----------------------+----------------------------------------------+
+| L2BD Scale            | L2 Bridge-Domain switching of untagged       |
+|                       | Ethernet frames, 1M MAC entries.             |
++-----------------------+----------------------------------------------+
+
+
+Performance Tests Naming
+------------------------
+
+FD.io |csit-release| follows a common structured naming convention for
+all performance and system functional tests, introduced in CSIT-17.01.
+
+The naming should be intuitive for the majority of the tests. Complete
+description of the FD.io CSIT test naming convention is provided in
+:ref:`csit_test_naming`.
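For readers unfamiliar with how TRex drives traffic over the loopback-connected
NIC ports described in the Logical Topology section of ``overview.rst`` above,
the following is a minimal sketch of a stateless TRex client session. It is
illustrative only and is not the CSIT test code: it assumes a TRex server is
already running on the TG and that the TRex stateless Python API is importable
(the module path differs between TRex versions); port numbers, packet contents
and rates are placeholder values.

.. code-block:: python

    # Illustrative TRex stateless client session: send a continuous stream out
    # of port 0 and count what arrives on the loopback-connected port 1.
    # Assumes a TRex server is already running; the import path may differ
    # between TRex versions (e.g. trex_stl_lib.api in older releases).
    from trex.stl.api import STLClient, STLStream, STLPktBuilder, STLTXCont
    from scapy.all import Ether, IP, UDP

    client = STLClient(server="127.0.0.1")
    client.connect()
    try:
        client.reset(ports=[0, 1])  # acquire both loopback-connected ports

        base_pkt = Ether() / IP(src="16.0.0.1", dst="48.0.0.1") / UDP(sport=1025, dport=12)
        stream = STLStream(packet=STLPktBuilder(pkt=base_pkt / (60 * "x")),
                           mode=STLTXCont(pps=100_000))

        client.add_streams(stream, ports=[0])
        client.start(ports=[0], duration=10)
        client.wait_on_traffic(ports=[0])

        stats = client.get_stats()
        print("tx on port 0:", stats[0]["opackets"])
        print("rx on port 1:", stats[1]["ipackets"])
    finally:
        client.disconnect()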
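The Performance Tests Coverage section above describes MLRsearch only briefly.
As an illustration of the underlying idea, the sketch below bisects the offered
load until it finds the highest rate whose measured loss ratio stays within a
target (0 for NDR, 0.005 for PDR). It is not the actual FD.io MLRsearch
implementation, which optimizes trial durations and searches several loss-ratio
targets together; the ``measure()`` callback, rates and precision used here are
hypothetical placeholders.

.. code-block:: python

    # Simplified single-target throughput search, for illustration only.
    def search_throughput(measure, max_rate_pps, target_loss_ratio, precision_pps=10_000):
        """Return the highest offered load (pps) whose loss ratio meets the target.

        measure(rate_pps) must return the measured packet loss ratio at that load.
        """
        lower, upper = 0.0, float(max_rate_pps)
        while upper - lower > precision_pps:
            rate = (lower + upper) / 2
            if measure(rate) <= target_loss_ratio:
                lower = rate   # loss acceptable, try a higher offered load
            else:
                upper = rate   # too much loss, back off
        return lower           # conservative NDR/PDR estimate


    if __name__ == "__main__":
        # Toy traffic model: pretend the device starts dropping above 8 Mpps.
        def fake_measure(rate_pps, capacity_pps=8_000_000):
            return max(0.0, (rate_pps - capacity_pps) / rate_pps) if rate_pps else 0.0

        ndr = search_throughput(fake_measure, 10_000_000, target_loss_ratio=0.0)
        pdr = search_throughput(fake_measure, 10_000_000, target_loss_ratio=0.005)
        print(f"NDR ~ {ndr:.0f} pps, PDR ~ {pdr:.0f} pps")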
diff --git a/docs/report/trex_performance_tests/test_environment.rst b/docs/report/trex_performance_tests/test_environment.rst
index cc422ff713..06b6d733a5 100644
--- a/docs/report/trex_performance_tests/test_environment.rst
+++ b/docs/report/trex_performance_tests/test_environment.rst
@@ -1,2 +1,13 @@
-DUT Settings - TRex
+.. raw:: latex
+
+    \clearpage
+
+.. include:: ../introduction/test_environment_intro.rst
+
+.. include:: ../introduction/test_environment_changes_tg.rst
+
+
+SUT Settings - TRex
 -------------------
+
+.. include:: ../introduction/test_environment_tg.rst
diff --git a/docs/report/trex_performance_tests/throughput_trending.rst b/docs/report/trex_performance_tests/throughput_trending.rst
index 3d56443f92..cb2751972f 100644
--- a/docs/report/trex_performance_tests/throughput_trending.rst
+++ b/docs/report/trex_performance_tests/throughput_trending.rst
@@ -1,2 +1,12 @@
 Throughput Trending
 -------------------
+
+In addition to reporting throughput comparisons between TRex releases,
+CSIT provides continuous performance trending for the master branch:
+
+#. `TRex Trending Graphs `_:
+   per TRex test case throughput trend, trend compliance and summary of
+   detected anomalies.
+
+#. `TRex Latency Graphs `_:
+   per TRex build NDRPDR latency measurements against the trendline.
diff --git a/docs/report/vpp_performance_tests/test_environment.rst b/docs/report/vpp_performance_tests/test_environment.rst
index 64863e3d7b..3c3952f4e5 100644
--- a/docs/report/vpp_performance_tests/test_environment.rst
+++ b/docs/report/vpp_performance_tests/test_environment.rst
@@ -7,6 +7,8 @@
 
 .. include:: ../introduction/test_environment_intro.rst
 
+.. include:: ../introduction/test_environment_changes_vpp.rst
+
 .. include:: ../introduction/test_environment_sut_conf_1.rst
-- 
2.16.6