X-Git-Url: https://gerrit.fd.io/r/gitweb?p=csit.git;a=blobdiff_plain;f=docs%2Freport%2Fdpdk_performance_tests%2Foverview.rst;h=499f0068e479ff3392d273036ff686695ff6e9c9;hp=e326de0b227ea8c00397895ce75542fb4fbba703;hb=c2ce046f71f7de1f247be68513e3d4e10c8c2a04;hpb=785519e26196b9e0a5016d0fc54ed099fd0a920f

diff --git a/docs/report/dpdk_performance_tests/overview.rst b/docs/report/dpdk_performance_tests/overview.rst
index e326de0b22..499f0068e4 100644
--- a/docs/report/dpdk_performance_tests/overview.rst
+++ b/docs/report/dpdk_performance_tests/overview.rst
@@ -1,159 +1,127 @@
 Overview
 ========

-Tested Physical Topologies
---------------------------
+For description of physical testbeds used for DPDK performance tests
+please refer to :ref:`tested_physical_topologies`.

-CSIT DPDK performance tests are executed on physical baremetal servers hosted
-by LF FD.io project. Testbed physical topology is shown in the figure below.
-
-::
-
-    +------------------------+           +------------------------+
-    |                        |           |                        |
-    |  +------------------+  |           |  +------------------+  |
-    |  |                  |  |           |  |                  |  |
-    |  |                  <----------------->                  |  |
-    |  |       DUT1       |  |           |  |       DUT2       |  |
-    |  +--^---------------+  |           |  +---------------^--+  |
-    |     |                  |           |                  |     |
-    |     |            SUT1  |           |  SUT2            |     |
-    +------------------------+           +------------------^-----+
-          |                                                  |
-          |                                                  |
-          |                  +-----------+                   |
-          |                  |           |                   |
-          +------------------>    TG     <-------------------+
-                             |           |
-                             +-----------+
-
-SUT1 and SUT2 are two System Under Test servers (currently Cisco UCS C240,
-each with two Intel XEON CPUs), TG is a Traffic Generator (TG, currently
-another Cisco UCS C240, with two Intel XEON CPUs). SUTs run Testpmd/L3FWD SW
-application in Linux user-mode as a Device Under Test (DUT). TG runs TRex SW
-application as a packet Traffic Generator. Physical connectivity between SUTs
-and to TG is provided using direct links (no L2 switches) connecting different
-NIC models that need to be tested for performance. Currently installed and
-tested NIC models include:
-
-#. 2port10GE X520-DA2 Intel.
-#. 2port10GE X710 Intel.
-#. 2port10GE VIC1227 Cisco.
-#. 2port40GE VIC1385 Cisco.
-#. 2port40GE XL710 Intel.
-
-For detailed LF FD.io test bed specification and physical topology please refer
-to `LF FDio CSIT testbed wiki page `_.
+Logical Topologies
+------------------

-Performance Tests Coverage
---------------------------
+CSIT DPDK performance tests are executed on physical testbeds described
+in :ref:`tested_physical_topologies`. Based on the packet path through
+server SUTs, one distinct logical topology type is used for DPDK DUT
+data plane testing:
+
+#. NIC-to-NIC switching topologies.

-Performance tests are split into the two main categories:
+NIC-to-NIC Switching
+~~~~~~~~~~~~~~~~~~~~

-- Throughput discovery - discovery of packet forwarding rate using binary search
-  in accordance with RFC2544.
+The simplest logical topology for a software data plane application like
+DPDK is NIC-to-NIC switching. Tested topologies for 2-Node and 3-Node
+testbeds are shown in the figures below.

-  - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;
-    followed by packet one-way latency measurements at 10%, 50% and 100% of
-    discovered NDR throughput.
-  - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss
-    currently set to 0.5%; followed by packet one-way latency measurements at
-    100% of discovered PDR throughput.
+.. only:: latex

-- Throughput verification - verification of packet forwarding rate against
-  previously discovered NDR throughput.
-  These tests are currently done against 0.9 of reference NDR, with
-  reference rates updated periodically.

-CSIT |release| includes following performance test suites, listed per NIC type:
+    .. raw:: latex
+
+        \begin{figure}[H]
+            \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-2n-nic2nic}
+            \label{fig:logical-2n-nic2nic}
+        \end{figure}

-- 2port10GE X520-DA2 Intel
+.. only:: html
+
+    .. figure:: ../vpp_performance_tests/logical-2n-nic2nic.svg
+        :alt: logical-2n-nic2nic
+        :align: center

-  - **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames between
-    two Interfaces.

-- 2port40GE XL710 Intel
+.. only:: latex
+
+    .. raw:: latex
+
+        \begin{figure}[H]
+            \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-3n-nic2nic}
+            \label{fig:logical-3n-nic2nic}
+        \end{figure}

-  - **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames between
-    two Interfaces.
+.. only:: html
+
+    .. figure:: ../vpp_performance_tests/logical-3n-nic2nic.svg
+        :alt: logical-3n-nic2nic
+        :align: center

-- 2port10GE X520-DA2 Intel
-
-  - **IPv4 Routed Forwarding** - L3 IP forwarding of Ethernet frames between
-    two Interfaces.

-Execution of performance tests takes time, especially the throughput discovery
-tests. Due to limited HW testbed resources available within FD.io labs hosted
-by Linux Foundation, the number of tests for NICs other than X520 (a.k.a.
-Niantic) has been limited to few baseline tests. Over time we expect the HW
-testbed resources to grow, and will be adding complete set of performance
-tests for all models of hardware to be executed regularly and(or)
-continuously.
+Server Systems Under Test (SUT) run the DPDK Testpmd/L3FWD application
+in Linux user-mode as a Device Under Test (DUT). The server Traffic
+Generator (TG) runs the T-Rex application. Physical connectivity between
+SUTs and TG is provided using different drivers and NIC models that need
+to be tested for performance (packet/bandwidth throughput and latency).

-Methodology: Multi-Thread and Multi-Core
-----------------------------------------
+From SUT and DUT perspectives, all performance tests involve forwarding
+packets between two physical Ethernet ports (10GE, 25GE, 40GE, 100GE).
+In most cases both physical ports on the SUT are located on the same
+NIC. The only exceptions are link bonding and 100GE tests. In the latter
+case only one port per NIC can be driven at linerate due to PCIe Gen3
+x16 slot bandwidth limitations. 100GE NICs are not supported in PCIe Gen3
+x8 slots.

-**HyperThreading** - CSIT |release| performance tests are executed with SUT
-servers' Intel XEON CPUs configured in HyperThreading Disabled mode (BIOS
-settings). This is the simplest configuration used to establish baseline
-single-thread single-core SW packet processing and forwarding performance.
-Subsequent releases of CSIT will add performance tests with Intel
-HyperThreading Enabled (requires BIOS settings change and hard reboot).

-**Multi-core Test** - CSIT |release| multi-core tests are executed in the
-following thread and core configurations:

-#. 1t1c - 1 pmd thread on 1 CPU physical core.
-#. 2t2c - 2 pmd threads on 2 CPU physical cores.

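+A back-of-the-envelope check of that PCIe constraint can be written in a
+few lines of Python. This is an illustration only: it uses nominal PCIe
+Gen3 figures (8 GT/s per lane, 128b/130b encoding), ignores TLP and other
+protocol overhead, and the helper name is made up for the example::
+
+    # Illustration only; nominal PCIe Gen3 parameters, protocol overhead ignored.
+    GEN3_LANE_GBPS = 8.0 * 128 / 130               # ~7.88 Gbps per lane, per direction
+
+    def gen3_slot_gbps(lanes):
+        """Usable PCIe Gen3 bandwidth per direction for a slot with `lanes` lanes."""
+        return lanes * GEN3_LANE_GBPS
+
+    for lanes, ports_at_linerate in ((16, 2), (16, 1), (8, 1)):
+        slot_gbps = gen3_slot_gbps(lanes)          # x16: ~126 Gbps, x8: ~63 Gbps
+        needed_gbps = ports_at_linerate * 100.0    # 100GE line rate, per direction
+        verdict = "fits" if slot_gbps >= needed_gbps else "does not fit"
+        print(f"PCIe Gen3 x{lanes}, {ports_at_linerate}x100GE: {needed_gbps:.0f} Gbps"
+              f" needed vs {slot_gbps:.0f} Gbps available -> {verdict}")
+
+With these nominal figures a Gen3 x16 slot offers roughly 126 Gbps per
+direction, enough for one 100GE port at line rate but not for two, and a
+Gen3 x8 slot roughly 63 Gbps, consistent with 100GE NICs not being
+supported in Gen3 x8 slots.
+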
+Note that reported DPDK DUT performance results are specific to the SUTs
+tested. SUTs with processors other than the ones used in the FD.io lab
+are likely to yield different results. A good rule of thumb that can be
+applied to estimate DPDK packet throughput for the NIC-to-NIC switching
+topology is to expect the forwarding performance to be proportional to
+processor core frequency for the same processor architecture, assuming
+the processor is the only limiting factor and all other SUT parameters
+are equivalent to the FD.io CSIT environment.
+
+Performance Tests Coverage
+--------------------------

-Note that in many tests running Testpmd/L3FWD reaches tested NIC I/O bandwidth
-or packets-per-second limit.
+Performance tests measure the following metrics for tested DPDK DUT
+topologies and configurations:

-Methodology: Packet Throughput
-------------------------------
+- Packet Throughput: measured in accordance with :rfc:`2544`, using
+  FD.io CSIT Multiple Loss Ratio search (MLRsearch), an optimized binary
+  search algorithm, producing throughput at different Packet Loss Ratio
+  (PLR) values:

-Following values are measured and reported for packet throughput tests:
+  - Non Drop Rate (NDR): packet throughput at PLR=0%.
+  - Partial Drop Rate (PDR): packet throughput at PLR=0.5%.

-- NDR binary search per RFC2544:
+- One-Way Packet Latency: measured at different offered packet loads:

-  - Packet rate: "RATE: pps
-    (2x )"
-  - Aggregate bandwidth: "BANDWIDTH: Gbps (untagged)"
+  - 100% of discovered NDR throughput.
+  - 100% of discovered PDR throughput.

-- PDR binary search per RFC2544:
+- Maximum Receive Rate (MRR): measured packet forwarding rate under the
+  maximum load offered by the traffic generator over a set trial
+  duration, regardless of packet loss. Maximum load for a specified
+  Ethernet frame size is set to the bi-directional link rate, as
+  illustrated in the sketch below this list.
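+
+The sketch below is an illustration only, not the FD.io CSIT MLRsearch
+implementation: the DUT capacity figure and the loss model are invented
+for the example, and a plain binary search stands in for the optimized
+MLRsearch algorithm. It shows how the maximum offered load follows from
+the bi-directional link rate and the Ethernet frame size, and how the
+search narrows down the highest load that still meets a given PLR
+target::
+
+    # Illustration only: toy loss model and plain binary search,
+    # not the CSIT MLRsearch algorithm.
+    ETH_OVERHEAD_BYTES = 20   # preamble (8B) + inter-frame gap (12B) on the wire
+
+    def max_load_pps(frame_size_bytes, link_rate_bps, directions=2):
+        """Bi-directional maximum load in packets per second."""
+        bits_per_frame = (frame_size_bytes + ETH_OVERHEAD_BYTES) * 8
+        return directions * link_rate_bps / bits_per_frame
+
+    def toy_loss_ratio(offered_pps, dut_capacity_pps=12.0e6):
+        """Stand-in for a real trial measurement: loss appears above capacity."""
+        return max(0.0, (offered_pps - dut_capacity_pps) / offered_pps)
+
+    def search_throughput(target_plr, upper_pps, precision_pps=10_000.0):
+        """Highest load whose measured loss ratio stays at or below target_plr."""
+        lower, upper = 0.0, upper_pps
+        while upper - lower > precision_pps:
+            mid = (lower + upper) / 2
+            if toy_loss_ratio(mid) <= target_plr:
+                lower = mid   # mid still meets the PLR target, move up
+            else:
+                upper = mid   # too much loss, move down
+        return lower
+
+    max_pps = max_load_pps(64, 10e9)         # 2x 10GE at 64B frames: ~29.76 Mpps
+    ndr = search_throughput(0.0, max_pps)    # NDR: PLR = 0%
+    pdr = search_throughput(0.005, max_pps)  # PDR: PLR = 0.5%
+    print(f"max load {max_pps / 1e6:.2f} Mpps, "
+          f"NDR {ndr / 1e6:.2f} Mpps, PDR {pdr / 1e6:.2f} Mpps")
+
+In real CSIT runs the toy loss model is replaced by trial measurements
+against the DUT, and MLRsearch searches for both NDR and PDR within one
+test run.
+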
-  - Packet rate: "RATE: pps (2x
-    )"
-  - Aggregate bandwidth: "BANDWIDTH: Gbps (untagged)"
-  - Packet loss tolerance: "LOSS_ACCEPTANCE ""
+|csit-release| includes the following performance test suites, listed per
+NIC type:

-- NDR and PDR are measured for the following L2 frame sizes:
+- **L2IntLoop** - L2 Interface Loop forwarding any Ethernet frames between
+  two Interfaces.

-  - IPv4: 64B, 1518B, 9000B.
+- **IPv4 Routed Forwarding** - L3 IP forwarding of Ethernet frames between
+  two Interfaces.

+Execution of performance tests takes time, especially the throughput
+tests. Due to limited HW testbed resources available within FD.io labs
+hosted by :abbr:`LF (Linux Foundation)`, the number of tests for some
+NIC models has been limited to a few baseline tests.

-Methodology: Packet Latency
----------------------------
+Performance Tests Naming
+------------------------

-TRex Traffic Generator (TG) is used for measuring latency of Testpmd DUTs.
-Reported latency values are measured using following methodology:
+FD.io |csit-release| follows a common structured naming convention for
+all performance and system functional tests, introduced in CSIT-17.01.

-- Latency tests are performed at 10%, 50% of discovered NDR rate (non drop rate)
-  for each NDR throughput test and packet size (except IMIX).
-- TG sends dedicated latency streams, one per direction, each at the rate of
-  10kpps at the prescribed packet size; these are sent in addition to the main
-  load streams.
-- TG reports min/avg/max latency values per stream direction, hence two sets
-  of latency values are reported per test case; future release of TRex is
-  expected to report latency percentiles.
-- Reported latency values are aggregate across two SUTs due to three node
-  topology used for all performance tests; for per SUT latency, reported value
-  should be divided by two.
-- 1usec is the measurement accuracy advertised by TRex TG for the setup used in
-  FD.io labs used by CSIT project.
-- TRex setup introduces an always-on error of about 2*2usec per latency flow -
-  additonal Tx/Rx interface latency induced by TRex SW writing and reading
-  packet timestamps on CPU cores without HW acceleration on NICs closer to the
-  interface line.
+The naming should be intuitive for the majority of the tests. Complete
+description of the FD.io CSIT test naming convention is provided in
+:ref:`csit_test_naming`.