Overview
========
-Tested Physical Topologies
---------------------------
+VPP performance test results are reported for all three physical testbed
+types present in FD.io labs: 3-Node Xeon Haswell (3n-hsw), 3-Node Xeon
+Skylake (3n-skx) and 2-Node Xeon Skylake (2n-skx), and for all
+installed NIC models.
+For a description of the physical testbeds used for VPP performance
+tests, please refer to :ref:`tested_physical_topologies`.
-CSIT VPP performance tests are executed on physical baremetal servers hosted by
-LF FD.io project. Testbed physical topology is shown in the figure below.
-
-::
-
- +------------------------+ +------------------------+
- | | | |
- | +------------------+ | | +------------------+ |
- | | | | | | | |
- | | <-----------------> | |
- | | DUT1 | | | | DUT2 | |
- | +--^---------------+ | | +---------------^--+ |
- | | | | | |
- | | SUT1 | | SUT2 | |
- +------------------------+ +------------------^-----+
- | |
- | |
- | +-----------+ |
- | | | |
- +------------------> TG <------------------+
- | |
- +-----------+
-
-SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two
-Intel XEON CPUs), TG is a Traffic Generator (TG, another Cisco UCS C240, with
-two Intel XEON CPUs). SUTs run VPP SW application in Linux user-mode as a
-Device Under Test (DUT). TG runs TRex SW application as a packet Traffic
-Generator. Physical connectivity between SUTs and to TG is provided using
-different NIC models that need to be tested for performance. Currently
-installed and tested NIC models include:
-
-#. 2port10GE X520-DA2 Intel.
-#. 2port10GE X710 Intel.
-#. 2port10GE VIC1227 Cisco.
-#. 2port40GE VIC1385 Cisco.
-#. 2port40GE XL710 Intel.
-
-From SUT and DUT perspective, all performance tests involve forwarding packets
-between two physical Ethernet ports (10GE or 40GE). Due to the number of
-listed NIC models tested and available PCI slot capacity in SUT servers, in
-all of the above cases both physical ports are located on the same NIC. In
-some test cases this results in measured packet throughput being limited not
-by VPP DUT but by either the physical interface or the NIC capacity.
-
-Going forward CSIT project will be looking to add more hardware into FD.io
-performance labs to address larger scale multi-interface and multi-NIC
-performance testing scenarios.
-
-For test cases that require DUT (VPP) to communicate with VM(s) over vhost-user
-interfaces, N of VM instances are created on SUT1 and SUT2. For N=1 DUT (VPP)
-forwards packets between vhostuser and physical interfaces. For N>1 DUT (VPP) a
-logical service chain forwarding topology is created on DUT (VPP) by applying L2
-or IPv4/IPv6 configuration depending on the test suite.
-DUT (VPP) test topology with N VM instances
-is shown in the figure below including applicable packet flow thru the DUTs and
-VMs (marked in the figure with ``***``).
-
-::
-
- +-------------------------+ +-------------------------+
- | +---------+ +---------+ | | +---------+ +---------+ |
- | | VM[1] | | VM[N] | | | | VM[1] | | VM[N] | |
- | | ***** | | ***** | | | | ***** | | ***** | |
- | +--^---^--+ +--^---^--+ | | +--^---^--+ +--^---^--+ |
- | *| |* *| |* | | *| |* *| |* |
- | +--v---v-------v---v--+ | | +--v---v-------v---v--+ |
- | | * * * * |*|***********|*| * * * * | |
- | | * ********* ***<-|-----------|->*** ********* * | |
- | | * DUT1 | | | | DUT2 * | |
- | +--^------------------+ | | +------------------^--+ |
- | *| | | |* |
- | *| SUT1 | | SUT2 |* |
- +-------------------------+ +-------------------------+
- *| |*
- *| |*
- *| +-----------+ |*
- *| | | |*
- *+--------------------> TG <--------------------+*
- **********************| |**********************
- +-----------+
-
-For VM tests, packets are switched by DUT (VPP) multiple times: twice for a
-single VM, three times for two VMs, N+1 times for N VMs.
-Hence the external
-throughput rates measured by TG and listed in this report must be multiplied
-by (N+1) to represent the actual DUT aggregate packet forwarding rate.
-
-Note that reported VPP performance results are specific to the SUTs tested.
-Current LF FD.io SUTs are based on Intel XEON E5-2699v3 2.3GHz CPUs. SUTs with
-other CPUs are likely to yield different results. A good rule of thumb, that
-can be applied to estimate VPP packet thoughput for Phy-to-Phy (NIC-to-NIC,
-PCI-to-PCI) topology, is to expect the forwarding performance to be
-proportional to CPU core frequency, assuming CPU is the only limiting factor
-and all other SUT parameters equivalent to FD.io CSIT environment. The same rule
-of thumb can be also applied for Phy-to-VM-to-Phy (NIC-to-VM-to-NIC) topology,
-but due to much higher dependency on intensive memory operations and
-sensitivity to Linux kernel scheduler settings and behaviour, this estimation
-may not always yield good enough accuracy.
-
-For detailed LF FD.io test bed specification and physical topology please refer
-to `LF FDio CSIT testbed wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
+.. _tested_logical_topologies:
-Performance Tests Coverage
---------------------------
+Logical Topologies
+------------------
-Performance tests are split into the two main categories:
-
-- Throughput discovery - discovery of packet forwarding rate using binary search
- in accordance to RFC2544.
-
- - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;
- followed by one-way packet latency measurements at 10%, 50% and 100% of
- discovered NDR throughput.
- - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss
- currently set to 0.5%; followed by one-way packet latency measurements at
- 100% of discovered PDR throughput.
-
-- Throughput verification - verification of packet forwarding rate against
- previously discovered throughput rate. These tests are currently done against
- 0.9 of reference NDR, with reference rates updated periodically.
-
-CSIT |release| includes following performance test suites, listed per NIC type:
-
-- 2port10GE X520-DA2 Intel
-
- - **L2XC** - L2 Cross-Connect switched-forwarding of untagged, dot1q, dot1ad
- VLAN tagged Ethernet frames.
- - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
- with MAC learning; disabled MAC learning i.e. static MAC tests to be added.
- - **IPv4** - IPv4 routed-forwarding.
- - **IPv6** - IPv6 routed-forwarding.
- - **IPv4 Scale** - IPv4 routed-forwarding with 20k, 200k and 2M FIB entries.
- - **IPv6 Scale** - IPv6 routed-forwarding with 20k, 200k and 2M FIB entries.
- - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
- of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
- Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
- - **COP** - IPv4 and IPv6 routed-forwarding with COP address security.
- - **iACL** - IPv4 and IPv6 routed-forwarding with iACL address security.
- - **LISP** - LISP overlay tunneling for IPv4-over-IPv4, IPv6-over-IPv4,
- IPv6-over-IPv6, IPv4-over-IPv6 in IPv4 and IPv6 routed-forwarding modes.
- - **VXLAN** - VXLAN overlay tunnelling integration with L2XC and L2BD.
- - **QoS Policer** - ingress packet rate measuring, marking and limiting
- (IPv4).
- - **CGNAT** - Carrier Grade Network Address Translation tests with varying
- number of users and ports per user.
-
-- 2port40GE XL710 Intel
-
- - **L2XC** - L2 Cross-Connect switched-forwarding of untagged Ethernet frames.
- - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
- with MAC learning.
- - **IPv4** - IPv4 routed-forwarding.
- - **IPv6** - IPv6 routed-forwarding.
- - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
- of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
- Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
- - **IPSec** - IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in combination
- with IPv4 routed-forwarding.
- - **IPSec+LISP** - IPSec encryption with CBC-SHA1 ciphers, in combination
- with LISP-GPE overlay tunneling for IPv4-over-IPv4.
-
-- 2port10GE X710 Intel
-
- - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
- with MAC learning.
- - **VMs with vhost-user** - virtual topologies with 1 VM using vhost-user
- interfaces, with VPP forwarding modes incl. L2 Bridge-Domain.
-
-- 2port10GE VIC1227 Cisco
-
- - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
- with MAC learning.
-
-- 2port40GE VIC1385 Cisco
-
- - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
- with MAC learning.
-
-Execution of performance tests takes time, especially the throughput discovery
-tests. Due to limited HW testbed resources available within FD.io labs hosted
-by Linux Foundation, the number of tests for NICs other than X520 (a.k.a.
-Niantic) has been limited to few baseline tests. Over time we expect the HW
-testbed resources to grow, and will be adding complete set of performance
-tests for all models of hardware to be executed regularly and(or)
-continuously.
+CSIT VPP performance tests are executed on physical testbeds described
+in :ref:`tested_physical_topologies`. Based on the packet path thru
+server SUTs, three distinct logical topology types are used for VPP DUT
+data plane testing:
-Performance Tests Naming
-------------------------
+#. NIC-to-NIC switching topologies.
+#. VM service switching topologies.
+#. Container service switching topologies.
-CSIT |release| follows a common structured naming convention for all
-performance and system functional tests, introduced in CSIT rls1701.
+NIC-to-NIC Switching
+~~~~~~~~~~~~~~~~~~~~
-The naming should be intuitive for majority of the tests. Complete
-description of CSIT test naming convention is provided on `CSIT test naming wiki
-<https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
-
-Here few illustrative examples of the new naming usage for performance test
-suites:
-
-#. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**
-
- - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
- PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
- - *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on
- Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching
- with MAC learning, NDR throughput discovery.
- - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE
- on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline
- switching with MAC learning, NDR throughput discovery.
- - *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel
- x520 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
- - *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on
- Intel x520 NIC, IPv6 scaled up routed forwarding, NDR throughput
- discovery.
-
-#. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
- P2V2P, NIC2VMchain2NIC, P2V2V2P**
-
- - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
- PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
- VirtPortConfig-VMconfig-TestType*
- - *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
- of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain
- switching to/from two vhost interfaces and one VM, NDR throughput
- discovery.
- - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
- ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
- switching to/from two vhost interfaces and one VM, NDR throughput
- discovery.
- - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
- ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
- switching to/from four vhost interfaces and two VMs, NDR throughput
- discovery.
-
-Methodology: Multi-Thread and Multi-Core
-----------------------------------------
-
-**HyperThreading** - CSIT |release| performance tests are executed with SUT
-servers' Intel XEON CPUs configured in HyperThreading Disabled mode (BIOS
-settings). This is the simplest configuration used to establish baseline
-single-thread single-core SW packet processing and forwarding performance.
-Subsequent releases of CSIT will add performance tests with Intel
-HyperThreading Enabled (requires BIOS settings change and hard reboot).
-
-**Multi-core Test** - CSIT |release| multi-core tests are executed in the
-following VPP thread and core configurations:
-
-#. 1t1c - 1 VPP worker thread on 1 CPU physical core.
-#. 2t2c - 2 VPP worker threads on 2 CPU physical cores.
-
-Note that in quite a few test cases running VPP on 2 physical cores hits
-the tested NIC I/O bandwidth or packets-per-second limit.
-
-Methodology: Packet Throughput
-------------------------------
-
-Following values are measured and reported for packet throughput tests:
-
-- NDR binary search per RFC2544:
-
- - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps
- (2x <per direction packets-per-second>)"
- - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
- second> Gbps (untagged)"
-
-- PDR binary search per RFC2544:
-
- - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps (2x
- <per direction packets-per-second>)"
- - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
- second> Gbps (untagged)"
- - Packet loss tolerance: "LOSS_ACCEPTANCE <accepted percentage of packets
- lost at PDR rate>""
-
-- NDR and PDR are measured for the following L2 frame sizes:
-
- - IPv4: 64B, IMIX_v4_1 (28x64B,16x570B,4x1518B), 1518B, 9000B.
- - IPv6: 78B, 1518B, 9000B.
-
-
-Methodology: Packet Latency
----------------------------
-
-TRex Traffic Generator (TG) is used for measuring latency of VPP DUTs. Reported
-latency values are measured using following methodology:
-
-- Latency tests are performed at 10%, 50% of discovered NDR rate (non drop rate)
- for each NDR throughput test and packet size (except IMIX).
-- TG sends dedicated latency streams, one per direction, each at the rate of
- 10kpps at the prescribed packet size; these are sent in addition to the main
- load streams.
-- TG reports min/avg/max latency values per stream direction, hence two sets
- of latency values are reported per test case; future release of TRex is
- expected to report latency percentiles.
-- Reported latency values are aggregate across two SUTs due to three node
- topology used for all performance tests; for per SUT latency, reported value
- should be divided by two.
-- 1usec is the measurement accuracy advertised by TRex TG for the setup used in
- FD.io labs used by CSIT project.
-- TRex setup introduces an always-on error of about 2*2usec per latency flow -
- additonal Tx/Rx interface latency induced by TRex SW writing and reading
- packet timestamps on CPU cores without HW acceleration on NICs closer to the
- interface line.
-
-
-Methodology: KVM VM vhost
--------------------------
-
-CSIT |release| introduced environment configuration changes to KVM Qemu vhost-
-user tests in order to more representatively measure VPP-17.04 performance in
-configurations with vhost-user interfaces and VMs.
-
-Current setup of CSIT FD.io performance lab is using tuned settings for more
-optimal performance of KVM Qemu:
+The simplest logical topology for a software data plane application
+like VPP is NIC-to-NIC switching. Tested topologies for 2-Node and
+3-Node testbeds are shown in the figures below.
+
+.. only:: latex
+
+ .. raw:: latex
+
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-2n-nic2nic}
+ \label{fig:logical-2n-nic2nic}
+ \end{figure}
+
+.. only:: html
+
+ .. figure:: logical-2n-nic2nic.svg
+ :alt: logical-2n-nic2nic
+ :align: center
+
+
+.. only:: latex
+
+ .. raw:: latex
+
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-3n-nic2nic}
+ \label{fig:logical-3n-nic2nic}
+ \end{figure}
+
+.. only:: html
+
+ .. figure:: logical-3n-nic2nic.svg
+ :alt: logical-3n-nic2nic
+ :align: center
+
+Server Systems Under Test (SUT) run the VPP application in Linux
+user-mode as a Device Under Test (DUT). Server Traffic Generator (TG)
+runs the TRex application. Physical connectivity between SUTs and TG is
+provided using different drivers and NIC models that need to be tested
+for performance (packet/bandwidth throughput and latency).
+
+From SUT and DUT perspectives, all performance tests involve forwarding
+packets between two (or more) physical Ethernet ports (10GE, 25GE, 40GE,
+100GE). In most cases both physical ports on SUT are located on the same
+NIC. The only exceptions are link bonding and 100GE tests. In the latter
+case only one port per NIC can be driven at linerate due to PCIe Gen3
+x16 slot bandwidth limitations. 100GE NICs are not supported in PCIe Gen3
+x8 slots.
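+
+A back-of-the-envelope check of this limitation (a sketch using nominal
+PCIe Gen3 figures, not CSIT measurements)::
+
+    lane_gbps = 8 * 128.0 / 130   # 8 GT/s, 128b/130b => ~7.88 Gbps
+    x16_gbps = 16 * lane_gbps     # ~126 Gbps usable per direction
+    x8_gbps = 8 * lane_gbps       # ~63 Gbps usable per direction
+    # 2x 100GE at linerate needs 200 Gbps > ~126 Gbps, so only one
+    # port per NIC can be driven at linerate in an x16 slot;
+    # 100 Gbps > ~63 Gbps rules out 100GE linerate in an x8 slot.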
+
+Note that reported VPP DUT performance results are specific to the SUTs
+tested. SUTs with processors other than the ones used in the FD.io lab
+are likely to yield different results. A good rule of thumb, which can
+be applied to estimate VPP packet throughput for NIC-to-NIC switching
+topology, is to expect the forwarding performance to be proportional to
+processor core frequency for the same processor architecture, assuming
+processor is the only limiting factor and all other SUT parameters are
+equivalent to FD.io CSIT environment.
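+
+A minimal sketch of this rule of thumb (hypothetical reference numbers,
+for illustration only)::
+
+    ref_mpps = 10.0     # hypothetical NDR measured on a reference SUT
+    f_ref_ghz = 2.5     # reference processor core frequency
+    f_tgt_ghz = 3.0     # target SUT processor core frequency
+    # Assuming the processor is the only bottleneck:
+    est_mpps = ref_mpps * (f_tgt_ghz / f_ref_ghz)   # ~12.0 Mpps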
+
+VM Service Switching
+~~~~~~~~~~~~~~~~~~~~
+
+VM service switching topology test cases require VPP DUT to communicate
+with Virtual Machines (VMs) over vhost-user virtual interfaces.
+
+Two types of VM service topologies are tested in |csit-release|:
+
+#. "Parallel" topology with packets flowing within SUT from NIC(s) via
+ VPP DUT to VM, back to VPP DUT, then out thru NIC(s).
+
+#. "Chained" topology (a.k.a. "Snake") with packets flowing within SUT
+ from NIC(s) via VPP DUT to VM, back to VPP DUT, then to the next VM,
+ back to VPP DUT and so on and so forth until the last VM in a chain,
+ then back to VPP DUT and out thru NIC(s).
-- Qemu virtio queue size has been increased from default value of 256 to 1024
- descriptors.
-- Adjusted Linux kernel CFS scheduler settings, as detailed on this CSIT wiki
- page: https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604.
-
-Adjusted Linux kernel CFS settings make the NDR and PDR throughput performance
-of VPP+VM system less sensitive to other Linux OS system tasks by reducing
-their interference on CPU cores that are designated for critical software
-tasks under test, namely VPP worker threads in host and Testpmd threads in
-guest dealing with data plan.
+For each of the above topologies, VPP DUT is tested in a range of L2
+or IPv4/IPv6 configurations depending on the test suite. Sample VPP DUT
+"Chained" VM service topologies for 2-Node and 3-Node testbeds with each
+SUT running N of VM instances is shown in the figures below.
-Methodology: IPSec with Intel QAT HW cards
-------------------------------------------
+.. only:: latex
-VPP IPSec performance tests are using DPDK cryptodev device driver in
-combination with HW cryptodev devices - Intel QAT 8950 50G - present in
-LF FD.io physical testbeds. DPDK cryptodev can be used for all IPSec
-data plane functions supported by VPP.
+ .. raw:: latex
-Currently CSIT |release| implements following IPSec test cases:
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-2n-vm-vhost}
+ \label{fig:logical-2n-vm-vhost}
+ \end{figure}
-- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4 routed-forwarding
- with Intel xl710 NIC.
-- CBC-SHA1 ciphers, in combination with LISP-GPE overlay tunneling for
- IPv4-over-IPv4 with Intel xl710 NIC.
+.. only:: html
-Methodology: TRex Traffic Generator Usage
------------------------------------------
+ .. figure:: logical-2n-vm-vhost.svg
+ :alt: logical-2n-vm-vhost
+ :align: center
-The `TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
-CSIT performance tests. TRex stateless mode is used to measure NDR and PDR
-throughputs using binary search (NDR and PDR discovery tests) and for quick
-checks of DUT performance against the reference NDRs (NDR check tests) for
-specific configuration.
-TRex is installed and run on the TG compute node. The typical procedure is:
+.. only:: latex
- - If the TRex is not already installed on TG, it is installed in the
- suite setup phase - see `TRex intallation <https://gerrit.fd.io/r/gitweb?p=csit.git;a=blob;f=resources/tools/t-rex/t-rex-installer.sh;h=8090b7568327ac5f869e82664bc51b24f89f603f;hb=refs/heads/rls1704>`_.
- - TRex configuration is set in its configuration file
- ::
+ .. raw:: latex
- /etc/trex_cfg.yaml
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-3n-vm-vhost}
+ \label{fig:logical-3n-vm-vhost}
+ \end{figure}
- - TRex is started in the background mode
- ::
+.. only:: html
- sh -c 'cd /opt/trex-core-2.22/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null
+ .. figure:: logical-3n-vm-vhost.svg
+ :alt: logical-3n-vm-vhost
+ :align: center
- - There are traffic streams dynamically prepared for each test. The traffic
- is sent and the statistics obtained using trex_stl_lib.api.STLClient.
+In "Chained" VM topologies, packets are switched by VPP DUT multiple
+times: twice for a single VM, three times for two VMs, N+1 times for N
+VMs. Hence the external throughput rates measured by TG and listed in
+this report must be multiplied by N+1 to represent the actual VPP DUT
+aggregate packet forwarding rate.
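+
+For example (illustrative numbers only, assuming a chain of N=2 VMs)::
+
+    tg_mpps = 3.0                      # external rate measured by TG
+    n_vms = 2                          # VMs in the service chain
+    dut_mpps = tg_mpps * (n_vms + 1)   # 9.0 Mpps VPP DUT aggregate
+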
-**Measuring packet loss**
+For "Parallel" service topology packets are always switched twice by VPP
+DUT per service chain.
+
+Note that reported VPP DUT performance results are specific to the SUTs
+tested. SUTs with processors other than those used in the FD.io lab are
+likely to yield different results. Similarly to NIC-to-NIC switching
+topology, here one can also expect the forwarding performance to be
+proportional to processor core frequency for the same processor
+architecture, assuming processor is the only limiting factor. However,
+due to much higher dependency on intensive memory operations in VM
+service chained topologies and sensitivity to Linux scheduler settings
+and behaviour, this estimation may not always yield good enough
+accuracy.
- - Create an instance of STLClient
- - Connect to the client
- - Add all streams
- - Clear statistics
- - Send the traffic for defined time
- - Get the statistics
+Container Service Switching
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Container service switching topology test cases require VPP DUT to
+communicate with Containers (Ctrs) over memif virtual interfaces.
-If there is a warm-up phase required, the traffic is sent also before test and
-the statistics are ignored.
+Three types of Container service topologies are tested in |csit-release|:
-**Measuring latency**
-
-If measurement of latency is requested, two more packet streams are created (one
-for each direction) with TRex flow_stats parameter set to STLFlowLatencyStats. In
-that case, returned statistics will also include min/avg/max latency values.
+#. "Parallel" topology with packets flowing within SUT from NIC(s) via
+ VPP DUT to Container, back to VPP DUT, then out thru NIC(s).
+
+#. "Chained" topology (a.k.a. "Snake") with packets flowing within SUT
+ from NIC(s) via VPP DUT to Container, back to VPP DUT, then to the
+ next Container, back to VPP DUT and so on and so forth until the
+ last Container in a chain, then back to VPP DUT and out thru NIC(s).
+
+#. "Horizontal" topology with packets flowing within SUT from NIC(s) via
+ VPP DUT to Container, then via "horizontal" memif to the next
+ Container, and so on and so forth until the last Container, then
+ back to VPP DUT and out thru NIC(s).
+
+For each of the above topologies, VPP DUT is tested in a range of L2
+or IPv4/IPv6 configurations depending on the test suite. Sample VPP DUT
+"Chained" Container service topologies for 2-Node and 3-Node testbeds
+with each SUT running N Container instances are shown in the figures
+below.
+
+.. only:: latex
+
+ .. raw:: latex
+
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-2n-container-memif}
+ \label{fig:logical-2n-container-memif}
+ \end{figure}
+
+.. only:: html
+
+ .. figure:: logical-2n-container-memif.svg
+ :alt: logical-2n-container-memif
+ :align: center
+
+
+.. only:: latex
+
+ .. raw:: latex
+
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-3n-container-memif}
+ \label{fig:logical-3n-container-memif}
+ \end{figure}
+
+.. only:: html
+
+ .. figure:: logical-3n-container-memif.svg
+ :alt: logical-3n-container-memif
+ :align: center
+
+In "Chained" Container topologies, packets are switched by VPP DUT
+multiple times: twice for a single Container, three times for two
+Containers, N+1 times for N Containers. Hence the external throughput
+rates measured by TG and listed in this report must be multiplied by N+1
+to represent the actual VPP DUT aggregate packet forwarding rate.
+
+For a "Parallel" and "Horizontal" service topologies packets are always
+switched by VPP DUT twice per service chain.
+
+Note that reported VPP DUT performance results are specific to the SUTs
+tested. SUTs with processors other than those used in the FD.io lab are
+likely to yield different results. Similarly to NIC-to-NIC switching
+topology, here one can also expect the forwarding performance to be
+proportional to processor core frequency for the same processor
+architecture, assuming processor is the only limiting factor. However,
+due to much higher dependency on intensive memory operations in
+Container service chained topologies and sensitivity to Linux scheduler
+settings and behaviour, this estimation may not always yield good enough
+accuracy.
+
+Performance Tests Coverage
+--------------------------
+
+Performance tests measure the following metrics for the tested VPP DUT
+topologies and configurations:
+
+- Packet Throughput: measured in accordance with :rfc:`2544`, using
+ FD.io CSIT Multiple Loss Ratio search (MLRsearch), an optimized binary
+ search algorithm, producing throughput at different Packet Loss Ratio
+  (PLR) values (a simplified search sketch follows after this list):
+
+ - Non Drop Rate (NDR): packet throughput at PLR=0%.
+ - Partial Drop Rate (PDR): packet throughput at PLR=0.5%.
+
+- One-Way Packet Latency: measured at different offered packet loads:
+
+ - 100% of discovered NDR throughput.
+ - 100% of discovered PDR throughput.
+
+- Maximum Receive Rate (MRR): measures packet forwarding rate under the
+  maximum load offered by the traffic generator over a set trial
+  duration, regardless of packet loss. Maximum load for a specified
+  Ethernet frame size is set to the bi-directional link rate.
+
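+A simplified sketch of an :rfc:`2544`-style rate search at a target
+Packet Loss Ratio (illustration only; the actual MLRsearch algorithm is
+more elaborate, optimizing trial durations and searching for multiple
+PLR targets together)::
+
+    def linerate_pps(link_bps, frame_bytes):
+        # Per-direction linerate in packets-per-second; 20B accounts
+        # for the 8B preamble and 12B inter-frame gap per frame.
+        return link_bps / ((frame_bytes + 20) * 8)
+
+    def search_throughput(measure_plr, max_rate, target_plr, step=1e4):
+        # measure_plr(rate) is assumed to run one trial at the given
+        # offered rate and return the measured packet loss ratio.
+        lo, hi = 0.0, max_rate
+        while hi - lo > step:
+            rate = (lo + hi) / 2
+            if measure_plr(rate) <= target_plr:
+                lo = rate    # trial passed, search higher rates
+            else:
+                hi = rate    # trial failed, search lower rates
+        return lo
+
+    # 10GE, 64B frames: ~14.88 Mpps per direction.
+    max_rate = linerate_pps(10e9, 64)
+    # NDR uses target_plr=0.0, PDR uses target_plr=0.005 (0.5%).
+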
+|csit-release| includes the following VPP data plane functionality
+performance tested across a range of NIC drivers and NIC models:
+
++-----------------------+----------------------------------------------+
+| Functionality | Description |
++=======================+==============================================+
+| ACL | L2 Bridge-Domain switching and |
+|                       | IPv4 and IPv6 routing with iACL and oACL IP  |
+| | address, MAC address and L4 port security. |
++-----------------------+----------------------------------------------+
+| COP | IPv4 and IPv6 routing with COP address |
+| | security. |
++-----------------------+----------------------------------------------+
+| IPv4 | IPv4 routing. |
++-----------------------+----------------------------------------------+
+| IPv6 | IPv6 routing. |
++-----------------------+----------------------------------------------+
+| IPv4 Scale | IPv4 routing with 20k, 200k and 2M FIB |
+| | entries. |
++-----------------------+----------------------------------------------+
+| IPv6 Scale | IPv6 routing with 20k, 200k and 2M FIB |
+| | entries. |
++-----------------------+----------------------------------------------+
+| IPSecHW | IPSec encryption with AES-GCM, CBC-SHA1 |
+| | ciphers, in combination with IPv4 routing. |
+| | Intel QAT HW acceleration. |
++-----------------------+----------------------------------------------+
+| IPSec+LISP | IPSec encryption with CBC-SHA1 ciphers, in |
+| | combination with LISP-GPE overlay tunneling |
+| | for IPv4-over-IPv4. |
++-----------------------+----------------------------------------------+
+| IPSecSW | IPSec encryption with AES-GCM, CBC-SHA1 |
+| | ciphers, in combination with IPv4 routing. |
++-----------------------+----------------------------------------------+
+| K8s Containers Memif | K8s orchestrated container VPP service chain |
+| | topologies connected over the memif virtual |
+| | interface. |
++-----------------------+----------------------------------------------+
+| KVM VMs vhost-user | Virtual topologies with service |
+| | chains of 1 and 2 VMs using vhost-user |
+| | interfaces, with different VPP forwarding |
+| | modes incl. L2XC, L2BD, VXLAN with L2BD, |
+| | IPv4 routing. |
++-----------------------+----------------------------------------------+
+| L2BD | L2 Bridge-Domain switching of untagged |
+| | Ethernet frames with MAC learning; disabled |
+| | MAC learning i.e. static MAC tests to be |
+| | added. |
++-----------------------+----------------------------------------------+
+| L2BD Scale | L2 Bridge-Domain switching of untagged |
+| | Ethernet frames with MAC learning; disabled |
+| | MAC learning i.e. static MAC tests to be |
+| | added with 20k, 200k and 2M FIB entries. |
++-----------------------+----------------------------------------------+
+| L2XC | L2 Cross-Connect switching of untagged, |
+| | dot1q, dot1ad VLAN tagged Ethernet frames. |
++-----------------------+----------------------------------------------+
+| LISP | LISP overlay tunneling for IPv4-over-IPv4, |
+| | IPv6-over-IPv4, IPv6-over-IPv6, |
+| | IPv4-over-IPv6 in IPv4 and IPv6 routing |
+| | modes. |
++-----------------------+----------------------------------------------+
+| LXC/DRC Containers | Container VPP memif virtual interface tests |
+| Memif | with different VPP forwarding modes incl. |
+| | L2XC, L2BD. |
++-----------------------+----------------------------------------------+
+| NAT | (Source) Network Address Translation tests |
+| | with varying number of users and ports per |
+| | user. |
++-----------------------+----------------------------------------------+
+| QoS Policer | Ingress packet rate measuring, marking and |
+| | limiting (IPv4). |
++-----------------------+----------------------------------------------+
+| SRv6 Routing | Segment Routing IPv6 tests. |
++-----------------------+----------------------------------------------+
+| VPP TCP/IP stack | Tests of VPP TCP/IP stack used with VPP |
+| | built-in HTTP server. |
++-----------------------+----------------------------------------------+
+| VTS | Virtual Topology System use case tests |
+| | combining VXLAN overlay tunneling with L2BD, |
+| | ACL and KVM VM vhost-user features. |
++-----------------------+----------------------------------------------+
+| VXLAN                 | VXLAN overlay tunneling integration with     |
+| | L2XC and L2BD. |
++-----------------------+----------------------------------------------+
+
+Execution of performance tests takes time, especially the throughput
+tests. Due to limited HW testbed resources available within FD.io labs
+hosted by :abbr:`LF (Linux Foundation)`, the number of tests for some
+NIC models has been limited to a few baseline tests.
+
+Performance Tests Naming
+------------------------
+
+FD.io |csit-release| follows a common structured naming convention for
+all performance and system functional tests, introduced in CSIT-17.01.
+
+The naming should be intuitive for the majority of the tests. A
+complete description of the FD.io CSIT test naming convention is
+provided in
+:ref:`csit_test_naming`.