+#. "Parallel" topology with packets flowing within SUT from NIC(s) via
+   VPP DUT to Container, back to VPP DUT, then out through NIC(s).
+
+#. "Chained" topology (a.k.a. "Snake") with packets flowing within SUT
+ from NIC(s) via VPP DUT to Container, back to VPP DUT, then to the
+ next Container, back to VPP DUT and so on and so forth until the
+   last Container in a chain, then back to VPP DUT and out through NIC(s).
+
+#. "Horizontal" topology with packets flowing within SUT from NIC(s) via
+ VPP DUT to Container, then via "horizontal" memif to the next
+ Container, and so on and so forth until the last Container, then
+   back to VPP DUT and out through NIC(s).
+
+For each of the above topologies, VPP DUT is tested in a range of L2
+or IPv4/IPv6 configurations depending on the test suite. Sample VPP DUT
+"Chained" Container service topologies for 2-Node and 3-Node testbeds,
+with each SUT running N Container instances, are shown in the figures
+below.
+
+.. only:: latex
+
+ .. raw:: latex
+
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-2n-container-memif}
+ \label{fig:logical-2n-container-memif}
+ \end{figure}
+
+.. only:: html
+
+ .. figure:: logical-2n-container-memif.svg
+ :alt: logical-2n-container-memif
+ :align: center
+
+
+.. only:: latex
+
+ .. raw:: latex
+
+ \begin{figure}[H]
+ \centering
+ \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+ \includegraphics[width=0.90\textwidth]{logical-3n-container-memif}
+ \label{fig:logical-3n-container-memif}
+ \end{figure}
+
+.. only:: html
+
+ .. figure:: logical-3n-container-memif.svg
+ :alt: logical-3n-container-memif
+ :align: center
+
+In "Chained" Container topologies, packets are switched by VPP DUT
+multiple times: twice for a single Container, three times for two
+Containers, N+1 times for N Containers. Hence the external throughput
+rates measured by TG and listed in this report must be multiplied by N+1
+to represent the actual VPP DUT aggregate packet forwarding rate.
+
+For a "Parallel" and "Horizontal" service topologies packets are always
+switched by VPP DUT twice per service chain.
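+
+The following is a minimal sketch (illustrative only, not part of the
+CSIT code base) of how the VPP DUT aggregate forwarding rate can be
+derived from the external rate measured by TG in each service
+topology:
+
+.. code-block:: python
+
+    def vpp_aggregate_rate(tg_rate_pps, topology, n_containers):
+        """Derive VPP DUT aggregate packet forwarding rate [pps] from
+        the external rate measured by TG [pps]."""
+        if topology == "chained":
+            # Packets cross VPP DUT N+1 times for N chained Containers.
+            return tg_rate_pps * (n_containers + 1)
+        # "parallel" and "horizontal": packets cross VPP DUT twice.
+        return tg_rate_pps * 2
+
+    # Example: TG measures 5 Mpps through a chain of 4 Containers.
+    print(vpp_aggregate_rate(5e6, "chained", 4))  # 25 Mpps aggregate.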
+
+Note that reported VPP DUT performance results are specific to the
+SUTs tested. SUTs with processors other than the ones used in the
+FD.io lab are likely to yield different results. Similarly to the
+NIC-to-NIC switching topology, forwarding performance can be expected
+to be proportional to processor core frequency for the same processor
+architecture, assuming the processor is the only limiting factor.
+However, due to the much higher dependency on intensive memory
+operations in Container service chained topologies, and the
+sensitivity to Linux scheduler settings and behaviour, this estimation
+may not always yield sufficient accuracy.
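+
+Treated purely as a rough first-order approximation (the formula below
+is illustrative, not a CSIT-validated model), such a frequency-based
+estimate could be expressed as:
+
+.. code-block:: python
+
+    def estimate_rate(measured_rate_pps, measured_ghz, target_ghz):
+        """Rough first-order estimate: forwarding rate assumed to scale
+        linearly with core frequency on the same processor architecture.
+        Accuracy degrades for memory-bound service chain workloads."""
+        return measured_rate_pps * (target_ghz / measured_ghz)
+
+    # Example: 10 Mpps measured on 2.5 GHz cores, estimated for 3.0 GHz.
+    print(estimate_rate(10e6, 2.5, 3.0))  # ~12 Mpps; use with caution.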
+
+Performance Tests Coverage
+--------------------------
+
+Performance tests measure the following metrics for tested VPP DUT
+topologies and configurations:
+
+- Packet Throughput: measured in accordance with :rfc:`2544`, using
+  FD.io CSIT Multiple Loss Ratio search (MLRsearch), an optimized
+  binary search algorithm, producing throughput at different Packet
+  Loss Ratio (PLR) values (a simplified sketch follows this list):
+
+ - Non Drop Rate (NDR): packet throughput at PLR=0%.
+ - Partial Drop Rate (PDR): packet throughput at PLR=0.5%.
+
+- One-Way Packet Latency: measured at different offered packet loads:
+
+ - 100% of discovered NDR throughput.
+ - 100% of discovered PDR throughput.
+
+- Maximum Receive Rate (MRR): measured packet forwarding rate under
+  the maximum load offered by the traffic generator over a set trial
+  duration, regardless of packet loss. Maximum load for a specified
+  Ethernet frame size is set to the bi-directional link rate.
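+
+The sketch below is a highly simplified illustration of a
+loss-ratio-targeted throughput search; it omits MLRsearch specifics
+such as multiple trial durations and search interval handling, and
+``measure_loss_ratio`` is a hypothetical callback returning the loss
+ratio observed at a given offered load:
+
+.. code-block:: python
+
+    def search_throughput(measure_loss_ratio, max_rate_pps, target_plr,
+                          precision_pps=10_000):
+        """Binary search for the highest offered load [pps] whose
+        measured packet loss ratio does not exceed target_plr
+        (0.0 for NDR, 0.005 for PDR)."""
+        lower, upper = 0.0, max_rate_pps
+        while upper - lower > precision_pps:
+            rate = (lower + upper) / 2
+            if measure_loss_ratio(rate) <= target_plr:
+                lower = rate  # Load sustained within target loss ratio.
+            else:
+                upper = rate  # Too much loss, search lower loads.
+        return lower
+
+    # Bi-directional line rate in pps for 64B frames on a 10GE link;
+    # each frame occupies frame size + 20B (preamble + IFG) on the wire.
+    max_rate = 2 * 10e9 / ((64 + 20) * 8)  # ~29.76 Mpps bi-directional.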
+
+|csit-release| includes the following VPP data plane functionality,
+performance-tested across a range of NIC drivers and NIC models:
+
++-----------------------+----------------------------------------------+
+| Functionality | Description |
++=======================+==============================================+
+| ACL                   | L2 Bridge-Domain switching and               |
+|                       | IPv4 and IPv6 routing with iACL and oACL IP  |
+|                       | address, MAC address and L4 port security.   |
++-----------------------+----------------------------------------------+
+| COP | IPv4 and IPv6 routing with COP address |
+| | security. |
++-----------------------+----------------------------------------------+
+| IPv4 | IPv4 routing. |
++-----------------------+----------------------------------------------+
+| IPv6 | IPv6 routing. |
++-----------------------+----------------------------------------------+
+| IPv4 Scale | IPv4 routing with 20k, 200k and 2M FIB |
+| | entries. |
++-----------------------+----------------------------------------------+
+| IPv6 Scale | IPv6 routing with 20k, 200k and 2M FIB |
+| | entries. |
++-----------------------+----------------------------------------------+
+| IPSecHW | IPSec encryption with AES-GCM, CBC-SHA1 |
+| | ciphers, in combination with IPv4 routing. |
+| | Intel QAT HW acceleration. |
++-----------------------+----------------------------------------------+
+| IPSec+LISP | IPSec encryption with CBC-SHA1 ciphers, in |
+| | combination with LISP-GPE overlay tunneling |
+| | for IPv4-over-IPv4. |
++-----------------------+----------------------------------------------+
+| IPSecSW | IPSec encryption with AES-GCM, CBC-SHA1 |
+| | ciphers, in combination with IPv4 routing. |
++-----------------------+----------------------------------------------+
+| K8s Containers Memif | K8s orchestrated container VPP service chain |
+| | topologies connected over the memif virtual |
+| | interface. |
++-----------------------+----------------------------------------------+
+| KVM VMs vhost-user | Virtual topologies with service |
+| | chains of 1 and 2 VMs using vhost-user |
+| | interfaces, with different VPP forwarding |
+| | modes incl. L2XC, L2BD, VXLAN with L2BD, |
+| | IPv4 routing. |
++-----------------------+----------------------------------------------+
+| L2BD                  | L2 Bridge-Domain switching of untagged       |
+|                       | Ethernet frames with MAC learning; tests     |
+|                       | with disabled MAC learning (i.e. static MAC) |
+|                       | to be added.                                 |
++-----------------------+----------------------------------------------+
+| L2BD Scale            | L2 Bridge-Domain switching of untagged       |
+|                       | Ethernet frames with MAC learning and 20k,   |
+|                       | 200k and 2M FIB entries; tests with disabled |
+|                       | MAC learning (i.e. static MAC) to be added.  |
++-----------------------+----------------------------------------------+
+| L2XC | L2 Cross-Connect switching of untagged, |
+| | dot1q, dot1ad VLAN tagged Ethernet frames. |
++-----------------------+----------------------------------------------+
+| LISP | LISP overlay tunneling for IPv4-over-IPv4, |
+| | IPv6-over-IPv4, IPv6-over-IPv6, |
+| | IPv4-over-IPv6 in IPv4 and IPv6 routing |
+| | modes. |
++-----------------------+----------------------------------------------+
+| LXC/DRC Containers | Container VPP memif virtual interface tests |
+| Memif | with different VPP forwarding modes incl. |
+| | L2XC, L2BD. |
++-----------------------+----------------------------------------------+
+| NAT | (Source) Network Address Translation tests |
+| | with varying number of users and ports per |
+| | user. |
++-----------------------+----------------------------------------------+
+| QoS Policer | Ingress packet rate measuring, marking and |
+| | limiting (IPv4). |
++-----------------------+----------------------------------------------+
+| SRv6 Routing | Segment Routing IPv6 tests. |
++-----------------------+----------------------------------------------+
+| VPP TCP/IP stack | Tests of VPP TCP/IP stack used with VPP |
+| | built-in HTTP server. |
++-----------------------+----------------------------------------------+
+| VTS | Virtual Topology System use case tests |
+| | combining VXLAN overlay tunneling with L2BD, |
+| | ACL and KVM VM vhost-user features. |
++-----------------------+----------------------------------------------+
+| VXLAN                 | VXLAN overlay tunneling integration with     |
+|                       | L2XC and L2BD.                               |
++-----------------------+----------------------------------------------+
+
+Execution of performance tests takes time, especially the throughput
+tests. Due to limited HW testbed resources available within FD.io labs
+hosted by :abbr:`LF (Linux Foundation)`, the number of tests for some
+NIC models has been limited to a few baseline tests.
+
+Performance Tests Naming
+------------------------
+
+FD.io |csit-release| follows a common structured naming convention for
+all performance and system functional tests, introduced in CSIT-17.01.
+
+The naming should be intuitive for the majority of the tests. A
+complete description of the FD.io CSIT test naming convention is
+provided in :ref:`csit_test_naming`.