X-Git-Url: https://gerrit.fd.io/r/gitweb?p=csit.git;a=blobdiff_plain;f=docs%2Freport%2Fvpp_performance_tests%2Foverview.rst;h=9647edeabdde2f28dd420e92b1ac9b473a0faef9;hp=86116bf7c5a97f65ca59e9b54a95de0089d51386;hb=67d3b2a44d49dd2c4a1b2f722849f57432787137;hpb=c7b2541ae5ff737691547daef2e4b25f9d232eba

diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index 86116bf7c5..9647edeabd 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -1,6 +1,7 @@
 Overview
 ========
 
+VPP performance test results are reported for a range of processors.
 For a description of the physical testbeds used for VPP performance tests,
 please refer to :ref:`tested_physical_topologies`.
 
@@ -30,9 +31,10 @@ testbeds are shown in figures below.
 .. raw:: latex
 
     \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-2n-nic2nic}
-            \label{fig:logical-2n-nic2nic}
+        \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-2n-nic2nic}
+            \label{fig:logical-2n-nic2nic}
     \end{figure}
 
 .. only:: html
 
@@ -47,9 +49,10 @@ testbeds are shown in figures below.
 .. raw:: latex
 
     \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-3n-nic2nic}
-            \label{fig:logical-3n-nic2nic}
+        \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-3n-nic2nic}
+            \label{fig:logical-3n-nic2nic}
     \end{figure}
 
 .. only:: html
 
@@ -107,9 +110,10 @@ SUT running N of VM instances is shown in the figures below.
 .. raw:: latex
 
     \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-2n-vm-vhost}
-            \label{fig:logical-2n-vm-vhost}
+        \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-2n-vm-vhost}
+            \label{fig:logical-2n-vm-vhost}
     \end{figure}
 
 .. only:: html
 
@@ -124,9 +128,10 @@ SUT running N of VM instances is shown in the figures below.
 .. raw:: latex
 
     \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-3n-vm-vhost}
-            \label{fig:logical-3n-vm-vhost}
+        \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-3n-vm-vhost}
+            \label{fig:logical-3n-vm-vhost}
     \end{figure}
 
 .. only:: html
 
@@ -187,9 +192,10 @@ below.
 .. raw:: latex
 
     \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-2n-container-memif}
-            \label{fig:logical-2n-container-memif}
+        \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-2n-container-memif}
+            \label{fig:logical-2n-container-memif}
     \end{figure}
 
 .. only:: html
 
@@ -204,9 +210,10 @@ below.
 .. raw:: latex
 
     \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/vpp_performance_tests/logical-3n-container-memif}
-            \label{fig:logical-3n-container-memif}
+        \centering
+            \graphicspath{{../_tmp/src/vpp_performance_tests/}}
+            \includegraphics[width=0.90\textwidth]{logical-3n-container-memif}
+            \label{fig:logical-3n-container-memif}
     \end{figure}
 
 .. only:: html
 
@@ -251,27 +258,36 @@ topologies and configurations:
 
 - One-Way Packet Latency: measured at different offered packet loads:
 
-  - 100% of discovered NDR throughput.
-  - 100% of discovered PDR throughput.
+  - 90% of discovered PDR throughput.
+  - 50% of discovered PDR throughput.
+  - 10% of discovered PDR throughput.
+  - Minimal offered load.
 
 - Maximum Receive Rate (MRR): measured packet forwarding rate under the
   maximum load offered by the traffic generator over a set trial duration,
   regardless of packet loss. Maximum load for a specified Ethernet frame
-  size is set to the bi-directional link rate.
+  size is set to the bi-directional link rate, unless there is a known
+  limitation preventing the Traffic Generator from achieving the line rate.
 
-|csit-release| includes following performance test areas covered across
-a range of NIC drivers and NIC models:
+.. todo::
+
+  - Connections per second (CPS): TODO
+
+|csit-release| includes the following VPP data plane functionality,
+performance tested across a range of NIC drivers and NIC models:
 
 +-----------------------+----------------------------------------------+
-| Test Area | Description |
+| Functionality | Description |
 +=======================+==============================================+
 | ACL | L2 Bridge-Domain switching and |
 | | IPv4 and IPv6 routing with iACL and oACL IP |
 | | address, MAC address and L4 port security. |
 +-----------------------+----------------------------------------------+
-| COP | IPv4 and IPv6 routing with COP address |
+| ADL | IPv4 and IPv6 routing with ADL address |
 | | security. |
 +-----------------------+----------------------------------------------+
+| GENEVE | GENEVE tunnels for IPv4 routing. |
++-----------------------+----------------------------------------------+
 | IPv4 | IPv4 routing. |
 +-----------------------+----------------------------------------------+
 | IPv6 | IPv6 routing. |
 +-----------------------+----------------------------------------------+
@@ -282,7 +298,11 @@ a range of NIC drivers and NIC models:
 | IPv6 Scale | IPv6 routing with 20k, 200k and 2M FIB |
 | | entries. |
 +-----------------------+----------------------------------------------+
-| IPSecHW | IPSec encryption with AES-GCM, CBC-SHA-256 |
+| IPSecAsyncHW | IPSec encryption with AES-GCM, CBC-SHA-256 |
+| | ciphers in async mode, in combination with |
+| | IPv4 routing. Intel QAT HW acceleration. |
++-----------------------+----------------------------------------------+
+| IPSecHW | IPSec encryption with AES-GCM, CBC-SHA-256 |
 | | ciphers, in combination with IPv4 routing. |
 | | Intel QAT HW acceleration. |
 +-----------------------+----------------------------------------------+
@@ -290,15 +310,11 @@ a range of NIC drivers and NIC models:
 | | combination with LISP-GPE overlay tunneling |
 | | for IPv4-over-IPv4. |
 +-----------------------+----------------------------------------------+
-| IPSecSW | IPSec encryption with AES-GCM, CBC-SHA1 |
+| IPSecSW | IPSec encryption with AES-GCM, CBC-SHA-256 |
 | | ciphers, in combination with IPv4 routing. |
 +-----------------------+----------------------------------------------+
-| K8s Containers Memif | K8s orchestrated container VPP service chain |
-| | topologies connected over the memif virtual |
-| | interface. |
-+-----------------------+----------------------------------------------+
 | KVM VMs vhost-user | Virtual topologies with service |
-| | chains of 1 and 2 VMs using vhost-user |
+| | chains of 1 VM using vhost-user |
 | | interfaces, with different VPP forwarding |
 | | modes incl. L2XC, L2BD, VXLAN with L2BD, |
 | | IPv4 routing. |
@@ -325,9 +341,10 @@ a range of NIC drivers and NIC models:
 | Memif | with different VPP forwarding modes incl. |
 | | L2XC, L2BD. |
 +-----------------------+----------------------------------------------+
-| NAT | (Source) Network Address Translation tests |
-| | with varying number of users and ports per |
-| | user. |
+| NAT44 | (Source) Network Address Translation |
+| | deterministic mode and endpoint-dependent |
+| | mode tests with varying number of users and |
+| | ports per user for IPv4. |
 +-----------------------+----------------------------------------------+
 | QoS Policer | Ingress packet rate measuring, marking and |
 | | limiting (IPv4). |
 +-----------------------+----------------------------------------------+
@@ -337,6 +354,10 @@ a range of NIC drivers and NIC models:
 | VPP TCP/IP stack | Tests of VPP TCP/IP stack used with VPP |
 | | built-in HTTP server. |
 +-----------------------+----------------------------------------------+
+| VTS | Virtual Topology System use case tests |
+| | combining VXLAN overlay tunneling with L2BD, |
+| | ACL and KVM VM vhost-user features. |
++-----------------------+----------------------------------------------+
 | VXLAN | VXLAN overlay tunnelling integration with |
 | | L2XC and L2BD. |
 +-----------------------+----------------------------------------------+
@@ -350,7 +371,7 @@ Performance Tests Naming
 ------------------------
 
 FD.io |csit-release| follows a common structured naming convention for
-all performance and system functional tests, introduced in CSIT rls1701.
+all performance and system functional tests, introduced in CSIT-17.01.
 
 The naming should be intuitive for the majority of the tests. Complete
 description of FD.io CSIT test naming convention is provided on
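
The MRR maximum offered load in the diff above follows from plain Ethernet
line-rate arithmetic: on the wire, every frame carries an extra 20 B of
layer-1 overhead (7 B preamble, 1 B start-of-frame delimiter, 12 B
inter-frame gap) on top of the frame itself. A minimal sketch of that
calculation follows; it is illustrative only, not part of the CSIT code
base (the function names are placeholders), and it assumes standard
Ethernet framing with no Traffic Generator limitation.

.. code-block:: python

    # Layer-1 overhead per Ethernet frame: preamble + SFD + inter-frame gap.
    L1_OVERHEAD_BYTES = 7 + 1 + 12


    def line_rate_pps(link_bps: float, frame_size_bytes: int) -> float:
        """Frames per second one direction of the link carries at line rate."""
        return link_bps / ((frame_size_bytes + L1_OVERHEAD_BYTES) * 8)


    def mrr_max_load_pps(link_bps: float, frame_size_bytes: int) -> float:
        """Bi-directional maximum offered load used for MRR trials."""
        return 2 * line_rate_pps(link_bps, frame_size_bytes)


    if __name__ == "__main__":
        # 64 B frames on a 10 GbE link: ~14.88 Mpps per direction.
        print(f"{line_rate_pps(10e9, 64) / 1e6:.2f} Mpps per direction")
        print(f"{mrr_max_load_pps(10e9, 64) / 1e6:.2f} Mpps bi-directional")

For 64 B frames on a 10 GbE link this yields the familiar 14.88 Mpps per
direction (29.76 Mpps bi-directional); the latency loads listed in the diff
(90%, 50% and 10% of PDR) are simple fractions of the discovered PDR rate.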