X-Git-Url: https://gerrit.fd.io/r/gitweb?a=blobdiff_plain;f=docs%2Freport%2Fvpp_performance_tests%2Foverview.rst;h=7fabebc05b5c223d60e980e8f0edae40d7a17ac5;hb=04c1160d1d3dbc6d666db198ab92960f48a18b29;hp=3c95919c559c829655ec7ca7c0032be08b3b0d92;hpb=5a1f4570778a7511415d94e58cc2299d02b871cd;p=csit.git

diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index 3c95919c55..7fabebc05b 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -1,9 +1,7 @@
 Overview
 ========
 
-VPP performance test results are reported for all three physical testbed
-types present in FD.io labs: 3-Node Xeon Haswell (3n-hsw), 3-Node Xeon
-Skylake (3n-skx), 2-Node Xeon Skylake (2n-skx) and installed NIC models.
+VPP performance test results are reported for a range of processors.
 For description of physical testbeds used for VPP performance tests
 please refer to :ref:`tested_physical_topologies`.
 
@@ -260,13 +258,16 @@ topologies and configurations:
 
 - One-Way Packet Latency: measured at different offered packet loads:
 
-  - 100% of discovered NDR throughput.
-  - 100% of discovered PDR throughput.
+  - 90% of discovered PDR throughput.
+  - 50% of discovered PDR throughput.
+  - 10% of discovered PDR throughput.
+  - Minimal offered load.
 
 - Maximum Receive Rate (MRR): measure packet forwarding rate under the
   maximum load offered by traffic generator over a set trial duration,
   regardless of packet loss. Maximum load for specified Ethernet frame
-  size is set to the bi-directional link rate.
+  size is set to the bi-directional link rate, unless there is a known
+  limitation preventing Traffic Generator from achieving the line rate.
 
 |csit-release| includes following VPP data plane functionality
 performance tested across a range of NIC drivers and NIC models:
@@ -291,7 +292,7 @@ performance tested across a range of NIC drivers and NIC models:
 | IPv6 Scale            | IPv6 routing with 20k, 200k and 2M FIB       |
 |                       | entries.                                     |
 +-----------------------+----------------------------------------------+
-| IPSecHW               | IPSec encryption with AES-GCM, CBC-SHA1      |
+| IPSecHW               | IPSec encryption with AES-GCM, CBC-SHA-256   |
 |                       | ciphers, in combination with IPv4 routing.   |
 |                       | Intel QAT HW acceleration.                   |
 +-----------------------+----------------------------------------------+
@@ -299,15 +300,11 @@
 |                       | combination with LISP-GPE overlay tunneling  |
 |                       | for IPv4-over-IPv4.                          |
 +-----------------------+----------------------------------------------+
-| IPSecSW               | IPSec encryption with AES-GCM, CBC-SHA1      |
+| IPSecSW               | IPSec encryption with AES-GCM, CBC-SHA-256   |
 |                       | ciphers, in combination with IPv4 routing.   |
 +-----------------------+----------------------------------------------+
-| K8s Containers Memif  | K8s orchestrated container VPP service chain |
-|                       | topologies connected over the memif virtual  |
-|                       | interface.                                   |
-+-----------------------+----------------------------------------------+
 | KVM VMs vhost-user    | Virtual topologies with service              |
-|                       | chains of 1 and 2 VMs using vhost-user       |
+|                       | chains of 1 VM using vhost-user              |
 |                       | interfaces, with different VPP forwarding    |
 |                       | modes incl. L2XC, L2BD, VXLAN with L2BD,     |
 |                       | IPv4 routing.                                |
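
The MRR and latency load definitions changed above come down to simple arithmetic on the
discovered PDR rate and on the line rate for a given Ethernet frame size. The sketch below
is illustrative only and is not part of the CSIT code base; it assumes the standard 20 B
per-frame Ethernet layer-1 overhead (preamble, start-of-frame delimiter, inter-frame gap),
and all function names and the example PDR value are hypothetical.

.. code-block:: python

    # Illustrative sketch only (not CSIT code). Assumes 20 B of Ethernet L1
    # overhead per frame: 7 B preamble + 1 B SFD + 12 B inter-frame gap.
    L1_OVERHEAD_BYTES = 20


    def line_rate_pps(link_speed_bps: float, frame_size_bytes: int) -> float:
        """Theoretical maximum frame rate of a single link direction."""
        bits_per_frame = (frame_size_bytes + L1_OVERHEAD_BYTES) * 8
        return link_speed_bps / bits_per_frame


    def mrr_max_offered_load_pps(link_speed_bps: float, frame_size_bytes: int) -> float:
        """Maximum offered load for MRR trials: the bi-directional link rate
        (a known traffic generator limitation is not modelled here)."""
        return 2 * line_rate_pps(link_speed_bps, frame_size_bytes)


    def latency_offered_loads_pps(pdr_pps: float) -> dict:
        """Offered loads used for one-way latency measurements: 90%, 50% and
        10% of the discovered PDR throughput. A minimal offered load is also
        used; its value is defined by the CSIT methodology and not shown here."""
        return {"PDR90": 0.9 * pdr_pps, "PDR50": 0.5 * pdr_pps, "PDR10": 0.1 * pdr_pps}


    if __name__ == "__main__":
        # 64 B frames on a 10 GbE link: ~14.88 Mpps per direction,
        # ~29.76 Mpps bi-directionally.
        print(f"{line_rate_pps(10e9, 64):,.0f} pps per direction")
        print(f"{mrr_max_offered_load_pps(10e9, 64):,.0f} pps bi-directional")
        print(latency_offered_loads_pps(pdr_pps=12_000_000))  # hypothetical PDR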