diff --git a/docs/report/vpp_performance_tests/overview.rst b/docs/report/vpp_performance_tests/overview.rst
index b7aecc1c38..d835df4684 100644
--- a/docs/report/vpp_performance_tests/overview.rst
+++ b/docs/report/vpp_performance_tests/overview.rst
@@ -54,18 +54,18 @@ performance labs to address larger scale multi-interface and multi-NIC
 performance testing scenarios.

 For test cases that require DUT (VPP) to communicate with
-VirtualMachines (VMs) / LinuxContainers (LXCs) over vhost-user/memif
-interfaces, N of VM/LXC instances are created on SUT1 and SUT2. For N=1
-DUT forwards packets between vhost/memif and physical interfaces. For
-N>1 DUT a logical service chain forwarding topology is created on DUT by
-applying L2 or IPv4/IPv6 configuration depending on the test suite. DUT
-test topology with N VM/LXC instances is shown in the figure below
-including applicable packet flow thru the DUTs and VMs/LXCs (marked in
-the figure with ``***``).::
+Virtual Machines (VMs) / Linux or Docker Containers (Ctrs) over
+vhost-user/memif interfaces, N VM/Ctr instances are created on SUT1 and
+SUT2. For N=1, the DUT forwards packets between vhost/memif and physical
+interfaces. For N>1, a logical service chain forwarding topology is
+created on the DUT by applying L2 or IPv4/IPv6 configuration, depending
+on the test suite. The DUT test topology with N VM/Ctr instances is
+shown in the figure below, including the applicable packet flow through
+the DUTs and VMs/Ctrs (marked in the figure with ``***``)::

     +-------------------------+           +-------------------------+
     | +---------+ +---------+ |           | +---------+ +---------+ |
-    | |VM/LXC[1]| |VM/LXC[N]| |           | |VM/LXC[1]| |VM/LXC[N]| |
+    | |VM/Ctr[1]| |VM/Ctr[N]| |           | |VM/Ctr[1]| |VM/Ctr[N]| |
     | |  *****  | |  *****  | |           | |  *****  | |  *****  | |
     | +--^---^--+ +--^---^--+ |           | +--^---^--+ +--^---^--+ |
     |   *|   |*     *|   |*   |           |   *|   |*     *|   |*   |
@@ -85,8 +85,8 @@ the figure with ``***``).::
   **********************|   |**********************
                      +-----------+

-For VM/LXC tests, packets are switched by DUT multiple times: twice for
-a single VM/LXC, three times for two VMs/LXCs, N+1 times for N VMs/LXCs.
+For VM/Ctr tests, packets are switched by the DUT multiple times: twice
+for a single VM/Ctr, three times for two VMs/Ctrs, and N+1 times for N
+VMs/Ctrs.
 Hence the external throughput rates measured by TG and listed in this
 report must be multiplied by (N+1) to represent the actual DUT aggregate
 packet forwarding rate.
@@ -99,14 +99,19 @@
 throughput for Phy-to-Phy (NIC-to-NIC, PCI-to-PCI) topology is to expect
 the forwarding performance to be proportional to CPU core frequency,
 assuming CPU is the only limiting factor and all other SUT parameters are
 equivalent to the FD.io CSIT environment. The same rule of thumb can also be
-applied for Phy-to-VM/LXC-to-Phy (NIC-to-VM/LXC-to-NIC) topology, but due to
+applied for Phy-to-VM/Ctr-to-Phy (NIC-to-VM/Ctr-to-NIC) topology, but due to
 much higher dependency on intensive memory operations and sensitivity to
 Linux kernel scheduler settings and behaviour, this estimation may not
 always be sufficiently accurate.
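
As a rough illustration of the two scaling rules above, the sketch below
applies both the CPU frequency rule of thumb and the (N+1) multiplier;
all input numbers are made-up examples, not measured CSIT results::

    # Illustration only; the input numbers are hypothetical.

    def dut_aggregate_rate_mpps(tg_rate_mpps, n):
        """(N+1) rule: with N chained VM/Ctr instances the DUT switches
        every packet N+1 times, so the TG-observed rate understates the
        actual DUT aggregate forwarding rate."""
        return tg_rate_mpps * (n + 1)

    def estimated_rate_mpps(csit_rate_mpps, csit_core_ghz, your_core_ghz):
        """Rule of thumb: Phy-to-Phy forwarding rate scales roughly with
        CPU core frequency, all other SUT parameters being equivalent."""
        return csit_rate_mpps * (your_core_ghz / csit_core_ghz)

    print(dut_aggregate_rate_mpps(5.0, 2))      # 15.0 Mpps on the DUT for 2 VMs/Ctrs
    print(estimated_rate_mpps(10.0, 2.3, 3.0))  # ~13.0 Mpps estimated at 3.0 GHz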
-For detailed :abbr:`LF (Linux Foundation)` FD.io test bed specification and
-physical topology please refer to `LF FD.io CSIT testbed wiki page
-`_.
+For detailed FD.io CSIT testbed specification and topology, as well as
+the configuration and setup of SUT and DUT testbeds, please refer to
+:ref:`test_environment`.
+
+Similar SUT compute node and DUT VPP settings can be arrived at in a
+standalone VPP setup by using the `vpp-config configuration tool
+`_ developed within the
+VPP project, applying CSIT recommended settings and scripts.

 Performance Tests Coverage
 --------------------------

@@ -157,7 +162,7 @@ CSIT |release| includes the following performance test suites, listed per NIC type:
     number of users and ports per user.
   - **Container memif connections** - VPP memif virtual interface tests to
     interconnect VPP instances with L2XC and L2BD.
-  - **Container Orchestrated Topologies** - Container topologies connected over
+  - **Container K8s Orchestrated Topologies** - Container topologies connected over
     the memif virtual interface.

 - 2port40GE XL710 Intel

@@ -170,10 +175,14 @@ CSIT |release| includes the following performance test suites, listed per NIC type:
   - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
     of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
     Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
-  - **IPSec** - IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in combination
-    with IPv4 routed-forwarding.
+  - **IPSecSW** - IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in
+    combination with IPv4 routed-forwarding.
+  - **IPSecHW** - IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in
+    combination with IPv4 routed-forwarding, using Intel QAT HW acceleration.
   - **IPSec+LISP** - IPSec encryption with CBC-SHA1 ciphers, in combination
     with LISP-GPE overlay tunneling for IPv4-over-IPv4.
+  - **VPP TCP/IP stack** - VPP built-in TCP-based HTTP server, exercised with
+    the WRK traffic generator.

 - 2port10GE X710 Intel
@@ -394,7 +403,7 @@ TRex is installed and run on the TG compute node. The typical procedure is:
 - TRex is started in background mode
   ::

-  $ sh -c 'cd /opt/trex-core-2.25/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null
+  $ sh -c 'cd /scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /tmp/trex.log 2>&1 &' > /dev/null

 - Traffic streams are dynamically prepared for each test, based on traffic
   profiles. The traffic is sent and the statistics obtained using the TRex
   Python API.
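
A minimal sketch of that flow, assuming the TRex stateless Python API
(``trex_stl_lib``) is available on the TG node; the server address, port
numbers, packet fields and the 1 Mpps rate below are illustrative
placeholders, not the actual CSIT traffic profiles::

    # Sketch only: drives an already running t-rex-64 instance (started
    # as shown above). Stream contents are illustrative, not CSIT's.
    from trex_stl_lib.api import STLClient, STLStream, STLPktBuilder, STLTXCont
    from scapy.all import Ether, IP, UDP

    # One continuous UDP stream; addresses/ports are made-up examples.
    pkt = STLPktBuilder(pkt=Ether()/IP(src="16.0.0.1", dst="48.0.0.1")/UDP(dport=12))
    stream = STLStream(packet=pkt, mode=STLTXCont())

    client = STLClient(server="127.0.0.1")
    try:
        client.connect()                    # attach to the running TRex
        client.reset(ports=[0, 1])          # acquire ports, clear state
        client.add_streams(stream, ports=[0])
        client.start(ports=[0], mult="1mpps", duration=10)
        client.wait_on_traffic(ports=[0])   # block until traffic stops
        stats = client.get_stats()          # per-port counters
        print(stats[1]["ipackets"])         # packets received on port 1
    finally:
        client.disconnect()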