+Overview\r
+========\r
+\r
+Tested Physical Topologies\r
+--------------------------\r
+\r
+CSIT VPP performance tests are executed on physical baremetal servers hosted\r
+by the LF FD.io project. The testbed physical topology is shown in the figure\r
+below.\r
+\r
+::\r
+\r
+ +------------------------+ +------------------------+\r
+ | | | |\r
+ | +------------------+ | | +------------------+ |\r
+ | | | | | | | |\r
+ | | <-----------------> | |\r
+ | | DUT1 | | | | DUT2 | |\r
+ | +--^---------------+ | | +---------------^--+ |\r
+ | | | | | |\r
+ | | SUT1 | | SUT2 | |\r
+ +------------------------+ +------------------^-----+\r
+ | |\r
+ | |\r
+ | +-----------+ |\r
+ | | | |\r
+ +------------------> TG <------------------+\r
+ | |\r
+ +-----------+\r
+\r
+SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two\r
+Intel XEON CPUs), and TG is a Traffic Generator (another Cisco UCS C240, with\r
+two Intel XEON CPUs). SUTs run the VPP SW application in Linux user-mode as a\r
+Device Under Test (DUT). TG runs the TRex SW application as a packet Traffic\r
+Generator. Physical connectivity between the SUTs, and between the SUTs and\r
+the TG, is provided using different NIC models that need to be tested for\r
+performance. Currently installed and tested NIC models include:\r
+\r
+#. 2port10GE X520-DA2 Intel.\r
+#. 2port10GE X710 Intel.\r
+#. 2port10GE VIC1227 Cisco.\r
+#. 2port40GE VIC1385 Cisco.\r
+#. 2port40GE XL710 Intel.\r
+\r
+From the SUT and DUT perspective, all performance tests involve forwarding\r
+packets between two physical Ethernet ports (10GE or 40GE). Due to the number\r
+of NIC models tested and the available PCI slot capacity in the SUT servers,\r
+in all of the above cases both physical ports are located on the same NIC. In\r
+some test cases this results in the measured packet throughput being limited\r
+not by the VPP DUT but by either the physical interface or the NIC capacity.\r
+\r
+Going forward, the CSIT project will be adding more hardware to the FD.io\r
+performance labs to address larger-scale multi-interface and multi-NIC\r
+performance testing scenarios.\r
+\r
+For test cases that require DUT (VPP) to communicate with a VM over vhost-user\r
+interfaces, a VM is created on SUT1 and SUT2. The DUT (VPP) test topology with\r
+VM is shown in the figure below, including the applicable packet flow through\r
+the VM (marked in the figure with ``***``).\r
+\r
+::\r
+\r
+ +------------------------+ +------------------------+\r
+ | +----------+ | | +----------+ |\r
+ | | VM | | | | VM | |\r
+ | | ****** | | | | ****** | |\r
+ | +--^----^--+ | | +--^----^--+ |\r
+ | *| |* | | *| |* |\r
+ | +------v----v------+ | | +------v----v------+ |\r
+ | | * * |**|***********|**| * * | |\r
+ | | ***** *******<----------------->******* ***** | |\r
+ | | * DUT1 | | | | DUT2 * | |\r
+ | +--^---------------+ | | +---------------^--+ |\r
+ | *| | | |* |\r
+ | *| SUT1 | | SUT2 |* |\r
+ +------------------------+ +------------------^-----+\r
+ *| |*\r
+ *| |*\r
+ *| +-----------+ |*\r
+ *| | | |*\r
+ *+------------------> TG <------------------+*\r
+ ******************* | |********************\r
+ +-----------+\r
+\r
+For VM tests, packets are switched by the DUT (VPP) twice, hence the\r
+throughput rates measured by TG (and listed in this report) must be multiplied\r
+by two to represent the actual DUT aggregate packet forwarding rate.\r
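+For example, a TG-measured aggregate rate of 5 Mpps corresponds to an actual\r
+aggregate DUT packet forwarding rate of 10 Mpps.\r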
+\r
+Note that reported VPP performance results are specific to the SUTs tested.\r
+Current LF FD.io SUTs are based on Intel XEON E5-2699v3 2.3GHz CPUs. SUTs with\r
+other CPUs are likely to yield different results. A good rule of thumb for\r
+estimating VPP packet throughput for the Phy-to-Phy (NIC-to-NIC, PCI-to-PCI)\r
+topology is to expect the forwarding performance to be proportional to the\r
+CPU core frequency, assuming the CPU is the only limiting factor and all\r
+other SUT aspects are equal to the FD.io CSIT environment. The same rule of\r
+thumb can also be applied to the Phy-to-VM-to-Phy (NIC-to-VM-to-NIC)\r
+topology, but due to the much higher dependency on very high frequency memory\r
+operations and sensitivity to Linux kernel scheduler settings and behaviour,\r
+this estimation may not always be sufficiently accurate.\r
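+\r
+As an illustration, the sketch below applies this rule of thumb in Python;\r
+the 2.3GHz base frequency comes from the E5-2699v3 SUTs above, while the\r
+example measured rate and target frequency are hypothetical values.\r
+\r
+::\r
+\r
+    # Hedged rule-of-thumb estimate: Phy-to-Phy forwarding scales roughly\r
+    # with CPU core frequency (assumes the CPU is the only bottleneck and\r
+    # an otherwise identical SUT setup).\r
+    CSIT_CPU_GHZ = 2.3  # LF FD.io SUTs: Intel XEON E5-2699v3\r
+\r
+    def estimate_pps(measured_pps, target_cpu_ghz):\r
+        """Scale a CSIT-measured packet rate to another CPU frequency."""\r
+        return measured_pps * (target_cpu_ghz / CSIT_CPU_GHZ)\r
+\r
+    # Hypothetical example: 9.2 Mpps measured in CSIT, target CPU at 3.0GHz.\r
+    print("%.1f Mpps" % (estimate_pps(9.2e6, 3.0) / 1e6))  # ~12.0 Mpps\r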
+\r
+The detailed LF FD.io testbed specification and physical topology are\r
+described on the `CSIT LF FDio testbed wiki page\r
+<https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.\r
+\r
+Performance Tests Coverage\r
+--------------------------\r
+\r
+Performance tests are split into two main categories:\r
+\r
+- Throughput discovery - discovery of packet forwarding rate using binary\r
+ search in accordance with RFC2544 (see the illustrative sketch after this\r
+ list).\r
+\r
+ - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;\r
+ followed by packet one-way latency measurements at 10%, 50% and 100% of\r
+ discovered NDR throughput.\r
+ - PDR - discovery of Partial Drop Rate packet throughput, with a specified\r
+ non-zero packet loss rate currently set to 0.5%; followed by packet one-way\r
+ latency measurements at 100% of discovered PDR throughput.\r
+\r
+- Throughput verification - verification of packet forwarding rate against a\r
+ previously discovered throughput rate. These tests are currently done\r
+ against 0.9 of the reference NDR, with reference rates updated periodically.\r
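+\r
+A minimal sketch of the binary search is shown below; it is an illustrative\r
+outline only, not the CSIT implementation, and the measure_loss() helper,\r
+rate bounds and resolution are hypothetical placeholders.\r
+\r
+::\r
+\r
+    # Illustrative RFC2544-style binary search (not the CSIT code).\r
+    # measure_loss(rate_pps) is a hypothetical helper that offers traffic\r
+    # at the given rate for the trial duration and returns the loss ratio.\r
+    def binary_search(measure_loss, min_pps, max_pps,\r
+                      loss_tolerance=0.0, resolution_pps=10000):\r
+        """Return the highest rate whose loss ratio is <= loss_tolerance.\r
+\r
+        loss_tolerance=0.0 discovers NDR; 0.005 (0.5%) discovers PDR.\r
+        """\r
+        best = min_pps\r
+        lo, hi = float(min_pps), float(max_pps)\r
+        while hi - lo > resolution_pps:\r
+            mid = (lo + hi) / 2\r
+            if measure_loss(mid) <= loss_tolerance:\r
+                best = lo = mid  # passed: search higher rates\r
+            else:\r
+                hi = mid         # failed: search lower rates\r
+        return best\r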
+\r
+CSIT |release| includes the following performance test suites, listed per NIC\r
+type:\r
+\r
+- 2port10GE X520-DA2 Intel\r
+\r
+ - **L2XC** - L2 Cross-Connect switched-forwarding of untagged, dot1q, dot1ad\r
+ VLAN tagged Ethernet frames.\r
+ - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames\r
+ with MAC learning; tests with disabled MAC learning (i.e. static MAC) are to\r
+ be added.\r
+ - **IPv4** - IPv4 routed-forwarding.\r
+ - **IPv6** - IPv6 routed-forwarding.\r
+ - **IPv4 Scale** - IPv4 routed-forwarding with 20k, 200k and 2M FIB entries.\r
+ - **IPv6 Scale** - IPv6 routed-forwarding with 20k, 200k and 2M FIB entries.\r
+ - **VM with vhost-user** - switching between NIC ports and VM over vhost-user\r
+ interfaces in different switching modes incl. L2 Cross-Connect, L2\r
+ Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.\r
+ - **COP** - IPv4 and IPv6 routed-forwarding with COP address security.\r
+ - **iACL** - IPv4 and IPv6 routed-forwarding with iACL address security.\r
+ - **LISP** - LISP overlay tunneling for IPv4-over-IPv4, IPv6-over-IPv4,\r
+ IPv6-over-IPv6, IPv4-over-IPv6 in IPv4 and IPv6 routed-forwarding modes.\r
+ - **VXLAN** - VXLAN overlay tunneling integration with L2XC and L2BD.\r
+ - **QoS Policer** - ingress packet rate measuring, marking and limiting\r
+ (IPv4).\r
+\r
+- 2port40GE XL710 Intel\r
+\r
+ - **L2XC** - L2 Cross-Connect switched-forwarding of untagged Ethernet frames.\r
+ - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames\r
+ with MAC learning.\r
+ - **IPv4** - IPv4 routed-forwarding.\r
+ - **IPv6** - IPv6 routed-forwarding.\r
+ - **VM with vhost-user** - switching between NIC ports and VM over vhost-user\r
+ interfaces in different switching modes incl. L2 Bridge-Domain.\r
+\r
+- 2port10GE X710 Intel\r
+\r
+ - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames\r
+ with MAC learning.\r
+ - **VM with vhost-user** - switching between NIC ports and VM over vhost-user\r
+ interfaces in different switching modes incl. L2 Bridge-Domain.\r
+\r
+- 2port10GE VIC1227 Cisco\r
+\r
+ - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames\r
+ with MAC learning.\r
+\r
+- 2port40GE VIC1385 Cisco\r
+\r
+ - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames\r
+ with MAC learning.\r
+\r
+Execution of performance tests takes time, especially the throughput discovery\r
+tests. Due to the limited HW testbed resources available within FD.io labs\r
+hosted by the Linux Foundation, the number of tests for NICs other than X520\r
+(a.k.a. Niantic) has been limited to a few baseline tests. Over time we expect\r
+the HW testbed resources to grow, and we will be adding a complete set of\r
+performance tests for all hardware models, to be executed regularly and/or\r
+continuously.\r
+\r
+Performance Tests Naming\r
+------------------------\r
+\r
+CSIT |release| introduced a common structured naming convention for all\r
+performance and functional tests. This change was driven by the substantially\r
+growing number and type of CSIT test cases. Firstly, the original practice did\r
+not always follow any strict naming convention. Secondly, test names did not\r
+always clearly capture the tested packet encapsulations, and the actual type\r
+or content of the tests. Thirdly, HW configurations in terms of NICs, ports\r
+and their locality were not captured either. These were but a few of the\r
+reasons that drove the decision to define a new, more complete and stricter\r
+test naming convention, and to apply it to all existing and new test cases.\r
+\r
+The new naming should be intuitive for the majority of the tests. The complete\r
+description of the CSIT test naming convention is provided on the `CSIT test\r
+naming wiki <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.\r
+\r
+Here are a few illustrative examples of the new naming in use for performance\r
+test suites (a simple name-parsing sketch follows these examples):\r
+\r
+#. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**\r
+\r
+ - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-\r
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*\r
+ - *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on\r
+ Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching\r
+ with MAC learning, NDR throughput discovery.\r
+ - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE\r
+ on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline\r
+ switching with MAC learning, NDR throughput verification.\r
+ - *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel\r
+ x520 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.\r
+ - *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on\r
+ Intel x520 NIC, IPv6 scaled-up routed forwarding (200k FIB entries), NDR\r
+ throughput discovery.\r
+\r
+#. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,\r
+ P2V2P, NIC2VMchain2NIC, P2V2V2P**\r
+\r
+ - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-\r
+ PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-\r
+ VirtPortConfig-VMconfig-TestType*\r
+ - *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports\r
+ of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain\r
+ switching to/from two vhost interfaces and one VM, NDR throughput\r
+ discovery.\r
+ - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2\r
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain\r
+ switching to/from two vhost interfaces and one VM, NDR throughput\r
+ discovery.\r
+ - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2\r
+ ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain\r
+ switching to/from four vhost interfaces and two VMs, NDR throughput\r
+ discovery.\r
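+\r
+To make the convention concrete, the sketch below splits a performance test\r
+name into its main fields; the field labels are a simplification of the full\r
+convention documented on the wiki page referenced above.\r
+\r
+::\r
+\r
+    # Simplified parsing of a CSIT performance test name (illustration\r
+    # only; the authoritative grammar is on the CSIT test naming wiki).\r
+    def parse_test_name(name):\r
+        parts = name.split("-")\r
+        return {\r
+            "port_nic_config": parts[0],      # e.g. 10ge2p1x520\r
+            "wire_encapsulation": parts[1],   # e.g. ethip4vxlan\r
+            "forwarding_function": parts[2],  # e.g. l2bdbasemaclrn\r
+            "middle_fields": parts[3:-1],     # processing / virt fields\r
+            "test_type": parts[-1],           # e.g. ndrdisc, ndrchk\r
+        }\r
+\r
+    print(parse_test_name("10ge2p1x520-ethip4-ip4base-ndrdisc"))\r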
+\r
+Methodology: Multi-Thread and Multi-Core\r
+----------------------------------------\r
+\r
+**HyperThreading** - CSIT |release| performance tests are executed with SUT\r
+servers' Intel XEON CPUs configured in HyperThreading Disabled mode (BIOS\r
+settings). This is the simplest configuration used to establish baseline\r
+single-thread single-core SW packet processing and forwarding performance.\r
+Subsequent releases of CSIT will add performance tests with Intel\r
+HyperThreading Enabled (requires BIOS settings change and hard reboot).\r
+\r
+**Multi-core Test** - CSIT |release| multi-core tests are executed in the\r
+following VPP thread and core configurations:\r
+\r
+#. 1t1c - 1 VPP worker thread on 1 CPU physical core.\r
+#. 2t2c - 2 VPP worker threads on 2 CPU physical cores.\r
+#. 4t4c - 4 VPP worker threads on 4 CPU physical cores.\r
+\r
+Note that in quite a few test cases running VPP on 2 or 4 physical cores hits\r
+the tested NIC I/O bandwidth or packets-per-second limit.\r
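+\r
+For reference, below is a minimal sketch of the corresponding VPP startup\r
+configuration stanza; the core numbers are an assumed example that depends on\r
+the SUT CPU layout, not the exact CSIT lab configuration.\r
+\r
+::\r
+\r
+    # Sketch of a startup.conf cpu stanza for the 2t2c configuration;\r
+    # core numbers are an assumed example.\r
+    cpu {\r
+        main-core 0\r
+        corelist-workers 1-2\r
+    }\r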
+\r
+Methodology: Packet Throughput\r
+------------------------------\r
+\r
+The following values are measured and reported for packet throughput tests (a\r
+worked bandwidth calculation is sketched after this list):\r
+\r
+- NDR binary search per RFC2544:\r
+\r
+ - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps\r
+ (2x <per direction packets-per-second>)"\r
+ - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per\r
+ second> Gbps (untagged)"\r
+\r
+- PDR binary search per RFC2544:\r
+\r
+ - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps (2x\r
+ <per direction packets-per-second>)"\r
+ - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per\r
+ second> Gbps (untagged)"\r
+ - Packet loss tolerance: "LOSS_ACCEPTANCE <accepted percentage of packets\r
+ lost at PDR rate>"\r
+\r
+- NDR and PDR are measured for the following L2 frame sizes:\r
+\r
+ - IPv4: 64B, IMIX_v4_1 (28x64B,16x570B,4x1518B), 1518B, 9000B.\r
+ - IPv6: 78B, 1518B, 9000B.\r
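+\r
+The sketch below shows how aggregate bandwidth relates to aggregate packet\r
+rate and L2 frame size; including the 20B per-frame L1 overhead (preamble,\r
+SFD, inter-frame gap) is an assumption made here for illustration, not\r
+necessarily the exact formula behind the reported BANDWIDTH values.\r
+\r
+::\r
+\r
+    # Hedged sketch: derive bandwidth from packet rate and frame size.\r
+    # The 20B per-frame L1 overhead (preamble + SFD + inter-frame gap)\r
+    # is an assumption for this illustration.\r
+    L1_OVERHEAD_B = 20\r
+\r
+    def l1_bandwidth_gbps(aggregate_pps, l2_frame_size_b):\r
+        return aggregate_pps * (l2_frame_size_b + L1_OVERHEAD_B) * 8 / 1e9\r
+\r
+    # IMIX_v4_1 average L2 frame size: (28*64 + 16*570 + 4*1518) / 48\r
+    imix_avg_b = (28 * 64 + 16 * 570 + 4 * 1518) / 48.0\r
+    print(round(imix_avg_b, 1))                      # 353.8\r
+    print(round(l1_bandwidth_gbps(14.88e6, 64), 2))  # ~10.0, 10GE @64B\r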
+\r
+\r
+Methodology: Packet Latency\r
+---------------------------\r
+\r
+TRex Traffic Generator (TG) is used for measuring latency of VPP DUTs.\r
+Reported latency values are measured using the following methodology (a\r
+per-SUT latency derivation is sketched after this list):\r
+\r
+- Latency tests are performed at 10%, 50% and 100% of the discovered NDR rate\r
+ (Non Drop Rate) for each NDR throughput test and packet size (except IMIX).\r
+- TG sends dedicated latency streams, one per direction, each at the rate of\r
+ 10kpps at the prescribed packet size; these are sent in addition to the main\r
+ load streams.\r
+- TG reports min/avg/max latency values per stream direction, hence two sets\r
+ of latency values are reported per test case; future release of TRex is\r
+ expected to report latency percentiles.\r
+- Reported latency values are aggregated across the two SUTs, due to the\r
+ three-node topology used for all performance tests; for per-SUT latency, the\r
+ reported value should be divided by two.\r
+- 1usec is the measurement accuracy advertised by the TRex TG for the setup\r
+ used in the FD.io labs by the CSIT project.\r
+- TRex setup introduces an always-on error of about 2*2usec per latency flow -\r
+ additional Tx/Rx interface latency induced by TRex SW writing and reading\r
+ packet timestamps on CPU cores, without HW acceleration on the NICs closer\r
+ to the interface line.\r
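+\r
+Combining the points above, an approximate per-SUT latency can be derived as\r
+sketched below; treating the TRex error as exactly 2*2usec per flow is an\r
+assumption made here for illustration.\r
+\r
+::\r
+\r
+    # Hedged sketch: approximate per-SUT latency from the reported\r
+    # aggregate value (3-node topology => two SUTs in the packet path).\r
+    TREX_FLOW_ERROR_USEC = 2 * 2  # always-on TRex Tx/Rx timestamping error\r
+\r
+    def per_sut_latency_usec(reported_usec):\r
+        corrected = max(reported_usec - TREX_FLOW_ERROR_USEC, 0)\r
+        return corrected / 2.0\r
+\r
+    print(per_sut_latency_usec(44))  # ~20usec per SUT for a 44usec report\r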
+\r
+\r
+Methodology: KVM VM vhost\r
+-------------------------\r
+\r
+CSIT |release| introduced environment configuration changes to KVM Qemu\r
+vhost-user tests in order to more representatively measure VPP-17.01\r
+performance in configurations with vhost-user interfaces and VMs.\r
+\r
+The current setup of the CSIT FD.io performance lab uses settings tuned for\r
+better KVM Qemu performance:\r
+\r
+- Default Qemu virtio queue size of 256 descriptors.\r
+- Adjusted Linux kernel CFS scheduler settings, as detailed on this CSIT wiki\r
+ page: https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604.\r
+\r
+The adjusted Linux kernel CFS settings make the NDR and PDR throughput\r
+performance of the VPP+VM system less sensitive to other Linux OS system\r
+tasks, by reducing their interference with the CPU cores designated for the\r
+critical software tasks under test, namely VPP worker threads in the host and\r
+Testpmd threads in the guest dealing with the data plane.\r
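+\r
+As one concrete illustration of shielding critical threads from other system\r
+tasks, the sketch below pins the current process to a dedicated core via the\r
+Linux scheduler API; the core number is a hypothetical example and this is\r
+not the CSIT tuning procedure itself (see the wiki page above for that).\r
+\r
+::\r
+\r
+    # Illustration only: pin a critical data-plane process to one core so\r
+    # the CFS scheduler cannot migrate it (hypothetical core number).\r
+    import os\r
+\r
+    os.sched_setaffinity(0, {3})    # pid 0 == the current process\r
+    print(os.sched_getaffinity(0))  # {3}\r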