Tested Physical Topologies
--------------------------

CSIT VPP performance tests are executed on physical baremetal servers hosted by
the LF FD.io project. Testbed physical topology is shown in the figure below.

::

    +------------------------+           +------------------------+
    |                        |           |                        |
    |  +------------------+  |           |  +------------------+  |
    |  |                  |  |           |  |                  |  |
    |  |                  <----------------->                  |  |
    |  |       DUT1       |  |           |  |       DUT2       |  |
    |  +--^---------------+  |           |  +---------------^--+  |
    |     |                  |           |                  |     |
    |     |            SUT1  |           |  SUT2            |     |
    +-----^------------------+           +------------------^-----+
          |                                                  |
          |                                                  |
          |                  +-----------+                  |
          +------------------>    TG     <------------------+
                             +-----------+

SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two
Intel XEON CPUs); TG is a Traffic Generator (another Cisco UCS C240 with two
Intel XEON CPUs). SUTs run the VPP SW application in Linux user-mode as the
Device Under Test (DUT). TG runs the TRex SW application as a packet Traffic
Generator. Physical connectivity between the SUTs, and between each SUT and
the TG, is provided using different NIC models that need to be tested for
performance. Currently installed and tested NIC models include:

#. 2port10GE X520-DA2 Intel.
#. 2port10GE X710 Intel.
#. 2port10GE VIC1227 Cisco.
#. 2port40GE VIC1385 Cisco.
#. 2port40GE XL710 Intel.

From the SUT and DUT perspective, all performance tests involve forwarding
packets between two physical Ethernet ports (10GE or 40GE). Due to the number
of listed NIC models tested and the available PCI slot capacity in the SUT
servers, in all of the above cases both physical ports are located on the same
NIC. In some test cases this results in measured packet throughput being
limited not by the VPP DUT but by either the physical interface or the NIC
capacity.

Going forward, the CSIT project will be looking to add more hardware to the
FD.io performance labs to address larger-scale multi-interface and multi-NIC
performance testing scenarios.

For test cases that require DUT (VPP) to communicate with
VirtualMachines (VMs) / LinuxContainers (LXCs) over vhost-user/memif
interfaces, N VM/LXC instances are created on SUT1 and SUT2. For N=1,
DUT forwards packets between the vhost/memif and physical interfaces. For
N>1, a logical service chain forwarding topology is created on the DUT by
applying L2 or IPv4/IPv6 configuration depending on the test suite. The DUT
test topology with N VM/LXC instances is shown in the figure below,
including the applicable packet flow through the DUTs and VMs/LXCs (marked in
the figure with ``***``).

::

    +-------------------------+           +-------------------------+
    | +---------+ +---------+ |           | +---------+ +---------+ |
    | |VM/LXC[1]| |VM/LXC[N]| |           | |VM/LXC[1]| |VM/LXC[N]| |
    | |  *****  | |  *****  | |           | |  *****  | |  *****  | |
    | +--^---^--+ +--^---^--+ |           | +--^---^--+ +--^---^--+ |
    |   *|   |*     *|   |*   |           |   *|   |*     *|   |*   |
    | +--v---v-------v---v--+ |           | +--v---v-------v---v--+ |
    | |  *   *       *   *  |*|***********|*|  *   *       *   *  | |
    | |  *   *********   ***<-|-----------|->***   *********   *  | |
    | |  *    DUT1          | |           | |          DUT2    *  | |
    | +--^------------------+ |           | +------------------^--+ |
    |   *|                    |           |                    |*   |
    |   *|             SUT1   |           |   SUT2             |*   |
    +-------------------------+           +-------------------------+
        *|                                                     |*
        *|                                                     |*
        *|                    +-----------+                    |*
        *+-------------------->    TG     <--------------------+*
         **********************|           |**********************
                               +-----------+

For VM/LXC tests, packets are switched by the DUT multiple times: twice for
a single VM/LXC, three times for two VMs/LXCs, N+1 times for N VMs/LXCs.
Hence the external throughput rates measured by TG and listed in this
report must be multiplied by (N+1) to represent the actual DUT aggregate
packet forwarding rate.

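As a trivial worked example (an illustrative sketch only, not part of the CSIT
code base), the conversion from the TG-measured rate to the DUT aggregate rate
can be expressed as::

    def dut_aggregate_pps(tg_measured_pps, vm_count):
        """Estimate the DUT aggregate forwarding rate from the TG-measured rate.

        Packets traverse the DUT (vm_count + 1) times when chained through
        vm_count VM/LXC instances, so the externally measured rate is
        multiplied accordingly.
        """
        return tg_measured_pps * (vm_count + 1)

    # Example: TG measures 3.0 Mpps with a 2-VM service chain; the DUT is
    # actually switching 9.0 Mpps in aggregate.
    print(dut_aggregate_pps(3.0e6, 2))
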
Note that reported DUT (VPP) performance results are specific to the
SUTs tested. Current LF FD.io SUTs are based on Intel XEON E5-2699v3
2.3GHz CPUs. SUTs with other CPUs are likely to yield different results.
A good rule of thumb, that can be applied to estimate VPP packet
throughput for the Phy-to-Phy (NIC-to-NIC, PCI-to-PCI) topology, is to expect
the forwarding performance to be proportional to CPU core frequency,
assuming CPU is the only limiting factor and all other SUT parameters are
equivalent to the FD.io CSIT environment. The same rule of thumb can also be
applied to the Phy-to-VM/LXC-to-Phy (NIC-to-VM/LXC-to-NIC) topology, but
due to the much higher dependency on intensive memory operations and
sensitivity to Linux kernel scheduler settings and behaviour, this
estimation may not always yield good enough accuracy.

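To illustrate the rule of thumb (a hypothetical sketch; the 2.3 GHz value is
the LF FD.io reference CPU frequency, the 3.0 GHz target is made up)::

    CSIT_REFERENCE_FREQ_GHZ = 2.3  # Intel XEON E5-2699v3 used in LF FD.io SUTs

    def estimate_phy2phy_pps(reported_pps, target_core_freq_ghz):
        """Scale a reported Phy-to-Phy throughput to a different CPU frequency.

        Assumes the CPU core is the only bottleneck and all other SUT
        parameters are equivalent to the FD.io CSIT environment.
        """
        return reported_pps * target_core_freq_ghz / CSIT_REFERENCE_FREQ_GHZ

    # Example: 10 Mpps reported on the 2.3 GHz reference SUT would be expected
    # to scale to roughly 13 Mpps on a comparable 3.0 GHz CPU core.
    print(round(estimate_phy2phy_pps(10e6, 3.0)))
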
For detailed LF FD.io test bed specification and physical topology, please
refer to the
`LF FD.io CSIT testbed wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.

Performance Tests Coverage
--------------------------

Performance tests are split into two main categories:

- Throughput discovery - discovery of packet forwarding rate using binary search
  in accordance with RFC2544.

  - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;
    followed by one-way packet latency measurements at 10%, 50% and 100% of
    discovered NDR throughput.
  - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss
    currently set to 0.5%; followed by one-way packet latency measurements at
    100% of discovered PDR throughput.

- Throughput verification - verification of packet forwarding rate against
  previously discovered throughput rate. These tests are currently done against
  0.9 of reference NDR, with reference rates updated periodically.

CSIT |release| includes the following performance test suites, listed per NIC type:

- 2port10GE X520-DA2 Intel

  - **L2XC** - L2 Cross-Connect switched-forwarding of untagged, dot1q, dot1ad
    VLAN tagged Ethernet frames.
  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning; disabled MAC learning i.e. static MAC tests to be added.
  - **IPv4** - IPv4 routed-forwarding.
  - **IPv6** - IPv6 routed-forwarding.
  - **IPv4 Scale** - IPv4 routed-forwarding with 20k, 200k and 2M FIB entries.
  - **IPv6 Scale** - IPv6 routed-forwarding with 20k, 200k and 2M FIB entries.
  - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
    of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
    Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
  - **COP** - IPv4 and IPv6 routed-forwarding with COP address security.
  - **iACL** - IPv4 and IPv6 routed-forwarding with iACL address security.
  - **LISP** - LISP overlay tunneling for IPv4-over-IPv4, IPv6-over-IPv4,
    IPv6-over-IPv6, IPv4-over-IPv6 in IPv4 and IPv6 routed-forwarding modes.
  - **VXLAN** - VXLAN overlay tunnelling integration with L2XC and L2BD.
  - **QoS Policer** - ingress packet rate measuring, marking and limiting.
  - **CGNAT** - Carrier Grade Network Address Translation tests with varying
    number of users and ports per user.

- 2port40GE XL710 Intel

  - **L2XC** - L2 Cross-Connect switched-forwarding of untagged Ethernet frames.
  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.
  - **IPv4** - IPv4 routed-forwarding.
  - **IPv6** - IPv6 routed-forwarding.
  - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
    of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
    Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
  - **IPSec** - IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in combination
    with IPv4 routed-forwarding.
  - **IPSec+LISP** - IPSec encryption with CBC-SHA1 ciphers, in combination
    with LISP-GPE overlay tunneling for IPv4-over-IPv4.

- 2port10GE X710 Intel

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.
  - **VMs with vhost-user** - virtual topologies with 1 VM using vhost-user
    interfaces, with VPP forwarding modes incl. L2 Bridge-Domain.

- 2port10GE VIC1227 Cisco

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.

- 2port40GE VIC1385 Cisco

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.

Execution of performance tests takes time, especially the throughput discovery
tests. Due to the limited HW testbed resources available within FD.io labs
hosted by the Linux Foundation, the number of tests for NICs other than X520
(a.k.a. Niantic) has been limited to a few baseline tests. Over time we expect
the HW testbed resources to grow, and we will be adding a complete set of
performance tests for all models of hardware, to be executed regularly and/or
on demand.

Performance Tests Naming
------------------------

CSIT |release| follows a common structured naming convention for all
performance and system functional tests, introduced in CSIT |release-1|.

The naming should be intuitive for the majority of the tests. A complete
description of the CSIT test naming convention is provided on the
`CSIT test naming wiki <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.

Here are a few illustrative examples of the new naming usage for performance
test suites (a small name-parsing sketch follows the examples):

#. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**

   - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
     PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
   - *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on
     Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching
     with MAC learning, NDR throughput discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE
     on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline
     switching with MAC learning, NDR throughput verification.
   - *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel
     x520 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
   - *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on
     Intel x520 NIC, IPv6 scaled up routed forwarding, NDR throughput
     discovery.

#. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
   P2V2P, NIC2VMchain2NIC, P2V2V2P**

   - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
     PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
     VirtPortConfig-VMconfig-TestType*
   - *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
     of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain
     switching to/from two vhost interfaces and one VM, NDR throughput
     discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
     ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
     switching to/from two vhost interfaces and one VM, NDR throughput
     discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
     ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
     switching to/from four vhost interfaces and two VMs, NDR throughput
     discovery.

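The field structure above can be illustrated with a small name-parsing sketch
(simplified and illustrative only; this is not a CSIT library function, and
real test names may contain a varying number of middle components)::

    def split_test_name(name):
        """Split a CSIT performance test name into its major fields."""
        parts = name.replace(".robot", "").split("-")
        return {
            "port_nic_config": parts[0],          # e.g. 10ge2p1x520
            "wire_encapsulation": parts[1],       # e.g. ethip4vxlan
            "test_type": parts[-1],               # e.g. ndrdisc, ndrchk, pdrdisc
            "processing_functions": parts[2:-1],  # everything in between
        }

    print(split_test_name(
        "10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot"))
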
Methodology: Multi-Thread and Multi-Core
----------------------------------------

**HyperThreading** - CSIT |release| performance tests are executed with SUT
servers' Intel XEON CPUs configured in HyperThreading Disabled mode (BIOS
settings). This is the simplest configuration used to establish baseline
single-thread single-core SW packet processing and forwarding performance.
Subsequent releases of CSIT will add performance tests with Intel
HyperThreading Enabled (requires BIOS settings change and hard reboot).

**Multi-core Test** - CSIT |release| multi-core tests are executed in the
following VPP thread and core configurations:

#. 1t1c - 1 VPP worker thread on 1 CPU physical core.
#. 2t2c - 2 VPP worker threads on 2 CPU physical cores.

Note that in quite a few test cases running VPP on 2 physical cores hits
the tested NIC I/O bandwidth or packets-per-second limit.

Methodology: Packet Throughput
------------------------------

The following values are measured and reported for packet throughput tests
(a simplified sketch of the binary search procedure is given after the lists
below):

- NDR binary search per RFC2544:

  - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps
    (2x <per direction packets-per-second>)"
  - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
    second> Gbps (untagged)"

- PDR binary search per RFC2544:

  - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps (2x
    <per direction packets-per-second>)"
  - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
    second> Gbps (untagged)"
  - Packet loss tolerance: "LOSS_ACCEPTANCE <accepted percentage of packets
    lost at PDR rate>"

- NDR and PDR are measured for the following L2 frame sizes:

  - IPv4: 64B, IMIX_v4_1 (28x64B,16x570B,4x1518B), 1518B, 9000B.
  - IPv6: 78B, 1518B, 9000B.

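The NDR/PDR search logic can be sketched as follows (a simplified illustration
of the RFC2544-style binary search, not the actual CSIT implementation;
``measure_loss_ratio`` is a hypothetical callback that would drive the traffic
generator for one trial and return the observed loss ratio)::

    def discover_throughput(measure_loss_ratio, line_rate_pps,
                            loss_acceptance=0.0, precision_pps=100000):
        """Binary search for NDR (loss_acceptance=0.0) or PDR (e.g. 0.005)."""
        lo, hi = 0.0, float(line_rate_pps)
        best = 0.0
        while hi - lo > precision_pps:
            rate = (lo + hi) / 2.0
            if measure_loss_ratio(rate) <= loss_acceptance:
                best = rate   # rate sustained within the loss tolerance
                lo = rate     # search higher
            else:
                hi = rate     # too much loss, search lower
        return best

    # NDR: zero loss tolerance; PDR: 0.5% loss tolerance as used in CSIT, e.g.:
    # ndr_pps = discover_throughput(run_trial, line_rate_pps=29.76e6)
    # pdr_pps = discover_throughput(run_trial, line_rate_pps=29.76e6,
    #                               loss_acceptance=0.005)
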
Methodology: Packet Latency
---------------------------

TRex Traffic Generator (TG) is used for measuring latency of VPP DUTs. Reported
latency values are measured using the following methodology:

- Latency tests are performed at 10% and 50% of the discovered NDR rate (Non
  Drop Rate) for each NDR throughput test and packet size (except IMIX).
- TG sends dedicated latency streams, one per direction, each at the rate of
  10kpps at the prescribed packet size; these are sent in addition to the main
  load streams.
- TG reports min/avg/max latency values per stream direction, hence two sets
  of latency values are reported per test case; a future release of TRex is
  expected to report latency percentiles.
- Reported latency values are aggregate across two SUTs due to the three node
  topology used for all performance tests; for per-SUT latency, the reported
  value should be divided by two.
- 1usec is the measurement accuracy advertised by TRex TG for the setup used
  in FD.io labs used by the CSIT project.
- TRex setup introduces an always-on error of about 2*2usec per latency flow -
  additional Tx/Rx interface latency induced by TRex SW writing and reading
  packet timestamps on CPU cores, without HW acceleration on the NICs closer
  to the wire.

Methodology: KVM VM vhost
-------------------------

CSIT |release| introduced environment configuration changes to KVM Qemu
vhost-user tests in order to more representatively measure |vpp-release|
performance in configurations with vhost-user interfaces and VMs.

The current setup of the CSIT FD.io performance lab uses tuned settings for
optimal KVM Qemu performance:

- Qemu virtio queue size has been increased from the default value of 256 to
  1024.
- Adjusted Linux kernel CFS scheduler settings, as detailed on this CSIT wiki
  page: https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604.

Adjusted Linux kernel CFS settings make the NDR and PDR throughput performance
of the VPP+VM system less sensitive to other Linux OS system tasks by reducing
their interference on the CPU cores that are designated for the critical
software tasks under test, namely the VPP worker threads in the host and the
Testpmd threads in the guest handling the data plane.

Methodology: IPSec with Intel QAT HW cards
------------------------------------------

VPP IPSec performance tests are using the DPDK cryptodev device driver in
combination with HW cryptodev devices - Intel QAT 8950 50G - present in
LF FD.io physical testbeds. DPDK cryptodev can be used for all IPSec
data plane functions supported by VPP.

Currently CSIT |release| implements the following IPSec test cases:

- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4 routed-forwarding
  with Intel xl710 NIC.
- CBC-SHA1 ciphers, in combination with LISP-GPE overlay tunneling for
  IPv4-over-IPv4 with Intel xl710 NIC.

Methodology: TRex Traffic Generator Usage
-----------------------------------------

The `TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
CSIT performance tests. TRex stateless mode is used to measure NDR and PDR
throughputs using binary search (NDR and PDR discovery tests) and for quick
checks of DUT performance against the reference NDRs (NDR check tests) for
specific configurations.

TRex is installed and run on the TG compute node. The typical procedure is:

- If TRex is not already installed on the TG, it is installed in the
  suite setup phase - see `TRex installation`_.
- TRex configuration is set in its configuration file.
- TRex is started in the background mode:

  ::

    $ sh -c 'cd /opt/trex-core-2.25/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null

- Traffic streams are prepared dynamically for each test. The traffic
  is sent and the statistics obtained using trex_stl_lib.api.STLClient.

**Measuring packet loss** (see the sketch after this list)

- Create an instance of STLClient
- Connect to the client
- Add the traffic streams
- Clear the statistics
- Send the traffic for a defined time
- Get the statistics

If a warm-up phase is required, the traffic is also sent before the test and
the resulting statistics are ignored.

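The sequence above can be sketched with the TRex stateless Python API roughly
as follows (a simplified illustration assuming a locally running TRex server;
addresses, rates and port numbers are made up, and this is not the actual CSIT
driver code)::

    from trex_stl_lib.api import *  # STLClient, STLStream, scapy layers, etc.

    client = STLClient(server="127.0.0.1")
    try:
        client.connect()                      # connect to TRex server on the TG
        client.reset(ports=[0, 1])            # acquire ports, remove old streams

        base_pkt = Ether() / IP(src="16.0.0.1", dst="48.0.0.1") / UDP(dport=12)
        pad = max(0, 64 - len(base_pkt)) * 'x'
        stream = STLStream(packet=STLPktBuilder(pkt=base_pkt / pad),
                           mode=STLTXCont(pps=1000000))
        client.add_streams(stream, ports=[0])

        client.clear_stats()
        client.start(ports=[0], duration=10)  # send traffic for the defined time
        client.wait_on_traffic(ports=[0])

        stats = client.get_stats()            # per-port tx/rx counters
        lost = stats[0]["opackets"] - stats[1]["ipackets"]
        print("packets lost: {0}".format(lost))
    finally:
        client.disconnect()
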
**Measuring latency**

If latency measurement is requested, two more packet streams are created (one
for each direction) with the TRex flow_stats parameter set to
STLFlowLatencyStats. In that case, the returned statistics will also include
min/avg/max latency values, as sketched below.

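A latency-enabled stream might be constructed roughly as follows (again a
simplified, illustrative sketch; pg_id values, addresses and rates are made
up)::

    from trex_stl_lib.api import *  # STLFlowLatencyStats, STLClient, scapy layers

    def latency_stream(pg_id, src, dst):
        """One 10 kpps latency stream with per-flow latency statistics."""
        base = Ether() / IP(src=src, dst=dst) / UDP(dport=12)
        pad = max(0, 64 - len(base)) * 'x'
        return STLStream(packet=STLPktBuilder(pkt=base / pad),
                         mode=STLTXCont(pps=10000),
                         flow_stats=STLFlowLatencyStats(pg_id=pg_id))

    client = STLClient(server="127.0.0.1")
    client.connect()
    client.reset(ports=[0, 1])
    client.add_streams(latency_stream(1, "16.0.0.1", "48.0.0.1"), ports=[0])
    client.add_streams(latency_stream(2, "48.0.0.1", "16.0.0.1"), ports=[1])

    client.clear_stats()
    client.start(ports=[0, 1], duration=10)
    client.wait_on_traffic(ports=[0, 1])

    stats = client.get_stats()
    for pg_id in (1, 2):                      # one result set per direction
        lat = stats["latency"][pg_id]["latency"]
        print(pg_id, lat["total_min"], lat["average"], lat["total_max"])

    client.disconnect()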