Tested Physical Topologies
--------------------------

CSIT VPP performance tests are executed on physical baremetal servers hosted by
LF FD.io project. Testbed physical topology is shown in the figure below::

    +------------------------+           +------------------------+
    |                        |           |                        |
    |  +------------------+  |           |  +------------------+  |
    |  |                  <----------------->                  |  |
    |  |       DUT1       |  |           |  |       DUT2       |  |
    |  +--^---------------+  |           |  +---------------^--+  |
    |     |                  |           |                  |     |
    |     |            SUT1  |           |  SUT2            |     |
    +------------------------+           +------------------^-----+
          |                                                  |
          |                                                  |
          |                  +-----------+                   |
          +------------------>           <------------------+
                             |    TG     |
                             +-----------+
SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two
Intel XEON CPUs), TG is a Traffic Generator (another Cisco UCS C240, with two
Intel XEON CPUs). SUTs run the VPP SW application in Linux user-mode as the
Device Under Test (DUT). TG runs the TRex SW application as a packet Traffic
Generator. Physical connectivity between SUTs and to TG is provided using
different NIC models that need to be tested for performance. Currently
installed and tested NIC models include:

#. 2port10GE X520-DA2 Intel.
#. 2port10GE X710 Intel.
#. 2port10GE VIC1227 Cisco.
#. 2port40GE VIC1385 Cisco.
#. 2port40GE XL710 Intel.
From the SUT and DUT perspective, all performance tests involve forwarding
packets between two physical Ethernet ports (10GE or 40GE). Due to the number
of listed NIC models tested and the available PCI slot capacity in SUT servers,
in all of the above cases both physical ports are located on the same NIC. In
some test cases this results in measured packet throughput being limited not by
the VPP DUT but by either the physical interface or the NIC capacity.
Going forward, the CSIT project will be looking to add more hardware into FD.io
performance labs to address larger scale multi-interface and multi-NIC
performance testing scenarios.
For test cases that require DUT (VPP) to communicate with VM(s) over vhost-user
interfaces, N VM instances are created on SUT1 and SUT2. For N=1 DUT (VPP)
forwards packets between vhost-user and physical interfaces. For N>1 a logical
service chain forwarding topology is created on DUT (VPP) by applying L2 or
IPv4/IPv6 configuration depending on the test suite. DUT (VPP) test topology
with N VM instances, including the applicable packet flow through the DUTs and
VMs (marked with ``***``), is shown in the figure below::

    +-------------------------+           +-------------------------+
    | +---------+ +---------+ |           | +---------+ +---------+ |
    | |  VM[1]  | |  VM[N]  | |           | |  VM[1]  | |  VM[N]  | |
    | |  *****  | |  *****  | |           | |  *****  | |  *****  | |
    | +--^---^--+ +--^---^--+ |           | +--^---^--+ +--^---^--+ |
    |   *|   |*     *|   |*   |           |   *|   |*     *|   |*   |
    | +--v---v-------v---v--+ |           | +--v---v-------v---v--+ |
    | |  *   *       *   *  |*|***********|*|  *   *       *   *  | |
    | |  *   *********   ***<-|-----------|->***   *********   *  | |
    | |  *    DUT1          | |           | |          DUT2    *  | |
    | +--^------------------+ |           | +------------------^--+ |
    |   *|            SUT1    |           |    SUT2            |*   |
    +-------------------------+           +-------------------------+
        *|                                                      |*
        *|                                                      |*
        *|                    +-----------+                     |*
        *+-------------------->           <---------------------+*
         **********************|    TG     |**********************
                              +-----------+
For VM tests, packets are switched by DUT (VPP) multiple times: twice for a
single VM, three times for two VMs, N+1 times for N VMs. Hence the external
throughput rates measured by TG and listed in this report must be multiplied
by (N+1) to represent the actual DUT aggregate packet forwarding rate.
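As a simple illustration of this (N+1) scaling, the short sketch below converts
a TG-measured rate into the DUT aggregate forwarding rate; the helper function
and the example numbers are hypothetical, not part of the CSIT test code.

.. code-block:: python

    def dut_aggregate_rate(tg_measured_pps, vm_count):
        """Scale the TG-measured throughput to the aggregate rate forwarded
        by each DUT, which switches every packet (vm_count + 1) times."""
        return tg_measured_pps * (vm_count + 1)

    # Hypothetical example: a 2.5 Mpps NDR measured by TG in a 1-VM vhost test
    # means each DUT actually forwards 2 x 2.5 Mpps = 5.0 Mpps.
    print(dut_aggregate_rate(2.5e6, 1))  # 5000000.0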
Note that reported VPP performance results are specific to the SUTs tested.
Current LF FD.io SUTs are based on Intel XEON E5-2699v3 2.3GHz CPUs. SUTs with
other CPUs are likely to yield different results. A good rule of thumb, that
can be applied to estimate VPP packet throughput for Phy-to-Phy (NIC-to-NIC,
PCI-to-PCI) topology, is to expect the forwarding performance to be
proportional to CPU core frequency, assuming CPU is the only limiting factor
and all other SUT parameters are equivalent to the FD.io CSIT environment. The
same rule of thumb can also be applied to Phy-to-VM-to-Phy (NIC-to-VM-to-NIC)
topology, but due to much higher dependency on intensive memory operations and
sensitivity to Linux kernel scheduler settings and behaviour, this estimation
may not always yield good enough accuracy.
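A minimal sketch of that rule of thumb, assuming CPU core frequency really is
the only limiting factor; the function and the numbers below are illustrative
placeholders, not measured CSIT results.

.. code-block:: python

    def estimate_phy2phy_pps(measured_pps, ref_core_ghz=2.3, target_core_ghz=3.0):
        """Scale a Phy-to-Phy result measured on the reference 2.3 GHz
        E5-2699v3 SUT linearly with the target CPU core frequency."""
        return measured_pps * (target_core_ghz / ref_core_ghz)

    # Hypothetical example: a 10 Mpps result would be expected to scale to
    # roughly 13 Mpps on an otherwise identical 3.0 GHz SUT.
    print(round(estimate_phy2phy_pps(10e6) / 1e6, 1), "Mpps")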
For detailed LF FD.io test bed specification and physical topology please refer
to the `LF FDio CSIT testbed wiki page <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
Performance Tests Coverage
--------------------------
Performance tests are split into two main categories:

- Throughput discovery - discovery of packet forwarding rate using binary search
  in accordance with RFC2544 (a simplified search sketch follows this list).

  - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;
    followed by one-way packet latency measurements at 10%, 50% and 100% of
    discovered NDR throughput.
  - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss
    currently set to 0.5%; followed by one-way packet latency measurements at
    100% of discovered PDR throughput.

- Throughput verification - verification of packet forwarding rate against
  previously discovered throughput rate. These tests are currently done against
  0.9 of reference NDR, with reference rates updated periodically.
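The sketch below shows the general shape of such an RFC2544-style binary
search; it is a simplified stand-in rather than the actual CSIT search
implementation, and ``send_and_count_loss()`` is a hypothetical measurement
helper that runs one trial at a given offered load and returns the loss ratio.

.. code-block:: python

    def discover_rate(send_and_count_loss, line_rate_pps,
                      loss_tolerance=0.0, precision_pps=10_000):
        """Binary-search the highest offered load whose measured loss ratio
        stays within loss_tolerance (0.0 for NDR, e.g. 0.005 for PDR)."""
        lo, hi = 0.0, float(line_rate_pps)
        best = 0.0
        while hi - lo > precision_pps:
            rate = (lo + hi) / 2
            loss_ratio = send_and_count_loss(rate)  # one trial at this rate
            if loss_ratio <= loss_tolerance:
                best, lo = rate, rate               # passed: search higher
            else:
                hi = rate                           # failed: search lower
        return best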
CSIT |release| includes the following performance test suites, listed per NIC
type:

- 2port10GE X520-DA2 Intel

  - **L2XC** - L2 Cross-Connect switched-forwarding of untagged, dot1q, dot1ad
    VLAN tagged Ethernet frames.
  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning; disabled MAC learning i.e. static MAC tests to be added.
  - **IPv4** - IPv4 routed-forwarding.
  - **IPv6** - IPv6 routed-forwarding.
  - **IPv4 Scale** - IPv4 routed-forwarding with 20k, 200k and 2M FIB entries.
  - **IPv6 Scale** - IPv6 routed-forwarding with 20k, 200k and 2M FIB entries.
  - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
    of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
    Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
  - **COP** - IPv4 and IPv6 routed-forwarding with COP address security.
  - **iACL** - IPv4 and IPv6 routed-forwarding with iACL address security.
  - **LISP** - LISP overlay tunneling for IPv4-over-IPv4, IPv6-over-IPv4,
    IPv6-over-IPv6, IPv4-over-IPv6 in IPv4 and IPv6 routed-forwarding modes.
  - **VXLAN** - VXLAN overlay tunnelling integration with L2XC and L2BD.
  - **QoS Policer** - ingress packet rate measuring, marking and limiting.

- 2port40GE XL710 Intel

  - **L2XC** - L2 Cross-Connect switched-forwarding of untagged Ethernet frames.
  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.
  - **IPv4** - IPv4 routed-forwarding.
  - **IPv6** - IPv6 routed-forwarding.
  - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
    of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
    Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
  - **IPSec** - IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in combination
    with IPv4 routed-forwarding.
  - **IPSec+LISP** - IPSec encryption with CBC-SHA1 ciphers, in combination
    with LISP-GPE overlay tunneling for IPv4-over-IPv4.

- 2port10GE X710 Intel

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.
  - **VMs with vhost-user** - virtual topologies with 1 VM using vhost-user
    interfaces, with VPP forwarding modes incl. L2 Bridge-Domain.

- 2port10GE VIC1227 Cisco

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.

- 2port40GE VIC1385 Cisco

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.
Execution of performance tests takes time, especially the throughput discovery
tests. Due to the limited HW testbed resources available within FD.io labs
hosted by Linux Foundation, the number of tests for NICs other than X520
(a.k.a. Niantic) has been limited to a few baseline tests. Over time we expect
the HW testbed resources to grow, and will be adding a complete set of
performance tests for all models of hardware to be executed regularly.
Performance Tests Naming
------------------------
CSIT |release| follows a common structured naming convention for all
performance and system functional tests, introduced in CSIT rls1701.

The naming should be intuitive for the majority of the tests. A complete
description of the CSIT test naming convention is provided on the `CSIT test
naming wiki <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
Here are a few illustrative examples of the new naming usage for performance
test suites (a small name-parsing sketch follows the examples):

#. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**

   - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
     PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
   - *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on
     Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline switching
     with MAC learning, NDR throughput discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE
     on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline
     switching with MAC learning, NDR throughput verification against the
     reference rate.
   - *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel
     x520 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
   - *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on
     Intel x520 NIC, IPv6 scaled up routed forwarding, NDR throughput
     discovery.

#. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
   P2V2P, NIC2VMchain2NIC, P2V2V2P**

   - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
     PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
     VirtPortConfig-VMconfig-TestType*
   - *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
     of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain
     switching to/from two vhost interfaces and one VM, NDR throughput
     discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
     ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
     switching to/from two vhost interfaces and one VM, NDR throughput
     discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
     ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
     switching to/from four vhost interfaces and two VMs, NDR throughput
     discovery.
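A small, hypothetical parsing sketch of the Phy-to-Phy form of this convention;
the field names follow the pattern above, but the helper itself is illustrative
and not part of the CSIT code base.

.. code-block:: python

    def parse_p2p_test_name(name):
        """Split a Phy-to-Phy suite name such as
        '10ge2p1x520-ethip4-ip4base-ndrdisc.robot' into its naming fields."""
        fields = name.removesuffix(".robot").split("-")
        return {
            "port_nic_config": fields[0],           # e.g. 10ge2p1x520
            "wire_encapsulation": fields[1],        # e.g. ethip4, dot1q
            "forwarding_function": fields[2],       # e.g. ip4base, l2bdbasemaclrn
            "processing_functions": fields[3:-1],   # zero or more functions
            "test_type": fields[-1],                # e.g. ndrdisc, ndrchk
        }

    print(parse_p2p_test_name("10ge2p1x520-ethip4-ip4base-ndrdisc.robot"))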
Methodology: Multi-Thread and Multi-Core
----------------------------------------
**HyperThreading** - CSIT |release| performance tests are executed with SUT
servers' Intel XEON CPUs configured in HyperThreading Disabled mode (BIOS
settings). This is the simplest configuration used to establish baseline
single-thread single-core SW packet processing and forwarding performance.
Subsequent releases of CSIT will add performance tests with Intel
HyperThreading Enabled (requires BIOS settings change and hard reboot).
**Multi-core Test** - CSIT |release| multi-core tests are executed in the
following VPP thread and core configurations:

#. 1t1c - 1 VPP worker thread on 1 CPU physical core.
#. 2t2c - 2 VPP worker threads on 2 CPU physical cores.

Note that in quite a few test cases running VPP on 2 physical cores hits
the tested NIC I/O bandwidth or packets-per-second limit.
Methodology: Packet Throughput
------------------------------
The following values are measured and reported for packet throughput tests (a
rate-to-bandwidth conversion sketch follows this list):

- NDR binary search per RFC2544:

  - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps
    (2x <per direction packets-per-second>)"
  - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
    second> Gbps (untagged)"

- PDR binary search per RFC2544:

  - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps (2x
    <per direction packets-per-second>)"
  - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
    second> Gbps (untagged)"
  - Packet loss tolerance: "LOSS_ACCEPTANCE <accepted percentage of packets
    lost at PDR rate>"

- NDR and PDR are measured for the following L2 frame sizes:

  - IPv4: 64B, IMIX_v4_1 (28x64B,16x570B,4x1518B), 1518B, 9000B.
  - IPv6: 78B, 1518B, 9000B.
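To relate the reported packet rate and bandwidth figures, the sketch below
converts an aggregate pps value to Gbps for a given frame size; whether the 20B
per-frame Ethernet preamble, SFD and inter-frame gap are included is left as an
explicit option, and the example numbers are illustrative only.

.. code-block:: python

    def frame_rate_to_gbps(pps, frame_bytes, include_l1_overhead=False):
        """Convert an aggregate packet rate to Gbps for one frame size.
        With include_l1_overhead=True the standard 20B per-frame preamble,
        SFD and inter-frame gap are added to get the on-the-wire rate."""
        bytes_per_frame = frame_bytes + (20 if include_l1_overhead else 0)
        return pps * bytes_per_frame * 8 / 1e9

    # Hypothetical example: 14.88 Mpps of 64B frames on a 10GE link.
    print(round(frame_rate_to_gbps(14.88e6, 64), 2))        # ~7.62 Gbps (L2)
    print(round(frame_rate_to_gbps(14.88e6, 64, True), 2))  # ~10.0 Gbps (L1)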
Methodology: Packet Latency
---------------------------
TRex Traffic Generator (TG) is used for measuring latency of VPP DUTs. Reported
latency values are measured using the following methodology:

- Latency tests are performed at 10% and 50% of the discovered NDR rate (non
  drop rate) for each NDR throughput test and packet size (except IMIX).
- TG sends dedicated latency streams, one per direction, each at the rate of
  10kpps at the prescribed packet size; these are sent in addition to the main
  load streams.
- TG reports min/avg/max latency values per stream direction, hence two sets
  of latency values are reported per test case; a future release of TRex is
  expected to report latency percentiles.
- Reported latency values are aggregate across two SUTs due to the three node
  topology used for all performance tests; for per-SUT latency, the reported
  value should be divided by two (see the sketch after this list).
- 1usec is the measurement accuracy advertised by TRex TG for the setup used in
  FD.io labs by the CSIT project.
- TRex setup introduces an always-on error of about 2*2usec per latency flow -
  additional Tx/Rx interface latency induced by TRex SW writing and reading
  packet timestamps on CPU cores, without HW acceleration on NICs closer to the
  interface line.
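A minimal arithmetic sketch of the per-SUT adjustment described above, assuming
the reported value is the two-SUT aggregate for the 3-node topology; it is an
estimate, not a calibrated correction.

.. code-block:: python

    def per_sut_latency_usec(reported_usec):
        """Per-SUT latency estimate for the 3-node topology: the reported
        min/avg/max values aggregate two SUTs, so halve them."""
        return reported_usec / 2.0

    # The TRex setup adds an always-on error of roughly 2*2 usec per flow, so
    # values close to that floor are dominated by TG overhead, not the DUTs.
    print(per_sut_latency_usec(24.0))  # 12.0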
Methodology: KVM VM vhost
-------------------------
CSIT |release| introduced environment configuration changes to KVM Qemu
vhost-user tests in order to more representatively measure VPP-17.04
performance in configurations with vhost-user interfaces and VMs.

Current setup of the CSIT FD.io performance lab is using tuned settings for
more optimal performance of KVM Qemu:

- Qemu virtio queue size has been increased from the default value of 256 to
  1024 descriptors.
- Adjusted Linux kernel CFS scheduler settings, as detailed on this CSIT wiki
  page: https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604.
Adjusted Linux kernel CFS settings make the NDR and PDR throughput performance
of the VPP+VM system less sensitive to other Linux OS system tasks by reducing
their interference on the CPU cores that are designated for critical software
tasks under test, namely VPP worker threads in the host and Testpmd threads in
the guest dealing with the data plane.
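The CFS settings themselves are documented on the wiki page above; as a loosely
related illustration of the underlying idea of keeping housekeeping work off
the cores designated for the data plane, the hypothetical sketch below pins the
current process to a non-critical core using Linux CPU affinity. This is not
the CSIT tuning procedure, only an example of the general technique.

.. code-block:: python

    import os

    # Hypothetical core layout: core 0 for housekeeping, other cores reserved
    # for VPP worker threads (host) and Testpmd threads (guest).
    HOUSEKEEPING_CORES = {0}

    def confine_to_housekeeping_cores():
        """Restrict the calling process to the housekeeping core(s) so it
        cannot be scheduled onto the reserved data-plane cores (Linux only)."""
        os.sched_setaffinity(0, HOUSEKEEPING_CORES)

    confine_to_housekeeping_cores()
    print(os.sched_getaffinity(0))  # e.g. {0}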