.. _tested_physical_topologies:

Tested Physical Topologies
--------------------------

CSIT VPP performance tests are executed on physical baremetal servers hosted
by the :abbr:`LF (Linux Foundation)` FD.io project. Testbed physical topology
is shown in the figure below::

    +------------------------+           +------------------------+
    |                        |           |                        |
    |  +------------------+  |           |  +------------------+  |
    |  |                  <----------------->                  |  |
    |  |       DUT1       |  |           |  |       DUT2       |  |
    |  +--^---------------+  |           |  +---------------^--+  |
    |     |                  |           |                  |     |
    |     |          SUT1    |           |    SUT2          |     |
    +-----^------------------+           +------------------^-----+
          |                                                  |
          |                  +-----------+                   |
          +------------------>    TG     <-------------------+
                             +-----------+

SUT1 and SUT2 are two System Under Test servers (Cisco UCS C240, each with two
Intel XEON CPUs), and TG is a Traffic Generator server (another Cisco UCS C240,
with two Intel XEON CPUs). SUTs run the VPP SW application in Linux user-mode
as a Device Under Test (DUT). TG runs the TRex SW application as a packet
Traffic Generator. Physical connectivity between the SUTs and to the TG is
provided using different NIC models that need to be tested for performance.
Currently installed and tested NIC models include:

#. 2port10GE X520-DA2 Intel.
#. 2port10GE X710 Intel.
#. 2port10GE VIC1227 Cisco.
#. 2port40GE VIC1385 Cisco.
#. 2port40GE XL710 Intel.

From the SUT and DUT perspective, all performance tests involve forwarding
packets between two physical Ethernet ports (10GE or 40GE). Due to the number
of listed NIC models tested and the available PCI slot capacity in SUT servers,
in all of the above cases both physical ports are located on the same NIC. In
some test cases this results in measured packet throughput being limited not
by the VPP DUT but by either the physical interface or the NIC capacity.

Going forward, the CSIT project will be looking to add more hardware into FD.io
performance labs to address larger scale multi-interface and multi-NIC
performance testing scenarios.

For test cases that require DUT (VPP) to communicate with
VirtualMachines (VMs) / LinuxContainers (LXCs) over vhost-user/memif
interfaces, N VM/LXC instances are created on SUT1 and SUT2. For N=1
DUT forwards packets between vhost/memif and physical interfaces. For
N>1 a logical service chain forwarding topology is created on the DUT by
applying L2 or IPv4/IPv6 configuration depending on the test suite. DUT
test topology with N VM/LXC instances is shown in the figure below,
including the applicable packet flow through the DUTs and VMs/LXCs
(marked in the figure with ``***``)::

    +-------------------------+       +-------------------------+
    | +---------+ +---------+ |       | +---------+ +---------+ |
    | |VM/LXC[1]| |VM/LXC[N]| |       | |VM/LXC[1]| |VM/LXC[N]| |
    | |  *****  | |  *****  | |       | |  *****  | |  *****  | |
    | +--^---^--+ +--^---^--+ |       | +--^---^--+ +--^---^--+ |
    |   *|   |*     *|   |*   |       |   *|   |*     *|   |*   |
    | +--v---v-------v---v--+ |       | +--v---v-------v---v--+ |
    | |  *   *       *   *  |*|*******|*|  *   *       *   *  | |
    | |  *   *********   ***<-|-------|->***   *********   *  | |
    | |  *    DUT1          | |       | |          DUT2    *  | |
    | +--^------------------+ |       | +------------------^--+ |
    |   *|                    |       |                    |*   |
    |   *|            SUT1    |       |    SUT2            |*   |
    +-------------------------+       +-------------------------+
        *|                                                  |*
        *|                  +-----------+                   |*
        *+------------------>    TG     <-------------------+*
        ********************|           |********************
                            +-----------+

For VM/LXC tests, packets are switched by DUT multiple times: twice for
a single VM/LXC, three times for two VMs/LXCs, N+1 times for N VMs/LXCs.
Hence the external throughput rates measured by TG and listed in this
report must be multiplied by (N+1) to represent the actual DUT aggregate
packet forwarding rate.

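As a simple illustration of this multiplier, the short Python sketch below
(illustrative only, not part of the CSIT test code) converts a TG-measured
rate into the corresponding DUT aggregate forwarding rate::

    def dut_aggregate_rate(tg_measured_pps, vm_count):
        """Return the DUT aggregate forwarding rate in pps.

        The DUT switches each packet (vm_count + 1) times on its path
        through the service chain, so the externally measured TG rate
        under-reports the work done by the DUT by that factor.
        """
        return tg_measured_pps * (vm_count + 1)

    # Example: TG measures 3.0 Mpps with a 2 VM/LXC service chain (N=2),
    # so the DUT actually forwards 3.0 * (2 + 1) = 9.0 Mpps.
    print(dut_aggregate_rate(3_000_000, 2))  # 9000000
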
Note that reported DUT (VPP) performance results are specific to the SUTs
tested. Current :abbr:`LF (Linux Foundation)` FD.io SUTs are based on Intel
XEON E5-2699v3 2.3GHz CPUs. SUTs with other CPUs are likely to yield different
results. A good rule of thumb that can be applied to estimate VPP packet
throughput for Phy-to-Phy (NIC-to-NIC, PCI-to-PCI) topology is to expect
the forwarding performance to be proportional to CPU core frequency,
assuming CPU is the only limiting factor and all other SUT parameters are
equivalent to the FD.io CSIT environment. The same rule of thumb can also be
applied to Phy-to-VM/LXC-to-Phy (NIC-to-VM/LXC-to-NIC) topology, but due to
much higher dependency on intensive memory operations and sensitivity to Linux
kernel scheduler settings and behaviour, this estimation may not always yield
good enough accuracy.

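A minimal sketch of this rule of thumb, assuming the 2.3GHz reference
frequency of the current LF FD.io SUTs (the target frequency and rates in the
example below are hypothetical, not measured results)::

    CSIT_REF_FREQ_GHZ = 2.3  # Intel XEON E5-2699v3 used in LF FD.io SUTs

    def estimate_phy2phy_pps(csit_measured_pps, target_freq_ghz):
        """Scale a CSIT-measured Phy-to-Phy rate to another CPU frequency.

        Valid only under the stated assumptions: CPU-bound forwarding and
        all other SUT parameters equivalent to the FD.io CSIT environment.
        """
        return csit_measured_pps * (target_freq_ghz / CSIT_REF_FREQ_GHZ)

    # Example: a hypothetical 9.0 Mpps CSIT result estimated on a 3.0GHz CPU.
    print(estimate_phy2phy_pps(9_000_000, 3.0))  # ~11.7 Mpps
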
For detailed :abbr:`LF (Linux Foundation)` FD.io test bed specification and
physical topology please refer to `LF FD.io CSIT testbed wiki page
<https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.

Performance Tests Coverage
--------------------------

Performance tests are split into two main categories:

- Throughput discovery - discovery of packet forwarding rate using binary
  search in accordance with :rfc:`2544` (a minimal sketch of such a search is
  shown after this list).

  - NDR - discovery of Non Drop Rate packet throughput, at zero packet loss;
    followed by one-way packet latency measurements at 10%, 50% and 100% of
    discovered NDR throughput.
  - PDR - discovery of Partial Drop Rate, with specified non-zero packet loss
    currently set to 0.5%; followed by one-way packet latency measurements at
    100% of discovered PDR throughput.

- Throughput verification - verification of packet forwarding rate against
  previously discovered throughput rate. These tests are currently done against
  0.9 of reference NDR, with reference rates updated periodically.

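The sketch below outlines the binary search logic in Python; it is a
simplified illustration of an :rfc:`2544`-style rate search, not the actual
CSIT implementation (the measure_loss callback and the search precision are
assumptions)::

    def binary_search_rate(measure_loss, line_rate_pps, loss_tolerance=0.0,
                           precision_pps=10_000):
        """Simplified RFC 2544-style throughput search (illustrative only).

        measure_loss(rate_pps) is assumed to run a trial at the given rate
        and return the observed loss ratio. loss_tolerance=0.0 searches for
        NDR; a non-zero value (e.g. 0.005 for 0.5%) searches for PDR.
        """
        lo, hi = 0.0, float(line_rate_pps)
        best = 0.0
        while hi - lo > precision_pps:
            rate = (lo + hi) / 2
            if measure_loss(rate) <= loss_tolerance:
                best, lo = rate, rate  # trial passed - try a higher rate
            else:
                hi = rate              # trial failed - try a lower rate
        return best
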
CSIT |release| includes the following performance test suites, listed per NIC
type:

- 2port10GE X520-DA2 Intel

  - **L2XC** - L2 Cross-Connect switched-forwarding of untagged, dot1q, dot1ad
    VLAN tagged Ethernet frames.
  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning; disabled MAC learning i.e. static MAC tests to be added.
  - **L2BD Scale** - L2 Bridge-Domain switched-forwarding of untagged Ethernet
    frames with MAC learning; disabled MAC learning i.e. static MAC tests to be
    added with 20k, 200k and 2M FIB entries.
  - **IPv4** - IPv4 routed-forwarding.
  - **IPv6** - IPv6 routed-forwarding.
  - **IPv4 Scale** - IPv4 routed-forwarding with 20k, 200k and 2M FIB entries.
  - **IPv6 Scale** - IPv6 routed-forwarding with 20k, 200k and 2M FIB entries.
  - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
    of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
    Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
  - **COP** - IPv4 and IPv6 routed-forwarding with COP address security.
  - **ACL** - L2, IPv4 and IPv6 routed-forwarding with ACL address security.
  - **LISP** - LISP overlay tunneling for IPv4-over-IPv4, IPv6-over-IPv4,
    IPv6-over-IPv6, IPv4-over-IPv6 in IPv4 and IPv6 routed-forwarding modes.
  - **VXLAN** - VXLAN overlay tunnelling integration with L2XC and L2BD.
  - **QoS Policer** - ingress packet rate measuring, marking and limiting.
  - **NAT** - (Source) Network Address Translation tests with varying
    number of users and ports per user.
  - **Container memif connections** - VPP memif virtual interface tests to
    interconnect VPP instances.
  - **Container Orchestrated Topologies** - Container topologies connected over
    the memif virtual interface.

- 2port40GE XL710 Intel

  - **L2XC** - L2 Cross-Connect switched-forwarding of untagged Ethernet frames.
  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.
  - **IPv4** - IPv4 routed-forwarding.
  - **IPv6** - IPv6 routed-forwarding.
  - **VMs with vhost-user** - virtual topologies with 1 VM and service chains
    of 2 VMs using vhost-user interfaces, with VPP forwarding modes incl. L2
    Cross-Connect, L2 Bridge-Domain, VXLAN with L2BD, IPv4 routed-forwarding.
  - **IPSec** - IPSec encryption with AES-GCM, CBC-SHA1 ciphers, in combination
    with IPv4 routed-forwarding.
  - **IPSec+LISP** - IPSec encryption with CBC-SHA1 ciphers, in combination
    with LISP-GPE overlay tunneling for IPv4-over-IPv4.

- 2port10GE X710 Intel

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.
  - **VMs with vhost-user** - virtual topologies with 1 VM using vhost-user
    interfaces, with VPP forwarding modes incl. L2 Bridge-Domain.

- 2port10GE VIC1227 Cisco

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.

- 2port40GE VIC1385 Cisco

  - **L2BD** - L2 Bridge-Domain switched-forwarding of untagged Ethernet frames
    with MAC learning.

Execution of performance tests takes time, especially the throughput discovery
tests. Due to limited HW testbed resources available within FD.io labs hosted
by :abbr:`LF (Linux Foundation)`, the number of tests for NICs other than X520
(a.k.a. Niantic) has been limited to a few baseline tests. The CSIT team
expects the HW testbed resources to grow over time, so that the complete set of
performance tests can be regularly and/or continuously executed against all
models of hardware present in FD.io labs.

Performance Tests Naming
------------------------

CSIT |release| follows a common structured naming convention for all
performance and system functional tests, introduced in CSIT |release-1|.

The naming should be intuitive for the majority of the tests. A complete
description of the CSIT test naming convention is provided on the `CSIT test
naming wiki <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.

Methodology: Multi-Core and Multi-Threading
-------------------------------------------

**Intel Hyper-Threading** - CSIT |release| performance tests are executed with
SUT servers' Intel XEON processors configured in Intel Hyper-Threading Disabled
mode (BIOS setting). This is the simplest configuration used to establish
baseline single-thread single-core application packet processing and forwarding
performance. Subsequent releases of CSIT will add performance tests with Intel
Hyper-Threading Enabled (requires BIOS settings change and hard reboot of the
server).

**Multi-core Tests** - CSIT |release| multi-core tests are executed in the
following VPP thread and core configurations:

#. 1t1c - 1 VPP worker thread on 1 CPU physical core.
#. 2t2c - 2 VPP worker threads on 2 CPU physical cores.

VPP worker threads are the data plane threads. The VPP control thread runs on
a separate non-isolated core together with other Linux processes. Note that in
quite a few test cases running VPP workers on 2 physical cores hits the tested
NIC I/O bandwidth or packets-per-second limit.

Methodology: Packet Throughput
------------------------------

The following values are measured and reported for packet throughput tests:

- NDR binary search per :rfc:`2544`:

  - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps
    (2x <per direction packets-per-second>)"
  - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
    second> Gbps (untagged)"

- PDR binary search per :rfc:`2544`:

  - Packet rate: "RATE: <aggregate packet rate in packets-per-second> pps (2x
    <per direction packets-per-second>)"
  - Aggregate bandwidth: "BANDWIDTH: <aggregate bandwidth in Gigabits per
    second> Gbps (untagged)"
  - Packet loss tolerance: "LOSS_ACCEPTANCE <accepted percentage of packets
    lost at PDR rate>"

- NDR and PDR are measured for the following L2 frame sizes:

  - IPv4: 64B, IMIX_v4_1 (28x64B,16x570B,4x1518B), 1518B, 9000B.
  - IPv6: 78B, 1518B, 9000B.

All rates are reported from external Traffic Generator perspective.

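For orientation, the relationship between a reported packet rate and the
corresponding bandwidth can be sketched as below; this is a generic Ethernet
calculation (the 20B term covers per-packet preamble, start-of-frame delimiter
and inter-frame gap on the wire), not an excerpt from the CSIT reporting
code::

    def l2_gbps(pps, frame_size_bytes):
        """Bandwidth of the Ethernet frames themselves (no preamble/IFG)."""
        return pps * frame_size_bytes * 8 / 1e9

    def l1_gbps(pps, frame_size_bytes):
        """On-the-wire bandwidth: frame plus 20B preamble/SFD/IFG overhead."""
        return pps * (frame_size_bytes + 20) * 8 / 1e9

    # Example: 14.88 Mpps of 64B frames saturates a 10GE link on the wire.
    print(l1_gbps(14_880_000, 64))  # ~10.0
    print(l2_gbps(14_880_000, 64))  # ~7.6
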
Methodology: Packet Latency
---------------------------

TRex Traffic Generator (TG) is used for measuring latency of VPP DUTs. Reported
latency values are measured using the following methodology:

- Latency tests are performed at 10%, 50% of discovered NDR rate (non drop rate)
  for each NDR throughput test and packet size (except IMIX).
- TG sends dedicated latency streams, one per direction, each at the rate of
  10kpps at the prescribed packet size; these are sent in addition to the main
  load streams.
- TG reports min/avg/max latency values per stream direction, hence two sets
  of latency values are reported per test case; a future release of TRex is
  expected to report latency percentiles.
- Reported latency values are aggregate across two SUTs due to the three node
  topology used for all performance tests; for per-SUT latency, the reported
  value should be divided by two.
- 1usec is the measurement accuracy advertised by TRex TG for the setup used in
  FD.io labs used by CSIT project.
- TRex setup introduces an always-on error of about 2*2usec per latency flow -
  additional Tx/Rx interface latency induced by TRex SW writing and reading
  packet timestamps on CPU cores, without HW acceleration on NICs closer to the
  interface line.

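Under the assumptions above, a per-SUT latency estimate could be derived from
a reported aggregate value as sketched below; subtracting the approximate TRex
error term and splitting the remainder evenly between the two SUTs are
simplifying assumptions, not part of the reported results::

    TREX_ERROR_USEC = 2 * 2  # approximate always-on TRex Tx/Rx error

    def per_sut_latency_usec(reported_aggregate_usec):
        """Estimate single-SUT latency from the reported two-SUT aggregate."""
        return max(reported_aggregate_usec - TREX_ERROR_USEC, 0) / 2

    # Example: a reported average of 24 usec corresponds to roughly
    # (24 - 4) / 2 = 10 usec per SUT under these assumptions.
    print(per_sut_latency_usec(24))  # 10.0
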
Methodology: KVM VM vhost
-------------------------

CSIT |release| introduced test environment configuration changes to KVM Qemu
vhost-user tests in order to more representatively measure |vpp-release|
performance in configurations with vhost-user interfaces and different Qemu
settings.

FD.io CSIT performance lab is testing VPP vhost with KVM VMs using the
following environment settings:

- Tests with varying Qemu virtio queue (a.k.a. vring) sizes: [vr256] default 256
  descriptors, [vr1024] 1024 descriptors to optimize for packet throughput;

- Tests with varying Linux :abbr:`CFS (Completely Fair Scheduler)` settings:
  [cfs] default settings, [cfsrr1] CFS RoundRobin(1) policy applied to all data
  plane threads handling test packet path, including all VPP worker threads and
  all Qemu testpmd poll-mode threads (a sketch of applying such a policy is
  shown after this list);

- Resulting test cases are all combinations of [vr256,vr1024] and
  [cfs,cfsrr1] settings;

- Adjusted Linux kernel :abbr:`CFS (Completely Fair Scheduler)` scheduler policy
  for data plane threads used in CSIT is documented in
  `CSIT Performance Environment Tuning wiki <https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604>`_.
  The purpose is to verify the performance impact (NDR, PDR throughput) and
  the repeatability of the same test measurements, by making VPP and VM data
  plane threads less susceptible to other Linux OS system tasks hijacking the
  CPU cores running those data plane threads.

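As a rough illustration of what the [cfsrr1] variant amounts to, the sketch
below applies a round-robin real-time scheduling policy (SCHED_RR, priority 1)
to a single data plane thread identified by its thread ID; the thread ID is
hypothetical and the actual CSIT tuning is performed by the environment
scripts referenced in the wiki page above::

    import os

    def apply_cfsrr1(tid):
        """Apply SCHED_RR priority 1 to the given thread/process ID.

        Approximates the [cfsrr1] test variant for one thread; requires
        root privileges.
        """
        os.sched_setscheduler(tid, os.SCHED_RR, os.sched_param(1))

    # Hypothetical example: a VPP worker thread ID looked up beforehand,
    # e.g. from /proc/<pid>/task/.
    apply_cfsrr1(12345)
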
Methodology: LXC and Docker Containers memif
--------------------------------------------

CSIT |release| introduced additional tests taking advantage of the VPP memif
virtual interface (shared memory interface) to interconnect VPP instances. The
VPP vswitch instance runs in bare-metal user-mode handling Intel x520 NIC
10GbE interfaces and connecting over memif (Master side) virtual interfaces to
more instances of VPP running in :abbr:`LXC (Linux Container)` or in Docker
Containers, both with memif virtual interfaces (Slave side). LXCs and Docker
Containers run in a privileged mode with VPP data plane worker threads pinned
to dedicated physical CPU cores per usual CSIT practice. All VPP instances run
the same version of software. This test topology is equivalent to the existing
tests with vhost-user and VMs as described earlier in
:ref:`tested_physical_topologies`.

More information about CSIT LXC and Docker Container setup and control
is available in :ref:`containter_orchestration_in_csit`.

Methodology: Container Topologies Orchestrated by K8s
-----------------------------------------------------

CSIT |release| introduced new tests of Container topologies connected
over the memif virtual interface (shared memory interface). In order to
provide simple topology coding flexibility and extensibility, container
orchestration is done with `Kubernetes <https://github.com/kubernetes>`_
using `Docker <https://github.com/docker>`_ images for all container
applications including VPP. `Ligato <https://github.com/ligato>`_ is
used to address the container networking orchestration that is
integrated with K8s, including memif support.

For these tests the VPP vswitch instance runs in a Docker Container handling
Intel x520 NIC 10GbE interfaces and connecting over memif (Master side)
virtual interfaces to more instances of VPP running in Docker Containers
with memif virtual interfaces (Slave side). All Docker Containers run in
a privileged mode with VPP data plane worker threads pinned to dedicated
physical CPU cores per usual CSIT practice. All VPP instances run the
same version of software. This test topology is equivalent to the existing
tests with vhost-user and VMs as described earlier in
:ref:`tested_physical_topologies`.

More information about CSIT Container Topologies Orchestrated by K8s is
available in :ref:`containter_orchestration_in_csit`.

Methodology: IPSec with Intel QAT HW cards
------------------------------------------

VPP IPSec performance tests use the DPDK cryptodev device driver in
combination with HW cryptodev devices - Intel QAT 8950 50G - present in
LF FD.io physical testbeds. DPDK cryptodev can be used for all IPSec
data plane functions supported by VPP.

Currently CSIT |release| implements the following IPSec test cases:

- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4 routed-forwarding
  with Intel xl710 NIC.
- CBC-SHA1 ciphers, in combination with LISP-GPE overlay tunneling for
  IPv4-over-IPv4 with Intel xl710 NIC.

Methodology: TRex Traffic Generator Usage
-----------------------------------------

The `TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
CSIT performance tests. TRex stateless mode is used to measure NDR and PDR
throughputs using binary search (NDR and PDR discovery tests) and for quick
checks of DUT performance against the reference NDRs (NDR check tests) for a
specific configuration.

TRex is installed and run on the TG compute node. The typical procedure is:

- If TRex is not already installed on the TG, it is installed in the
  suite setup phase - see `TRex intallation`_.
- TRex configuration is set in its configuration file.
- TRex is started in the background mode::

      $ sh -c 'cd /opt/trex-core-2.25/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null

- Traffic streams are dynamically prepared for each test, based on traffic
  profiles. The traffic is sent and the statistics obtained using
  :command:`trex_stl_lib.api.STLClient`.

**Measuring packet loss**

- Create an instance of STLClient
- Connect to the client
- Send the traffic for a defined time

If a warm-up phase is required, the traffic is also sent before the test and
the statistics are ignored.

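A condensed sketch of this flow using the TRex stateless Python API is shown
below; the port numbers, packet contents, rate and duration are illustrative
only, and the actual CSIT traffic profiles and statistics processing are more
elaborate::

    from trex_stl_lib.api import STLClient, STLStream, STLPktBuilder, STLTXCont
    from scapy.all import Ether, IP, UDP

    client = STLClient()
    client.connect()
    client.reset(ports=[0, 1])

    # One continuous stream, padded to 64B; real tests use per-test profiles.
    base_pkt = Ether() / IP(src="10.0.0.1", dst="20.0.0.1") / UDP(dport=1234)
    pad = "x" * max(0, 60 - len(base_pkt))
    stream = STLStream(packet=STLPktBuilder(pkt=base_pkt / pad),
                       mode=STLTXCont(pps=1_000_000))
    client.add_streams([stream], ports=[0])

    client.clear_stats()
    client.start(ports=[0], duration=10)  # send the traffic for a defined time
    client.wait_on_traffic(ports=[0])

    stats = client.get_stats()
    lost = stats[0]["opackets"] - stats[1]["ipackets"]  # lost across the DUTs
    print("packets lost:", lost)

    client.disconnect()
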
**Measuring latency**

If measurement of latency is requested, two more packet streams are created
(one for each direction) with the TRex flow_stats parameter set to
STLFlowLatencyStats. In that case, the returned statistics will also include
min/avg/max latency values.

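A minimal sketch of such a latency stream, reusing the illustrative base_pkt
from the previous sketch and a hypothetical packet group id of 7::

    from trex_stl_lib.api import (STLStream, STLPktBuilder, STLTXCont,
                                  STLFlowLatencyStats)

    latency_stream = STLStream(
        packet=STLPktBuilder(pkt=base_pkt),
        mode=STLTXCont(pps=10_000),                # dedicated 10kpps rate
        flow_stats=STLFlowLatencyStats(pg_id=7))   # enables latency stats

    # After the run, client.get_stats()["latency"][7] is expected to carry
    # the min/avg/max latency values for this stream.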