VPP is tested in a number of L2 and IP packet lookup and forwarding
modes. Within each mode baseline and scale tests are executed, the
latter with a varying number of lookup entries.

L2 Ethernet Switching
---------------------

VPP is tested in three L2 forwarding modes:

- *l2patch*: L2 patch, the fastest point-to-point L2 path that loops
  packets between two interfaces without any Ethernet frame checks or
  lookups.
- *l2xc*: L2 cross-connect, point-to-point L2 path with all Ethernet
  frame checks, but no MAC learning and no MAC lookup.
- *l2bd*: L2 bridge-domain, multipoint-to-multipoint L2 path with all
  Ethernet frame checks, with MAC learning (unless static MACs are
  used) and MAC lookup.

l2bd tests are executed in baseline and scale configurations:

- *l2bdbase*: low number of L2 flows (253 per direction) is switched
  by VPP, driving the content of the MAC FIB (506 total MAC entries).
  Both source and destination MAC addresses are incremented on a
  packet by packet basis.
- *l2bdscale*: high number of L2 flows is switched by VPP. Tested MAC
  FIB sizes include: i) 10k (5k unique flows per direction), ii) 100k
  (2x 50k flows) and iii) 1M (2x 500k). Both source and destination
  MAC addresses are incremented on a packet by packet basis, ensuring
  new entries are learned (refreshed) and looked up at every packet,
  making it the worst case scenario.

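For illustration, incrementing MAC flows of this kind can be expressed
with the TRex stateless field engine. The following is a minimal
sketch for the 1M MAC FIB case; the base MAC addresses and variable
ranges are illustrative, not the exact CSIT traffic profile::

    from trex_stl_lib.api import (STLStream, STLPktBuilder, STLTXCont,
                                  STLVmFlowVar, STLVmWrFlowVar, STLScVmRaw)
    from scapy.layers.l2 import Ether
    from scapy.layers.inet import IP

    # Rewrite the low 4 bytes of both MACs on a packet by packet basis,
    # cycling through 500k values per direction (2x 500k = 1M FIB entries).
    vm = STLScVmRaw([
        STLVmFlowVar(name="mac_src", min_value=0, max_value=499999,
                     size=4, op="inc"),
        STLVmWrFlowVar(fv_name="mac_src", pkt_offset=8),  # src MAC bytes 8-11
        STLVmFlowVar(name="mac_dst", min_value=0, max_value=499999,
                     size=4, op="inc"),
        STLVmWrFlowVar(fv_name="mac_dst", pkt_offset=2),  # dst MAC bytes 2-5
    ])
    base_pkt = (Ether(src="fa:16:00:00:00:00", dst="fa:17:00:00:00:00") /
                IP() / ("x" * 26))  # pad to 64B wire size (60B + 4B CRC)
    stream = STLStream(packet=STLPktBuilder(pkt=base_pkt, vm=vm),
                       mode=STLTXCont())
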
Ethernet wire encapsulations tested include: untagged, dot1q, dot1ad.

IPv4 Routing
------------

IPv4 routing tests are executed in baseline and scale configurations:

- *ip4base*: low number of IPv4 flows (253 per direction) is routed
  by VPP, driving the content of the IPv4 FIB (506 total /32
  prefixes). Destination IPv4 addresses are incremented on a packet
  by packet basis.
- *ip4scale*: high number of IPv4 flows is routed by VPP. Tested IPv4
  FIB sizes of /32 prefixes include: i) 20k (10k unique flows per
  direction), ii) 200k (2x 100k flows) and iii) 2M (2x 1M).
  Destination IPv4 addresses are incremented on a packet by packet
  basis, ensuring new FIB entries are looked up at every packet,
  making it the worst case scenario.

IPv6 Routing
------------

IPv6 routing tests are executed in baseline and scale configurations:

- *ip6base*: low number of IPv6 flows (253 per direction) is routed
  by VPP, driving the content of the IPv6 FIB (506 total /128
  prefixes). Destination IPv6 addresses are incremented on a packet
  by packet basis.
- *ip6scale*: high number of IPv6 flows is routed by VPP. Tested IPv6
  FIB sizes of /128 prefixes include: i) 20k (10k unique flows per
  direction), ii) 200k (2x 100k flows) and iii) 2M (2x 1M).
  Destination IPv6 addresses are incremented on a packet by packet
  basis, ensuring new FIB entries are looked up at every packet,
  making it the worst case scenario.

SRv6 Routing
------------

SRv6 routing tests are executed in a number of baseline
configurations. In each case an SR policy and a steering policy are
configured for one direction, and one (or two) SR behaviours
(functions) in the other direction:

- *srv6enc1sid*: One SID (no SRH present), one SR function - End.
- *srv6enc2sids*: Two SIDs (SRH present), two SR functions - End and
  End.DX6.
- *srv6enc2sids-nodecaps*: Two SIDs (SRH present) without
  decapsulation, one SR function - End.
- *srv6proxy-dyn*: Dynamic SRv6 proxy, one SR function - End.AD.
- *srv6proxy-masq*: Masquerading SRv6 proxy, one SR function - End.AM.
- *srv6proxy-stat*: Static SRv6 proxy, one SR function - End.AS.

In all listed cases a low number of IPv6 flows (253 per direction) is
routed by VPP.

Tunnel Encapsulations
---------------------

Tunnel encapsulations testing is grouped based on the type of outer
header: IPv4 or IPv6.

IPv4 Tunnels
~~~~~~~~~~~~

VPP is tested in the following IPv4 tunnel baseline configurations:

- *ip4vxlan-l2bdbase*: VXLAN over IPv4 tunnels with L2 bridge-domain
  MAC switching.
- *ip4vxlan-l2xcbase*: VXLAN over IPv4 tunnels with L2 cross-connect.
- *ip4lispip4-ip4base*: LISP over IPv4 tunnels with IPv4 routing.
- *ip4lispip6-ip6base*: LISP over IPv4 tunnels with IPv6 routing.

In all cases listed above a low number of MAC, IPv4 or IPv6 flows
(253 per direction) is switched or routed by VPP.

In addition, selected IPv4 tunnels are tested at scale:

- *dot1q--ip4vxlanscale-l2bd*: VXLAN over IPv4 tunnels with L2
  bridge-domain MAC switching, with scaled up dot1q VLANs (10, 100,
  1k), mapped to scaled up L2 bridge-domains (10, 100, 1k), that are
  in turn mapped to (10, 100, 1k) VXLAN tunnels. 64.5k flows are
  transmitted per direction.

IPv6 Tunnels
~~~~~~~~~~~~

VPP is tested in the following IPv6 tunnel baseline configurations:

- *ip6lispip4-ip4base*: LISP over IPv6 tunnels with IPv4 routing.
- *ip6lispip6-ip6base*: LISP over IPv6 tunnels with IPv6 routing.

In all cases listed above a low number of IPv4 or IPv6 flows (253 per
direction) is routed by VPP.

Data Plane Features
-------------------

VPP is tested in a number of data plane feature configurations across
different forwarding modes. The following sections list the features
tested.

ACL
~~~

Both stateless and stateful access control lists (ACL), also known as
security-groups, are supported by VPP.

The following ACL configurations are tested for MAC switching with L2
bridge-domains:

- *l2bdbasemaclrn-iacl{E}sl-{F}flows*: Input stateless ACL, with {E}
  entries and {F} flows.
- *l2bdbasemaclrn-oacl{E}sl-{F}flows*: Output stateless ACL, with {E}
  entries and {F} flows.
- *l2bdbasemaclrn-iacl{E}sf-{F}flows*: Input stateful ACL, with {E}
  entries and {F} flows.
- *l2bdbasemaclrn-oacl{E}sf-{F}flows*: Output stateful ACL, with {E}
  entries and {F} flows.

The following ACL configurations are tested with IPv4 routing:

- *ip4base-iacl{E}sl-{F}flows*: Input stateless ACL, with {E} entries
  and {F} flows.
- *ip4base-oacl{E}sl-{F}flows*: Output stateless ACL, with {E} entries
  and {F} flows.
- *ip4base-iacl{E}sf-{F}flows*: Input stateful ACL, with {E} entries
  and {F} flows.
- *ip4base-oacl{E}sf-{F}flows*: Output stateful ACL, with {E} entries
  and {F} flows.

ACL tests are executed with the following combinations of ACL entries
and number of flows:

- ACL entry definitions:

  - flow non-matching deny entry: (src-ip4, dst-ip4, src-port,
    dst-port).
  - flow matching permit ACL entry: (src-ip4, dst-ip4).

- {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50].
- {F} - number of UDP flows with different tuple (src-ip4, dst-ip4,
  src-port, dst-port), {F} = [100, 10k, 100k].
- All {E}x{F} combinations are tested per ACL type, total of 9.

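The resulting test matrix is the simple cross product of the two
lists; a quick sketch::

    from itertools import product

    entries = [1, 10, 50]            # {E} - non-matching deny ACL entries
    flows = ["100", "10k", "100k"]   # {F} - UDP flow counts
    combinations = list(product(entries, flows))
    assert len(combinations) == 9    # all {E}x{F} combinations per ACL type
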
MACIP ACL
~~~~~~~~~

MAC-IP binding ACLs are tested for MAC switching with L2
bridge-domains:

- *l2bdbasemaclrn-macip-iacl{E}sl-{F}flows*: Input stateless ACL,
  with {E} entries and {F} flows.

MAC-IP ACL tests are executed with the following combinations of ACL
entries and number of flows:

- ACL entry definitions:

  - flow non-matching deny entry: (dst-ip4, dst-mac, bit-mask).
  - flow matching permit ACL entry: (dst-ip4, dst-mac, bit-mask).

- {E} - number of non-matching deny ACL entries, {E} = [1, 10, 50].
- {F} - number of UDP flows with different tuple (dst-ip4, dst-mac),
  {F} = [100, 10k, 100k].
- All {E}x{F} combinations are tested per ACL type, total of 9.

NAT44
~~~~~

NAT44 is tested in baseline and scale configurations with IPv4
routing:

- *ip4base-nat44*: baseline test with single NAT entry (addr, port),
  single UDP flow.
- *ip4base-udpsrcscale{U}-nat44*: baseline test with {U} NAT entries
  (addr, {U}ports), {U}=15.
- *ip4scale{R}-udpsrcscale{U}-nat44*: scale tests with {R}*{U} NAT
  entries ({R}addr, {U}ports), {R}=[100, 1k, 2k, 4k], {U}=15.

Data Plane Throughput
---------------------

Network data plane packet and bandwidth throughput are measured in
accordance with :rfc:`2544`, using FD.io CSIT Multiple Loss Ratio
search (MLRsearch), an optimized throughput search algorithm that
measures SUT/DUT packet throughput rates at different Packet Loss
Ratio (PLR) values.

The following MLRsearch values are measured across a range of L2
frame sizes:

- NON DROP RATE (NDR): packet and bandwidth throughput at PLR=0%.

  - **Aggregate packet rate**: NDR_LOWER <bi-directional packet rate>
    pps.
  - **Aggregate bandwidth rate**: NDR_LOWER <bi-directional bandwidth
    rate> Gbps.

- PARTIAL DROP RATE (PDR): packet and bandwidth throughput at
  PLR=0.5%.

  - **Aggregate packet rate**: PDR_LOWER <bi-directional packet rate>
    pps.
  - **Aggregate bandwidth rate**: PDR_LOWER <bi-directional bandwidth
    rate> Gbps.

NDR and PDR are measured for the following L2 frame sizes (untagged
Ethernet):

- IPv4 payload: 64B, IMIX (28x64B, 16x570B, 4x1518B), 1518B, 9000B.
- IPv6 payload: 78B, IMIX (28x78B, 16x570B, 4x1518B), 1518B, 9000B.

All rates are reported from external Traffic Generator perspective.

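The maximum packet rates implied by these L2 frame sizes follow
directly from the link rate and the 20B per-frame transmission
overhead (preamble, inter frame gap). A quick sanity check in
Python::

    def line_rate_pps(link_gbps, frame_size_b, overhead_b=20):
        """Theoretical per-direction packet rate at full link rate."""
        return link_gbps * 1e9 / ((frame_size_b + overhead_b) * 8)

    print(line_rate_pps(10, 64))    # ~14.88e6 pps, i.e. 14.88 Mpps on 10GE
    print(line_rate_pps(10, 1518))  # ~812.7e3 pps for 1518B frames
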
.. _mlrsearch_algorithm:

MLRsearch Tests
---------------

Multiple Loss Rate search (MLRsearch) tests use a new search
algorithm implemented in the FD.io CSIT project. MLRsearch discovers
multiple packet throughput rates in a single search, with each rate
associated with a distinct Packet Loss Ratio (PLR) criterion.
MLRsearch is being standardized in the IETF with
`draft-vpolak-mkonstan-mlrsearch-XX
<https://tools.ietf.org/html/draft-vpolak-mkonstan-mlrsearch-00>`_.

The two throughput measurements used in FD.io CSIT are Non-Drop Rate
(NDR, with zero packet loss, PLR=0) and Partial Drop Rate (PDR, with
packet loss ratio not greater than the configured non-zero PLR).
MLRsearch discovers NDR and PDR in a single pass, reducing the
required execution time compared to separate binary searches for NDR
and PDR. MLRsearch reduces execution time even further by relying on
shorter trial durations for intermediate steps, with only the final
measurements conducted at the specified final trial duration. This
results in a shorter overall search execution time compared to a
standard NDR/PDR binary search, while guaranteeing the same or
similar results.

If needed, MLRsearch can be easily adapted to discover more
throughput rates with different pre-defined PLRs.

.. Note:: All throughput rates are *always* bi-directional
   aggregates of two equal (symmetric) uni-directional packet rates
   received and reported by an external traffic generator.

The main properties of MLRsearch:

- MLRsearch is a duration aware multi-phase multi-rate search
  algorithm:

  - Initial phase determines promising starting interval for the
    search.
  - Intermediate phases progress towards defined final search
    criteria.
  - Final phase executes measurements according to the final search
    criteria.

- *Initial phase*:

  - Uses link rate as a starting transmit rate and discovers the
    Maximum Receive Rate (MRR) used as an input to the first
    intermediate phase.

- *Intermediate phases*:

  - Start with initial trial duration (in the first phase) and
    converge geometrically towards the final trial duration (in the
    final phase).
  - Track two values for NDR and two for PDR:

    - The values are called (NDR or PDR) lower_bound and upper_bound.
    - Each value comes from a specific trial measurement (most recent
      for that transmit rate), and as such is associated with that
      measurement's duration and loss.
    - A bound can be invalid, for example if the NDR lower_bound has
      been measured with non-zero loss.
    - Invalid bounds are not real boundaries for the searched value,
      but are needed to track interval widths.
    - Valid bounds are real boundaries for the searched value.
    - Each non-initial phase ends with all bounds valid.

  - Start with a large (lower_bound, upper_bound) interval width and
    geometrically converge towards the width goal (measurement
    resolution) of the phase. Each phase halves the previous width
    goal.
  - Use internal and external searches:

    - External search - measures at transmit rates outside the
      (lower_bound, upper_bound) interval. Activated when a bound is
      invalid, to search for a new valid bound by doubling the
      interval width. It is a variant of `exponential search`_.
    - Internal search - `binary search`_, measures at transmit rates
      within the (lower_bound, upper_bound) valid interval, halving
      the interval width.

- *Final phase* is executed with the final test trial duration, and
  the final width goal that determines the resolution of the overall
  search. Intermediate phases together with the final phase are
  called non-initial phases.

The main benefits of MLRsearch vs. binary search include:

- In general MLRsearch is likely to execute more search trials
  overall, but fewer trials at a set final duration.
- In well behaving cases it greatly reduces (>50%) the overall
  duration compared to a single PDR (or NDR) binary search duration,
  while finding multiple drop rates.
- In all cases MLRsearch yields the same or similar results as binary
  search.
- Note: both binary search and MLRsearch are susceptible to reporting
  non-repeatable results across multiple runs for very badly behaving
  cases.

Caveats:

- Worst case MLRsearch can take longer than a binary search, e.g. in
  case of drastic changes in behaviour for trials at varying
  durations.

Search Implementation
~~~~~~~~~~~~~~~~~~~~~

Following is a brief description of the current MLRsearch
implementation in FD.io CSIT.

Input parameters:

#. *maximum_transmit_rate* - maximum packet transmit rate to be used
   by the external traffic generator, limited by either the actual
   Ethernet link rate or the traffic generator NIC model capabilities.
   Sample defaults: 2 * 14.88 Mpps for 64B 10GE link rate,
   2 * 18.75 Mpps for 64B 40GE NIC maximum rate.
#. *minimum_transmit_rate* - minimum packet transmit rate to be used
   for measurements. MLRsearch fails if a lower transmit rate needs
   to be used to meet search criteria. Default: 2 * 10 kpps (could be
   higher).
#. *final_trial_duration* - required trial duration for final rate
   measurements. Default: 30 sec.
#. *initial_trial_duration* - trial duration for the initial
   MLRsearch phase.
#. *final_relative_width* - required measurement resolution expressed
   as (lower_bound, upper_bound) interval width relative to
   upper_bound.
#. *packet_loss_ratio* - maximum acceptable PLR search criterion for
   PDR measurements. Default: 0.5%.
#. *number_of_intermediate_phases* - number of phases between the
   initial phase and the final phase. Impacts the overall MLRsearch
   duration. Fewer phases are required for well behaving cases, more
   phases may be needed to reduce the overall search duration for
   worse behaving cases. Default: 2. (Value chosen based on limited
   experimentation to date. More experimentation is needed to arrive
   at clearer guidelines.)

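For orientation, the inputs above can be summarized as a simple
structure. This is a sketch, not the CSIT code; the defaults for
initial_trial_duration and final_relative_width are assumptions, as
their default values are not quoted in the list above::

    from dataclasses import dataclass

    @dataclass
    class MLRsearchInputs:
        maximum_transmit_rate: float              # pps, e.g. 2 * 14.88e6
        minimum_transmit_rate: float = 2 * 10e3   # 2 * 10 kpps
        final_trial_duration: float = 30.0        # seconds
        initial_trial_duration: float = 1.0       # assumption
        final_relative_width: float = 0.005       # assumption
        packet_loss_ratio: float = 0.005          # 0.5%
        number_of_intermediate_phases: int = 2
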
Initial phase:

1. First trial measures at maximum rate and discovers MRR.

   a. *in*: trial_duration = initial_trial_duration.
   b. *in*: offered_transmit_rate = maximum_transmit_rate.
   c. *do*: single trial.
   d. *out*: measured loss ratio.
   e. *out*: mrr = measured receive rate.

2. Second trial measures at MRR and discovers MRR2.

   a. *in*: trial_duration = initial_trial_duration.
   b. *in*: offered_transmit_rate = MRR.
   c. *do*: single trial.
   d. *out*: measured loss ratio.
   e. *out*: mrr2 = measured receive rate.

3. Third trial measures at MRR2.

   a. *in*: trial_duration = initial_trial_duration.
   b. *in*: offered_transmit_rate = MRR2.
   c. *do*: single trial.
   d. *out*: measured loss ratio.

Non-initial phases:

1. *Main loop*:

   a. *in*: trial_duration for the current phase. Set to
      initial_trial_duration for the first intermediate phase; to
      final_trial_duration for the final phase; or to the element of
      the interpolating geometric sequence for other intermediate
      phases. For example with two intermediate phases, the
      trial_duration of the second intermediate phase is the
      geometric average of initial_trial_duration and
      final_trial_duration.
   b. *in*: relative_width_goal for the current phase. Set to
      final_relative_width for the final phase; doubled for each
      preceding phase. For example with two intermediate phases, the
      first intermediate phase uses quadruple of final_relative_width
      and the second intermediate phase uses double of
      final_relative_width.
   c. *in*: ndr_interval, pdr_interval from the previous main loop
      iteration or the previous phase. If the previous phase is the
      initial phase, both intervals have lower_bound = MRR2,
      upper_bound = MRR. Note that the initial phase is likely to
      create intervals with invalid bounds.
   d. *do*: According to the procedure described in point 2, either
      exit the phase (by jumping to 1.g.), or prepare a new transmit
      rate to measure with.
   e. *do*: Perform the trial measurement at the new transmit rate
      and trial_duration, compute its loss ratio.
   f. *do*: Update the bounds of both intervals, based on the new
      measurement. The actual update rules are numerous, as NDR
      external search can affect the PDR interval and vice versa, but
      the result agrees with the rules of both internal and external
      search. For example, any new measurement below an invalid
      lower_bound becomes the new lower_bound, while the old
      measurement (previously acting as the invalid lower_bound)
      becomes a new and valid upper_bound. Go to the next iteration
      (1.c.), taking the updated intervals as new input.
   g. *out*: current ndr_interval and pdr_interval. In the final
      phase this is also considered to be the result of the whole
      search. For other phases, the next phase loop is started with
      the current results as an input.

2. New transmit rate (or exit) calculation (for 1.d.):

   - If there is an invalid bound, then prepare for external search:

     - *If* the most recent measurement at the NDR lower_bound
       transmit rate had loss higher than zero, then the new transmit
       rate is NDR lower_bound decreased by two NDR interval widths.
     - Else, *if* the most recent measurement at the PDR lower_bound
       transmit rate had loss higher than PLR, then the new transmit
       rate is PDR lower_bound decreased by two PDR interval widths.
     - Else, *if* the most recent measurement at the NDR upper_bound
       transmit rate had no loss, then the new transmit rate is NDR
       upper_bound increased by two NDR interval widths.
     - Else, *if* the most recent measurement at the PDR upper_bound
       transmit rate had loss lower than or equal to PLR, then the
       new transmit rate is PDR upper_bound increased by two PDR
       interval widths.

   - Else, if an interval width is higher than the current phase
     width goal, prepare for internal search:

     - *If* the NDR interval does not meet the current phase width
       goal, the new transmit rate is
       (NDR lower_bound + NDR upper_bound) / 2.
     - Else, *if* the PDR interval does not meet the current phase
       width goal, the new transmit rate is
       (PDR lower_bound + PDR upper_bound) / 2.

   - Else, *if* some bound has still only been measured at a lower
     duration, prepare to re-measure at the current duration (and the
     same transmit rate). The order of priorities is:

     - NDR lower_bound,
     - PDR lower_bound,
     - NDR upper_bound,
     - PDR upper_bound.

   - *Else*, do not prepare any new rate, to exit the phase. This
     ensures that at the end of each non-initial phase all intervals
     are valid, narrow enough, and measured at the current phase
     trial duration.

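The rate selection logic of point 2 can be condensed into a single
function. The following is a simplified illustration only (linear
interval widths, no clamping to the maximum/minimum rates), not the
actual CSIT implementation::

    from dataclasses import dataclass

    PLR = 0.005  # PDR packet loss ratio criterion (0.5%)

    @dataclass
    class Bound:
        rate: float        # transmit rate of the most recent trial here
        loss_ratio: float  # loss ratio measured by that trial
        duration: float    # trial duration of that trial

    @dataclass
    class Interval:
        lo: Bound
        hi: Bound
        def width(self):
            return self.hi.rate - self.lo.rate

    def next_transmit_rate(ndr, pdr, width_goal, trial_duration):
        """Return the next rate to measure, or None to exit the phase."""
        # External search: widen around any invalid bound.
        if ndr.lo.loss_ratio > 0:          # NDR lower_bound invalid
            return ndr.lo.rate - 2 * ndr.width()
        if pdr.lo.loss_ratio > PLR:        # PDR lower_bound invalid
            return pdr.lo.rate - 2 * pdr.width()
        if ndr.hi.loss_ratio == 0:         # NDR upper_bound invalid
            return ndr.hi.rate + 2 * ndr.width()
        if pdr.hi.loss_ratio <= PLR:       # PDR upper_bound invalid
            return pdr.hi.rate + 2 * pdr.width()
        # Internal search: bisect whichever interval is still too wide.
        if ndr.width() > width_goal:
            return (ndr.lo.rate + ndr.hi.rate) / 2
        if pdr.width() > width_goal:
            return (pdr.lo.rate + pdr.hi.rate) / 2
        # Re-measure bounds still carrying a shorter trial duration.
        for bound in (ndr.lo, pdr.lo, ndr.hi, pdr.hi):
            if bound.duration < trial_duration:
                return bound.rate
        return None  # all bounds valid, narrow enough, at full duration
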
Implementation Deviations
~~~~~~~~~~~~~~~~~~~~~~~~~

This document has so far described a simplified version of the
MLRsearch algorithm. The full algorithm as implemented contains
additional logic, which makes some of the details (but not the
general ideas) above incorrect. Here is a short description of the
additional logic as a list of principles, explaining their main
differences from (or additions to) the simplified description, but
without detailing their mutual interaction.

1. *Logarithmic transmit rate.* In order to better fit the relative
   width goal, the interval doubling and halving is done differently.
   For example, the middle of 2 and 8 is 4, not 5 (see the sketch
   after this list).
2. *Optimistic maximum rate.* The increased rate is never higher than
   the maximum rate. An upper bound at that rate is always considered
   valid.
3. *Pessimistic minimum rate.* The decreased rate is never lower than
   the minimum rate. If a lower bound at that rate is invalid, a
   phase stops refining the interval further (until it gets
   re-measured).
4. *Conservative interval updates.* Measurements above the current
   upper bound never update a valid upper bound, even if the drop
   ratio is low. Measurements below the current lower bound always
   update any lower bound if the drop ratio is high.
5. *Ensure sufficient interval width.* Narrow intervals make external
   search take more time to find a valid bound. If the new increased
   or decreased transmit rate would result in a width less than the
   current goal, increase/decrease more. This can happen if the
   measurement for the other interval makes the current interval too
   narrow. Similarly, care is taken that the measurements in the
   initial phase create a wide enough interval.
6. *Timeout for bad cases.* The worst case for MLRsearch is when each
   phase converges to intervals very different from the results of
   the previous phase. Rather than suffer a total search time several
   times larger than a pure binary search, the implemented tests fail
   themselves when the search takes too long (given by the argument
   *timeout*).

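A minimal sketch of the logarithmic halving mentioned in principle 1:
the midpoint is the geometric mean of the bounds, so it is relative
(not absolute) interval widths that are halved::

    import math

    def log_middle(lo_rate, hi_rate):
        """Geometric mean: the middle of 2 and 8 is 4, not 5."""
        return math.sqrt(lo_rate * hi_rate)

    assert log_middle(2, 8) == 4.0
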
MRR Throughput
--------------

Maximum Receive Rate (MRR) tests are complementary to MLRsearch
tests, as they provide a maximum "raw" throughput benchmark for the
development and testing community. MRR tests measure the packet
forwarding rate under the maximum load offered by the traffic
generator over a set trial duration, regardless of packet loss.
Maximum load for a specified Ethernet frame size is set to the
bi-directional link rate.

In |csit-release| the MRR test code has been updated with
configurable burst MRR parameters: trial duration and number of
trials in a single burst. This enables a new Burst MRR (BMRR)
methodology for more precise performance trending.

Current parameters for BMRR tests:

- Ethernet frame sizes: 64B (78B for IPv6), IMIX, 1518B, 9000B; all
  quoted sizes include frame CRC, but exclude per frame transmission
  overhead of 20B (preamble, inter frame gap).
- Maximum load offered: 10GE and 40GE link (sub-)rates depending on
  NIC tested, with the actual packet rate depending on frame size,
  transmission overhead and traffic generator NIC forwarding
  capacity.

  - For 10GE NICs the maximum packet rate load is 2 * 14.88 Mpps for
    64B, a 10GE bi-directional link rate.
  - For 25GE NICs the maximum packet rate load is 2 * 18.75 Mpps for
    64B, a 25GE bi-directional link sub-rate limited by the TG 25GE
    NIC used, XXV710.
  - For 40GE NICs the maximum packet rate load is 2 * 18.75 Mpps for
    64B, a 40GE bi-directional link sub-rate limited by the TG 40GE
    NIC used, XL710. Packet rate for other tested frame sizes is
    limited by PCIe Gen3 x8 bandwidth limitation of ~50Gbps.

- Trial duration: 1 sec.
- Number of trials per burst: 10.

Similarly to NDR/PDR throughput tests, MRR tests should report the
bi-directional link rate (or NIC rate, if lower) if the tested VPP
configuration can handle a packet rate higher than the bi-directional
link rate, e.g. large packet tests and/or multi-core tests.

MRR tests are currently used for FD.io CSIT continuous performance
trending and for comparison between releases. The daily trending job
tests a subset of frame sizes, focusing on 64B (78B for IPv6) for all
tests and IMIX for selected tests (vhost, memif).

MRR-like measurements are being used to establish starting conditions
for the experimental Probabilistic Loss Ratio Search (PLRsearch) used
for soak testing, aimed at verifying continuous system performance
over an extended period of time (hours, days, weeks, months).
PLRsearch code is currently in an experimental phase in the FD.io
CSIT project.

Packet Latency
--------------

TRex Traffic Generator (TG) is used for measuring latency of VPP
DUTs. Reported latency values are measured using the following
methodology:

- Latency tests are performed at 100% of discovered NDR and PDR rates
  for each throughput test and packet size (except IMIX).
- TG sends dedicated latency streams, one per direction, each at the
  rate of 9 kpps at the prescribed packet size; these are sent in
  addition to the main load streams.
- TG reports min/avg/max latency values per stream direction, hence
  two sets of latency values are reported per test case; a future
  release of TRex is expected to report latency percentiles.
- Reported latency values are aggregate across two SUTs due to the
  three node topology used for all performance tests; for per-SUT
  latency, the reported value should be divided by two.
- 1 usec is the measurement accuracy advertised by TRex TG for the
  setup used in FD.io labs by the CSIT project.
- TRex setup introduces an always-on error of about 2 * 2 usec per
  latency flow - additional Tx/Rx interface latency induced by TRex
  SW writing and reading packet timestamps on CPU cores, without HW
  acceleration on NICs closer to the interface line.

Multi-Core Speedup
------------------

All performance tests are executed in single processor core and in
multiple core scenarios.

Intel Hyper-Threading (HT)
~~~~~~~~~~~~~~~~~~~~~~~~~~

Intel Xeon processors used in FD.io CSIT can operate either in HT
Disabled mode (single logical core per each physical core) or in HT
Enabled mode (two logical cores per each physical core). The HT
setting is applied in BIOS and requires a server SUT reload for it to
take effect, making it impractical for continuous changes of the HT
mode of operation.

|csit-release| performance tests are executed with server SUTs' Intel
Xeon processors configured with Intel Hyper-Threading Disabled for
all Xeon Haswell testbeds (3n-hsw) and with Intel Hyper-Threading
Enabled for all Xeon Skylake testbeds.

More information about physical testbeds is provided in
:ref:`tested_physical_topologies`.

Multi-core Tests
~~~~~~~~~~~~~~~~

|csit-release| multi-core tests are executed in the following VPP
worker thread and physical core configurations:

#. Intel Xeon Haswell testbeds (3n-hsw) with Intel HT disabled
   (1 logical CPU core per each physical core):

   #. 1t1c - 1 VPP worker thread on 1 physical core.
   #. 2t2c - 2 VPP worker threads on 2 physical cores.
   #. 4t4c - 4 VPP worker threads on 4 physical cores.

#. Intel Xeon Skylake testbeds (2n-skx, 3n-skx) with Intel HT enabled
   (2 logical CPU cores per each physical core):

   #. 2t1c - 2 VPP worker threads on 1 physical core.
   #. 4t2c - 4 VPP worker threads on 2 physical cores.
   #. 8t4c - 8 VPP worker threads on 4 physical cores.

VPP worker threads are the data plane threads running on isolated
logical cores. With Intel HT enabled VPP workers are placed as
sibling threads on each used physical core. VPP control threads
(main, stats) are running on a separate non-isolated core together
with other Linux processes.

In all CSIT tests care is taken to ensure that each VPP worker
handles the same amount of received packet load and does the same
amount of packet processing work. This is achieved by evenly
distributing per interface type (e.g. physical, virtual) receive
queues over VPP workers using the default VPP round-robin mapping and
by loading these queues with the same amount of packet flows.

If the number of VPP workers is higher than the number of physical or
virtual interfaces, multiple receive queues are configured on each
interface. NIC Receive Side Scaling (RSS) for physical interfaces and
multi-queue for virtual interfaces are used for this purpose.

Section :ref:`throughput_speedup_multi_core` includes a set of graphs
illustrating packet throughput speedup when running VPP worker
threads on multiple cores. Note that in quite a few test cases
running VPP workers on 2 or 4 physical cores hits the I/O bandwidth
or packets-per-second limit of the tested NIC.

VPP Startup Settings
--------------------

CSIT code manipulates a number of VPP settings in startup.conf for
optimized performance. A list of the common settings applied to all
tests and the test dependent settings follows.

See `VPP startup.conf <https://git.fd.io/vpp/tree/src/vpp/conf/startup.conf?h=stable/1807>`_
for a complete set and description of listed settings.

Common Settings
~~~~~~~~~~~~~~~

List of vpp startup.conf settings applied to all tests:

#. heap-size <value> - set separately for ip4, ip6, stats, main
   depending on scale tested.
#. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in
   DPDK. Typically needed to use faster vector PMDs (together with
   no-multi-seg).
#. socket-mem <value>,<value> - memory per NUMA node. (Not required
   anymore due to VPP code changes, should be removed in CSIT-18.10.)

Per Test Settings
~~~~~~~~~~~~~~~~~

List of vpp startup.conf settings applied dynamically per test:

#. corelist-workers <list_of_cores> - list of logical cores to run
   VPP worker data plane threads. Depends on HyperThreading and core
   per test configuration.
#. num-rx-queues <value> - depends on the number of VPP threads and
   NIC interfaces.
#. num-rx-desc/num-tx-desc - number of rx/tx descriptors for specific
   NICs, incl. xl710, x710, xxv710.
#. num-mbufs <value> - increases the number of buffers allocated,
   needed only in scenarios with a large number of interfaces and
   worker threads. Value is per CPU socket. Default is 16384.
#. no-multi-seg - disables multi-segment buffers in DPDK, improves
   packet throughput, but disables Jumbo MTU support. Disabled for
   all tests apart from the ones that require Jumbo 9000B frame
   support.
#. UIO driver - depends on topology file definition.
#. QAT VFs - depends on NRThreads, each thread = 1 QAT VF.

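For illustration only, a startup.conf fragment combining several of
the settings above could look as follows; the core list and values
are hypothetical examples, not CSIT defaults::

    cpu {
      corelist-workers 2-3
    }
    dpdk {
      socket-mem 1024,1024
      num-mbufs 16384
      no-multi-seg
      no-tx-checksum-offload
      dev default {
        num-rx-queues 2
      }
    }
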
KVM VMs vhost-user
------------------

FD.io CSIT performance lab is testing VPP vhost with KVM VMs using
the following environment settings:

- Tests with varying Qemu virtio queue (a.k.a. vring) sizes: [vr256]
  default 256 descriptors, [vr1024] 1024 descriptors to optimize for
  packet throughput.
- Tests with varying Linux :abbr:`CFS (Completely Fair Scheduler)`
  settings: [cfs] default settings, [cfsrr1] CFS RoundRobin(1) policy
  applied to all data plane threads handling the test packet path,
  including all VPP worker threads and all Qemu testpmd poll-mode
  threads.
- Resulting test cases are all combinations with [vr256, vr1024] and
  [cfs, cfsrr1] settings.
- The adjusted Linux kernel :abbr:`CFS (Completely Fair Scheduler)`
  scheduler policy for data plane threads used in CSIT is documented
  in `CSIT Performance Environment Tuning wiki
  <https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604>`_.
- The purpose is to verify the performance impact (MRR and NDR/PDR
  throughput) and the repeatability of the same test measurements, by
  making VPP and VM data plane threads less susceptible to other
  Linux OS system tasks hijacking the CPU cores running those data
  plane threads.

LXC/DRC Container Memif
-----------------------

|csit-release| includes tests taking advantage of the VPP memif
virtual interface (shared memory interface) to interconnect VPP
instances running in containers. The VPP vswitch instance runs in
bare-metal user-mode handling NIC interfaces and connecting over
memif (Slave side) to VPPs running in :abbr:`Linux Container (LXC)`
or in Docker Container (DRC) configured with memif (Master side).
LXCs and DRCs run in a privileged mode with VPP data plane worker
threads pinned to dedicated physical CPU cores per usual CSIT
practice. All VPP instances run the same version of software. This
test topology is equivalent to existing tests with vhost-user and
VMs as described earlier in :ref:`tested_logical_topologies`.

In addition to the above vswitch tests, a single memif interface test
is executed. It runs in a simple topology of two VPP container
instances connected over a memif interface, in order to verify
standalone memif interface performance.

More information about CSIT LXC and DRC setup and control is
available in :ref:`container_orchestration_in_csit`.

K8s Container Memif
-------------------

|csit-release| includes tests of VPP topologies running in K8s
orchestrated Pods/Containers and connected over memif virtual
interfaces. In order to provide simple topology coding flexibility
and extensibility, container orchestration is done with `Kubernetes
<https://github.com/kubernetes>`_ using `Docker
<https://github.com/docker>`_ images for all container applications,
including VPP. `Ligato <https://github.com/ligato>`_ is used for the
Pod/Container networking orchestration that is integrated with K8s,
including memif support.

In these tests VPP vswitch runs in a K8s Pod with Docker Container
(DRC) handling NIC interfaces and connecting over memif to more
instances of VPP running in Pods/DRCs. All DRCs run in a privileged
mode with VPP data plane worker threads pinned to dedicated physical
CPU cores per usual CSIT practice. All VPP instances run the same
version of software. This test topology is equivalent to existing
tests with vhost-user and VMs as described earlier in
:ref:`tested_physical_topologies`.

Further documentation is available in
:ref:`container_orchestration_in_csit`.

VPP_Device Functional
---------------------

|csit-release| added a new VPP_Device test environment for functional
VPP device tests integrated into the LFN CI/CD infrastructure.
VPP_Device tests run on 1-Node testbeds (1n-skx, 1n-arm) and rely on
Linux SRIOV Virtual Function (VF), dot1q VLAN tagging and external
loopback cables to facilitate packet passing over external physical
links. Initial focus is on a few baseline tests. Existing CSIT VIRL
tests can be moved to the VPP_Device framework by changing L1 and L2
KW(s). RF test definition code stays unchanged, with the exception of
requiring adjustments from 3-Node to 2-Node logical topologies. CSIT
VIRL to VPP_Device migration is expected in the next CSIT release.

IPSec on Intel QAT
------------------

VPP IPSec performance tests are using the DPDK cryptodev device
driver in combination with HW cryptodev devices - Intel QAT 8950 50G
- present in LF FD.io physical testbeds. DPDK cryptodev can be used
for all IPSec data plane functions supported by VPP.

Currently |csit-release| implements the following IPSec test cases:

- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4
  routed-forwarding with Intel xl710 NIC.
- CBC-SHA1 ciphers, in combination with LISP-GPE overlay tunneling
  for IPv4-over-IPv4 with Intel xl710 NIC.

TRex Traffic Generator
----------------------

`TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for
all CSIT performance tests. TRex stateless mode is used to measure
NDR and PDR throughputs using binary search (NDR and PDR discovery
tests) and for quick checks of DUT performance against reference NDRs
(NDR check tests) for specific configurations.

TRex is installed and run on the TG compute node. The typical
procedure is:

- If TRex is not already installed on the TG, it is installed in the
  suite setup phase - see `TRex intallation`_.
- TRex configuration is set in its configuration file.
- TRex is started in the background mode::

    $ sh -c 'cd <t-rex-install-dir>/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /tmp/trex.log 2>&1 &' > /dev/null

- Traffic streams are dynamically prepared for each test, based on
  traffic profiles. The traffic is sent and the statistics obtained
  using :command:`trex_stl_lib.api.STLClient`.

Measuring Packet Loss
~~~~~~~~~~~~~~~~~~~~~

The following sequence is followed to measure packet loss:

- Create an instance of STLClient.
- Connect to the client.
- Add streams.
- Send the traffic for a defined time.
- Get the statistics.

If a warm-up phase is required, the traffic is also sent before the
test and the statistics are ignored.

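A minimal sketch of this sequence using the TRex Python API; the
server address, port numbers and stream arguments are placeholders::

    from trex_stl_lib.api import STLClient

    def measure_loss(streams_dir0, streams_dir1, duration=30.0):
        """Send traffic on a port pair and return total packet loss."""
        client = STLClient(server="127.0.0.1")
        client.connect()
        try:
            client.reset(ports=[0, 1])
            client.add_streams(streams_dir0, ports=[0])
            client.add_streams(streams_dir1, ports=[1])
            client.clear_stats()
            client.start(ports=[0, 1], duration=duration)
            client.wait_on_traffic(ports=[0, 1])
            stats = client.get_stats()
            sent = stats[0]["opackets"] + stats[1]["opackets"]
            received = stats[0]["ipackets"] + stats[1]["ipackets"]
            return sent - received
        finally:
            client.disconnect()
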
Measuring Latency
~~~~~~~~~~~~~~~~~

If measurement of latency is requested, two more packet streams are
created (one for each direction) with the TRex flow_stats parameter
set to STLFlowLatencyStats. In that case, the returned statistics
will also include min/avg/max latency values.

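A sketch of such latency streams; the packet contents and pg_id
values are illustrative, not the CSIT traffic profile::

    from trex_stl_lib.api import (STLStream, STLPktBuilder, STLTXCont,
                                  STLFlowLatencyStats)
    from scapy.layers.l2 import Ether
    from scapy.layers.inet import IP, UDP

    # One fixed 9 kpps latency stream per direction, sent on top of the
    # main load streams; pg_id identifies the stream in the statistics.
    pkt = STLPktBuilder(pkt=Ether() / IP() / UDP() / ("x" * 22))
    latency_streams = [
        STLStream(packet=pkt, mode=STLTXCont(pps=9000),
                  flow_stats=STLFlowLatencyStats(pg_id=pg_id))
        for pg_id in (0, 1)
    ]
    # After the trial, client.get_stats()["latency"][pg_id] carries the
    # min/avg/max latency values for that direction.
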
HTTP/TCP with WRK Tool
----------------------

`WRK HTTP benchmarking tool <https://github.com/wg/wrk>`_ is used for
experimental TCP/IP and HTTP tests of the VPP TCP/IP stack and the
built-in static HTTP server. WRK has been chosen as it is capable of
generating significant TCP/IP and HTTP loads by scaling the number of
threads across multi-core processors.

This in turn enables quite high scale benchmarking of the main TCP/IP
and HTTP services, including HTTP TCP/IP Connections-Per-Second
(CPS), HTTP Requests-Per-Second and HTTP Bandwidth Throughput.

The initial tests are designed as follows:

- HTTP and TCP/IP Connections-Per-Second (CPS):

  - WRK configured to use 8 threads across 8 cores, 1 thread per
    core.
  - Maximum of 50 concurrent connections across all WRK threads.
  - Timeout for server responses set to 5 seconds.
  - Test duration is 30 seconds.
  - Expected HTTP test sequence:

    - Single HTTP GET Request sent per open connection.
    - Connection close after valid HTTP reply.
    - Resulting flow sequence - 8 packets: >Syn, <Syn-Ack, >Ack,
      >Req, <Rep, >Fin, <Fin, >Ack.

- HTTP Requests-Per-Second:

  - WRK configured to use 8 threads across 8 cores, 1 thread per
    core.
  - Maximum of 50 concurrent connections across all WRK threads.
  - Timeout for server responses set to 5 seconds.
  - Test duration is 30 seconds.
  - Expected HTTP test sequence:

    - Multiple HTTP GET Requests sent in sequence per open
      connection.
    - Connection close after set test duration time.
    - Resulting flow sequence: >Syn, <Syn-Ack, >Ack, >Req[1],
      <Rep[1], .., >Req[n], <Rep[n], >Fin, <Fin, >Ack.

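An illustrative wrk invocation matching the parameters above; the
target URL is a placeholder, not a CSIT test endpoint::

    $ wrk --timeout 5s -t 8 -c 50 -d 30s http://192.168.1.1/1kB.bin
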
.. _binary search: https://en.wikipedia.org/wiki/Binary_search
.. _exponential search: https://en.wikipedia.org/wiki/Exponential_search
.. _estimation of standard deviation: https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation
.. _simplified error propagation formula: https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Simplification