Network data plane packet and bandwidth throughput are measured in
accordance with :rfc:`2544`, using FD.io CSIT Multiple Loss Ratio search
(MLRsearch), an optimized throughput search algorithm that measures
SUT/DUT packet throughput rates at different Packet Loss Ratio (PLR)
values.

The following MLRsearch values are measured across a range of L2 frame
sizes and reported:

- NON DROP RATE (NDR): packet and bandwidth throughput at PLR=0%.

  - **Aggregate packet rate**: NDR_LOWER <bi-directional packet rate>.
  - **Aggregate bandwidth rate**: NDR_LOWER <bi-directional bandwidth
    rate>.

- PARTIAL DROP RATE (PDR): packet and bandwidth throughput at PLR=0.5%.

  - **Aggregate packet rate**: PDR_LOWER <bi-directional packet rate>.
  - **Aggregate bandwidth rate**: PDR_LOWER <bi-directional bandwidth
    rate>.

NDR and PDR are measured for the following L2 frame sizes (untagged
Ethernet):

- IPv4 payload: 64B, IMIX (28x64B, 16x570B, 4x1518B), 1518B, 9000B.
- IPv6 payload: 78B, IMIX (28x78B, 16x570B, 4x1518B), 1518B, 9000B.

All rates are reported from the external Traffic Generator perspective.
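
The aggregate bandwidth rates follow directly from the aggregate packet
rates and the L2 frame size. As a hedged illustration (Python; the helper
name is ours, and the assumption that reported bandwidth counts the 20B
per-frame L1 overhead quoted later for MRR tests is ours as well)::

    def l1_bandwidth_gbps(pps, frame_size_bytes, l1_overhead_bytes=20):
        """Convert an aggregate packet rate [pps] to bandwidth [Gbps].

        frame_size_bytes is the L2 frame size including CRC; 20B of
        preamble + inter-frame gap is added for the on-the-wire size.
        """
        return pps * (frame_size_bytes + l1_overhead_bytes) * 8 / 1e9

    # Example: 64B frames at the 10GE bi-directional link rate
    # (2 * 14.88 Mpps) correspond to ~2 * 10 Gbps on the wire.
    print(l1_bandwidth_gbps(2 * 14.88e6, 64))  # ~20.0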

.. _mlrsearch_algorithm:

MLRsearch Tests
---------------

Multiple Loss Rate search (MLRsearch) tests use a new search algorithm
implemented in the FD.io CSIT project. MLRsearch discovers multiple packet
throughput rates in a single search, with each rate associated with a
distinct Packet Loss Ratio (PLR) criterion. MLRsearch is being
standardized in the IETF as `draft-vpolak-mkonstan-mlrsearch-XX
<https://tools.ietf.org/html/draft-vpolak-mkonstan-mlrsearch-00>`_.

The two throughput measurements used in FD.io CSIT are Non-Drop Rate (NDR,
with zero packet loss, PLR=0) and Partial Drop Rate (PDR, with packet
loss ratio not greater than the configured non-zero PLR). MLRsearch
discovers NDR and PDR in a single pass, reducing the required execution
time compared to separate binary searches for NDR and PDR. MLRsearch
reduces execution time even further by relying on shorter trial durations
for intermediate steps, with only the final measurements
conducted at the specified final trial duration.
This results in a shorter overall search
execution time compared to a standard NDR/PDR binary search,
while guaranteeing the same or similar results.

If needed, MLRsearch can be easily adapted to discover more throughput rates
with different pre-defined PLRs.

.. Note:: All throughput rates are *always* bi-directional
   aggregates of two equal (symmetric) uni-directional packet rates
   received and reported by an external traffic generator.

The main properties of MLRsearch:

- MLRsearch is a duration aware multi-phase multi-rate search algorithm.

  - Initial phase determines a promising starting interval for the search.
  - Intermediate phases progress towards the defined final search criteria.
  - Final phase executes measurements according to the final search
    criteria.

- *Initial phase*:

  - Uses link rate as a starting transmit rate and discovers the Maximum
    Receive Rate (MRR) used as an input to the first intermediate phase.

- *Intermediate phases*:

  - Start with the initial trial duration (in the first phase) and converge
    geometrically towards the final trial duration (in the final phase).
  - Track two values for NDR and two for PDR.

    - The values are called (NDR or PDR) lower_bound and upper_bound.
    - Each value comes from a specific trial measurement
      (the most recent one for that transmit rate),
      and as such is associated with that measurement's duration and loss.
    - A bound can be invalid, for example if the NDR lower_bound
      has been measured with non-zero loss.
    - Invalid bounds are not real boundaries for the searched value,
      but are needed to track interval widths.
    - Valid bounds are real boundaries for the searched value.
    - Each non-initial phase ends with all bounds valid.

  - Start with a large (lower_bound, upper_bound) interval width and
    geometrically converge towards the width goal (measurement resolution)
    of the phase. Each phase halves the previous width goal.
  - Use internal and external searches:

    - External search - measures at transmit rates outside the (lower_bound,
      upper_bound) interval. It is activated when a bound is invalid,
      to search for a new valid bound by doubling the interval width.
      It is a variant of `exponential search`_.
    - Internal search - `binary search`_, measures at transmit rates within
      the (lower_bound, upper_bound) valid interval, halving the interval
      width.

- *Final phase* is executed with the final test trial duration, and the final
  width goal that determines the resolution of the overall search.
  Intermediate phases together with the final phase are called non-initial
  phases.

The main benefits of MLRsearch vs. binary search include:

- In general MLRsearch is likely to execute more search trials overall, but
  fewer trials at the set final duration.
- In well behaving cases it greatly reduces (>50%) the overall duration
  compared to a single PDR (or NDR) binary search duration,
  while finding multiple drop rates.
- In all cases MLRsearch yields the same or similar results as binary search.
- Note: both binary search and MLRsearch are susceptible to reporting
  non-repeatable results across multiple runs for very badly behaving
  cases.

Caveats:

- Worst case MLRsearch can take longer than a binary search, e.g. in case of
  drastic changes in behaviour for trials at varying durations.

Search Implementation
~~~~~~~~~~~~~~~~~~~~~

The following is a brief description of the current MLRsearch
implementation in FD.io CSIT.

MLRsearch uses the following input parameters:

#. *maximum_transmit_rate* - maximum packet transmit rate to be used by
   the external traffic generator, limited by either the actual Ethernet
   link rate or the traffic generator NIC model capabilities. Sample
   defaults: 2 * 14.88 Mpps for 64B 10GE link rate,
   2 * 18.75 Mpps for 64B 40GE NIC maximum rate.
#. *minimum_transmit_rate* - minimum packet transmit rate to be used for
   measurements. MLRsearch fails if a lower transmit rate needs to be
   used to meet search criteria. Default: 2 * 10 kpps (could be higher).
#. *final_trial_duration* - required trial duration for final rate
   measurements. Default: 30 sec.
#. *initial_trial_duration* - trial duration for the initial MLRsearch phase.
#. *final_relative_width* - required measurement resolution expressed as
   (lower_bound, upper_bound) interval width relative to upper_bound.
#. *packet_loss_ratio* - maximum acceptable PLR search criterion for
   PDR measurements. Default: 0.5%.
#. *number_of_intermediate_phases* - number of phases between the initial
   phase and the final phase. Impacts the overall MLRsearch duration.
   Fewer phases are required for well behaving cases, more phases
   may be needed to reduce the overall search duration for worse behaving
   cases. Default: 2. (Value chosen based on limited experimentation to date.
   More experimentation is needed to arrive at clearer guidelines.)
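
The per-phase trial durations and width goals described below follow from
these input parameters. A minimal sketch (Python; function and variable
names are illustrative, not the FD.io CSIT implementation)::

    def phase_parameters(initial_trial_duration, final_trial_duration,
                         final_relative_width, number_of_intermediate_phases):
        """Derive (trial_duration, width_goal) for each non-initial phase.

        Trial durations follow a geometric sequence from the initial to
        the final duration; the relative width goal is doubled for each
        phase preceding the final one.
        """
        phases = number_of_intermediate_phases + 1  # intermediate + final
        params = []
        for i in range(phases):
            exponent = i / (phases - 1) if phases > 1 else 1.0
            duration = initial_trial_duration * (
                (final_trial_duration / initial_trial_duration) ** exponent)
            width_goal = final_relative_width * 2 ** (phases - 1 - i)
            params.append((duration, width_goal))
        return params

    # Example: 1 sec initial, 30 sec final, 0.5% final width, 2 intermediate
    # phases: durations 1.0 s, ~5.5 s, 30 s; width goals 2%, 1%, 0.5%.
    for duration, width in phase_parameters(1.0, 30.0, 0.005, 2):
        print(f"trial_duration={duration:.2f}s width_goal={width:.4f}")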

The *initial phase* is executed as follows:

1. First trial measures at the maximum rate and discovers MRR.

   a. *in*: trial_duration = initial_trial_duration.
   b. *in*: offered_transmit_rate = maximum_transmit_rate.
   c. *do*: single trial.
   d. *out*: measured loss ratio.
   e. *out*: mrr = measured receive rate.

2. Second trial measures at MRR and discovers MRR2.

   a. *in*: trial_duration = initial_trial_duration.
   b. *in*: offered_transmit_rate = MRR.
   c. *do*: single trial.
   d. *out*: measured loss ratio.
   e. *out*: mrr2 = measured receive rate.

3. Third trial measures at MRR2.

   a. *in*: trial_duration = initial_trial_duration.
   b. *in*: offered_transmit_rate = MRR2.
   c. *do*: single trial.
   d. *out*: measured loss ratio.
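
A minimal sketch of the initial phase (Python; ``measure`` is a hypothetical
helper performing one trial and returning receive rate and loss ratio, not
the FD.io CSIT API)::

    def initial_phase(measure, maximum_transmit_rate, initial_trial_duration):
        """Run the three initial trials and return (mrr, mrr2)."""
        # First trial at the maximum rate discovers MRR.
        mrr, _ = measure(initial_trial_duration, maximum_transmit_rate)
        # Second trial at MRR discovers MRR2.
        mrr2, _ = measure(initial_trial_duration, mrr)
        # Third trial at MRR2; its loss ratio seeds the first intermediate
        # phase, which starts with lower_bound = MRR2 and upper_bound = MRR.
        _, loss_ratio = measure(initial_trial_duration, mrr2)
        return mrr, mrr2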

The *non-initial phases* (intermediate and final) are executed as follows:

1. Main loop:

   a. *in*: trial_duration for the current phase.
      Set to initial_trial_duration for the first intermediate phase;
      to final_trial_duration for the final phase;
      or to the element of the interpolating geometric sequence
      for other intermediate phases.
      For example with two intermediate phases, trial_duration
      of the second intermediate phase is the geometric average
      of initial_trial_duration and final_trial_duration.
   b. *in*: relative_width_goal for the current phase.
      Set to final_relative_width for the final phase;
      doubled for each preceding phase.
      For example with two intermediate phases,
      the first intermediate phase uses quadruple of final_relative_width
      and the second intermediate phase uses double of final_relative_width.
   c. *in*: ndr_interval, pdr_interval from the previous main loop iteration
      or the previous phase.
      If the previous phase is the initial phase, both intervals have
      lower_bound = MRR2, upper_bound = MRR.
      Note that the initial phase is likely to create intervals with invalid
      bounds.
   d. *do*: According to the procedure described in point 2,
      either exit the phase (by jumping to 1.g.),
      or prepare a new transmit rate to measure with.
   e. *do*: Perform the trial measurement at the new transmit rate
      and trial_duration, compute its loss ratio.
   f. *do*: Update the bounds of both intervals, based on the new measurement.
      The actual update rules are numerous, as NDR external search
      can affect the PDR interval and vice versa, but the result
      agrees with the rules of both internal and external search.
      For example, any new measurement below an invalid lower_bound
      becomes the new lower_bound, while the old measurement
      (previously acting as the invalid lower_bound)
      becomes a new and valid upper_bound.
      Go to the next iteration (1.c.), taking the updated intervals as new
      input.
   g. *out*: the current ndr_interval and pdr_interval.
      In the final phase this is also considered
      to be the result of the whole search.
      For other phases, the next phase loop is started
      with the current results as an input.

2. New transmit rate (or exit) calculation (for 1.d.):

   - If there is an invalid bound, then prepare for external search:

     - *If* the most recent measurement at the NDR lower_bound transmit rate
       had loss higher than zero, then
       the new transmit rate is the NDR lower_bound
       decreased by two NDR interval widths.
     - Else, *if* the most recent measurement at the PDR lower_bound
       transmit rate had loss higher than the PLR, then
       the new transmit rate is the PDR lower_bound
       decreased by two PDR interval widths.
     - Else, *if* the most recent measurement at the NDR upper_bound
       transmit rate had no loss, then
       the new transmit rate is the NDR upper_bound
       increased by two NDR interval widths.
     - Else, *if* the most recent measurement at the PDR upper_bound
       transmit rate had loss lower than or equal to the PLR, then
       the new transmit rate is the PDR upper_bound
       increased by two PDR interval widths.

   - Else, if an interval width is higher than the current phase goal,
     prepare for internal search:

     - *If* the NDR interval does not meet the current phase width goal,
       the new transmit rate is (NDR lower bound + NDR upper bound) / 2.
     - Else, *if* the PDR interval does not meet the current phase width goal,
       the new transmit rate is (PDR lower bound + PDR upper bound) / 2.

   - Else, *if* some bound has still only been measured at a lower duration,
     prepare to re-measure at the current duration (and the same transmit
     rate). The order of priorities is:

     - NDR lower_bound,
     - PDR lower_bound,
     - NDR upper_bound,
     - PDR upper_bound.

   - *Else*, do not prepare any new rate, in order to exit the phase.
     This ensures that at the end of each non-initial phase
     all intervals are valid, narrow enough, and measured
     at the current phase trial duration.
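
A hedged sketch of the rate selection in point 2 (Python; the interval
objects and attribute names are illustrative, not the FD.io CSIT classes)::

    def new_transmit_rate(ndr, pdr, width_goal, plr):
        """Return the next transmit rate, or None to exit the phase.

        ndr and pdr are assumed to expose bound rates, interval widths and
        the loss ratio of the most recent trial at each bound.
        """
        # External search: an invalid bound exists, step outside the interval.
        if ndr.lower_loss_ratio > 0.0:
            return ndr.lower_bound - 2 * ndr.width
        if pdr.lower_loss_ratio > plr:
            return pdr.lower_bound - 2 * pdr.width
        if ndr.upper_loss_ratio == 0.0:
            return ndr.upper_bound + 2 * ndr.width
        if pdr.upper_loss_ratio <= plr:
            return pdr.upper_bound + 2 * pdr.width
        # Internal search: halve whichever interval is still too wide.
        if ndr.relative_width > width_goal:
            return (ndr.lower_bound + ndr.upper_bound) / 2
        if pdr.relative_width > width_goal:
            return (pdr.lower_bound + pdr.upper_bound) / 2
        # Re-measuring bounds taken at shorter durations, and the phase
        # exit, are omitted here for brevity.
        return None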

Implementation Deviations
~~~~~~~~~~~~~~~~~~~~~~~~~

So far this document has described a simplified version of the MLRsearch
algorithm. The full algorithm as implemented contains additional logic,
which makes some of the details (but not the general ideas) above incorrect.
Here is a short description of the additional logic, as a list of principles
explaining their main differences from (or additions to) the simplified
description, but without detailing their mutual interaction.

1. *Logarithmic transmit rate.*
   In order to better fit the relative width goal,
   the interval doubling and halving is done differently.
   For example, the middle of 2 and 8 is 4, not 5.
2. *Optimistic maximum rate.*
   The increased rate is never higher than the maximum rate.
   An upper bound at that rate is always considered valid.
3. *Pessimistic minimum rate.*
   The decreased rate is never lower than the minimum rate.
   If a lower bound at that rate is invalid,
   the phase stops refining the interval further (until it gets re-measured).
4. *Conservative interval updates.*
   Measurements above the current upper bound never update a valid upper
   bound, even if the drop ratio is low.
   Measurements below the current lower bound always update any lower bound
   if the drop ratio is high.
5. *Ensure sufficient interval width.*
   Narrow intervals make external search take more time to find a valid
   bound. If the new increased or decreased transmit rate would result in a
   width smaller than the current goal, increase/decrease it more.
   This can happen if a measurement for the other interval
   makes the current interval too narrow.
   Similarly, care is taken that the measurements in the initial phase
   create a wide enough interval.
6. *Timeout for bad cases.*
   The worst case for MLRsearch is when each phase converges to intervals
   far different from the results of the previous phase.
   Rather than suffer a total search time several times larger
   than a pure binary search, the implemented tests fail themselves
   when the search takes too long (as given by the *timeout* argument).
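
A small sketch of principle 1: halving and doubling operate on a logarithmic
(relative) scale, so the interval midpoint is the geometric rather than the
arithmetic mean (Python, illustrative only)::

    import math

    def log_halve(lower_bound, upper_bound):
        """Geometric midpoint used when halving an interval."""
        return math.sqrt(lower_bound * upper_bound)

    def log_extend_down(lower_bound, upper_bound):
        """Rate two relative interval widths below the lower bound."""
        return lower_bound * (lower_bound / upper_bound) ** 2

    print(log_halve(2, 8))        # 4.0, not 5.0
    print(log_extend_down(2, 8))  # 0.125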

Maximum Receive Rate MRR
------------------------

Maximum Receive Rate (MRR) tests are complementary to MLRsearch tests,
as they provide a maximum "raw" throughput benchmark for the development and
testing community. MRR tests measure the packet forwarding rate under
the maximum load offered by the traffic generator over a set trial duration,
regardless of packet loss. The maximum load for a specified Ethernet frame
size is set to the bi-directional link rate.

In |csit-release| the MRR test code has been updated with configurable
burst MRR parameters: trial duration and number of trials in a single
burst. This enables a new Burst MRR (BMRR) methodology for more precise
performance trending.

Current parameters for BMRR tests:

- Ethernet frame sizes: 64B (78B for IPv6), IMIX, 1518B, 9000B; all
  quoted sizes include the frame CRC, but exclude the per frame transmission
  overhead of 20B (preamble, inter frame gap).
- Maximum load offered: 10GE and 40GE link (sub-)rates depending on the NIC
  tested, with the actual packet rate depending on the frame size,
  transmission overhead and traffic generator NIC forwarding capacity.

  - For 10GE NICs the maximum packet rate load is 2 * 14.88 Mpps for 64B,
    a 10GE bi-directional link rate.
  - For 25GE NICs the maximum packet rate load is 2 * 18.75 Mpps for 64B,
    a 25GE bi-directional link sub-rate limited by the TG 25GE NIC used.
  - For 40GE NICs the maximum packet rate load is 2 * 18.75 Mpps for 64B,
    a 40GE bi-directional link sub-rate limited by the TG 40GE NIC used,
    XL710. Packet rate for other tested frame sizes is limited by the PCIe
    Gen3 x8 bandwidth limitation of ~50Gbps.

- Trial duration: 1 sec.
- Number of trials per burst: 10.
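
The quoted maximum packet rates follow from the link speed and the
on-the-wire frame size (L2 size plus 20B overhead). A quick check (Python,
illustrative)::

    def line_rate_pps(link_speed_bps, frame_size_bytes, l1_overhead_bytes=20):
        """Theoretical per-direction packet rate at full link speed."""
        return link_speed_bps / ((frame_size_bytes + l1_overhead_bytes) * 8)

    # 64B frames on a 10GE link: ~14.88 Mpps per direction.
    print(line_rate_pps(10e9, 64) / 1e6)
    # The 2 * 18.75 Mpps figure quoted for 25GE/40GE is a TG NIC limit,
    # below the 64B line rate of those links (~37.2 and ~59.5 Mpps).
    print(line_rate_pps(40e9, 64) / 1e6)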

Similarly to NDR/PDR throughput tests, MRR tests should report the
bi-directional link rate (or NIC rate, if lower) if the tested VPP
configuration can handle a packet rate higher than the bi-directional link
rate, e.g. large packet tests and/or multi-core tests.

MRR tests are currently used for FD.io CSIT continuous performance
trending and for comparison between releases. The daily trending job tests a
subset of frame sizes, focusing on 64B (78B for IPv6) for all tests and
IMIX for selected tests (vhost, memif).

MRR-like measurements are being used to establish starting conditions
for the experimental Probabilistic Loss Ratio Search (PLRsearch) used for
soak testing, aimed at verifying continuous system performance over an
extended period of time: hours, days, weeks, months. PLRsearch code is
currently in an experimental phase in the FD.io CSIT project.

Packet Latency
--------------

TRex Traffic Generator (TG) is used for measuring latency of VPP DUTs.
Reported latency values are measured using the following methodology:

- Latency tests are performed at 100% of the discovered NDR and PDR rates
  for each throughput test and packet size (except IMIX).
- TG sends dedicated latency streams, one per direction, each at the
  rate of 9 kpps at the prescribed packet size; these are sent in
  addition to the main load streams.
- TG reports min/avg/max latency values per stream direction, hence two
  sets of latency values are reported per test case; a future release of
  TRex is expected to report latency percentiles.
- Reported latency values are aggregate across two SUTs due to the three
  node topology used for all performance tests; for per SUT latency, the
  reported value should be divided by two.
- 1 usec is the measurement accuracy advertised by TRex TG for the setup
  used in FD.io labs used by the CSIT project.
- The TRex setup introduces an always-on error of about 2 * 2 usec per
  latency flow, caused by additional Tx/Rx interface latency induced by
  TRex SW writing and reading packet timestamps on CPU cores, without HW
  acceleration on the NICs closer to the interface line.

Multi-Core Speedup
------------------

All performance tests are executed in both single processor core and
multiple processor core scenarios.

Intel Hyper-Threading (HT)
~~~~~~~~~~~~~~~~~~~~~~~~~~

Intel Xeon processors used in FD.io CSIT can operate either in HT
Disabled mode (single logical core per each physical core) or in HT
Enabled mode (two logical cores per each physical core). The HT setting is
applied in the BIOS and requires a server SUT reload to take effect,
making it impractical for continuous changes of the HT mode of operation.

|csit-release| performance tests are executed with server SUTs' Intel
Xeon processors configured with Intel Hyper-Threading Disabled for all
Xeon Haswell testbeds (3n-hsw) and with Intel Hyper-Threading Enabled
for all Xeon Skylake testbeds.

More information about physical testbeds is provided in
:ref:`tested_physical_topologies`.

Multi-core Tests
~~~~~~~~~~~~~~~~

|csit-release| multi-core tests are executed in the following VPP worker
thread and physical core configurations:

#. Intel Xeon Haswell testbeds (3n-hsw) with Intel HT disabled
   (1 logical CPU core per each physical core):

   #. 1t1c - 1 VPP worker thread on 1 physical core.
   #. 2t2c - 2 VPP worker threads on 2 physical cores.
   #. 4t4c - 4 VPP worker threads on 4 physical cores.

#. Intel Xeon Skylake testbeds (2n-skx, 3n-skx) with Intel HT enabled
   (2 logical CPU cores per each physical core):

   #. 2t1c - 2 VPP worker threads on 1 physical core.
   #. 4t2c - 4 VPP worker threads on 2 physical cores.
   #. 8t4c - 8 VPP worker threads on 4 physical cores.

VPP worker threads are the data plane threads running on isolated
logical cores. With Intel HT enabled, VPP workers are placed as sibling
threads on each used physical core. VPP control threads (main, stats)
are running on a separate non-isolated core together with other Linux
processes.

In all CSIT tests care is taken to ensure that each VPP worker handles
the same amount of received packet load and does the same amount of
packet processing work. This is achieved by evenly distributing per
interface type (e.g. physical, virtual) receive queues over VPP workers
using the default VPP round-robin mapping and by loading these queues with
the same amount of packet flows.

If the number of VPP workers is higher than the number of physical or
virtual interfaces, multiple receive queues are configured on each
interface. NIC Receive Side Scaling (RSS) for physical interfaces and
multi-queue for virtual interfaces are used for this purpose.

Section :ref:`throughput_speedup_multi_core` includes a set of graphs
illustrating packet throughput speedup when running VPP worker threads
on multiple cores. Note that in quite a few test cases running VPP
workers on 2 or 4 physical cores hits the I/O bandwidth or
packets-per-second limit of the tested NIC.

VPP Startup Settings
--------------------

CSIT code manipulates a number of VPP settings in startup.conf for optimized
performance. The list of common settings applied to all tests and of test
dependent settings follows.

See the `VPP startup.conf <https://git.fd.io/vpp/tree/src/vpp/conf/startup.conf?h=stable/1807>`_
for a complete set and description of the listed settings.

Common Settings
~~~~~~~~~~~~~~~

List of vpp startup.conf settings applied to all tests:

#. heap-size <value> - set separately for ip4, ip6, stats, main,
   depending on the scale tested.
#. no-tx-checksum-offload - disables UDP / TCP TX checksum offload in DPDK.
   Typically needed to use faster vector PMDs (together with
   no-multi-seg).
#. socket-mem <value>,<value> - memory per numa. (Not required anymore
   due to VPP code changes, should be removed in CSIT-18.10.)

Per Test Settings
~~~~~~~~~~~~~~~~~

List of vpp startup.conf settings applied dynamically per test:

#. corelist-workers <list_of_cores> - list of logical cores to run VPP
   worker data plane threads. Depends on HyperThreading and core per
   test configuration.
#. num-rx-queues <value> - depends on the number of VPP threads and NIC
   interfaces.
#. num-rx-desc/num-tx-desc - number of rx/tx descriptors for specific
   NICs, incl. xl710, x710, xxv710.
#. num-mbufs <value> - increases the number of buffers allocated, needed
   only in scenarios with a large number of interfaces and worker threads.
   Value is per CPU socket. Default is 16384.
#. no-multi-seg - disables multi-segment buffers in DPDK, improves
   packet throughput, but disables Jumbo MTU support. Disabled for all
   tests apart from the ones that require Jumbo 9000B frame support.
#. UIO driver - depends on topology file definition.
#. QAT VFs - depends on NRThreads, each thread = 1 QAT VF.
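
For illustration, a startup.conf fragment combining several of the settings
above might look as follows (values are examples only, not CSIT defaults)::

    unix {
      nodaemon
      log /tmp/vpp.log
    }
    cpu {
      main-core 0
      corelist-workers 2,4
    }
    dpdk {
      dev default {
        num-rx-queues 1
        num-rx-desc 1024
        num-tx-desc 1024
      }
      no-tx-checksum-offload
      no-multi-seg
    }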

KVM VMs vhost-user
------------------

FD.io CSIT performance lab is testing VPP vhost with KVM VMs using the
following environment settings:

- Tests with varying Qemu virtio queue (a.k.a. vring) sizes: [vr256]
  default 256 descriptors, [vr1024] 1024 descriptors to optimize for
  packet throughput.
- Tests with varying Linux :abbr:`CFS (Completely Fair Scheduler)`
  settings: [cfs] default settings, [cfsrr1] CFS RoundRobin(1) policy
  applied to all data plane threads handling the test packet path, including
  all VPP worker threads and all Qemu testpmd poll-mode threads.
- Resulting test cases are all combinations of [vr256,vr1024] and
  [cfs,cfsrr1] settings.
- The adjusted Linux kernel :abbr:`CFS (Completely Fair Scheduler)`
  scheduler policy for data plane threads used in CSIT is documented in the
  `CSIT Performance Environment Tuning wiki
  <https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604>`_.
- The purpose is to verify the performance impact (MRR and NDR/PDR
  throughput) and the repeatability of the same test measurements, by making
  VPP and VM data plane threads less susceptible to other Linux OS system
  tasks hijacking the CPU cores running those data plane threads.

LXC/DRC Container Memif
-----------------------

|csit-release| includes tests taking advantage of the VPP memif virtual
interface (shared memory interface) to interconnect VPP instances running in
containers. The VPP vswitch instance runs in bare-metal user-mode, handling
NIC interfaces and connecting over memif (Slave side) to VPPs running in
:abbr:`Linux Container (LXC)` or in Docker Container (DRC) configured
with memif (Master side). LXCs and DRCs run in a privileged mode with
VPP data plane worker threads pinned to dedicated physical CPU cores per
usual CSIT practice. All VPP instances run the same version of software.
This test topology is equivalent to existing tests with vhost-user and
VMs as described earlier in :ref:`tested_logical_topologies`.

In addition to the above vswitch tests, a single memif interface test is
executed. It runs in a simple topology of two VPP container instances
connected over a memif interface in order to verify standalone memif
interface performance.

More information about CSIT LXC and DRC setup and control is available
in :ref:`container_orchestration_in_csit`.

K8s Container Memif
-------------------

|csit-release| includes tests of VPP topologies running in K8s
orchestrated Pods/Containers and connected over memif virtual
interfaces. In order to provide simple topology coding flexibility and
extensibility, container orchestration is done with `Kubernetes
<https://github.com/kubernetes>`_ using `Docker
<https://github.com/docker>`_ images for all container applications,
including VPP. `Ligato <https://github.com/ligato>`_ is used for the
Pod/Container networking orchestration that is integrated with K8s,
including memif support.

In these tests the VPP vswitch runs in a K8s Pod with a Docker Container
(DRC) handling NIC interfaces and connecting over memif to more instances of
VPP running in Pods/DRCs. All DRCs run in a privileged mode with VPP
data plane worker threads pinned to dedicated physical CPU cores per
usual CSIT practice. All VPP instances run the same version of software.
This test topology is equivalent to existing tests with vhost-user and
VMs as described earlier in :ref:`tested_physical_topologies`.

Further documentation is available in
:ref:`container_orchestration_in_csit`.

VPP_Device Functional
---------------------

|csit-release| added a new VPP_Device test environment for functional VPP
device tests integrated into the LFN CI/CD infrastructure. VPP_Device tests
run on 1-Node testbeds (1n-skx, 1n-arm) and rely on Linux SRIOV Virtual
Function (VF), dot1q VLAN tagging and external loopback cables to
facilitate packet passing over external physical links. The initial focus is
on a few baseline tests. Existing CSIT VIRL tests can be moved to the
VPP_Device framework by changing L1 and L2 KW(s). RF test definition
code stays unchanged, with the exception of requiring adjustments from
3-Node to 2-Node logical topologies. CSIT VIRL to VPP_Device migration
is expected in the next CSIT release.

IPSec on Intel QAT HW
---------------------

VPP IPSec performance tests are using the DPDK cryptodev device driver in
combination with HW cryptodev devices - Intel QAT 8950 50G - present in
LF FD.io physical testbeds. DPDK cryptodev can be used for all IPSec
data plane functions supported by VPP.

Currently |csit-release| implements the following IPSec test cases:

- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4 routed-forwarding
  with Intel xl710 NIC.
- CBC-SHA1 ciphers, in combination with LISP-GPE overlay tunneling for
  IPv4-over-IPv4 with Intel xl710 NIC.

TRex Traffic Generator
----------------------

Usage
~~~~~

`TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
CSIT performance tests. TRex stateless mode is used to measure NDR and
PDR throughputs using binary search (NDR and PDR discovery tests) and
for quick checks of DUT performance against the reference NDRs (NDR
check tests) for specific configurations.

TRex is installed and run on the TG compute node. The typical procedure
is:

- If TRex is not already installed on the TG, it is installed in the
  suite setup phase - see `TRex intallation`_.
- TRex configuration is set in its configuration file.
- TRex is started in the background mode::

    $ sh -c 'cd <t-rex-install-dir>/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /tmp/trex.log 2>&1 &' > /dev/null

- Traffic streams are dynamically prepared for each test, based on traffic
  profiles. The traffic is sent and the statistics obtained using
  :command:`trex_stl_lib.api.STLClient`.

Measuring Packet Loss
~~~~~~~~~~~~~~~~~~~~~

The following sequence is used to measure packet loss:

- Create an instance of STLClient.
- Connect to the client.
- Send the traffic for the defined time.
- Get the statistics.

If a warm-up phase is required, the traffic is also sent before the
test and those statistics are ignored.
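
A hedged sketch of this sequence using the TRex stateless Python API named
above (stream construction is elided; port numbers and the helper name are
illustrative)::

    from trex_stl_lib.api import STLClient

    def measure_loss(streams, duration):
        """Send traffic on ports 0 and 1, return (tx_packets, rx_packets)."""
        client = STLClient()
        try:
            client.connect()
            client.reset(ports=[0, 1])
            client.add_streams(streams, ports=[0, 1])
            client.clear_stats()
            client.start(ports=[0, 1], duration=duration)
            client.wait_on_traffic(ports=[0, 1])
            stats = client.get_stats()
            tx = stats[0]["opackets"] + stats[1]["opackets"]
            rx = stats[0]["ipackets"] + stats[1]["ipackets"]
            return tx, rx
        finally:
            client.disconnect()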

Measuring Latency
~~~~~~~~~~~~~~~~~

If measurement of latency is requested, two more packet streams are
created (one for each direction) with the TRex flow_stats parameter set to
STLFlowLatencyStats. In that case, the returned statistics will also include
min/avg/max latency values.
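
A minimal sketch of one such latency stream (Python; assumes a pre-built
Scapy packet ``pkt`` and a free packet group id; the 9 kpps rate matches the
Packet Latency section above)::

    from trex_stl_lib.api import (STLStream, STLPktBuilder, STLTXCont,
                                  STLFlowLatencyStats)

    def latency_stream(pkt, pg_id):
        """Build one 9 kpps stream tracked via STLFlowLatencyStats."""
        return STLStream(
            packet=STLPktBuilder(pkt=pkt),
            mode=STLTXCont(pps=9000),
            flow_stats=STLFlowLatencyStats(pg_id=pg_id),
        )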

HTTP/TCP with WRK Tool
----------------------

`WRK HTTP benchmarking tool <https://github.com/wg/wrk>`_ is used for
experimental TCP/IP and HTTP tests of the VPP TCP/IP stack and built-in
static HTTP server. WRK has been chosen as it is capable of generating
significant TCP/IP and HTTP loads by scaling the number of threads across
multi-core processors.

This in turn enables quite high scale benchmarking of the main TCP/IP
and HTTP services, including HTTP TCP/IP Connections-Per-Second (CPS),
HTTP Requests-Per-Second and HTTP Bandwidth Throughput.

The initial tests are designed as follows:

- HTTP and TCP/IP Connections-Per-Second (CPS)

  - WRK configured to use 8 threads across 8 cores, 1 thread per core.
  - Maximum of 50 concurrent connections across all WRK threads.
  - Timeout for server responses set to 5 seconds.
  - Test duration is 30 seconds.
  - Expected HTTP test sequence:

    - Single HTTP GET Request sent per open connection.
    - Connection close after valid HTTP reply.
    - Resulting flow sequence - 8 packets: >Syn, <Syn-Ack, >Ack, >Req,
      <Rep, >Fin, <Fin, >Ack.

- HTTP Requests-Per-Second

  - WRK configured to use 8 threads across 8 cores, 1 thread per core.
  - Maximum of 50 concurrent connections across all WRK threads.
  - Timeout for server responses set to 5 seconds.
  - Test duration is 30 seconds.
  - Expected HTTP test sequence:

    - Multiple HTTP GET Requests sent in sequence per open connection.
    - Connection close after set test duration time.
    - Resulting flow sequence: >Syn, <Syn-Ack, >Ack, >Req[1], <Rep[1],
      .., >Req[n], <Rep[n], >Fin, <Fin, >Ack.
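
For illustration, the Requests-Per-Second configuration above roughly maps
to a wrk invocation like the following (the URL is a placeholder; the CPS
test additionally needs a script closing each connection after one
request)::

    $ wrk --threads 8 --connections 50 --duration 30s --timeout 5s \
        http://<vpp-http-server-ip>/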

.. _binary search: https://en.wikipedia.org/wiki/Binary_search
.. _exponential search: https://en.wikipedia.org/wiki/Exponential_search
.. _estimation of standard deviation: https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation
.. _simplified error propagation formula: https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Simplification