+CSIT |release| introduced test environment configuration changes to KVM Qemu
+vhost-user tests in order to measure |vpp-release| performance more
+representatively in configurations with vhost-user interfaces and different
+Qemu settings.
+
+The FD.io CSIT performance lab tests VPP vhost-user with KVM VMs using the
+following environment settings:
+
+- Tests with varying Qemu virtio queue (a.k.a. vring) sizes: [vr256] default 256
+ descriptors, [vr1024] 1024 descriptors to optimize for packet throughput;
+
+- Tests with varying Linux :abbr:`CFS (Completely Fair Scheduler)` settings:
+ [cfs] default settings, [cfsrr1] CFS RoundRobin(1) policy applied to all data
+ plane threads handling the test packet path, including all VPP worker threads
+ and all Qemu testpmd poll-mode threads;
+
+- Resulting test cases are all combinations of [vr256,vr1024] and
+ [cfs,cfsrr1] settings;
+
+- The adjusted Linux kernel :abbr:`CFS (Completely Fair Scheduler)` scheduler
+ policy for data plane threads used in CSIT is documented in the
+ `CSIT Performance Environment Tuning wiki <https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604>`_;
+ a minimal scheduling sketch is shown after this list. The purpose is to
+ verify the performance impact (NDR, PDR throughput) and the repeatability of
+ test measurements, by making VPP and VM data plane threads less susceptible
+ to other Linux OS system tasks hijacking the CPU cores running those data
+ plane threads.
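+
+A minimal sketch of applying the [cfsrr1] RoundRobin(1) policy to data plane
+threads from Python (the thread IDs are illustrative assumptions; the actual
+CSIT tuning procedure is documented in the wiki linked above)
+::
+
+    import os
+
+    # Illustrative thread IDs of data plane threads (VPP workers and Qemu
+    # testpmd poll-mode threads), e.g. discovered from /proc/<pid>/task.
+    data_plane_tids = [2345, 2346, 2347]
+
+    for tid in data_plane_tids:
+        # Apply round-robin scheduling with priority 1 to each thread,
+        # matching the RoundRobin(1) setting above; requires root.
+        os.sched_setscheduler(tid, os.SCHED_RR, os.sched_param(1))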
+
+Methodology: LXC Container memif
+--------------------------------
+
+CSIT |release| introduced new tests - VPP Memif virtual interface (shared memory
+interface) tests interconnecting VPP instances over memif. The VPP vswitch
+instance runs in bare-metal user-mode, handling Intel x520 NIC 10GbE interfaces
+and connecting over memif (Master side) virtual interfaces to another VPP
+instance running in a bare-metal :abbr:`LXC (Linux Container)` with memif
+virtual interfaces (Slave side). The LXC runs in privileged mode, with VPP data
+plane worker threads pinned to dedicated physical CPU cores per usual CSIT
+practice. Both VPP instances run the same version of software. This test
+topology is equivalent to the existing tests with vhost-user and VMs.
+
+Methodology: IPSec with Intel QAT HW cards
+------------------------------------------
+
+VPP IPSec performance tests use the DPDK cryptodev device driver in
+combination with HW cryptodev devices (Intel QAT 8950 50G) present in
+LF FD.io physical testbeds. DPDK cryptodev can be used for all IPSec
+data plane functions supported by VPP.
+
+Currently, CSIT |release| implements the following IPSec test cases:
+
+- AES-GCM, CBC-SHA1 ciphers, in combination with IPv4 routed-forwarding
+ with Intel xl710 NIC.
+- CBC-SHA1 ciphers, in combination with LISP-GPE overlay tunneling for
+ IPv4-over-IPv4 with Intel xl710 NIC.
+
+Methodology: TRex Traffic Generator Usage
+-----------------------------------------
+
+The `TRex traffic generator <https://wiki.fd.io/view/TRex>`_ is used for all
+CSIT performance tests. TRex stateless mode is used to measure NDR and PDR
+throughputs using binary search (NDR and PDR discovery tests) and for quick
+checks of DUT performance against the reference NDRs (NDR check tests) for
+specific configurations.
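+
+NDR and PDR discovery narrows the offered rate with a binary search between a
+minimum and a maximum rate until the remaining search interval is smaller than
+a configured precision. The sketch below illustrates the idea only; the
+function and parameter names (measure_loss_ratio, precision) are illustrative
+assumptions and do not mirror the CSIT code
+::
+
+    def find_throughput(measure_loss_ratio, min_rate, max_rate,
+                        allowed_loss_ratio=0.0, precision=10000):
+        """Return the highest rate [pps] whose measured loss ratio stays
+        within allowed_loss_ratio: 0.0 for NDR, e.g. 0.005 (0.5%) for PDR."""
+        lo, hi = min_rate, max_rate
+        best = min_rate
+        while hi - lo > precision:
+            rate = (lo + hi) / 2
+            if measure_loss_ratio(rate) <= allowed_loss_ratio:
+                best, lo = rate, rate   # trial passed: search higher rates
+            else:
+                hi = rate               # trial failed: search lower rates
+        return best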
+
+TRex is installed and run on the TG compute node. The typical procedure is:
+
+- If TRex is not already installed on the TG, it is installed in the
+ suite setup phase - see `TRex installation`_.
+- TRex configuration is set in its configuration file
+ ::
+
+ /etc/trex_cfg.yaml
+
+- TRex is started in the background mode
+ ::
+
+ $ sh -c 'cd /opt/trex-core-2.25/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /dev/null 2>&1 &' > /dev/null
+
+- Traffic streams are prepared dynamically for each test, based on traffic
+ profiles. The traffic is sent and the statistics are obtained using
+ :command:`trex_stl_lib.api.STLClient`.
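+
+As an illustration only, a traffic stream similar to those generated from
+traffic profiles can be built with the TRex stateless Python API as follows;
+the packet headers, addresses and rate are assumptions, not the actual CSIT
+traffic profiles
+::
+
+    from trex_stl_lib.api import *
+
+    # Illustrative base packet; real CSIT traffic profiles define their own
+    # headers, frame sizes and address ranges.
+    base_pkt = Ether() / IP(src="10.0.0.1", dst="20.0.0.1") / UDP(dport=12, sport=1025)
+
+    # Continuous stream; the actual rate is set when traffic is started.
+    stream = STLStream(packet=STLPktBuilder(pkt=base_pkt),
+                       mode=STLTXCont(pps=1000))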
+
+**Measuring packet loss**
+
+- Create an instance of STLClient
+- Connect to the client
+- Add all streams
+- Clear statistics
+- Send the traffic for a defined time
+- Get the statistics
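+
+A minimal sketch of these steps using the TRex stateless Python API follows;
+the port numbers, stream, rate and duration are illustrative assumptions and
+this is not the CSIT driver code itself
+::
+
+    from trex_stl_lib.api import *
+
+    # Illustrative stream; see also the stream sketch earlier in this section.
+    stream = STLStream(packet=STLPktBuilder(pkt=Ether() / IP() / UDP()),
+                       mode=STLTXCont(pps=1000))
+
+    client = STLClient()                       # create an instance of STLClient
+    try:
+        client.connect()                       # connect to the client
+        client.reset(ports=[0, 1])             # acquire and reset the ports
+        client.add_streams(stream, ports=[0])  # add all streams
+        client.clear_stats()                   # clear statistics
+        # send the traffic for a defined time
+        client.start(ports=[0], mult="100%", duration=10)
+        client.wait_on_traffic(ports=[0])      # wait until transmission completes
+        stats = client.get_stats()             # get the statistics
+        # packets lost = transmitted on port 0 minus received on port 1
+        lost = stats[0]["opackets"] - stats[1]["ipackets"]
+        print("packets lost: %s" % lost)
+    finally:
+        client.disconnect()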