X-Git-Url: https://gerrit.fd.io/r/gitweb?p=csit.git;a=blobdiff_plain;f=docs%2Freport%2Fvpp_functional_tests%2Ftest_environment.rst;h=d8f2abff5535cc9c58c38275ee7804bc5b89af00;hp=9907264aa4a961c4d88f7931be81ad375256da62;hb=24c3dd59d7a004bc6353e13d750b9ced098cf52d;hpb=fe1e04358e1a88d7e2c821da03f04a251583fff5

diff --git a/docs/report/vpp_functional_tests/test_environment.rst b/docs/report/vpp_functional_tests/test_environment.rst
index 9907264aa4..d8f2abff55 100644
--- a/docs/report/vpp_functional_tests/test_environment.rst
+++ b/docs/report/vpp_functional_tests/test_environment.rst
@@ -1,70 +1,71 @@
 Test Environment
 ================

-CSIT functional tests are currently executed in FD.IO VIRL testbed. The physical
-VIRL testbed infrastructure consists of three VIRL hosts:
-
-- All hosts are Cisco UCS C240-M4 (2x Intel(R) Xeon(R) CPU E5-2699 v3 @2.30GHz,
-  18c, 512GB RAM)
+CSIT VPP functional tests are executed in FD.io VIRL testbeds. The
+physical VIRL testbed infrastructure consists of three VIRL servers:

 - tb4-virl1:

   - Status: Production
   - OS: Ubuntu 16.04.2
-  - STD server version 0.10.32.16
-  - UWM server version 0.10.32.16
+  - VIRL STD server version: 0.10.32.16
+  - VIRL UWM server version: 0.10.32.16

 - tb4-virl2:

   - Status: Production
   - OS: Ubuntu 16.04.2
-  - STD server version 0.10.32.16
-  - UWM server version 0.10.32.16
+  - VIRL STD server version: 0.10.32.16
+  - VIRL UWM server version: 0.10.32.16

 - tb4-virl3:

-  - Status: Testing
+  - Status: Production
   - OS: Ubuntu 16.04.2
-  - STD server version 0.10.32.19
-  - UWM server version 0.10.32.19
+  - VIRL STD server version: 0.10.32.19
+  - VIRL UWM server version: 0.10.32.19
+
+- VIRL hosts: Cisco UCS C240-M4, each with 2x Intel Xeon E5-2699
+  v3 (2.30 GHz, 18c), 512GB RAM.

-Whenever a patch is submitted to gerrit for review, parallel VIRL simulations
-are started to reduce the time of execution of all functional tests. The number
-of parallel VIRL simulations is equal to number of test groups defined by
-TEST_GROUPS variable in :file:`csit/bootstrap.sh` file. The VIRL host to run
-VIRL simulation is selected based on least load algorithm per VIRL simulation.
+Whenever a patch is submitted to gerrit for review, parallel VIRL
+simulations are started to reduce the time of execution of all
+functional tests. The number of parallel VIRL simulations is equal to
+the number of test groups defined by the TEST_GROUPS variable in the
+:file:`csit/bootstrap.sh` file. The VIRL host to run a VIRL simulation
+is selected based on a least-load algorithm per VIRL simulation.

-Every VIRL simulation uses the same three-node - Traffic Generator (TG node) and
-two Systems Under Test (SUT1 and SUT2) - "double-ring" topology. The appropriate
-pre-built VPP packages built by Jenkins for the patch under review are then
-installed on the two SUTs, along with their :file:`/etc/vpp/startup.conf` file,
-in all VIRL simulations.
+Every VIRL simulation uses the same three-node logical ring topology -
+Traffic Generator (TG node) and two Systems Under Test (SUT1 and SUT2).
+The appropriate pre-built VPP packages built by Jenkins for the patch
+under review are then installed on the two SUTs, along with their
+:file:`/etc/vpp/startup.conf` file, in all VIRL simulations.
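
The dispatch described above is implemented in :file:`csit/bootstrap.sh`. As an
illustration only - not the actual bootstrap logic, with invented group names
and host counters - the idea of "one simulation per test group, placed on the
least-loaded VIRL host" can be sketched in Python as follows::

    from concurrent.futures import ThreadPoolExecutor

    TEST_GROUPS = ["bridge_domain", "dhcp", "ipv4", "ipv6", "vxlan"]  # illustrative names only
    virl_hosts = {"tb4-virl1": 0, "tb4-virl2": 0, "tb4-virl3": 0}     # host -> assigned simulations

    def least_loaded_host():
        """Return the VIRL host with the fewest simulations assigned so far."""
        return min(virl_hosts, key=virl_hosts.get)

    # one VIRL simulation per test group, each placed on the least-loaded host
    assignments = {}
    for group in TEST_GROUPS:
        host = least_loaded_host()
        virl_hosts[host] += 1
        assignments[group] = host

    def run_simulation(group):
        # placeholder: start the simulation on assignments[group], run the
        # group's functional tests, then tear the simulation down
        print(f"test group {group!r} -> VIRL host {assignments[group]}")

    # all simulations run in parallel, one worker per test group
    with ThreadPoolExecutor(max_workers=len(TEST_GROUPS)) as pool:
        list(pool.map(run_simulation, TEST_GROUPS))
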
-SUT Configuration - VIRL Guest VM
----------------------------------
+SUT Settings - VIRL Guest VM
+----------------------------

-Configurations of the SUT VMs is defined in `VIRL topologies directory`_
+SUT VMs' settings are defined in the `VIRL topologies directory`_:

-- List of SUT VM interfaces:::
+- List of SUT VM interfaces:

-- Number of 2MB hugepages: 1024
+- Number of 2MB hugepages: 1024.

-- Maximum number of memory map areas: 20000
+- Maximum number of memory map areas: 20000.

-- Kernel Shared Memory Max: 2147483648 (vm.nr_hugepages * 2 * 1024 * 1024)
+- Kernel Shared Memory Max: 2147483648 (vm.nr_hugepages * 2 * 1024 * 1024).

-SUT Configuration - VIRL Guest OS Linux
----------------------------------------
+SUT Settings - VIRL Guest OS Linux
+----------------------------------

-In CSIT terminology, the VM operating system for both SUTs that |vpp-release| has
-been tested with, is the following:
+In CSIT terminology, the VM operating system for both SUTs that |vpp-release|
+has been tested with is the following:

-#. **Ubuntu VIRL image**
+#. Ubuntu VIRL image

    This image implies Ubuntu 16.04.1 LTS, current as of yyyy-mm-dd (that is,
    package versions are those that would have been installed by a
@@ -77,7 +78,7 @@ been tested with, is the following:
    A replica of this VM image can be built by running the :command:`build.sh`
    script in CSIT repository.

-#. **CentOS VIRL image**
+#. CentOS VIRL image

    This image implies Centos 7.4-1711, current as of yyyy-mm-dd (that is,
    package versions are those that would have been installed by a
@@ -90,7 +91,7 @@ been tested with, is the following:
    A replica of this VM image can be built by running the :command:`build.sh`
    script in CSIT repository.

-#. **Nested VM image**
+#. Nested VM image

    In addition to the "main" VM image, tests which require VPP to communicate to
    a VM over a vhost-user interface, utilize a "nested" VM image.
@@ -104,22 +105,26 @@ been tested with, is the following:
    "nested" image are included in CSIT GIT repository, and the image can be
    rebuilt using the "build.sh" script at `VIRL nested`_.

-DUT Configuration - VPP
------------------------
+DUT Settings - VPP
+------------------

 Every System Under Test runs VPP SW application in Linux user-mode as a
 Device Under Test (DUT) node.

-**DUT port configuration**
+DUT Port Configuration
+~~~~~~~~~~~~~~~~~~~~~~

 Port configuration of DUTs is defined in topology file that is generated per
 VIRL simulation based on the definition stored in `VIRL topologies directory`_.
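
The generated topology file is plain YAML, so its content can be inspected
programmatically. The following is a minimal sketch only, assuming PyYAML is
available and that node definitions sit under a top-level ``nodes`` key; the
file name is illustrative, and the exact set of keys is the one shown in the
example DUT node definition below and in the `VIRL topologies directory`_::

    import yaml

    with open("topology.yaml") as f:          # file name is illustrative
        topology = yaml.safe_load(f)

    for name, node in topology["nodes"].items():
        if node.get("type") != "DUT":
            continue
        print(name, node.get("host"), node.get("arch"))
        # interfaces carry the generated port details, e.g. pci_address and link
        for port, iface in node.get("interfaces", {}).items():
            print("  ", port, iface.get("pci_address"), iface.get("link"))
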
-Example of DUT nodes configuration:::
+Example of DUT nodes configuration:
+
+::

   DUT1:
       type: DUT
       host: "10.30.51.157"
+      arch: x86_64
       port: 22
       username: cisco
       honeycomb:
@@ -176,6 +181,7 @@ Example of DUT nodes configuration:::
   DUT2:
       type: DUT
       host: "10.30.51.156"
+      arch: x86_64
       port: 22
       username: cisco
       honeycomb:
@@ -230,61 +236,45 @@ Example of DUT nodes configuration:::
         pci_address: "0000:00:07.0"
         link: link6

-**VPP Version**
+VPP Version
+~~~~~~~~~~~

 |vpp-release|

-**VPP Installed Packages - Ubuntu**
+VPP Installed Packages - Ubuntu
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 ::

-    $ dpkg -l vpp\*
-    Desired=Unknown/Install/Remove/Purge/Hold
-    | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
-    |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
-    ||/ Name            Version        Architecture  Description
-    +++-===============-==============-=============-=================================================
-    ii  vpp             18.07-release  amd64         Vector Packet Processing--executables
-    ii  vpp-dbg         18.07-release  amd64         Vector Packet Processing--debug symbols
-    ii  vpp-dev         18.07-release  amd64         Vector Packet Processing--development support
-    ii  vpp-dpdk-dkms   18.05-vpp2     amd64         DPDK Development Package for VPP - Kernel Modules
-    ii  vpp-lib         18.07-release  amd64         Vector Packet Processing--runtime libraries
-    ii  vpp-plugins     18.07-release  amd64         Vector Packet Processing--runtime plugins
-
-**VPP Installed Packages - Centos**
+    $ dpkg -l | grep vpp
+    ii  libvppinfra      19.04-release  amd64  Vector Packet Processing--runtime libraries
+    ii  libvppinfra-dev  19.04-release  amd64  Vector Packet Processing--runtime libraries
+    ii  python3-vpp-api  19.04-release  amd64  VPP Python3 API bindings
+    ii  vpp              19.04-release  amd64  Vector Packet Processing--executables
+    ii  vpp-api-python   19.04-release  amd64  VPP Python API bindings
+    ii  vpp-dbg          19.04-release  amd64  Vector Packet Processing--debug symbols
+    ii  vpp-dev          19.04-release  amd64  Vector Packet Processing--development support
+    ii  vpp-plugin-core  19.04-release  amd64  Vector Packet Processing--runtime core plugins
+    ii  vpp-plugin-dpdk  19.04-release  amd64  Vector Packet Processing--runtime dpdk plugin
+
+VPP Installed Packages - Centos
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 ::

-    $ rpm -qai vpp*
-    Name        : vpp
-    Version     : 18.07
-    Release     : release
-    Architecture: x86_64
-    Install Date: Tue 31 Jul 2018 02:59:45 AM EDT
-    Group       : Unspecified
-    Size        : 2396993
-    License     : ASL 2.0
-    Signature   : (none)
-    Source RPM  : vpp-18.07-release.src.rpm
-    Build Date  : Mon 30 Jul 2018 08:20:19 PM EDT
-    Build Host  : c3de88e7d43c
-    Relocations : (not relocatable)
-    Summary     : Vector Packet Processing
-    Description :
-    This package provides VPP executables: vpp, vpp_api_test, vpp_json_test
-    vpp - the vector packet engine
-    vpp_api_test - vector packet engine API test tool
-    vpp_json_test - vector packet engine JSON test tool
+    $ rpm -qai *vpp*
     Name        : vpp-lib
-    Version     : 18.07
+    Version     : 19.04
     Release     : release
     Architecture: x86_64
-    Install Date: Tue 31 Jul 2018 02:59:45 AM EDT
+    Install Date: Thu 25 Apr 2019 04:14:51 AM EDT
     Group       : System Environment/Libraries
-    Size        : 27134058
+    Size        : 39543181
     License     : ASL 2.0
     Signature   : (none)
-    Source RPM  : vpp-18.07-release.src.rpm
-    Build Date  : Mon 30 Jul 2018 08:20:19 PM EDT
-    Build Host  : c3de88e7d43c
+    Source RPM  : vpp-19.04-release.src.rpm
+    Build Date  : Tue 23 Apr 2019 08:46:26 PM EDT
+    Build Host  : 940fc1a9327e
     Relocations : (not relocatable)
     Summary     : VPP libraries
     Description :
@@ -294,34 +284,18 @@ Example of DUT nodes configuration:::
     vlib - vector processing library
     vlib-api - binary API library
     vnet - network stack library
-    Name        : vpp-selinux-policy
-    Version     : 18.07
-    Release     : release
-    Architecture: x86_64
-    Install Date: Tue 31 Jul 2018 02:59:44 AM EDT
-    Group       : System Environment/Base
-    Size        : 86709
-    License     : ASL 2.0
-    Signature   : (none)
-    Source RPM  : vpp-18.07-release.src.rpm
-    Build Date  : Mon 30 Jul 2018 08:20:19 PM EDT
-    Build Host  : c3de88e7d43c
-    Relocations : (not relocatable)
-    Summary     : VPP Security-Enhanced Linux (SELinux) policy
-    Description :
-    This package contains a tailored VPP SELinux policy
     Name        : vpp-devel
-    Version     : 18.07
+    Version     : 19.04
     Release     : release
     Architecture: x86_64
-    Install Date: Tue 31 Jul 2018 02:59:47 AM EDT
+    Install Date: Thu 25 Apr 2019 04:14:52 AM EDT
     Group       : Development/Libraries
-    Size        : 11452203
+    Size        : 12701413
     License     : ASL 2.0
     Signature   : (none)
-    Source RPM  : vpp-18.07-release.src.rpm
-    Build Date  : Mon 30 Jul 2018 08:20:19 PM EDT
-    Build Host  : c3de88e7d43c
+    Source RPM  : vpp-19.04-release.src.rpm
+    Build Date  : Tue 23 Apr 2019 08:46:26 PM EDT
+    Build Host  : 940fc1a9327e
     Relocations : (not relocatable)
     Summary     : VPP header files, static libraries
     Description :
@@ -333,185 +307,85 @@ Example of DUT nodes configuration:::
     vnet - devices, classify, dhcp, ethernet flow, gre, ip, etc.
     vpp-api
     vppinfra
+    Name        : vpp-selinux-policy
+    Version     : 19.04
+    Release     : release
+    Architecture: x86_64
+    Install Date: Thu 25 Apr 2019 04:14:49 AM EDT
+    Group       : System Environment/Base
+    Size        : 102155
+    License     : ASL 2.0
+    Signature   : (none)
+    Source RPM  : vpp-19.04-release.src.rpm
+    Build Date  : Tue 23 Apr 2019 08:46:26 PM EDT
+    Build Host  : 940fc1a9327e
+    Relocations : (not relocatable)
+    Summary     : VPP Security-Enhanced Linux (SELinux) policy
+    Description :
+    This package contains a tailored VPP SELinux policy
     Name        : vpp-plugins
-    Version     : 18.07
+    Version     : 19.04
     Release     : release
     Architecture: x86_64
-    Install Date: Tue 31 Jul 2018 02:59:47 AM EDT
+    Install Date: Thu 25 Apr 2019 04:14:51 AM EDT
     Group       : System Environment/Libraries
-    Size        : 52282610
+    Size        : 22696981
     License     : ASL 2.0
     Signature   : (none)
-    Source RPM  : vpp-18.07-release.src.rpm
-    Build Date  : Mon 30 Jul 2018 08:20:19 PM EDT
-    Build Host  : c3de88e7d43c
+    Source RPM  : vpp-19.04-release.src.rpm
+    Build Date  : Tue 23 Apr 2019 08:46:26 PM EDT
+    Build Host  : 940fc1a9327e
     Relocations : (not relocatable)
     Summary     : Vector Packet Processing--runtime plugins
     Description :
     This package contains VPP plugins
+    Name        : vpp-api-python
+    Version     : 19.04
+    Release     : release
+    Architecture: x86_64
+    Install Date: Thu 25 Apr 2019 04:14:51 AM EDT
+    Group       : Development/Libraries
+    Size        : 164979
+    License     : ASL 2.0
+    Signature   : (none)
+    Source RPM  : vpp-19.04-release.src.rpm
+    Build Date  : Tue 23 Apr 2019 08:46:26 PM EDT
+    Build Host  : 940fc1a9327e
+    Relocations : (not relocatable)
+    Summary     : VPP api python bindings
+    Description :
+    This package contains the python bindings for the vpp api
+    Name        : vpp
+    Version     : 19.04
+    Release     : release
+    Architecture: x86_64
+    Install Date: Thu 25 Apr 2019 04:14:51 AM EDT
+    Group       : Unspecified
+    Size        : 2496078
+    License     : ASL 2.0
+    Signature   : (none)
+    Source RPM  : vpp-19.04-release.src.rpm
+    Build Date  : Tue 23 Apr 2019 08:46:26 PM EDT
+    Build Host  : 940fc1a9327e
+    Relocations : (not relocatable)
+    Summary     : Vector Packet Processing
+    Description :
+    This package provides VPP executables: vpp, vpp_api_test, vpp_json_test
+    vpp - the vector packet engine
+    vpp_api_test - vector packet engine API test tool
+    vpp_json_test - vector packet engine JSON test tool
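
A quick consistency check of the listings above can be run on the SUT itself.
The snippet below is an illustration only, not part of CSIT: on an Ubuntu SUT
it asserts that every installed VPP package reports the same version string
(e.g. ``19.04-release``); on a CentOS SUT the same idea applies using
``rpm -qa``::

    import subprocess

    # list installed packages as "name version" pairs (Ubuntu/Debian SUT)
    out = subprocess.run(["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
                         capture_output=True, text=True, check=True).stdout

    versions = {}
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2 and "vpp" in parts[0]:
            versions[parts[0]] = parts[1]

    # all VPP packages are expected to come from the same release build
    assert len(set(versions.values())) == 1, f"mixed VPP versions: {versions}"
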

-**VPP Startup Configuration**
+VPP Startup Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~

 VPP startup configuration is common for all test cases except test cases related
 to SW Crypto device.

-**Default**
+**Common Configuration**

-::
-
-  $ cat /etc/vpp/startup.conf
-  unix {
-    nodaemon
-    log /var/log/vpp/vpp.log
-    full-coredump
-    cli-listen /run/vpp/cli.sock
-    gid vpp
-  }
-
-  api-trace {
-    ## This stanza controls binary API tracing. Unless there is a very strong reason,
-    ## please leave this feature enabled.
-    on
-    ## Additional parameters:
-    ##
-    ## To set the number of binary API trace records in the circular buffer, configure nitems
-    ##
-    ## nitems <nnn>
-    ##
-    ## To save the api message table decode tables, configure a filename. Results in /tmp/<filename>
-    ## Very handy for understanding api message changes between versions, identifying missing
-    ## plugins, and so forth.
-    ##
-    ## save-api-table <filename>
-  }
-
-  api-segment {
-    gid vpp
-  }
-
-  socksvr {
-    default
-  }
-
-  cpu {
-    ## In the VPP there is one main thread and optionally the user can create worker(s)
-    ## The main thread and worker thread(s) can be pinned to CPU core(s) manually or automatically
-
-    ## Manual pinning of thread(s) to CPU core(s)
-
-    ## Set logical CPU core where main thread runs, if main core is not set
-    ## VPP will use core 1 if available
-    # main-core 1
-
-    ## Set logical CPU core(s) where worker threads are running
-    # corelist-workers 2-3,18-19
-
-    ## Automatic pinning of thread(s) to CPU core(s)
-
-    ## Sets number of CPU core(s) to be skipped (1 ... N-1)
-    ## Skipped CPU core(s) are not used for pinning main thread and working thread(s).
-    ## The main thread is automatically pinned to the first available CPU core and worker(s)
-    ## are pinned to next free CPU core(s) after core assigned to main thread
-    # skip-cores 4
-
-    ## Specify a number of workers to be created
-    ## Workers are pinned to N consecutive CPU cores while skipping "skip-cores" CPU core(s)
-    ## and main thread's CPU core
-    # workers 2
-
-    ## Set scheduling policy and priority of main and worker threads
-
-    ## Scheduling policy options are: other (SCHED_OTHER), batch (SCHED_BATCH)
-    ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
-    # scheduler-policy fifo
-
-    ## Scheduling priority is used only for "real-time policies (fifo and rr),
-    ## and has to be in the range of priorities supported for a particular policy
-    # scheduler-priority 50
-  }
+The default startup configuration, as defined in `VPP startup.conf`_, is used.

-  # dpdk {
-    ## Change default settings for all intefaces
-    # dev default {
-      ## Number of receive queues, enables RSS
-      ## Default is 1
-      # num-rx-queues 3
-
-      ## Number of transmit queues, Default is equal
-      ## to number of worker threads or 1 if no workers treads
-      # num-tx-queues 3
-
-      ## Number of descriptors in transmit and receive rings
-      ## increasing or reducing number can impact performance
-      ## Default is 1024 for both rx and tx
-      # num-rx-desc 512
-      # num-tx-desc 512
-
-      ## VLAN strip offload mode for interface
-      ## Default is off
-      # vlan-strip-offload on
-    # }
-
-    ## Whitelist specific interface by specifying PCI address
-    # dev 0000:02:00.0
-
-    ## Whitelist specific interface by specifying PCI address and in
-    ## addition specify custom parameters for this interface
-    # dev 0000:02:00.1 {
-      # num-rx-queues 2
-    # }
-
-    ## Specify bonded interface and its slaves via PCI addresses
-    ##
-    ## Bonded interface in XOR load balance mode (mode 2) with L3 and L4 headers
-    # vdev eth_bond0,mode=2,slave=0000:02:00.0,slave=0000:03:00.0,xmit_policy=l34
-    # vdev eth_bond1,mode=2,slave=0000:02:00.1,slave=0000:03:00.1,xmit_policy=l34
-    ##
-    ## Bonded interface in Active-Back up mode (mode 1)
-    # vdev eth_bond0,mode=1,slave=0000:02:00.0,slave=0000:03:00.0
-    # vdev eth_bond1,mode=1,slave=0000:02:00.1,slave=0000:03:00.1
-
-    ## Change UIO driver used by VPP, Options are: igb_uio, vfio-pci,
-    ## uio_pci_generic or auto (default)
-    # uio-driver vfio-pci
-
-    ## Disable mutli-segment buffers, improves performance but
-    ## disables Jumbo MTU support
-    # no-multi-seg
-
-    ## Increase number of buffers allocated, needed only in scenarios with
-    ## large number of interfaces and worker threads. Value is per CPU socket.
-    ## Default is 16384
-    # num-mbufs 128000
-
-    ## Change hugepages allocation per-socket, needed only if there is need for
-    ## larger number of mbufs. Default is 256M on each detected CPU socket
-    # socket-mem 2048,2048
-
-    ## Disables UDP / TCP TX checksum offload. Typically needed for use
-    ## faster vector PMDs (together with no-multi-seg)
-    # no-tx-checksum-offload
-  # }
-
-
-  # plugins {
-    ## Adjusting the plugin path depending on where the VPP plugins are
-    # path /home/bms/vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins
-
-    ## Disable all plugins by default and then selectively enable specific plugins
-    # plugin default { disable }
-    # plugin dpdk_plugin.so { enable }
-    # plugin acl_plugin.so { enable }
-
-    ## Enable all plugins by default and then selectively disable specific plugins
-    # plugin dpdk_plugin.so { disable }
-    # plugin acl_plugin.so { disable }
-  # }
-
-  ## Alternate syntax to choose plugin path
-  # plugin_path /home/bms/vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins
-
-**SW Crypto Device**
+**SW Crypto Device Configuration**

 ::

@@ -534,15 +408,16 @@ to SW Crypto device.
     vdev cryptodev_aesni_mb_pmd,socket_id=0
   }

-TG Configuration
-----------------
+TG Settings - Scapy
+-------------------

 Traffic Generator node is VM running the same OS Linux as SUTs. Ports of
 this VM are used as source (Tx) and destination (Rx) ports for the traffic.

 Traffic scripts of test cases are executed on this VM.

-**TG VM configuration**
+TG VM Configuration
+~~~~~~~~~~~~~~~~~~~

 Configuration of the TG VMs is defined in `VIRL topologies directory`_.

@@ -557,7 +432,8 @@ Configuration of the TG VMs is defined in `VIRL topologies directory`_.

-**TG node port configuration**
+TG Port Configuration
+~~~~~~~~~~~~~~~~~~~~~

 Port configuration of TG is defined in topology file that is generated per
 VIRL simulation based on the definition stored in `VIRL topologies directory`_.

@@ -567,6 +443,7 @@ Example of TG node configuration:::
   TG:
       type: TG
       host: "10.30.51.155"
+      arch: x86_64
       port: 22
       username: cisco
       priv_key: |
@@ -620,8 +497,9 @@ Example of TG node configuration:::
         link: link5
         driver: virtio-pci

-**Traffic generator**
+Traffic Generator
+~~~~~~~~~~~~~~~~~

-Functional tests utilize Scapy as a traffic generator. There was used Scapy
-v2.3.1 for |vpp-release| tests.
+Functional tests utilize Scapy as a traffic generator. Scapy v2.3.1 is
+used for |vpp-release| tests.
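
As an illustration of the kind of operation a Scapy-based traffic script
performs - this is not an excerpt from a CSIT traffic script, and the
interface name and addresses are made up for the example - a single ICMP
request/reply exchange on a TG port can be sketched as::

    from scapy.all import Ether, IP, ICMP, srp1

    tx_if = "eth1"                            # TG port facing SUT1 (illustrative)
    pkt = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") /
           IP(src="192.168.1.1", dst="192.168.2.1") /
           ICMP())

    # send the crafted frame on the Tx port and wait for a single reply
    reply = srp1(pkt, iface=tx_if, timeout=5, verbose=False)
    assert reply is not None and reply.haslayer(ICMP), "no ICMP reply received"
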