Test Environment\r
================\r
\r
CSIT performance tests are executed on three identical physical
testbeds hosted by the Linux Foundation for the FD.io project. Each
testbed consists of two servers acting as Systems Under Test (SUT) and
one server acting as Traffic Generator (TG).
\r
Server Specification and Configuration
--------------------------------------
\r
Complete specification and configuration of the compute servers used in
CSIT physical testbeds are maintained on the wiki page
`CSIT LF Testbeds <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
\r
SUT Configuration
-----------------
\r
**Host configuration**\r
\r
- 1x Intel X710 NIC (10GB, 2 ports),\r
- 1x Cisco VIC 1227 (10GB, 2 ports).\r
\r
This allows for a total of five ring topologies, each using ports on a
specific NIC model, enabling per-NIC-model benchmarking.
\r
- 0a:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
  Network Connection (rev 01) Subsystem: Intel Corporation Ethernet Server

SUT Configuration - Host OS Linux
---------------------------------
\r
Software details (OS, configuration) of the physical testbeds are
maintained on the wiki page
`CSIT LF Testbeds <https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.
\r
System provisioning is done by a combination of PXE boot unattended
install and `Ansible <https://www.ansible.com>`_, as described in
`CSIT Testbed Setup`_.
\r
Below is a subset of the running configuration:
\r
- **isolcpus=<cpu number>-<cpu number>** - used for all CPU cores apart
  from the first core of each socket, for running VPP worker threads and
  Qemu/LXC processes.
  https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
- **intel_pstate=disable** - [X86] Do not enable intel_pstate as the
  default scaling driver for the supported processors. The Intel P-State
  driver decides what P-state (CPU core power state) to use based on the
  requesting policy from the CPUfreq core.
- **rcu_nocbs** - [KNL] In kernels built with CONFIG_RCU_NOCB_CPU=y, set
  the specified list of CPUs to be no-callback CPUs that never queue RCU
  callbacks (read-copy update).
  https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
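
The ``isolcpus`` and ``rcu_nocbs`` values above are kernel CPU list
specifications. As a minimal illustrative sketch (not part of CSIT
tooling; the command line and core counts below are hypothetical), such
a list can be parsed and expanded like this:

.. code-block:: python

    # Illustrative only: parse a kernel command line and expand an
    # isolcpus CPU list specification into individual CPU ids.

    def parse_cmdline(cmdline):
        """Split a kernel command line into a {parameter: value} dict."""
        params = {}
        for token in cmdline.split():
            key, _, value = token.partition("=")
            params[key] = value
        return params

    def expand_cpu_list(spec):
        """Expand a CPU list such as '1-17,19-35' into CPU ids."""
        cpus = []
        for part in spec.split(","):
            start, _, end = part.partition("-")
            cpus.extend(range(int(start), int(end or start) + 1))
        return cpus

    # Hypothetical command line for a dual-socket, 18-core-per-socket SUT:
    cmdline = "isolcpus=1-17,19-35 intel_pstate=disable rcu_nocbs=1-17,19-35"
    isolated = expand_cpu_list(parse_cmdline(cmdline)["isolcpus"])
    print(len(isolated))  # 34 - all cores except core 0 of each socket

On a live SUT, the same parsing could be applied to the contents of
``/proc/cmdline`` to verify which of the parameters above are active.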
\r
**Applied command line boot parameters:**\r
\r