FD.io VPP software data plane technology has become very popular across
a wide range of VPP ecosystem use cases, putting higher pressure on
continuous verification of VPP software quality.
This document describes a proposal for the design and implementation of
extended continuous VPP testing by extending existing test environments.
Furthermore, it describes and summarizes implementation details of the
Integration and System test platform *1-Node VPP_Device*. It aims to provide
a complete end-to-end view of the *1-Node VPP_Device* environment in order to
improve extensibility and maintenance, under the guidance of the VPP core
team.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in :rfc:`8174`.
.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/vpp_device_tests/}}
                \includegraphics[width=0.90\textwidth]{vpp_device}
                \label{fig:vpp_device}
        \end{figure}

.. only:: html

    .. figure:: vpp_device.svg
        :alt: vpp_device
        :align: center
All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` vpp-device tests are executed on physical testbeds
built with bare-metal servers hosted by the :abbr:`LF (Linux Foundation)`
FD.io project. The following 1-node testbed topology is used:

- **2-Container Topology**: Consisting of one Docker container acting as SUT
  (System Under Test) and one Docker container as TG (Traffic Generator), both
  connected in a ring topology via physical NIC cross-connecting.
Current FD.io production testbeds are built with servers based on one
processor generation of Intel Xeons: Skylake (Platinum 8180). Testbeds built
with servers based on Arm processors are in the process of being added to
FD.io production.
The following section describes the existing production 1n-skx testbed.
1-Node Xeon Skylake (1n-skx)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 1n-skx testbed is based on a single SuperMicro SYS-7049GP-TRT server
equipped with two Intel Xeon Skylake Platinum 8180 2.5 GHz 28-core processors.
The physical testbed topology is depicted in the figure below.
.. only:: latex

    .. raw:: latex

        \begin{figure}[H]
            \centering
                \graphicspath{{../_tmp/src/vpp_device_tests/}}
                \includegraphics[width=0.90\textwidth]{vf-2n-nic2nic}
                \label{fig:vf-2n-nic2nic}
        \end{figure}

.. only:: html

    .. figure:: vf-2n-nic2nic.svg
        :alt: vf-2n-nic2nic
        :align: center
The server is populated with the following NIC models:

#. NIC-1: x710-da4 4p10GE Intel.
#. NIC-2: E810-2CQDA2 2p100GbE Intel.
All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux, with 56 logical
cores and 28 physical cores per processor socket.
NIC interfaces are shared using Linux vfio_pci and VPP VF drivers:

- Fortville AVF driver.

The provided Intel x710-da4 4p10GE NICs support 32 VFs per interface,
128 per NIC.
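
The advertised VF capability can be read directly from sysfs; the PCI address
below is an illustrative placeholder, not a fixed lab value:

.. code-block:: bash

   # Read the maximum number of VFs a PF can expose (example PCI address).
   $ cat /sys/bus/pci/devices/0000:18:00.0/sriov_totalvfs
   32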
A total of two 1n-skx testbeds are in operation in FD.io labs.
1-Node Virtualbox (1n-vbox)
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 1n-skx testbed can run in a single VirtualBox VM. This solution replaces
the previously used Vagrant environment based on 3 VMs.
The VirtualBox VM MAY be created by Vagrant and MUST have 4 additional virtio
NICs, with each pair attached to a separate private network to simulate
back-to-back connections. The NICs SHOULD use the 82545EM device model
(otherwise the bootstrap scripts need to be changed accordingly). Example of
Vagrant configuration:
.. code-block:: ruby

    Vagrant.configure(2) do |c|
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port1", nic_type: "82545EM"
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port2", nic_type: "82545EM"

      c.vm.provider :virtualbox do |v|
        v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc4", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc5", "allow-all"]
      end
    end
The Vagrant VM is populated with the following NIC models:

#. NIC-1: 82545EM Intel.
#. NIC-2: 82545EM Intel.
#. NIC-3: 82545EM Intel.
#. NIC-4: 82545EM Intel.
It was agreed on a :abbr:`TWS (Technical Work Stream)` call to continue with
Ubuntu 18.04 LTS as the baseline system, with an OPTIONAL extension to
CentOS 7 and SuSE on demand [#TWSLink]_.
All :abbr:`DCR (Docker container)` images are REQUIRED to be hosted on a
Docker registry available from the LF network, publicly available and
trackable. For backup, tracking and contribution purposes, all Dockerfiles
(including files needed for building the containers) MUST be available and
stored in the [#fdiocsitgerrit]_ repository under appropriate folders. This
allows a peer review process for every infrastructure change related to the
scope of this document. Currently only the **csit-shim-dcr** and
**csit-sut-dcr** containers will be stored and maintained in the CSIT
repository by CSIT contributors.
At the time of designing the solution described in this document, the
interconnection between [#dockerhub]_ and [#fdiocsitgerrit]_ for automated
build purposes and image hosting could not be established with the trust and
security required by the FD.io project. Until this is addressed, :abbr:`DCR
(Docker container)` images will be placed in a custom registry service
[#fdioregistry]_.
Automated Jenkins jobs will be created in alignment with the long-term
solution for container lifecycle management and the ability to build new
versions of Docker images.
In parallel, an effort has been started to find an outsourced Docker registry
service.
As of the initial version of vpp-device, only a single latest version of the
Docker image is hosted on [#dockerhub]_. This will be addressed as a further
improvement with proper semantic versioning.
jenkins-slave-dcr
~~~~~~~~~~~~~~~~~

This :abbr:`DCR (Docker container)` acts as the Jenkins slave (also known as
a Jenkins minion). It connects over the SSH protocol to TCP port 6022 of
**csit-shim-dcr** and executes a non-interactive reservation script. Nomad is
responsible for scheduling this container's execution onto a specific
**1-Node VPP_Device** testbed. It executes the
:abbr:`CSIT (Continuous System Integration and Testing)` environment including
the CSIT framework.
All software dependencies, including VPP/DPDK, that are not present in the
**csit-sut-dcr** container image and/or need to be compiled prior to running
on **csit-sut-dcr** SHOULD be compiled in this container.
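
A minimal sketch of such a build step, assuming a plain VPP source checkout
(``make install-dep`` and ``make pkg-deb`` are standard VPP build targets):

.. code-block:: bash

   # Build VPP Debian packages on jenkins-slave-dcr so that csit-sut-dcr
   # only needs to install the resulting artifacts.
   git clone https://gerrit.fd.io/r/vpp && cd vpp
   make install-dep    # install build dependencies
   make pkg-deb        # .deb packages end up under build-root/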
- *Container Image Location*: Docker image at snergster/vpp-ubuntu18.

- *Container Definition*: Docker file specified at [#JenkinsSlaveDcrFile]_.

- *Initializing*: Container is initialized from within *Consul by HashiCorp*
  and *Nomad by HashiCorp*.
csit-shim-dcr
~~~~~~~~~~~~~

This :abbr:`DCR (Docker container)` acts as an intermediate layer running a
script responsible for orchestrating topologies under test and reservation.
It is responsible for managing VF resources and their allocation to the
:abbr:`DUT (Device Under Test)` and :abbr:`TG (Traffic Generator)` containers.
This MUST be done on **csit-shim-dcr**. This image also acts as the generic
reservation mechanics arbiter, making sure that only Y simulations are
spawned on any given HW node.
- *Container Image Location*: Docker image at snergster/csit-shim.

- *Container Definition*: Docker file specified at [#CsitShimDcrFile]_.

- *Initializing*: Container is initialized from within *Consul by HashiCorp*
  and *Nomad by HashiCorp*. Required Docker parameters, to be able to run
  nested containers with the VF reservation system, are: privileged, net=host,
  pid=host.
- *Connectivity*: Over SSH only, using <host>:6022 format. Currently using
  the *root* user account as primary. From the Jenkins slave it will be able
  to connect via an environment variable, since the Jenkins slave doesn't
  actually know what host it is running on.

  .. code-block:: bash

     ssh -p 6022 root@10.30.51.<node>
csit-sut-dcr
~~~~~~~~~~~~

This :abbr:`DCR (Docker container)` acts as an :abbr:`SUT (System Under
Test)`. Any :abbr:`DUT (Device Under Test)` or :abbr:`TG (Traffic Generator)`
application is installed there. It is RECOMMENDED to install the DUT and
all DUT dependencies via ``rpm -ihv`` on RedHat-based OSes or ``dpkg -i``
on Debian-based OSes.
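
For example, installing a previously built VPP as the DUT on a Debian-based
SUT might look as follows (the package file names are illustrative
placeholders):

.. code-block:: bash

   # Install VPP core and plugin packages built earlier on jenkins-slave-dcr.
   dpkg -i vpp_*.deb vpp-plugin-core_*.deb vpp-plugin-dpdk_*.deb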
The container is designed to be a very lightweight Docker image that only
installs packages and executes binaries (previously built or downloaded on
**jenkins-slave-dcr**) and contains the libraries necessary to run the CSIT
framework, including those required by DUT/TG.
- *Container Image Location*: Docker image at snergster/csit-sut.

- *Container Definition*: Docker file specified at [#CsitSutDcrFile]_.
The container is started with the following Docker parameters:

.. code-block:: bash

    # Run the container in the background and print the new container ID.
    dcr_stc_params="--detach=true "
    # Give extended privileges to this container. A "privileged" container is
    # given access to all devices and able to run nested containers.
    dcr_stc_params+="--privileged "
    # Publish all exposed ports to random ports on the host interfaces.
    dcr_stc_params+="--publish-all "
    # Automatically remove the container when it exits.
    dcr_stc_params+="--rm "
    # Size of /dev/shm.
    dcr_stc_params+="--shm-size 512M "
    # Override access to PCI bus by attaching a filesystem mount to the
    # container.
    dcr_stc_params+="--mount type=tmpfs,destination=/sys/bus/pci/devices "
    # Mount vfio to be able to bind to see bound interfaces. We cannot use
    # --device=/dev/vfio as this does not see newly bound interfaces.
    dcr_stc_params+="--volume /dev/vfio:/dev/vfio "
    # Mount docker.sock to be able to use the docker daemon of the host.
    dcr_stc_params+="--volume /var/run/docker.sock:/var/run/docker.sock "
    # Mount /opt/boot/ where VM kernel and initrd are located.
    dcr_stc_params+="--volume /opt/boot/:/opt/boot/ "
    # Mount host hugepages for VMs.
    dcr_stc_params+="--volume /dev/hugepages/:/dev/hugepages/ "
The container name is concatenated from the **csit-** prefix and a UUID
generated uniquely for each container instance.
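
A sketch of how these pieces fit together when starting the SUT container;
the parameters come from the block above, while the command shown is
illustrative, not the actual startup command:

.. code-block:: bash

   # Start csit-sut-dcr with the assembled parameters and a unique name.
   docker run ${dcr_stc_params} --name "csit-$(uuidgen)" \
       snergster/csit-sut /usr/sbin/sshd -D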
- *Connectivity*: Over SSH only, using <host>[:<port>] format. Currently
  using the *root* user account as primary.

  .. code-block:: bash

     ssh -p <port> root@10.30.51.<node>
The container is required to run as ``--privileged`` due to the ability to
create nested containers and have full read/write access to sysfs (for
bind/unbind). Docker automatically picks a free network port
(``--publish-all``) to allow connecting over SSH. To limit access to the PCI
bus, the container creates a tmpfs mount over the PCI bus tree. The CSIT
reservation script dynamically links only the PCI devices (NIC cards) that
are reserved for a particular container, so it does not collide with other
containers. To make vfio work, access to ``/dev/vfio`` must be granted.
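
A simplified sketch of this PCI-limiting mechanism; the PCI address and
container name are hypothetical placeholders:

.. code-block:: bash

   # With /sys/bus/pci/devices shadowed by tmpfs inside the container, only
   # explicitly linked devices become visible there.
   pci_addr="0000:18:02.0"    # a VF reserved for this container (placeholder)
   pci_real="$(readlink -f /sys/bus/pci/devices/${pci_addr})"
   docker exec "${container}" ln -s "${pci_real}" \
       "/sys/bus/pci/devices/${pci_addr}"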
.. todo:: Change default user to testuser with non-privileged and install
   sudo.
Environment initialization
--------------------------
All 1-node servers are managed and provisioned via the [#ansiblelink]_ set of
playbooks with the *vpp-device* role. Full playbooks can be found under the
[#fdiocsitansible]_ directory. This way we are able to track all configuration
changes of the physical servers in Gerrit (in structured YAML format), extend
*vpp-device* to additional servers with less effort, and re-stage servers in
case of failure.
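
A hypothetical provisioning run (the playbook and inventory names are
placeholders; the real ones live under the [#fdiocsitansible]_ tree):

.. code-block:: bash

   # Re-stage all vpp-device hosts from the tracked YAML configuration.
   ansible-playbook --inventory inventories/lf_inventory site.yaml \
       --limit vpp_device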
SR-IOV VF initialization is done via a ``systemd`` service during host system
boot. A service named *csit-initialize-vfs.service* is created under the
systemd system context (``/etc/systemd/system/``). By default the service
calls ``/usr/local/bin/csit-initialize-vfs.sh`` with a single parameter:
- **start**: Creates the maximum number of :abbr:`VFs (Virtual Functions)`
  (detected from ``sriov_totalvfs``) for each whitelisted PCI device.
- **stop**: Removes all :abbr:`VFs (Virtual Functions)` for all whitelisted
  PCI devices.
The service is considered active even after all of its processes have exited
successfully. Stopping the service will automatically remove the
:abbr:`VFs (Virtual Functions)`.
.. code-block:: console

    [Unit]
    Description=CSIT Initialize SR-IOV VFs
    After=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=True
    ExecStart=/usr/local/bin/csit-initialize-vfs.sh start
    ExecStop=/usr/local/bin/csit-initialize-vfs.sh stop

    [Install]
    WantedBy=default.target
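
The unit is activated via the standard systemd workflow, for example:

.. code-block:: bash

   sudo systemctl daemon-reload
   sudo systemctl enable --now csit-initialize-vfs.service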
The script is driven by two array variables, ``PCI_BLACKLIST`` and
``PCI_WHITELIST``. They MUST store all PCI addresses in
**<domain>:<bus>:<device>.<func>** format (a hypothetical example follows
the list), where:

- **PCI_BLACKLIST**: PCI addresses to be skipped from
  :abbr:`VFs (Virtual Functions)` initialization (useful for e.g. excluding
  management network interfaces).
- **PCI_WHITELIST**: PCI addresses to be included for
  :abbr:`VFs (Virtual Functions)` initialization.
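
A hypothetical fragment of ``csit-initialize-vfs-data.sh`` illustrating both
variables (all addresses are placeholders):

.. code-block:: bash

   # PCI addresses in <domain>:<bus>:<device>.<func> format.
   PCI_WHITELIST=( "0000:18:00.0" "0000:18:00.1" "0000:3b:00.0" "0000:3b:00.1" )
   PCI_BLACKLIST=( "0000:02:00.0" )   # e.g. a management network interface
   # Optional explicit PF pairing, consumed by the VLAN assignment code below.
   declare -A PF_INDICES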
During the topology initialization phase of the script, a mutex is used to
prevent multiple instances of the script from interacting with each other
during resource allocation. Mutual exclusion ensures that no two distinct
instances of the script will get the same network device.
The reservation function reads the list of all available virtual function
network devices in the system:
.. code-block:: bash

    # Find the first ${device_count} number of available TG Linux network
    # VF device names. Only allowed VF PCI IDs are filtered.
    for netdev in ${tg_netdev[@]}
    do
        for netdev_path in $(grep -l "${pci_id}" \
            /sys/class/net/${netdev}*/device/device \
            2> /dev/null)
        do
            if [[ ${#TG_NETDEVS[@]} -lt ${device_count} ]]; then
                tg_netdev_name=$(dirname ${netdev_path})
                tg_netdev_name=$(dirname ${tg_netdev_name})
                TG_NETDEVS+=($(basename ${tg_netdev_name}))
            else
                break
            fi
        done
        if [[ ${#TG_NETDEVS[@]} -eq ${device_count} ]]; then
            break
        fi
    done
Here ``${pci_id}`` is the PCI device ID of the white-listed VFs. For more
information please see [#pciids]_. This acts as a security constraint to
prevent taking other, unwanted interfaces.
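
For instance, Intel Fortville (700-series) virtual functions report PCI device
ID 0x154c, so a matching filter value would be:

.. code-block:: bash

   # Only netdevs whose sysfs device ID matches this VF PCI ID are eligible.
   pci_id="0x154c"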
The output list of all VF network devices is split into two lists, for the TG
and SUT sides of the connection. The first two items from each TG or SUT
network device list are taken and exposed directly to the container's
namespace. This can be done with the following commands:

.. code-block:: bash

   $ ip link set ${netdev} netns ${DCR_CPIDS[tg]}
   $ ip link set ${netdev} netns ${DCR_CPIDS[dut1]}
At this stage, symbolic links to the PCI devices under the sysfs bus directory
tree are also created in the running containers. Once the VF devices are
assigned to the container namespace and the PCI devices are linked to the
running containers, the mutex is released. A selected VF network device
automatically disappears from the parent namespace, so another instance of
the script will not find the device under that namespace.
Once the Docker container exits, the network device is returned to the parent
namespace and can be reused.
Network traffic isolation - Intel i40evf
----------------------------------------
In a virtualized environment, on Intel(R) Server Adapters that support SR-IOV,
the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control),
IEEE 802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, configure all SR-IOV enabled
ports for VLAN tagging. This configuration allows unexpected, and potentially
malicious, frames to be dropped. [#inteli40e]_
To configure VLAN tagging for the ports on an SR-IOV enabled adapter,
use the following command. The VLAN configuration SHOULD be done
before the VF driver is loaded or the VM is booted. [#inteli40e]_

.. code-block:: bash

   $ ip link set dev <PF netdev id> vf <id> vlan <vlan id>
For example, the following instructions will configure PF eth0 and
the first VF on VLAN 10.

.. code-block:: bash

   $ ip link set dev eth0 vf 0 vlan 10
VLAN Tag Packet Steering allows sending all packets with a specific VLAN tag
to a particular SR-IOV virtual function (VF). Further, this feature allows
designating a particular VF as trusted, and allows that trusted VF to request
selective promiscuous mode on the Physical Function (PF). [#inteli40e]_
To set a VF as trusted or untrusted, enter the following command in the
Hypervisor. [#inteli40e]_

.. code-block:: bash

   $ ip link set dev eth0 vf 1 trust [on|off]
Once the VF is designated as trusted, use the following commands in the VM
to set the VF to promiscuous mode. [#inteli40e]_

- For promiscuous all:

  .. code-block:: bash

     $ ip link set eth2 promisc on

- For promiscuous Multicast:

  .. code-block:: bash

     $ ip link set eth2 allmulti on
By default, the ethtool priv-flag vf-true-promisc-support is set to *off*,
meaning that promiscuous mode for the VF will be limited. To set the
promiscuous mode for the VF to true promiscuous and allow the VF to see
all ingress traffic, use the following command.

.. code-block:: bash

   $ ethtool --set-priv-flags p261p1 vf-true-promisc-support on

The vf-true-promisc-support priv-flag does not enable promiscuous mode;
rather, it designates which type of promiscuous mode (limited or true)
you will get when you enable promiscuous mode using the ip link commands
above. Note that this is a global setting that affects the entire device.
However, the vf-true-promisc-support priv-flag is only exposed to the first
PF of the device. The PF remains in limited promiscuous mode (unless it
is in MFP mode) regardless of the vf-true-promisc-support setting.
[#inteli40e]_
The service described earlier, *csit-initialize-vfs.service*, is responsible
for assigning 802.1Q VLAN tagging to each virtual function via the physical
function from the list of white-listed PCI addresses, using the following
(simplified) code.
.. code-block:: bash

    SCRIPT_DIR="$(dirname $(readlink -e "${BASH_SOURCE[0]}"))"
    source "${SCRIPT_DIR}/csit-initialize-vfs-data.sh"

    # Initialize whitelisted NICs with maximum number of VFs.
    pci_idx=0
    for pci_addr in ${PCI_WHITELIST[@]}; do
        if ! [[ ${PCI_BLACKLIST[*]} =~ "${pci_addr}" ]]; then
            pci_path="/sys/bus/pci/devices/${pci_addr}"
            # SR-IOV initialization
            case "${1:-start}" in
                "start" )
                    sriov_totalvfs=$(< "${pci_path}"/sriov_totalvfs)
                    ;;
                "stop" )
                    sriov_totalvfs=0
                    ;;
            esac
            echo ${sriov_totalvfs} > "${pci_path}"/sriov_numvfs
            # SR-IOV 802.1Q isolation
            case "${1:-start}" in
                "start" )
                    pf=$(basename "${pci_path}"/net/*)
                    for vf in $(seq "${sriov_totalvfs}"); do
                        # PCI address index in array (pairing siblings).
                        if [[ -n ${PF_INDICES[@]} ]]
                        then
                            vlan_pf_idx=${PF_INDICES[$pci_addr]}
                        else
                            vlan_pf_idx=$((pci_idx % (${#PCI_WHITELIST[@]}/2)))
                        fi
                        # 802.1Q base offset.
                        vlan_bs_off=1100
                        # 802.1Q PF PCI address offset.
                        vlan_pf_off=$(( vlan_pf_idx * 100 + vlan_bs_off ))
                        # 802.1Q VF PCI address offset.
                        vlan_vf_off=$(( vlan_pf_off + vf - 1 ))
                        # 802.1Q VLAN string.
                        vlan_str="vlan ${vlan_vf_off}"
                        # MAC address string.
                        mac5="$(printf '%x' ${pci_idx})"
                        mac6="$(printf '%x' $(( vf - 1 )))"
                        mac_str="mac ba:dc:0f:fe:${mac5}:${mac6}"
                        # Set 802.1Q VLAN id and MAC address.
                        ip link set ${pf} vf $(( vf - 1 )) ${mac_str} ${vlan_str}
                        ip link set ${pf} vf $(( vf - 1 )) trust on
                        ip link set ${pf} vf $(( vf - 1 )) spoof off
                    done
                    pci_idx=$(( pci_idx + 1 ))
                    ;;
            esac
        fi
    done
Assignment starts at VLAN 1100 and increments by 1 for each VF and by 100 for
each white-listed PCI address, up to the middle of the PCI list. The second
half of the list is assumed to be the directly (cable) paired siblings, and is
assigned the same 802.1Q VLANs as its siblings.
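
For illustration, with four white-listed PFs and the 1100 base offset: VFs 0
and 1 on PF index 0 receive VLANs 1100 and 1101, VFs on PF index 1 receive
VLANs 1200 and 1201, and PF indices 2 and 3 (the cabled siblings of indices 0
and 1) repeat VLANs 1100/1101 and 1200/1201, so each cross-connected pair
shares a VLAN range while staying isolated from the other pairs.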
Switch to non-privileged containers: As of now all three container flavors
are using privileged containers to make things work. Explore options to
switch containers to non-privileged with explicit rather than implicit
privileges.
Switch to testuser account instead of root.
Docker image distribution: Create Jenkins jobs with a full CI/CD pipeline for
CSIT Docker images.
Implement queueing mechanism: Currently there is no mechanism that would
place starving jobs in a queue in case no resources are available.
Replace the reservation script with a Docker network plugin written in
Go/SH/Python, making it platform independent.
.. [#TWSLink] `TWS <https://wiki.fd.io/view/CSIT/TWS>`_
.. [#dockerhub] `Docker hub <https://hub.docker.com/>`_
.. [#fdiocsitgerrit] `FD.io/CSIT gerrit <https://gerrit.fd.io/r/CSIT>`_
.. [#fdioregistry] `FD.io registry <registry.fdiopoc.net>`_
.. [#JenkinsSlaveDcrFile] `jenkins-slave-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/ubuntu18/Dockerfile>`_
.. [#CsitShimDcrFile] `csit-shim-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/csit-shim/Dockerfile>`_
.. [#CsitSutDcrFile] `csit-sut-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/csit-sut/Dockerfile>`_
.. [#ansiblelink] `ansible <https://www.ansible.com/>`_
.. [#fdiocsitansible] `FD.io/CSIT ansible <https://git.fd.io/csit/tree/fdio.infra.ansible>`_
.. [#inteli40e] `Intel i40e <https://downloadmirror.intel.com/26370/eng/readme.txt>`_
.. [#pciids] `pci ids <http://pci-ids.ucw.cz/v2.2/pci.ids>`_