FD.io VPP software data plane technology has become very popular across
a wide range of VPP ecosystem use cases, increasing the pressure on
continuous verification of VPP software quality.

This document describes a proposal for the design and implementation of
extended continuous VPP testing by extending the existing test environments.
Furthermore, it describes and summarizes the implementation details of the
Integration and System tests platform *1-Node VPP_Device*. It aims to provide
a complete end-to-end view of the *1-Node VPP_Device* environment in order to
improve extensibility and maintenance, under the guidance of the VPP core
team.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in :rfc:`8174`.
\graphicspath{{../_tmp/src/vpp_device_tests/}}
\includegraphics[width=0.90\textwidth]{vpp_device}
\label{fig:vpp_device}

.. figure:: vpp_device.svg
All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` vpp-device tests are executed on physical testbeds
built with bare-metal servers hosted by the :abbr:`LF (Linux Foundation)`
FD.io project. The following 1-node testbed topology is used:

- **2-Container Topology**: Consisting of one Docker container acting as the
  SUT (System Under Test) and one Docker container as the TG (Traffic
  Generator), both connected in a ring topology via physical NIC
  cross-connects.

Current FD.io production testbeds are built with servers based on one
processor generation of Intel Xeons: Skylake (Platinum 8180). Testbeds built
with servers based on Arm processors are in the process of being added to
FD.io production.
The following section describes the existing production 1n-skx testbed.

1-Node Xeon Skylake (1n-skx)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The 1n-skx testbed is based on a single SuperMicro SYS-7049GP-TRT server
equipped with two Intel Xeon Skylake Platinum 8180 2.5 GHz 28-core processors.
The physical testbed topology is depicted in the figure below.
\graphicspath{{../_tmp/src/vpp_device_tests/}}
\includegraphics[width=0.90\textwidth]{vf-2n-nic2nic}
\label{fig:vf-2n-nic2nic}

.. figure:: vf-2n-nic2nic.svg
The server is populated with the following NIC models:

#. NIC-1: x710-da4 4p10GE Intel.
#. NIC-2: E810-2CQDA2 2p100GbE Intel.

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux, with 56 logical
cores and 28 physical cores per processor socket.

NIC interfaces are shared using Linux vfio-pci and VPP VF drivers:

- Fortville AVF driver.

The provided Intel x710-da4 4p10GE NICs support 32 VFs per interface,
128 per NIC.
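The VF capability of a NIC port can be verified from sysfs; for example,
assuming a hypothetical PCI address of one x710 port:

.. code-block:: console

    $ cat /sys/bus/pci/devices/0000:18:00.0/sriov_totalvfs
    32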
The complete 1n-skx testbed specification is available on the `CSIT LF Testbeds
<https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Skx,_Arm,_Atom.>`_ wiki page.

A total of two 1n-skx testbeds are in operation in FD.io labs.
1-Node Virtualbox (1n-vbox)
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The 1n-skx testbed can also run in a single VirtualBox VM. This solution
replaces the previously used Vagrant environment based on 3 VMs.

The VirtualBox VM MAY be created by Vagrant and MUST have 4 additional NICs,
with each pair attached to a separate private network to simulate back-to-back
connections. The NICs SHOULD use the 82545EM device model (otherwise the
bootstrap scripts need to be changed accordingly). Example of Vagrant
configuration:
.. code-block:: ruby

    Vagrant.configure(2) do |c|
      # Two NICs per private network ("port1"/"port2") form the simulated
      # back-to-back links.
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port1", nic_type: "82545EM"
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port2", nic_type: "82545EM"
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port1", nic_type: "82545EM"
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port2", nic_type: "82545EM"

      c.vm.provider :virtualbox do |v|
        v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc4", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc5", "allow-all"]
      end
    end
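Assuming such a Vagrantfile in the current directory, the VM is brought up and
entered with the standard Vagrant workflow:

.. code-block:: console

    $ vagrant up
    $ vagrant ssh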
The Vagrant VM is populated with the following NIC models:

#. NIC-1: 82545EM Intel.
#. NIC-2: 82545EM Intel.
#. NIC-3: 82545EM Intel.
#. NIC-4: 82545EM Intel.

It was agreed on the :abbr:`TWS (Technical Work Stream)` call to continue with
Ubuntu 18.04 LTS as the baseline system, with OPTIONAL extension to CentOS 7
and SuSE per demand [#TWSLink]_.
All :abbr:`DCR (Docker container)` images are REQUIRED to be hosted on a
Docker registry available from the LF network, publicly available and
trackable. For backup, tracking and contribution purposes, all Dockerfiles
(including the files needed for building the container) MUST be available and
stored in the [#fdiocsitgerrit]_ repository under appropriate folders. This
allows the peer review process to be applied to every infrastructure change
related to the scope of this document. Currently only the **csit-shim-dcr**
and **csit-sut-dcr** containers will be stored and maintained in the CSIT
repository by CSIT contributors.

At the time of designing the solution described in this document, the
interconnection between [#dockerhub]_ and [#fdiocsitgerrit]_ for automated
build purposes and image hosting could not be established in a way that
satisfies the trust and security requirements of the FD.io project. Until this
is addressed, :abbr:`DCR (Docker container)` images will be placed in a custom
registry service [#fdioregistry]_.

Automated Jenkins jobs will be created in alignment with the long-term
solution for container lifecycle management and the ability to build new
versions of Docker images.

In parallel, an effort has been started to find an outsourced Docker registry
service.

As of the initial version of vpp-device, only a single latest version of each
Docker image is hosted on [#dockerhub]_. This will be addressed as a further
improvement with proper semantic versioning.
This :abbr:`DCR (Docker container)` acts as the Jenkins slave (also known as a
Jenkins minion). It can connect over the SSH protocol to TCP port 6022 of
**csit-shim-dcr** and execute a non-interactive reservation script. Nomad is
responsible for scheduling this container execution onto a specific
**1-Node VPP_Device** testbed. It executes the
:abbr:`CSIT (Continuous System Integration and Testing)` environment,
including the :abbr:`CSIT (Continuous System Integration and Testing)`
framework.

All software dependencies, including VPP/DPDK, that are not present in the
**csit-sut-dcr** container image and/or need to be compiled prior to running
on **csit-sut-dcr** SHOULD be compiled in this container.
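For example, VPP Debian packages could be built in this container before being
handed over to **csit-sut-dcr**. A minimal sketch using the upstream VPP build
targets (the exact steps in production jobs may differ):

.. code-block:: console

    $ git clone https://gerrit.fd.io/r/vpp && cd vpp
    $ make install-dep
    $ make pkg-deb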
- *Container Image Location*: Docker image at snergster/vpp-ubuntu18.

- *Container Definition*: Dockerfile specified at [#JenkinsSlaveDcrFile]_.

- *Initializing*: Container is initialized from within *Consul by HashiCorp*
  and *Nomad by HashiCorp*.
This :abbr:`DCR (Docker container)` acts as an intermediate layer running a
script responsible for orchestrating topologies under test and reservation.
It is responsible for managing VF resources and their allocation to the
:abbr:`DUT (Device Under Test)` and :abbr:`TG (Traffic Generator)` containers.
This MUST be done on **csit-shim-dcr**. This image also acts as the generic
reservation arbiter, making sure that only an allowed number of simulations
are spawned on any given HW node.
- *Container Image Location*: Docker image at snergster/csit-shim.

- *Container Definition*: Dockerfile specified at [#CsitShimDcrFile]_.

- *Initializing*: Container is initialized from within *Consul by HashiCorp*
  and *Nomad by HashiCorp*. Required Docker parameters, to be able to run
  nested containers with the VF reservation system, are: privileged, net=host.

- *Connectivity*: Over SSH only, using <host>:6022 format. Currently using
  the *root* user account as primary. From the Jenkins slave it will be able
  to connect via an environment variable, since the Jenkins slave does not
  actually know what host it is running on.

  .. code-block:: console

      ssh -p 6022 root@10.30.51.<node>
This :abbr:`DCR (Docker container)` acts as the :abbr:`SUT (System Under
Test)`. Any :abbr:`DUT (Device Under Test)` or :abbr:`TG (Traffic Generator)`
application is installed there. It is RECOMMENDED to install the DUT and
all DUT dependencies via ``rpm -ihv`` on RedHat-based OSes or
``dpkg -i`` on Debian-based OSes.
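A hypothetical example of installing previously built VPP packages inside the
container on a Debian-based OS:

.. code-block:: console

    $ dpkg -i vpp*.deb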
The container is designed to be a very lightweight Docker image that only
installs packages and executes binaries (previously built or downloaded on
**jenkins-slave-dcr**) and contains the libraries necessary to run the CSIT
framework, including those required by DUT/TG.

- *Container Image Location*: Docker image at snergster/csit-sut.

- *Container Definition*: Dockerfile specified at [#CsitSutDcrFile]_.
.. code-block:: bash

    # Run the container in the background and print the new container ID.
    dcr_stc_params="--detach=true "
    # Give extended privileges to this container. A "privileged" container
    # is given access to all devices and is able to run nested containers.
    dcr_stc_params+="--privileged "
    # Publish all exposed ports to random ports on the host interfaces.
    dcr_stc_params+="--publish-all "
    # Automatically remove the container when it exits.
    dcr_stc_params+="--rm "
    # Size of /dev/shm.
    dcr_stc_params+="--shm-size 512M "
    # Override access to PCI bus by attaching a filesystem mount to the
    # container.
    dcr_stc_params+="--mount type=tmpfs,destination=/sys/bus/pci/devices "
    # Mount vfio to be able to see bound interfaces. We cannot use
    # --device=/dev/vfio as this does not see newly bound interfaces.
    dcr_stc_params+="--volume /dev/vfio:/dev/vfio "
    # Mount docker.sock to be able to use the docker daemon of the host.
    dcr_stc_params+="--volume /var/run/docker.sock:/var/run/docker.sock "
    # Mount /opt/boot/ where VM kernel and initrd are located.
    dcr_stc_params+="--volume /opt/boot/:/opt/boot/ "
    # Mount host hugepages for VMs.
    dcr_stc_params+="--volume /dev/hugepages/:/dev/hugepages/ "
The container name is concatenated from the **csit-** prefix and a UUID
generated uniquely for each container instance.
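A simplified sketch of how these parameters might be used to spawn the SUT
container (variable names are illustrative, not the verbatim script):

.. code-block:: bash

    # Generate a unique container name and start the SUT container.
    dcr_name="csit-$(uuidgen)"
    docker run ${dcr_stc_params} --name "${dcr_name}" snergster/csit-sut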
- *Connectivity*: Over SSH only, using <host>[:<port>] format. Currently
  using the *root* user account as primary.

  .. code-block:: console

      ssh -p <port> root@10.30.51.<node>
The container is required to run as ``--privileged`` due to the need to create
nested containers and to have full read/write access to sysfs (for
bind/unbind). Docker automatically picks a free network port
(``--publish-all``) to allow connecting over SSH. To limit access to the PCI
bus, the container creates a tmpfs mount over the PCI bus tree. The CSIT
reservation script dynamically links only the PCI devices (NIC cards) that are
reserved for the particular container, so it does not collide with other
containers. To make vfio work, access to ``/dev/vfio`` must be granted.

.. todo:: Change the default user to a non-privileged testuser and install
   sudo.
Environment initialization
--------------------------

All 1-node servers are to be managed and provisioned via the [#ansiblelink]_
set of playbooks with the *vpp-device* role. Full playbooks can be found under
the [#fdiocsitansible]_ directory. This way we are able to track all
configuration changes of physical servers in Gerrit (in structured YAML
format), as well as to extend *vpp-device* to additional servers with less
effort, or to re-stage servers in case of failure.
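Provisioning a server then reduces to running the playbook against it; a
hypothetical invocation (inventory and playbook names are illustrative):

.. code-block:: console

    $ ansible-playbook -i <inventory> site.yaml --limit <server_hostname>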
SR-IOV VF initialization is done via a ``systemd`` service during host system
boot-up. A service named *csit-initialize-vfs.service* is created under the
systemd system context (``/etc/systemd/system/``). By default the service
calls ``/usr/local/bin/csit-initialize-vfs.sh`` with a single parameter:

- **start**: Creates the maximum number of :abbr:`VFs (Virtual Functions)`
  (detected from ``sriov_totalvfs``) for each whitelisted PCI device.
- **stop**: Removes all :abbr:`VFs (Virtual Functions)` for all whitelisted
  PCI devices.

The service is considered active even after all of its processes have exited
successfully. Stopping the service will automatically remove the
:abbr:`VFs (Virtual Functions)`.
.. code-block:: ini

    [Unit]
    Description=CSIT Initialize SR-IOV VFs

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/bin/csit-initialize-vfs.sh start
    ExecStop=/usr/local/bin/csit-initialize-vfs.sh stop

    [Install]
    WantedBy=default.target
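The service is then enabled and controlled with standard systemd tooling,
e.g.:

.. code-block:: console

    $ sudo systemctl enable csit-initialize-vfs.service
    $ sudo systemctl start csit-initialize-vfs.service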
The script is driven by two array variables, ``PCI_BLACKLIST`` and
``PCI_WHITELIST``. They MUST store all PCI addresses in
**<domain>:<bus>:<device>.<func>** format, where:

- **PCI_BLACKLIST**: PCI addresses to be skipped from
  :abbr:`VFs (Virtual Functions)` initialization (useful for e.g. excluding
  management network interfaces).
- **PCI_WHITELIST**: PCI addresses to be included in
  :abbr:`VFs (Virtual Functions)` initialization.
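A minimal sketch of how these arrays might be defined in
``csit-initialize-vfs-data.sh`` (the PCI addresses below are illustrative, not
the production values):

.. code-block:: bash

    # Management network interface - never initialize VFs on it.
    PCI_BLACKLIST=("0000:02:00.0")
    # Cross-connected NIC ports eligible for VF initialization.
    PCI_WHITELIST=("0000:18:00.0" "0000:18:00.1" "0000:3b:00.0" "0000:3b:00.1")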
During the topology initialization phase of the script, a mutex is used to
prevent multiple instances of the script from interacting with each other
during resource allocation. Mutual exclusion ensures that no two distinct
instances of the script will get the same resource.
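A minimal sketch of such a mutex implemented with ``flock`` (the lock file
path and timeout are assumptions for illustration, not the verbatim script):

.. code-block:: bash

    # Serialize resource allocation across concurrent script instances.
    exec {lock_fd}>/tmp/csit-reservation.lock
    flock --timeout 300 "${lock_fd}" || exit 1
    # ... allocate VF network devices here ...
    # Closing the file descriptor releases the lock.
    exec {lock_fd}>&-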
The reservation function reads the list of all available virtual function
network devices:
.. code-block:: bash

    # Find the first ${device_count} number of available TG Linux network
    # VF device names. Only allowed VF PCI IDs are filtered.
    for netdev in ${tg_netdev[@]}
    do
        for netdev_path in $(grep -l "${pci_id}" \
            /sys/class/net/${netdev}*/device/device \
            2> /dev/null)
        do
            if [[ ${#TG_NETDEVS[@]} -lt ${device_count} ]]; then
                tg_netdev_name=$(dirname ${netdev_path})
                tg_netdev_name=$(dirname ${tg_netdev_name})
                TG_NETDEVS+=($(basename ${tg_netdev_name}))
            fi
        done
        if [[ ${#TG_NETDEVS[@]} -eq ${device_count} ]]; then
            break
        fi
    done
Here ``${pci_id}`` is the PCI ID of a white-listed VF; for more information
please see [#pciids]_. This acts as a security constraint to prevent grabbing
other, unwanted interfaces.

The output list of all VF network devices is split into two lists, for the TG
and the SUT side of the connection. The first two items from each TG or SUT
network device list are taken and exposed directly to the namespace of the
respective container. This can be done with the following commands:
.. code-block:: console

    $ ip link set ${netdev} netns ${DCR_CPIDS[tg]}
    $ ip link set ${netdev} netns ${DCR_CPIDS[dut1]}
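The container PIDs stored in ``${DCR_CPIDS[@]}`` can be obtained from the
Docker daemon, for example:

.. code-block:: console

    $ docker inspect --format='{{ .State.Pid }}' <container_name>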
In this stage, symbolic links to the PCI devices under the sysfs bus directory
tree are also created in the running containers. Once the VF devices are
assigned to the container namespaces and the PCI devices are linked to the
running containers, the mutex is exited. A selected VF network device
automatically disappears from the parent container namespace, so another
instance of the script will not find the device under that namespace.

Once the Docker container exits, the network device is returned back into the
parent namespace and can be reused.
Network traffic isolation - Intel i40evf
----------------------------------------

In a virtualized environment, on Intel(R) Server Adapters that support SR-IOV,
the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control),
IEEE 802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, configure all SR-IOV enabled
ports for VLAN tagging. This configuration allows unexpected, and potentially
malicious, frames to be dropped. [#inteli40e]_
To configure VLAN tagging for the ports on an SR-IOV enabled adapter,
use the following command. The VLAN configuration SHOULD be done
before the VF driver is loaded or the VM is booted. [#inteli40e]_

.. code-block:: console

    $ ip link set dev <PF netdev id> vf <id> vlan <vlan id>

For example, the following instructions will configure PF eth0 and
the first VF on VLAN 10.

.. code-block:: console

    $ ip link set dev eth0 vf 0 vlan 10
VLAN Tag Packet Steering allows sending all packets with a specific VLAN tag
to a particular SR-IOV virtual function (VF). Further, this feature allows
designating a particular VF as trusted, and allows that trusted VF to request
selective promiscuous mode on the Physical Function (PF). [#inteli40e]_

To set a VF as trusted or untrusted, enter the following command in the
hypervisor:

.. code-block:: console

    $ ip link set dev eth0 vf 1 trust [on|off]

Once the VF is designated as trusted, use the following commands in the VM
to set the VF to promiscuous mode. [#inteli40e]_

- For promiscuous all:

  .. code-block:: console

      $ ip link set eth2 promisc on

- For promiscuous Multicast:

  .. code-block:: console

      $ ip link set eth2 allmulti on
By default, the ethtool priv-flag vf-true-promisc-support is set to
*off*, meaning that promiscuous mode for the VF will be limited. To set the
promiscuous mode for the VF to true promiscuous and allow the VF to see
all ingress traffic, use the following command.

.. code-block:: console

    $ ethtool --set-priv-flags p261p1 vf-true-promisc-support on

The vf-true-promisc-support priv-flag does not enable promiscuous mode;
rather, it designates which type of promiscuous mode (limited or true)
you will get when you enable promiscuous mode using the ip link commands
above. Note that this is a global setting that affects the entire device.
However, the vf-true-promisc-support priv-flag is only exposed to the first
PF of the device. The PF remains in limited promiscuous mode (unless it
is in MFP mode) regardless of the vf-true-promisc-support setting.
The service described earlier, *csit-initialize-vfs.service*, is responsible
for assigning 802.1Q VLAN tags to each virtual function via the physical
function, for the list of white-listed PCI addresses, using the following
(simplified) code.
.. code-block:: bash

    #!/usr/bin/env bash

    set -e

    SCRIPT_DIR="$(dirname $(readlink -e "${BASH_SOURCE[0]}"))"
    source "${SCRIPT_DIR}/csit-initialize-vfs-data.sh"

    # Initialize whitelisted NICs with maximum number of VFs.
    pci_idx=0
    for pci_addr in ${PCI_WHITELIST[@]}; do
        if ! [[ ${PCI_BLACKLIST[*]} =~ "${pci_addr}" ]]; then
            pci_path="/sys/bus/pci/devices/${pci_addr}"
            # SR-IOV initialization
            case "${1:-start}" in
                "start" )
                    sriov_totalvfs=$(< "${pci_path}"/sriov_totalvfs)
                    echo ${sriov_totalvfs} > "${pci_path}"/sriov_numvfs
                    ;;
                "stop" )
                    echo 0 > "${pci_path}"/sriov_numvfs
                    ;;
            esac
            # SR-IOV 802.1Q isolation
            case "${1:-start}" in
                "start" )
                    pf=$(basename "${pci_path}"/net/*)
                    for vf in $(seq "${sriov_totalvfs}"); do
                        # PCI address index in array (pairing siblings).
                        if [[ -n ${PF_INDICES[@]} ]]
                        then
                            vlan_pf_idx=${PF_INDICES[$pci_addr]}
                        else
                            vlan_pf_idx=$((pci_idx % (${#PCI_WHITELIST[@]}/2)))
                        fi
                        # 802.1Q base offset.
                        vlan_bs_off=1100
                        # 802.1Q PF PCI address offset.
                        vlan_pf_off=$(( vlan_pf_idx * 100 + vlan_bs_off ))
                        # 802.1Q VF PCI address offset.
                        vlan_vf_off=$(( vlan_pf_off + vf - 1 ))
                        # 802.1Q VLAN tag string.
                        vlan_str="vlan ${vlan_vf_off}"
                        # Locally administered MAC address string.
                        mac5="$(printf '%x' ${pci_idx})"
                        mac6="$(printf '%x' $(( vf - 1 )))"
                        mac_str="mac ba:dc:0f:fe:${mac5}:${mac6}"
                        # Set 802.1Q VLAN id and MAC address.
                        ip link set ${pf} vf $(( vf - 1 )) ${mac_str} ${vlan_str}
                        ip link set ${pf} vf $(( vf - 1 )) trust on
                        ip link set ${pf} vf $(( vf - 1 )) spoof off
                    done
                    pci_idx=$(( pci_idx + 1 ))
                    ;;
            esac
        fi
    done
Assignment starts at VLAN 1100, incrementing by 1 for each VF and by 100 for
each white-listed PCI address, up to the middle of the PCI list. The second
half of the list is assumed to be the directly (cable) paired siblings, and is
assigned the same 802.1Q VLANs as its siblings.
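For illustration, assume four white-listed PFs where the second half of the
list (PF2, PF3) is cabled back-to-back to the first half (PF0, PF1). The
resulting assignment would be:

.. code-block:: none

    PF0: VF0 -> vlan 1100, VF1 -> vlan 1101, ...
    PF1: VF0 -> vlan 1200, VF1 -> vlan 1201, ...
    PF2: VF0 -> vlan 1100, VF1 -> vlan 1101, ...  (sibling of PF0)
    PF3: VF0 -> vlan 1200, VF1 -> vlan 1201, ...  (sibling of PF1)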
- Switch to non-privileged containers: As of now all three container flavors
  are using privileged containers to make things work. Explore options to
  switch containers to non-privileged, with explicit rather than implicit
  privileges.

- Switch to a testuser account instead of root.

- Docker image distribution: Create Jenkins jobs with a full CI/CD pipeline
  for CSIT Docker images.

- Implement a queueing mechanism: Currently there is no mechanism that would
  place starving jobs in a queue when no resources are available.

- Replace the reservation script with a platform-independent Docker network
  plugin written in Go/Shell/Python.
.. [#TWSLink] `TWS <https://wiki.fd.io/view/CSIT/TWS>`_
.. [#dockerhub] `Docker hub <https://hub.docker.com/>`_
.. [#fdiocsitgerrit] `FD.io/CSIT gerrit <https://gerrit.fd.io/r/CSIT>`_
.. [#fdioregistry] `FD.io registry <https://registry.fdiopoc.net>`_
.. [#JenkinsSlaveDcrFile] `jenkins-slave-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/ubuntu18/Dockerfile>`_
.. [#CsitShimDcrFile] `csit-shim-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/csit-shim/Dockerfile>`_
.. [#CsitSutDcrFile] `csit-sut-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/csit-sut/Dockerfile>`_
.. [#ansiblelink] `ansible <https://www.ansible.com/>`_
.. [#fdiocsitansible] `FD.io/CSIT ansible <https://git.fd.io/csit/tree/fdio.infra.ansible>`_
.. [#inteli40e] `Intel i40e <https://downloadmirror.intel.com/26370/eng/readme.txt>`_
.. [#pciids] `pci ids <http://pci-ids.ucw.cz/v2.2/pci.ids>`_