FD.io VPP software data plane technology has become very popular across
a wide range of VPP ecosystem use cases, putting higher pressure on
continuous verification of VPP software quality.

This document describes a proposal for the design and implementation of
extended continuous VPP testing, building on the existing test environments.
Furthermore, it describes and summarizes the implementation details of the
integration and system test platform *1-Node VPP_Device*. It aims to provide a
complete end-to-end view of the *1-Node VPP_Device* environment in order to
improve extensibility and maintenance, under the guidance of the VPP core team.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in :rfc:`8174`.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/vpp_device_tests/}}
        \includegraphics[width=0.90\textwidth]{vpp_device}
        \label{fig:vpp_device}

.. only:: html

    .. figure:: vpp_device.svg

All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` vpp-device tests are executed on physical testbeds
built with bare-metal servers hosted by the :abbr:`LF (Linux Foundation)` FD.io
project. Two 1-node testbed topologies are used:

- **2-Container Topology**: Consisting of one Docker container acting as SUT
  (System Under Test) and one Docker container as TG (Traffic Generator), both
  connected in ring topology via physical NIC cross-connecting.

Current FD.io production testbeds are built with servers based on one
processor generation of Intel Xeons: Skylake (Platinum 8180). Testbeds built
with servers based on Arm processors are in the process of being added to
FD.io production.

The following section describes the existing production 1n-skx testbed.

1-Node Xeon Skylake (1n-skx)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The 1n-skx testbed is based on a single SuperMicro SYS-7049GP-TRT server
equipped with two Intel Xeon Skylake Platinum 8180 2.5 GHz 28-core processors.
The physical testbed topology is depicted in the figure below.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/vpp_device_tests/}}
        \includegraphics[width=0.90\textwidth]{vf-2n-nic2nic}
        \label{fig:vf-2n-nic2nic}

.. only:: html

    .. figure:: vf-2n-nic2nic.svg

The server is populated with the following NIC models:

#. NIC-1: x710-da4 4p10GE Intel.
#. NIC-2: x710-da4 4p10GE Intel.

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux, with 56 logical
cores and 28 physical cores per processor socket.

NIC interfaces are shared using Linux vfio_pci and VPP VF drivers:

- Fortville AVF driver.

Provided Intel x710-da4 4p10GE NICs support 32 VFs per interface, 128 per NIC.

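The per-port VF capacity can be verified directly in sysfs. A minimal sketch,
assuming a hypothetical PCI address ``0000:18:00.0`` for one x710 port::

    # Maximum number of VFs supported by the PF (reported by the i40e driver).
    cat /sys/bus/pci/devices/0000:18:00.0/sriov_totalvfs
    # Number of VFs currently instantiated on the PF.
    cat /sys/bus/pci/devices/0000:18:00.0/sriov_numvfs
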
The complete 1n-skx testbed specification is available on the `CSIT LF Testbeds
<https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Skx,_Arm,_Atom.>`_ wiki page.

A total of two 1n-skx testbeds are in operation in FD.io labs.

1-Node Virtualbox (1n-vbox)
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The 1n-skx testbed can run in a single VirtualBox VM. This solution replaces
the previously used Vagrant environment based on 3 VMs.

The VirtualBox VM MAY be created by Vagrant and MUST have 4 additional virtio
NICs, each pair attached to a separate private network to simulate back-to-back
connections. The 82545EM device model SHOULD be used (otherwise it can be
changed in the bootstrap scripts). Example of Vagrant configuration::

    Vagrant.configure(2) do |c|
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port1", nic_type: "82545EM"
      c.vm.network "private_network", type: "dhcp", auto_config: false,
          virtualbox__intnet: "port2", nic_type: "82545EM"

      c.vm.provider :virtualbox do |v|
        v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc4", "allow-all"]
        v.customize ["modifyvm", :id, "--nicpromisc5", "allow-all"]
      end
    end

The Vagrant VM is populated with the following NIC models:

#. NIC-1: 82545EM Intel.
#. NIC-2: 82545EM Intel.
#. NIC-3: 82545EM Intel.
#. NIC-4: 82545EM Intel.

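As a usage sketch (assuming the Vagrantfile above is in the current working
directory), the VM can be brought up and entered with the standard Vagrant
workflow::

    # Create and provision the VM defined by the Vagrantfile.
    vagrant up
    # Open a shell in the running VM.
    vagrant ssh
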
It was agreed on the :abbr:`TWS (Technical Work Stream)` call to continue with
Ubuntu 18.04 LTS as the baseline system, with an OPTIONAL extension to CentOS 7
and SuSE on demand [:ref:`TWSLink`].

All :abbr:`DCR (Docker container)` images are REQUIRED to be hosted on a Docker
registry available from the LF network, publicly available and trackable. For
backup, tracking and contribution purposes, all Dockerfiles (including files
needed for building the container) MUST be available and stored in the
[:ref:`fdiocsitgerrit`] repository under appropriate folders. This allows the
peer review process to be applied to every infrastructure change related to the
scope of this document. Currently only the **csit-shim-dcr** and
**csit-sut-dcr** containers will be stored and maintained in the CSIT
repository by CSIT contributors.

At the time of designing the solution described in this document, the
interconnection between [:ref:`dockerhub`] and [:ref:`fdiocsitgerrit`] for
automated build purposes and image hosting could not be established with the
trust and respect for the security of the FD.io project. Unless addressed,
:abbr:`DCR (Docker container)` images will be placed in the custom registry
service [:ref:`fdioregistry`]. Automated Jenkins jobs will be created in
alignment with the long-term solution for container lifecycle and the ability
to build new versions of Docker images.

In parallel, an effort is started to find an outsourced Docker registry
service.

As of the initial version of vpp-device, only a single latest version of each
Docker image is hosted on [:ref:`dockerhub`]. This will be addressed as a
further improvement with proper semantic versioning.

The **jenkins-slave-dcr** :abbr:`DCR (Docker container)` acts as the Jenkins
slave (also known as a Jenkins minion). It connects over the SSH protocol to
TCP port 6022 of **csit-shim-dcr** and executes a non-interactive reservation
script. Nomad is responsible for scheduling this container execution onto a
specific **1-Node VPP_Device** testbed. It executes the
:abbr:`CSIT (Continuous System Integration and Testing)` environment, including
the :abbr:`CSIT (Continuous System Integration and Testing)` framework.

All software dependencies, including VPP/DPDK, that are not present in the
**csit-sut-dcr** container image and/or need to be compiled prior to running on
**csit-sut-dcr** SHOULD be compiled in this container (see the build sketch
after the list below).

- *Container Image Location*: Docker image at snergster/vpp-ubuntu18.

- *Container Definition*: Docker file specified at
  [:ref:`JenkinsSlaveDcrFile`].

- *Initializing*: Container is initialized from within *Consul by HashiCorp*
  and *Nomad by HashiCorp*.

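As a sketch of the dependency compilation step mentioned above (the checkout
URL and Makefile targets reflect the upstream VPP repository and are shown for
illustration, not mandated by this document), VPP Debian packages could be
built inside **jenkins-slave-dcr** roughly as follows::

    # Clone VPP and build Debian packages to be consumed by csit-sut-dcr.
    git clone https://gerrit.fd.io/r/vpp
    cd vpp
    # Install build dependencies (VPP top-level Makefile target).
    make install-dep
    # Build .deb packages into build-root/.
    make pkg-deb
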
The **csit-shim-dcr** :abbr:`DCR (Docker container)` acts as an intermediate
layer running a script responsible for orchestrating topologies under test and
for reservation. It is responsible for managing VF resources and their
allocation to :abbr:`DUT (Device Under Test)` and :abbr:`TG (Traffic
Generator)` containers. This MUST be done on **csit-shim-dcr**. This image also
acts as the generic reservation mechanics arbiter to make sure that only Y
number of simulations are spawned on any given HW node.

- *Container Image Location*: Docker image at snergster/csit-shim.

- *Container Definition*: Docker file specified at [:ref:`CsitShimDcrFile`].

- *Initializing*: Container is initialized from within *Consul by HashiCorp*
  and *Nomad by HashiCorp*. Required Docker parameters, to be able to run
  nested containers with the VF reservation system, are: privileged, net=host,
  pid=host.

- *Connectivity*: Over SSH only, using <host>:6022 format. Currently using the
  *root* user account as primary. From the Jenkins slave it will be able to
  connect via an environment variable, since the Jenkins slave does not know in
  advance which host it will run on::

      ssh -p 6022 root@10.30.51.node

The **csit-sut-dcr** :abbr:`DCR (Docker container)` acts as the
:abbr:`SUT (System Under Test)`. Any :abbr:`DUT (Device Under Test)` or
:abbr:`TG (Traffic Generator)` application is installed there. It is
RECOMMENDED to install the DUT and all DUT dependencies via ``rpm -ihv`` on
RedHat based OS or ``dpkg -i`` on Debian based OS.

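A minimal installation sketch for a Debian based image, assuming the VPP
packages were already built or downloaded on **jenkins-slave-dcr** (the package
file names are illustrative)::

    # Install previously built VPP Debian packages.
    dpkg -i vpp*.deb || true
    # Resolve any missing dependencies reported by dpkg.
    apt-get --fix-broken install --yes
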
The container is designed to be a very lightweight Docker image that only
installs packages and executes binaries (previously built or downloaded on
**jenkins-slave-dcr**) and contains libraries necessary to run the CSIT
framework, including those required by DUT/TG.

- *Container Image Location*: Docker image at snergster/csit-sut.

- *Container Definition*: Docker file specified at [:ref:`CsitSutDcrFile`].

The container is started with the following Docker parameters (excerpt from the
startup script)::

    # Run the container in the background and print the new container ID.
    dcr_stc_params="--detach=true "
    # Give extended privileges to this container. A "privileged" container is
    # given access to all devices and able to run nested containers.
    dcr_stc_params+="--privileged "
    # Publish all exposed ports to random ports on the host interfaces.
    dcr_stc_params+="--publish-all "
    # Automatically remove the container when it exits.
    dcr_stc_params+="--rm "
    # Size of /dev/shm.
    dcr_stc_params+="--shm-size 512M "
    # Override access to PCI bus by attaching a filesystem mount to the
    # container.
    dcr_stc_params+="--mount type=tmpfs,destination=/sys/bus/pci/devices "
    # Mount vfio to be able to see bound interfaces. We cannot use
    # --device=/dev/vfio as this does not see newly bound interfaces.
    dcr_stc_params+="--volume /dev/vfio:/dev/vfio "
    # Mount docker.sock to be able to use the Docker daemon of the host.
    dcr_stc_params+="--volume /var/run/docker.sock:/var/run/docker.sock "
    # Mount /opt/boot/ where VM kernel and initrd are located.
    dcr_stc_params+="--volume /opt/boot/:/opt/boot/ "
    # Mount host hugepages for VMs.
    dcr_stc_params+="--volume /dev/hugepages/:/dev/hugepages/ "

The container name is concatenated from the **csit-** prefix and a UUID
generated uniquely for each container instance.

- *Connectivity*: Over SSH only, using <host>[:<port>] format. Currently using
  the *root* user account as primary::

      ssh -p <port> root@10.30.51.<node>

The container is required to run as ``--privileged`` due to the need to create
nested containers and to have full read/write access to sysfs (for
bind/unbind). Docker automatically picks a free network port
(``--publish-all``) to make it possible to connect over SSH. To limit access to
the PCI bus, the container creates a tmpfs mount over the PCI bus tree. The
CSIT reservation script dynamically links only the PCI devices (NIC cards) that
are reserved for a particular container, so it does not collide with other
containers. To make vfio work, access to ``/dev/vfio`` must be granted.

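Putting the pieces together, a hedged usage sketch of how **csit-sut-dcr**
might be started with the parameters above (image tag and UUID generation are
illustrative)::

    # Start the SUT container with the accumulated startup parameters.
    docker run ${dcr_stc_params} \
        --name "csit-$(uuidgen)" \
        "snergster/csit-sut:latest"
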
.. todo:: Change the default user to testuser with non-privileged access and install sudo.

Environment initialization
--------------------------

All 1-node servers are to be managed and provisioned via the
[:ref:`ansiblelink`] set of playbooks with the *vpp-device* role. Full playbooks
can be found under the [:ref:`fdiocsitansible`] directory. This way we are able
to track all configuration changes of physical servers in Gerrit (in structured
YAML format), and we are able to extend *vpp-device* to additional servers with
less effort, or to re-stage servers in case of failure.

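As an illustrative sketch only (the inventory path and playbook name below are
hypothetical, not taken from this document), provisioning a host with the
*vpp-device* role could look like::

    # Run the Ansible playbooks against the vpp-device hosts only.
    ansible-playbook --inventory inventories/lf_inventory/hosts site.yaml \
        --limit vpp_device
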
SR-IOV VF initialization is done via a ``systemd`` service during host system
boot up. A service named *csit-initialize-vfs.service* is created under the
systemd system context (``/etc/systemd/system/``). By default the service calls
``/usr/local/bin/csit-initialize-vfs.sh`` with a single parameter:

- **start**: Creates the maximum number of :abbr:`VFs (Virtual Functions)`
  (detected from ``sriov_totalvfs``) for each whitelisted PCI device.
- **stop**: Removes all :abbr:`VFs (Virtual Functions)` for all whitelisted PCI
  devices.

The service is considered active even after all of its processes have exited
successfully. Stopping the service will automatically remove the
:abbr:`VFs (Virtual Functions)`. Example of the service unit file::

    [Unit]
    Description=CSIT Initialize SR-IOV VFs

    [Service]
    Type=oneshot
    RemainAfterExit=True
    ExecStart=/usr/local/bin/csit-initialize-vfs.sh start
    ExecStop=/usr/local/bin/csit-initialize-vfs.sh stop

    [Install]
    WantedBy=default.target

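A usage sketch for enabling and controlling the service with standard
``systemctl`` commands::

    # Enable the service so VFs are created on every boot, and start it now.
    systemctl enable --now csit-initialize-vfs.service
    # Stopping the service removes all created VFs.
    systemctl stop csit-initialize-vfs.service
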
The script is driven by two array variables, ``pci_blacklist`` and
``pci_whitelist``. They MUST store all PCI addresses in
**<domain>:<bus>:<device>.<func>** format (an example follows the list below),
where:

- **pci_blacklist**: PCI addresses to be skipped from
  :abbr:`VFs (Virtual Functions)` initialization (useful e.g. for excluding
  management network interfaces).
- **pci_whitelist**: PCI addresses to be included for
  :abbr:`VFs (Virtual Functions)` initialization.

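A minimal sketch of how the two arrays might be populated (the PCI addresses
below are placeholders, not actual testbed values)::

    # Management interface - never turned into VFs.
    pci_blacklist=("0000:02:00.0")
    # NIC ports used for vpp-device testing, in <domain>:<bus>:<device>.<func>
    # format.
    pci_whitelist=("0000:18:00.0" "0000:18:00.1" "0000:18:00.2" "0000:18:00.3")
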
During the topology initialization phase of the script, a mutex is used to
avoid multiple instances of the script interacting with each other during
resource allocation. Mutual exclusion ensures that no two distinct instances of
the script will get the same resource.

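A minimal sketch of such a mutex using ``flock`` (the lock file path and the
timeout are assumptions for illustration, not taken from this document)::

    # Serialize resource allocation across concurrent script instances.
    exec 9> /tmp/csit-vf-reservation.lock
    if ! flock --exclusive --timeout 300 9; then
        echo "Could not acquire reservation lock, giving up." >&2
        exit 1
    fi
    # ... allocate VFs and link PCI devices here ...
    # The lock is released when file descriptor 9 is closed on exit.
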
The reservation function reads the list of all available virtual function
network devices::

    # Find the first ${device_count} number of available TG Linux network
    # VF device names. Only allowed VF PCI IDs are filtered.
    for netdev in ${tg_netdev[@]}
    do
        for netdev_path in $(grep -l "${pci_id}" \
                             /sys/class/net/${netdev}*/device/device \
                             2> /dev/null)
        do
            if [[ ${#TG_NETDEVS[@]} -lt ${device_count} ]]; then
                tg_netdev_name=$(dirname ${netdev_path})
                tg_netdev_name=$(dirname ${tg_netdev_name})
                TG_NETDEVS+=($(basename ${tg_netdev_name}))
            else
                break
            fi
        done
        if [[ ${#TG_NETDEVS[@]} -eq ${device_count} ]]; then
            break
        fi
    done

Where ``${pci_id}`` is the whitelisted VF PCI ID. For more information please
see [:ref:`pciids`]. This acts as a security constraint to prevent claiming
other unwanted interfaces.

The output list of all VF network devices is split into two lists, for the TG
and SUT sides of the connection. The first two items from each TG or SUT
network device list are taken and exposed directly to the namespace of the
corresponding container. This can be done by::

    $ ip link set ${netdev} netns ${DCR_CPIDS[tg]}
    $ ip link set ${netdev} netns ${DCR_CPIDS[dut1]}

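Here ``${DCR_CPIDS[...]}`` holds the host PIDs of the running containers. As a
hedged sketch, assuming a variable ``${tg_container_name}`` holding the TG
container name, such a PID can be obtained with ``docker inspect``::

    # Map of container host PIDs, used as target network namespaces.
    declare -A DCR_CPIDS
    # Host PID of the running TG container.
    DCR_CPIDS[tg]=$(docker inspect --format '{{ .State.Pid }}' "${tg_container_name}")
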
In this stage, symbolic links to the PCI devices under the sysfs bus directory
tree are also created in the running containers. Once the VF devices are
assigned to the container namespace and the PCI devices are linked to the
running containers, the mutex is exited. A selected VF network device
automatically disappears from the parent container namespace, so another
instance of the script will not find the device in that namespace.

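A hedged sketch of the PCI device linking step (the exact mechanism used by the
reservation script is not spelled out in this document; the sysfs source path
resolution below is an assumption)::

    # Expose only the reserved VF inside the container's tmpfs-mounted
    # /sys/bus/pci/devices by symlinking it to the real sysfs device path.
    docker exec "${container}" \
        ln -s "$(readlink -f /sys/bus/pci/devices/${vf_pci_addr})" \
              "/sys/bus/pci/devices/${vf_pci_addr}"
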
Once the Docker container exits, the network device is returned back into the
parent namespace and can be reused.

Network traffic isolation - Intel i40evf
----------------------------------------

In a virtualized environment, on Intel(R) Server Adapters that support SR-IOV,
the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control),
IEEE 802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, configure all SR-IOV enabled ports
for VLAN tagging. This configuration allows unexpected, and potentially
malicious, frames to be dropped. [:ref:`inteli40e`]

To configure VLAN tagging for the ports on an SR-IOV enabled adapter,
use the following command. The VLAN configuration SHOULD be done
before the VF driver is loaded or the VM is booted. [:ref:`inteli40e`]::

    $ ip link set dev <PF netdev id> vf <id> vlan <vlan id>

For example, the following instructions will configure PF eth0 and
the first VF on VLAN 10::

    $ ip link set dev eth0 vf 0 vlan 10

VLAN Tag Packet Steering allows sending all packets with a specific VLAN tag to
a particular SR-IOV virtual function (VF). Further, this feature allows
designating a particular VF as trusted, and allows that trusted VF to request
selective promiscuous mode on the Physical Function (PF). [:ref:`inteli40e`]

To set a VF as trusted or untrusted, enter the following command in the
hypervisor::

    $ ip link set dev eth0 vf 1 trust [on|off]

Once the VF is designated as trusted, use the following commands in the VM
to set the VF to promiscuous mode. [:ref:`inteli40e`]

- For promiscuous all::

      $ ip link set eth2 promisc on

- For promiscuous Multicast::

      $ ip link set eth2 allmulti on

By default, the ethtool priv-flag vf-true-promisc-support is set to
*off*, meaning that promiscuous mode for the VF will be limited. To set the
promiscuous mode for the VF to true promiscuous and allow the VF to see
all ingress traffic, use the following command::

    $ ethtool --set-priv-flags p261p1 vf-true-promisc-support on

The vf-true-promisc-support priv-flag does not enable promiscuous mode;
rather, it designates which type of promiscuous mode (limited or true)
you will get when you enable promiscuous mode using the ip link commands
above. Note that this is a global setting that affects the entire device.
However, the vf-true-promisc-support priv-flag is only exposed to the first
PF of the device. The PF remains in limited promiscuous mode (unless it
is in MFP mode) regardless of the vf-true-promisc-support setting.
[:ref:`inteli40e`]

The service described earlier, *csit-initialize-vfs.service*, is responsible
for assigning 802.1Q VLAN tagging to each virtual function via the physical
function, for every whitelisted PCI address, using the following (simplified)
code::

    SCRIPT_DIR="$(dirname $(readlink -e "${BASH_SOURCE[0]}"))"
    source "${SCRIPT_DIR}/csit-initialize-vfs-data.sh"

    # Initialize whitelisted NICs with maximum number of VFs.
    pci_idx=0
    for pci_addr in ${PCI_WHITELIST[@]}; do
        if ! [[ ${PCI_BLACKLIST[*]} =~ "${pci_addr}" ]]; then
            pci_path="/sys/bus/pci/devices/${pci_addr}"
            # SR-IOV initialization
            case "${1:-start}" in
                "start" )
                    sriov_totalvfs=$(< "${pci_path}"/sriov_totalvfs)
                    ;;
                "stop" )
                    sriov_totalvfs=0
                    ;;
            esac
            echo ${sriov_totalvfs} > "${pci_path}"/sriov_numvfs
            # SR-IOV 802.1Q isolation
            case "${1:-start}" in
                "start" )
                    pf=$(basename "${pci_path}"/net/*)
                    for vf in $(seq "${sriov_totalvfs}"); do
                        # PCI address index in array (pairing siblings).
                        if [[ -n ${PF_INDICES[@]} ]]
                        then
                            vlan_pf_idx=${PF_INDICES[$pci_addr]}
                        else
                            vlan_pf_idx=$((pci_idx % (${#PCI_WHITELIST[@]}/2)))
                        fi
                        # 802.1Q base offset.
                        vlan_bs_off=1100
                        # 802.1Q PF PCI address offset.
                        vlan_pf_off=$(( vlan_pf_idx * 100 + vlan_bs_off ))
                        # 802.1Q VF PCI address offset.
                        vlan_vf_off=$(( vlan_pf_off + vf - 1 ))
                        # 802.1Q VLAN string.
                        vlan_str="vlan ${vlan_vf_off}"
                        # MAC address string.
                        mac5="$(printf '%x' ${pci_idx})"
                        mac6="$(printf '%x' $(( vf - 1 )))"
                        mac_str="mac ba:dc:0f:fe:${mac5}:${mac6}"
                        # Set 802.1Q VLAN id and MAC address.
                        ip link set ${pf} vf $(( vf - 1)) ${mac_str} ${vlan_str}
                        ip link set ${pf} vf $(( vf - 1)) trust on
                        ip link set ${pf} vf $(( vf - 1)) spoof off
                    done
                    pci_idx=$(( pci_idx + 1 ))
                    ;;
            esac
        fi
    done

Assignment starts at VLAN 1100, incrementing by 1 for each VF and by 100 for
each whitelisted PCI address, up to the middle of the PCI list. The second half
of the list is assumed to be the directly (cable) paired siblings and is
assigned the same 802.1Q VLANs as its siblings.

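A short worked example of the scheme above, assuming four whitelisted PFs
(indices 0-3) where PF2/PF3 are cabled back-to-back to PF0/PF1::

    PF0 (index 0):        VF0 -> VLAN 1100, VF1 -> VLAN 1101, VF2 -> VLAN 1102, ...
    PF1 (index 1):        VF0 -> VLAN 1200, VF1 -> VLAN 1201, VF2 -> VLAN 1202, ...
    PF2 (sibling of PF0): VF0 -> VLAN 1100, VF1 -> VLAN 1101, ...
    PF3 (sibling of PF1): VF0 -> VLAN 1200, VF1 -> VLAN 1201, ...
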
Future improvements
-------------------

- Switch to non-privileged containers: As of now all three container flavors
  are using privileged containers to make it work. Explore options to switch
  containers to non-privileged with explicit rather than implicit privileges.

- Switch to the testuser account instead of root.

- Docker image distribution: Create Jenkins jobs with a full CI/CD pipeline for
  CSIT Docker images.

- Implement a queueing mechanism: Currently there is no mechanism that would
  place starving jobs into a queue when no resources are available.

- Replace the reservation script with a Docker network plugin written in
  Go/Shell/Python to be platform independent.

References
----------

.. _TWSLink:

[TWSLink] `TWS <https://wiki.fd.io/view/CSIT/TWS>`_

.. _dockerhub:

[dockerhub] `Docker hub <https://hub.docker.com/>`_

.. _fdiocsitgerrit:

[fdiocsitgerrit] `FD.io/CSIT Gerrit <https://gerrit.fd.io/r/CSIT>`_

.. _fdioregistry:

[fdioregistry] `FD.io registry <registry.fdiopoc.net>`_

.. _JenkinsSlaveDcrFile:

[JenkinsSlaveDcrFile] `jenkins-slave-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/ubuntu18/Dockerfile>`_

.. _CsitShimDcrFile:

[CsitShimDcrFile] `csit-shim-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/csit-shim/Dockerfile>`_

.. _CsitSutDcrFile:

[CsitSutDcrFile] `csit-sut-dcr-file <https://github.com/snergfdio/multivppcache/blob/master/csit-sut/Dockerfile>`_

.. _ansiblelink:

[ansiblelink] `Ansible <https://www.ansible.com/>`_

.. _fdiocsitansible:

[fdiocsitansible] `FD.io/CSIT Ansible <https://git.fd.io/csit/tree/fdio.infra.ansible>`_

.. _inteli40e:

[inteli40e] `Intel i40e <https://downloadmirror.intel.com/26370/eng/readme.txt>`_

.. _pciids:

[pciids] `pci ids <http://pci-ids.ucw.cz/v2.2/pci.ids>`_