1 VPP_Device Integration Tests
2 ============================
FD.io VPP software data plane technology has become very popular across
a wide range of VPP ecosystem use cases, putting higher pressure on
continuous verification of VPP software quality.
This document describes a proposal for the design and implementation of
extended continuous VPP testing, building on the existing test environments.
Furthermore, it describes and summarizes the implementation details of the
*1-Node VPP_Device* integration and system test platform. It aims to provide a
complete end-to-end view of the *1-Node VPP_Device* environment in order to
improve extensibility and maintenance, under the guidance of the VPP core team.
18 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
19 "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
20 interpreted as described in :rfc:`8174`.
.. todo: Convert to SVG
27 .. image:: vpp-device.png
All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` vpp-device tests are executed on physical testbeds
built with bare-metal servers hosted by the :abbr:`LF (Linux Foundation)` FD.io
project. Two 1-node testbed topologies are used:
- **2-Container Topology**: Consisting of one Docker container acting as SUT
  (System Under Test) and one Docker container as TG (Traffic Generator), both
  connected in a ring topology via physical NIC cross-connections.
41 Current FD.io production testbeds are built with servers based on one
42 processor generation of Intel Xeons: Skylake (Platinum 8180). Testbeds built
43 with servers based on Arm processors are in the process of being added to FD.io
The following section describes the existing production 1n-skx testbed.
48 1-Node Xeon Skylake (1n-skx)
49 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 1n-skx testbed is based on a single SuperMicro SYS-7049GP-TRT server
equipped with two Intel Xeon Skylake Platinum 8180 2.5 GHz 28-core processors.
The physical testbed topology is depicted in the figure below.
61 \graphicspath{{../_tmp/src/introduction/}}
62 \includegraphics[width=0.90\textwidth]{testbed-1n-skx}
63 \label{fig:testbed-1n-skx}
68 .. figure:: testbed-1n-skx.svg
The logical view is depicted in the figure below.
80 \graphicspath{{../_tmp/src/introduction/}}
81 \includegraphics[width=0.90\textwidth]{logical-1n-skx}
82 \label{fig:logical-1n-skx}
87 .. figure:: logical-1n-skx.svg
The server is populated with the following NIC models:
93 #. NIC-1: x710-da4 4p10GE Intel.
94 #. NIC-2: x710-da4 4p10GE Intel.
96 All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
97 doubling the number of logical cores exposed to Linux, with 56 logical
98 cores and 28 physical cores per processor socket.
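
The resulting core layout can be verified on the host, for example:

.. code-block:: bash

    # Show socket, core and thread counts as seen by Linux.
    $ lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'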
100 NIC interfaces are shared using Linux vfio_pci and VPP VF drivers:
103 - Fortville AVF driver.
The provided Intel x710-da4 4p10GE NICs support 32 VFs per interface, 128 per NIC.
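
The VF capacity can be confirmed via sysfs; a minimal check, with a
hypothetical PF PCI address, looks like:

.. code-block:: bash

    # Maximum number of VFs supported by one physical function. The PCI
    # address below is hypothetical and testbed specific.
    $ cat /sys/bus/pci/devices/0000:18:00.0/sriov_totalvfs
    32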
The complete 1n-skx testbed specification is available on the `CSIT LF Testbeds
<https://wiki.fd.io/view/CSIT/Testbeds:_Xeon_Skx,_Arm,_Atom.>`_ wiki page.
A total of two 1n-skx testbeds are in operation in FD.io labs.
1-Node VirtualBox (1n-vbox)
113 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 1n-skx testbed can run in a single VirtualBox VM. This solution replaces
the previously used Vagrant environment based on 3 VMs.
The VirtualBox VM MAY be created by Vagrant and MUST have 4 additional virtio
NICs, with each pair attached to a separate private network to simulate
back-to-back connections. The NICs SHOULD use the 82545EM device model
(otherwise this can be changed in the bootstrap scripts). Example of Vagrant
configuration:
124 Vagrant.configure(2) do |c|
125 c.vm.network "private_network", type: "dhcp", auto_config: false,
126 virtualbox__intnet: "port1", nic_type: "82545EM"
127 c.vm.network "private_network", type: "dhcp", auto_config: false,
128 virtualbox__intnet: "port2", nic_type: "82545EM"
130 c.vm.provider :virtualbox do |v|
131 v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
132 v.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
133 v.customize ["modifyvm", :id, "--nicpromisc4", "allow-all"]
134 v.customize ["modifyvm", :id, "--nicpromisc5", "allow-all"]
The Vagrant VM is populated with the following NIC models:
138 #. NIC-1: 82545EM Intel.
139 #. NIC-2: 82545EM Intel.
140 #. NIC-3: 82545EM Intel.
141 #. NIC-4: 82545EM Intel.
It was agreed on the :abbr:`TWS (Technical Work Stream)` call to continue with
Ubuntu 18.04 LTS as the baseline system, with an OPTIONAL extension to CentOS 7
and SuSE on demand [tws]_.
All :abbr:`DCR (Docker container)` images are REQUIRED to be hosted on a Docker
registry available from the LF network, publicly available and trackable. For
backup, tracking and contribution purposes, all Dockerfiles (including the
files needed for building the container) MUST be available and stored in the
[fdiocsitgerrit]_ repository under appropriate folders. This allows the peer
review process to be applied to every infrastructure change within the scope of
this document. Currently only the **csit-shim-dcr** and **csit-sut-dcr**
containers will be stored and maintained in the CSIT repository by CSIT
contributors.
At the time of designing the solution described in this document, the
interconnection between [dockerhub]_ and [fdiocsitgerrit]_ for automated build
purposes and image hosting cannot be established in a way that is trusted and
respectful of FD.io project security. Until this is addressed, :abbr:`DCR`
images will be placed in a custom registry service [fdioregistry]_. Automated
Jenkins jobs will be created in alignment with the long-term solution for
container lifecycle management and the ability to build new versions of Docker
images.
167 In parallel, the effort is started to find the outsourced Docker registry
As of the initial version of vpp-device, only a single ``:latest`` version of
the Docker image is hosted on [dockerhub]_. This will be addressed as a further
improvement with proper semantic versioning.
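
A possible tagging flow, once semantic versioning is adopted, is sketched
below; the version number and release process are hypothetical:

.. code-block:: bash

    # Hypothetical example only; no versioning scheme is defined yet.
    docker tag snergster/csit-vpp-device-test:latest \
        snergster/csit-vpp-device-test:1.0.0
    docker push snergster/csit-vpp-device-test:1.0.0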
This :abbr:`DCR` acts as the Jenkins slave (also known as a Jenkins minion). It
connects over the SSH protocol to TCP port 6022 of **csit-shim-dcr** and
executes a non-interactive reservation script. Nomad is responsible for
scheduling this container's execution onto a specific **1-Node VPP_Device**
testbed. It runs the :abbr:`CSIT` environment including the :abbr:`CSIT`
framework.
All software dependencies, including VPP/DPDK, that are not present in the
**csit-sut-dcr** container image and/or need to be compiled prior to running on
**csit-sut-dcr** SHOULD be compiled in this container.
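
As an illustration, building VPP Debian packages inside **jenkins-slave-dcr**
could be sketched as follows; the exact make targets and output locations
depend on the VPP version:

.. code-block:: bash

    # Sketch only; the produced .deb packages are later installed in
    # csit-sut-dcr.
    git clone https://gerrit.fd.io/r/vpp && cd vpp
    make install-dep
    make pkg-deb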
190 - *Container Image Location*: Docker image at [jenkins-slave-dcr-img]_.
192 - *Container Definition*: Docker file specified at [jenkins-slave-dcr-file]_.
194 - *Initializing*: Container is initialized from within *Consul by HashiCorp*
195 and *Nomad by HashiCorp*.
This :abbr:`DCR` acts as an intermediate layer running a script responsible for
the reservation and orchestration of the topologies under test. It is
responsible for managing VF resources and their allocation to the :abbr:`DUT
(Device Under Test)` and :abbr:`TG (Traffic Generator)` containers. This MUST
be done on **csit-shim-dcr**. This image also acts as the generic reservation
arbiter, making sure that only Y simulations are spawned on any given HW node.
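
This document does not prescribe the arbiter implementation; a minimal sketch,
assuming a simple file-based lock and counter (all paths and the limit value
are hypothetical), could look like:

.. code-block:: bash

    #!/bin/bash
    # Hypothetical arbiter sketch: allow at most MAX_SIMULATIONS concurrent
    # reservations per host, serialized via flock.
    MAX_SIMULATIONS=4                       # the "Y" limit per HW node
    exec 9>/var/lock/csit-reservation.lock
    flock 9                                 # mutual exclusion between instances
    running=$(cat /var/run/csit-reservations 2>/dev/null || echo 0)
    if [ "${running}" -ge "${MAX_SIMULATIONS}" ]; then
        echo "No free vpp-device slot available." >&2
        flock -u 9
        exit 1
    fi
    echo $(( running + 1 )) > /var/run/csit-reservations
    flock -u 9
    # The corresponding release path (not shown) decrements the counter.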
207 - *Container Image Location*: Docker image at [csit-shim-dcr-img]_.
209 - *Container Definition*: Docker file specified at [csit-shim-dcr-file]_.
- *Initializing*: Container is initialized from within *Consul by HashiCorp*
  and *Nomad by HashiCorp*. Required Docker parameters, needed to run nested
  containers with the VF reservation system, are: privileged, net=host,
- *Connectivity*: Over SSH only, using <host>:6022 format. Currently using the
  *root* user account as primary. From the Jenkins slave it will be able to
  connect via an env variable, since the Jenkins slave doesn't actually know what
221 ssh -p 6022 root@10.30.51.node
This :abbr:`DCR` acts as an :abbr:`SUT (System Under Test)`. Any :abbr:`DUT` or
:abbr:`TG` application is installed there. It is RECOMMENDED to install the DUT
and all DUT dependencies via ``rpm -ihv`` commands on RedHat based OS or ``dpkg -i``
The container is designed to be a very lightweight Docker image that only
installs packages and executes binaries (previously built or downloaded on
**jenkins-slave-dcr**) and contains the libraries necessary to run the CSIT
framework, including those required by DUT/TG.
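
For example, installing previously built VPP packages inside **csit-sut-dcr**
on a Debian/Ubuntu based image might look like this (package names vary per VPP
release):

.. code-block:: bash

    # Hypothetical install step; the .deb files are produced on
    # jenkins-slave-dcr and copied into this container beforehand.
    dpkg -i vpp*.deb || apt-get --fix-broken install -y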
236 - *Container Image Location*: Docker image at [csit-sut-dcr-img]_.
238 - *Container Definition*: Docker file specified at [csit-sut-dcr-file]_.
244 # Run the container in the background and print the new container ID.
246 # Give extended privileges to this container. A "privileged" container is
247 # given access to all devices and able to run nested containers.
249 # Publish all exposed ports to random ports on the host interfaces.
251 # Automatically remove the container when it exits.
255 # Override access to PCI bus by attaching a filesystem mount to the
257 --mount type=tmpfs,destination=/sys/bus/pci/devices
# Mount vfio to be able to bind to and see bound interfaces. We cannot use
# --device=/dev/vfio as this does not see newly bound interfaces.
260 --volume /dev/vfio:/dev/vfio
261 # Image of csit-sut-dcr
262 snergster/csit-vpp-device-test:latest
The container name is concatenated from the **csit-** prefix and a UUID
generated uniquely for each container instance.
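
A minimal illustration of generating such a name (the image tag is taken from
the example above):

.. code-block:: bash

    # Illustration only: unique container name with the csit- prefix.
    dcr_name="csit-$(uuidgen)"
    docker run --name "${dcr_name}" --detach=true \
        snergster/csit-vpp-device-test:latest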
267 - *Connectivity*: Over SSH only, using <host>[:<port>] format. Currently using
268 *root* user account as primary.
270 ssh -p <port> root@10.30.51.<node>
The container is required to run as ``--privileged`` due to the need to create
nested containers and to have full read/write access to sysfs (for
bind/unbind). Docker automatically picks a free network port
(``--publish-all``) to allow connecting over SSH. To limit access to the PCI
bus, the container creates a tmpfs mount over the PCI bus tree. The CSIT
reservation script dynamically links only the PCI devices (NIC cards) that are
reserved for the particular container. This way it does not collide with other
containers. To make vfio work, access to ``/dev/vfio`` must be granted.
.. todo: Change the default user to testuser (non-privileged) and install sudo.
283 Environment initialization
284 --------------------------
All 1-node servers are to be managed and provisioned via the [ansible]_ set of
playbooks with the *vpp-device* role. Full playbooks can be found under the
[fdiocsitansible]_ directory. This way we are able to track all configuration
changes of physical servers in Gerrit (in structured YAML format), as well as
to extend *vpp-device* to additional servers with less effort, or to re-stage
servers in case of failure.
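
A hypothetical invocation is shown below; the actual inventory and playbook
names live under [fdiocsitansible]_ and may differ:

.. code-block:: bash

    # Provision or re-stage all vpp-device hosts defined in the inventory.
    ansible-playbook -i inventories/lf_inventory site.yaml --limit vpp_device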
SR-IOV VF initialization is done via a ``systemd`` service during host system
boot. A service named *csit-initialize-vfs.service* is created under the
systemd system context (``/etc/systemd/system/``). By default the service calls
``/usr/local/bin/csit-initialize-vfs.sh`` with a single parameter:
- **start**: Creates the maximum number of :abbr:`VFs (Virtual Functions)`
  (detected from ``sriov_totalvfs``) for each whitelisted PCI device.
- **stop**: Removes all :abbr:`VFs` for all whitelisted PCI devices.
The service is considered active even after all of its processes have exited
successfully. Stopping the service will automatically remove the :abbr:`VFs`.
308 Description=CSIT Initialize SR-IOV VFs
314 ExecStart=/usr/local/bin/csit-initialize-vfs.sh start
315 ExecStop=/usr/local/bin/csit-initialize-vfs.sh stop
318 WantedBy=default.target
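
For illustration, the service is enabled for boot and can also be driven
manually via ``systemctl``. The **start** action essentially amounts to writing
``sriov_totalvfs`` into ``sriov_numvfs`` for every whitelisted PF; a simplified
sketch with a hypothetical PCI address:

.. code-block:: bash

    # Enable VF creation at boot and trigger it immediately.
    systemctl enable --now csit-initialize-vfs.service

    # Simplified equivalent of the start action for one whitelisted PF.
    pci_addr="0000:18:00.0"    # hypothetical, testbed specific
    totalvfs=$(cat /sys/bus/pci/devices/${pci_addr}/sriov_totalvfs)
    echo "${totalvfs}" > /sys/bus/pci/devices/${pci_addr}/sriov_numvfs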
The script is driven by two array variables, ``pci_blacklist`` and
``pci_whitelist``. They MUST store all PCI addresses in
**<domain>:<bus>:<device>.<func>** format,
- **pci_blacklist**: PCI addresses to be skipped from :abbr:`VFs`
  initialization (useful e.g. for excluding management network interfaces).
- **pci_whitelist**: PCI addresses to be included for :abbr:`VFs`
During the topology initialization phase of the script, a mutex is used to
prevent multiple instances of the script from interacting with each other
during resource allocation. Mutual exclusion ensures that no two distinct
instances of the script will get the same
The reservation function reads the list of all available virtual function network
342 net_path="/sys/bus/pci/devices/*/net/*"
345 $(find ${net_path} -type d -name . -o -prune -exec basename '{}' ';');
347 if grep -q "${pci_id}" "/sys/class/net/${netdev}/device/device"; then
Where ``${pci_id}`` is the PCI ID of a white-listed VF. For more information
please see [pci_ids_]. This acts as a security constraint to prevent taking
other unwanted
The output list of all VF network devices is split into two lists, for the TG
and SUT sides of the connection. The first two items from each TG or SUT
network device list are taken to be exposed directly to the container's
namespace. This can be done
362 $ ip link set ${netdev} netns ${DCR_CPIDS[tg]}
363 $ ip link set ${netdev} netns ${DCR_CPIDS[dut1]}
In this stage, symbolic links to PCI devices under the sysfs bus directory tree
are also created in the running containers. Once VF devices are assigned to the
container namespace and PCI devices are linked to the running containers, the
mutex is exited. Selected VF network devices automatically disappear from the
parent container namespace, so another instance of the script will not find the
device under that
Once the Docker container exits, the network device is returned back into the
parent namespace and can be reused.
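
The symbolic-link step mentioned above is not shown in the snippets; a rough
sketch of the idea, with hypothetical names and paths, is:

.. code-block:: bash

    # Hypothetical sketch: /sys/bus/pci/devices inside the container is an
    # empty tmpfs, so re-create the entry of a reserved VF by linking to its
    # real sysfs device path.
    pci_addr="0000:18:02.0"    # hypothetical reserved VF
    real_path=$(readlink -f /sys/bus/pci/devices/${pci_addr})
    docker exec "${dcr_name}" ln -s "${real_path}" "/sys/bus/pci/devices/${pci_addr}"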
375 Network traffic isolation - Intel i40evf
376 ----------------------------------------
378 In a virtualized environment, on Intel(R) Server Adapters that support SR-IOV,
379 the virtual function (VF) may be subject to malicious behavior. Software-
380 generated layer two frames, like IEEE 802.3x (link flow control), IEEE 802.1Qbb
381 (priority based flow-control), and others of this type, are not expected and
382 can throttle traffic between the host and the virtual switch, reducing
383 performance. To resolve this issue, configure all SR-IOV enabled ports for
384 VLAN tagging. This configuration allows unexpected, and potentially malicious,
385 frames to be dropped. [intel_i40e_]
387 To configure VLAN tagging for the ports on an SR-IOV enabled adapter,
388 use the following command. The VLAN configuration SHOULD be done
389 before the VF driver is loaded or the VM is booted. [intel_i40e_]
393 $ ip link set dev <PF netdev id> vf <id> vlan <vlan id>
395 For example, the following instructions will configure PF eth0 and
396 the first VF on VLAN 10.
400 $ ip link set dev eth0 vf 0 vlan 10
VLAN Tag Packet Steering allows sending all packets with a specific VLAN tag to
a particular SR-IOV virtual function (VF). Further, this feature allows
designating a particular VF as trusted, and allows that trusted VF to request
selective promiscuous mode on the Physical Function (PF). [intel_i40e_]
408 To set a VF as trusted or untrusted, enter the following command in the
413 $ ip link set dev eth0 vf 1 trust [on|off]
415 Once the VF is designated as trusted, use the following commands in the VM
416 to set the VF to promiscuous mode. [intel_i40e_]
418 - For promiscuous all:
421 $ ip link set eth2 promisc on
423 - For promiscuous Multicast:
426 $ ip link set eth2 allmulti on
428 .. note: By default, the ethtool priv-flag vf-true-promisc-support is set to
429 *off*, meaning that promiscuous mode for the VF will be limited. To set the
430 promiscuous mode for the VF to true promiscuous and allow the VF to see
431 all ingress traffic, use the following command.
432 $ ethtool set-priv-flags p261p1 vf-true-promisc-support on
433 The vf-true-promisc-support priv-flag does not enable promiscuous mode;
434 rather, it designates which type of promiscuous mode (limited or true)
435 you will get when you enable promiscuous mode using the ip link commands
436 above. Note that this is a global setting that affects the entire device.
However, the vf-true-promisc-support priv-flag is only exposed to the first
438 PF of the device. The PF remains in limited promiscuous mode (unless it
439 is in MFP mode) regardless of the vf-true-promisc-support setting.
The service described earlier, *csit-initialize-vfs.service*, is responsible
for assigning 802.1Q VLAN tagging to each virtual function via the physical
function, for the list of white-listed PCI addresses, by the following
(simplified) code.
449 for pci_addr in ${pci_whitelist[@]}; do
450 pci_path="/sys/bus/pci/devices/${pci_addr}"
451 pf=$(basename "${pci_path}"/net/*)
452 for vf in $(seq "${sriov_totalvfs}"); do
453 # PCI address index in array (pairing siblings).
454 vlan_pf_idx=$(( pci_idx % (${#pci_whitelist[@]} / 2) ))
455 # 802.1Q base offset.
457 # 802.1Q PF PCI address offset.
458 vlan_pf_off=$(( vlan_pf_idx * 100 + vlan_bs_off ))
459 # 802.1Q VF PCI address offset.
460 vlan_vf_off=$(( vlan_pf_off + vf - 1 ))
462 vlan_str="vlan ${vlan_vf_off}"
464 mac5="$(printf '%x' ${pci_idx})"
465 mac6="$(printf '%x' $(( vf - 1 )))"
466 mac_str="mac ba:dc:0f:fe:${mac5}:${mac6}"
467 # Set 802.1Q VLAN id and MAC address
468 ip link set ${pf} vf $(( vf - 1 )) ${mac_str} ${vlan_str}
469 ip link set ${pf} vf $(( vf - 1 )) trust on
470 ip link set ${pf} vf $(( vf - 1 )) spoof off
472 pci_idx=$(( pci_idx + 1 ))
Assignment starts at VLAN 1100, incrementing by 1 for each VF and by 100 for
each white-listed PCI address up to the middle of the PCI list. The second half
of the list is assumed to be the directly (cable) paired siblings and is
assigned the same 802.1Q VLANs as its siblings.
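
As a worked example, with four whitelisted PFs of 32 VFs each and the 1100 base
offset described above, the assignment resolves to::

    PF index 0 (paired with PF 2): VLAN IDs 1100 .. 1131
    PF index 1 (paired with PF 3): VLAN IDs 1200 .. 1231
    PF index 2 (paired with PF 0): VLAN IDs 1100 .. 1131
    PF index 3 (paired with PF 1): VLAN IDs 1200 .. 1231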
.. todo: Switch to non-privileged containers: As of now all three container
   flavors are using privileged containers to make things work. Explore options
   to switch containers to non-privileged with explicit rather than implicit
.. todo: Switch to testuser account instead of root.
.. todo: Docker image distribution: Create Jenkins jobs with a full CI/CD
   pipeline for CSIT Docker images.
.. todo: Improve NIC selection pair-wise: As of now the script takes the first
   two interfaces from the discovered list regardless of sibling pairing.
   Implement a more advanced method of interface selection based on VF 802.1Q
   siblings.
.. todo: Implement queueing mechanism: Currently there is no mechanism that
   would place starving jobs in a queue when no resources are available.
509 .. todo: Replace reservation script with Docker network plugin written in
510 GOLANG/SH/Python - platform independent.
515 .. _tws: https://wiki.fd.io/view/CSIT/TWS
516 .. _dockerhub: https://hub.docker.com/
517 .. _fdiocsitgerrit: https://gerrit.fd.io/r/CSIT
518 .. _fdioregistry: registry.fdiopoc.net
519 .. _jenkins-slave-dcr-img: snergster/vpp-ubuntu18
520 .. _jenkins-slave-dcr-file: https://github.com/snergfdio/multivppcache/blob/master/ubuntu18/Dockerfile
521 .. _csit-shim-dcr-img: snergster/csit-shim
522 .. _csit-shim-dcr-file: https://github.com/snergfdio/multivppcache/blob/master/csit-shim/Dockerfile
523 .. _csit-sut-dcr-img: snergster/csit-sut
524 .. _csit-sut-dcr-file: https://github.com/snergfdio/multivppcache/blob/master/csit-sut/Dockerfile
525 .. _ansible: https://www.ansible.com/
526 .. _fdiocsitansible: https://git.fd.io/csit/tree/resources/tools/testbed-setup/ansible
527 .. _intel_i40e: https://downloadmirror.intel.com/26370/eng/readme.txt
528 .. _pci_ids: http://pci-ids.ucw.cz/v2.2/pci.ids