X-Git-Url: https://gerrit.fd.io/r/gitweb?p=csit.git;a=blobdiff_plain;f=docs%2Freport%2Fvpp_performance_tests%2Fdocumentation%2Fcontainers.rst;h=b15c899726e82cfe2ae386bd4457d5d98305c4fd;hp=b7c1b01652ade557ad587e7288bcaa72ea83b7bd;hb=c5e84fbee876a45d3495cde6f4e2d8140cacbe5a;hpb=f4cd1c230a2328fd647fd88da5d9149fbad556e3

diff --git a/docs/report/vpp_performance_tests/documentation/containers.rst b/docs/report/vpp_performance_tests/documentation/containers.rst
index b7c1b01652..b15c899726 100644
--- a/docs/report/vpp_performance_tests/documentation/containers.rst
+++ b/docs/report/vpp_performance_tests/documentation/containers.rst
@@ -1,5 +1,5 @@
-.. _containter_orchestration_in_csit:
+.. _container_orchestration_in_csit:

Container Orchestration in CSIT
===============================
@@ -22,7 +22,7 @@ file systems.

:abbr:`LXC (Linux Containers)` combines the kernel's cgroups and support for
isolated namespaces to provide an isolated environment for applications.
Docker uses LXC as one of its execution drivers, enabling image management
-and providing deployment services. More information in [lxc]_, [lxcnamespace]_
and [stgraber]_.

Linux containers can be of two kinds: privileged containers and
@@ -41,7 +41,7 @@ user gets root in a container. With unprivileged containers, non-root users can
create containers and will appear in the container as root, but will appear
as an unprivileged user ID on the host. Unprivileged containers are also
better suited to supporting multi-tenancy operating
-environments. More information in [lxcsecurity]_ and [stgraber]_.

Privileged Containers
~~~~~~~~~~~~~~~~~~~~~
@@ -68,7 +68,7 @@ list of applicable security control mechanisms:

- Seccomp - secure computing mode, enables filtering of system calls,
  [seccomp]_.

-More information in [lxcsecurity]_ and [lxcsecfeatures]_.

**Linux Containers in CSIT**

@@ -90,7 +90,7 @@ orchestration system:

2. Build - building a container image from scratch or another container
   image via :command:`docker build ` or customizing LXC templates in
-   `https://github.com/lxc/lxc/tree/master/templates`_
+   `GitHub `_.

3. (Re-)Create - creating a running instance of a container application
   from anew, or re-creating one that failed. A.k.a. (re-)deploy via
@@ -111,14 +111,12 @@ Current CSIT testing framework integrates following Linux container
orchestration mechanisms:

- LXC/Docker for complete VPP container lifecycle control.
-- Combination of Kubernetes (container orchestration), Docker (container
-  images) and Ligato (container networking).

LXC
~~~

LXC is the well-known and heavily tested low-level Linux container
-runtime [lxcsource]_, which provides a userspace interface for the Linux kernel
containment features. With a powerful API and simple tools, LXC enables
Linux users to easily create and manage system or application containers. LXC
uses the following kernel features to contain processes:
@@ -163,31 +161,6 @@ containerized applications used in CSIT performance tests.
  configuration file controls the range of CPU cores the Docker image must
  run on. VPP thread pinning is defined in the VPP startup.conf.
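For illustration only, the Docker-side CPU control described above can be
reproduced by hand roughly as follows. The container name, image, core list,
NUMA node and file paths are placeholders rather than values computed by CSIT;
only the ``docker run`` options and the ``cpu`` stanza of the VPP startup.conf
mirror the mechanism described in this section.

::

    # Placeholder VPP thread pinning; tests generate an equivalent file from
    # the allocated core list.
    cat > /tmp/vnf1_startup.conf << 'EOF'
    unix {
      nodaemon
      cli-listen /run/vpp/cli.sock
    }
    cpu {
      main-core 2
      corelist-workers 3-4
    }
    EOF

    # Cores 2-4 and NUMA node 0 are examples only.
    sudo docker run --detach --name DUT1_VNF1 \
      --privileged \
      --cpuset-cpus 2-4 --cpuset-mems 0 \
      --volume /tmp:/mnt/host \
      --volume /dev:/dev \
      --volume /tmp/vnf1_startup.conf:/etc/vpp/startup.conf \
      ubuntu:16.04 /bin/sleep infinity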
-Kubernetes
-~~~~~~~~~~
-
-Kubernetes [k8s-doc]_, or K8s, is a production-grade container orchestration
-platform for automating the deployment, scaling and operation of
-application containers. Kubernetes groups containers that make up an
-application into logical units, pods, for easy management and discovery.
-K8s pod definitions, including compute resource allocation, are provided in
-.yaml files.
-
-CSIT uses K8s and its infrastructure components like etcd to control all
-phases of container based virtualized network topologies.
-
-Ligato
-~~~~~~
-
-Ligato [ligato]_ is an open-source project developing a set of cloud-native
-tools for orchestrating container networking. Ligato integrates with FD.io VPP
-using goVPP [govpp]_ and vpp-agent [vpp-agent]_.
-
-**Known Issues**
-
-- Currently using a separate LF Jenkins job for building csit-centric
-  prod_vpp_agent docker images vs. dockerhub/ligato ones.

Implementation
--------------

@@ -358,33 +331,26 @@ Usage example:

| | [Arguments] | ${technology} | ${image} | ${cpu_count}=${1} | ${count}=${1}
| | ...
| | ${group}= | Set Variable | VNF
-| | ${guest_dir}= | Set Variable | /mnt/host
-| | ${host_dir}= | Set Variable | /tmp
| | ${skip_cpus}= | Evaluate | ${vpp_cpus}+${system_cpus}
| | Import Library | resources.libraries.python.ContainerUtils.ContainerManager
-| | ... | engine=${technology} | WITH NAME | ${group}
+| | ... | engine=${container_engine} | WITH NAME | ${group}
| | ${duts}= | Get Matches | ${nodes} | DUT*
| | :FOR | ${dut} | IN | @{duts}
-| | | {env}= | Create List | LC_ALL="en_US.UTF-8"
-| | | ... | DEBIAN_FRONTEND=noninteractive | ETCDV3_ENDPOINTS=172.17.0.1:2379
+| | | ${env}= | Create List | DEBIAN_FRONTEND=noninteractive
+| | | ${mnt}= | Create List | /tmp:/mnt/host | /dev:/dev
| | | ${cpu_node}= | Get interfaces numa node | ${nodes['${dut}']}
| | | ... | ${dut1_if1} | ${dut1_if2}
| | | Run Keyword | ${group}.Construct containers
-| | | ... | name=${dut}_${group}
-| | | ... | node=${nodes['${dut}']}
-| | | ... | host_dir=${host_dir}
-| | | ... | guest_dir=${guest_dir}
-| | | ... | image=${image}
-| | | ... | cpu_count=${cpu_count}
-| | | ... | cpu_skip=${skip_cpus}
-| | | ... | smt_used=${False}
-| | | ... | cpuset_mems=${cpu_node}
-| | | ... | cpu_shared=${False}
-| | | ... | env=${env}
+| | | ... | name=${dut}_${group} | node=${nodes['${dut}']} | mnt=${mnt}
+| | | ... | image=${container_image} | cpu_count=${container_cpus}
+| | | ... | cpu_skip=${skip_cpus} | cpuset_mems=${cpu_node}
+| | | ... | cpu_shared=${False} | env=${env} | count=${container_count}
+| | | ... | install_dkms=${container_install_dkms}
+| | Append To List | ${container_groups} | ${group}

Mandatory parameters to create a standalone container are: ``node``, ``name``,
-``image`` [image-var]_, ``cpu_count``, ``cpu_skip``, ``smt_used``,
-``cpuset_mems``, ``cpu_shared``.
+``image`` [imagevar]_, ``cpu_count``, ``cpu_skip``, ``cpuset_mems``,
+``cpu_shared``.

There is no parameter check functionality; passing the required arguments
correctly is the caller's responsibility. All the above parameters are required
to calculate the correct CPU placement. See the library documentation for the
full reference.
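To make the role of the keyword parameters more concrete, the commands below
sketch a rough manual equivalent for the LXC engine. The container name,
distribution, release and CPU ranges are placeholders, and the exact sequence
of operations performed by ``ContainerUtils`` is not reproduced here; only the
commands themselves are standard LXC userspace tools.

::

    # Create and start a system container (placeholder name and distribution).
    sudo lxc-create -t download -n DUT1_VNF1 -- --dist ubuntu --release xenial --arch amd64
    sudo lxc-start -n DUT1_VNF1 -d

    # Pin the container to the cores left over after skipping the cores
    # reserved for VPP and the system (the cpu_skip/cpu_count/cpuset_mems analogy).
    sudo lxc-cgroup -n DUT1_VNF1 cpuset.cpus 4-5
    sudo lxc-cgroup -n DUT1_VNF1 cpuset.mems 0

    # Run the containerized workload.
    sudo lxc-attach -n DUT1_VNF1 -- /usr/bin/vpp -c /etc/vpp/startup.conf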
@@ -393,110 +359,30 @@

Kubernetes
~~~~~~~~~~

-Kubernetes is implemented as separate library ``KubernetesUtils.py``,
-with a class with the same name. This utility provides an API for L2
-Robot Keywords to control ``kubectl`` installed on each of DUTs. One
-time initialization script, ``resources/libraries/bash/k8s_setup.sh``
-does reset/init kubectl, applies Calico v2.6.3 and initializes the
-``csit`` namespace. CSIT namespace is required to not to interfere with
-existing setups and it further simplifies apply/get/delete
-Pod/ConfigMap operations on SUTs.
+For future use, Kubernetes [k8sdoc]_ is implemented as a separate library,
+``KubernetesUtils.py``, with a class of the same name. This utility provides
+an API for L2 Robot Keywords to control ``kubectl`` installed on each of the
+DUTs. A one-time initialization script, ``resources/libraries/bash/k8s_setup.sh``,
+resets/initializes kubectl and initializes the ``csit`` namespace. The CSIT
+namespace is required so as not to interfere with existing setups, and it
+further simplifies apply/get/delete Pod/ConfigMap operations on SUTs.

The Kubernetes utility is based on YAML templates, which avoids crafting huge
YAML configuration files that would lower the readability of the code and
-require complicated algorithms. The templates can be found in
-``resources/templates/kubernetes`` and can be leveraged in the future
-for other separate tasks.
+require complicated algorithms.

Two types of YAML templates are defined:

- Static - do not change between deployments, that is infrastructure
  containers like Kafka, Calico, ETCD.
-- Dynamic - per test suite/case topology YAML files, e.g. SFC_controller,
-  VNF, VSWITCH.
+- Dynamic - per test suite/case topology YAML files.

Making our own Python wrapper library around ``kubectl``, instead of using the
official Python package, allows us to control and deploy the environment over
the SSH library without the need to run an isolated driver on each of the DUTs.
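Because the wrapper ultimately just drives ``kubectl`` over SSH, the flow of a
single deployment can be pictured with plain commands. The namespace matches
the one created by ``k8s_setup.sh``; the YAML file name is a placeholder
standing in for a rendered CSIT template, not an actual file in the repository.

::

    # Deploy a pod described by a rendered YAML template into the csit namespace.
    kubectl apply -f /tmp/sut1_vswitch.yaml -n csit

    # Verify that the pod was scheduled and is running.
    kubectl get pods -n csit -o wide

    # Clean up after the test case.
    kubectl delete -f /tmp/sut1_vswitch.yaml -n csit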
-Ligato
-~~~~~~
-
-Ligato integration requires compiling the ``vpp-agent`` tool and building the
-bundled Docker image. Compilation of ``vpp-agent`` depends on a specific VPP
-version. The ``ligato/vpp-agent`` repository contains well-prepared scripts for
-building the Docker image. Building the Docker image is possible via the
-following series of commands:
-
-::
-
-    git clone https://github.com/ligato/vpp-agent
-    cd vpp-agent/docker/dev_vpp_agent
-    sudo docker build -t dev_vpp_agent --build-arg AGENT_COMMIT=\
-        --build-arg VPP_COMMIT= --no-cache .
-    sudo ./shrink.sh
-    cd ../prod_vpp_agent
-    sudo ./build.sh
-    sudo ./shrink.sh
-
-CSIT requires the Docker image to include the desired VPP version (per patch
-testing, nightly testing, on demand testing).
-
-The entire process of building the ``dev_vpp_agent`` image heavily depends on
-internet connectivity and also takes a significant amount of time (~1-1.5h,
-based on internet bandwidth and allocated resources). The optimal solution
-would be to build the image on a Jenkins slave, transfer the Docker image to
-the DUTs and execute a separate suite of tests.
-
-To address the amount of time required to build the ``dev_vpp_agent`` image,
-we can pull an existing specific version of ``dev_vpp_agent`` and extract
-``vpp-agent`` from it.
-
-We created separate sets of Jenkins jobs that will execute the following:
-
-1. Clone the latest CSIT and Ligato repositories.
-2. Pull a specific version of the ``dev_vpp_agent`` image from Dockerhub.
-3. Extract the VPP API (from the ``.deb`` package) and copy it into the
-   ``dev_vpp_agent`` image.
-4. Rebuild ``vpp-agent`` and extract it outside the image.
-5. Build the ``prod_vpp_agent`` Docker image from the ``dev_vpp_agent`` image.
-6. Transfer the ``prod_vpp_agent`` image to the DUTs.
-7. Execute the subset of performance tests designed for Ligato testing.
-
-::
-
-    +-----------------------------------------------+
-    | ubuntu:16.04 <-----| Base image on Dockerhub
-    +------------------------^----------------------+
-              |
-              |
-    +------------------------+----------------------+
-    | ligato/dev_vpp_agent <------| Pull this image from
-    +------------------------^----------------------+ | Dockerhub ligato/dev_vpp_agent:
-              |
-              | Rebuild and extract agent.tar.gz from dev_vpp_agent
-    +------------------------+----------------------+
-    | prod_vpp_agent <------| Build by passing own
-    +-----------------------------------------------+ | vpp.tar.gz (from nexus
-                                                        or built by JJB) and
-                                                        agent.tar.gz extracted
-                                                        from ligato/dev_vpp_agent
-
-Approximate size of the vnf-agent Docker images:
-
-::
-
-    REPOSITORY          TAG       IMAGE ID        CREATED        SIZE
-    dev-vpp-agent       latest    78c53bd57e2     6 weeks ago    9.79GB
-    prod_vpp_agent      latest    f68af5afe601    5 weeks ago    443MB
-
-In CSIT we need to create a separate performance suite under
-``tests/kubernetes/perf`` which contains a modified suite setup in comparison
-to the standard perf tests. This is because VPP will act as a vswitch inside
-the Docker image and not as a standalone installed service.

Tested Topologies
~~~~~~~~~~~~~~~~~

@@ -504,7 +390,7 @@ Listed CSIT container networking test topologies are defined with DUT
containerized VPP switch forwarding packets between NF containers. Each NF
container runs its own instance of VPP in L2XC configuration (an illustrative
NF-side configuration sketch follows the topology list below).

-Following container networking topologies are tested in CSIT |release|:
+Following container networking topologies are tested in |csit-release|:

- LXC topologies:

@@ -514,38 +400,22 @@
  - eth-l2xcbase-eth-2memif-1lxc.
  - eth-l2xcbase-eth-4memif-2lxc.

- Docker topologies:

  - eth-l2xcbase-eth-2memif-1docker.
-
-- Kubernetes/Ligato topologies:
-
-  - eth-1drcl2bdbasemaclrn-eth-2memif-1drcl2xc-1paral
-  - eth-1drcl2bdbasemaclrn-eth-2memif-2drcl2xc-1horiz
-  - eth-1drcl2bdbasemaclrn-eth-2memif-4drcl2xc-1horiz
-  - eth-1drcl2bdbasemaclrn-eth-4memif-2drcl2xc-1chain
-  - eth-1drcl2bdbasemaclrn-eth-8memif-4drcl2xc-1chain
-  - eth-1drcl2xcbase-eth-2memif-1drcl2xc-1paral
-  - eth-1drcl2xcbase-eth-2memif-2drcl2xc-1horiz
-  - eth-1drcl2xcbase-eth-2memif-4drcl2xc-1horiz
-  - eth-1drcl2xcbase-eth-4memif-2drcl2xc-1chain
-  - eth-1drcl2xcbase-eth-8memif-4drcl2xc-1chain
+  - eth-l2xcbase-eth-1memif-1docker
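To make the NF container role concrete, the sketch below shows roughly what an
L2XC configuration over two memif interfaces can look like from the VPP debug
CLI. The interface ids and resulting interface names are illustrative, and the
exact memif CLI syntax differs between VPP releases, so this is an
assumption-laden outline rather than the configuration used by the test suites.

::

    # Run inside the NF container, assuming vppctl can reach this VPP instance.
    vppctl create interface memif id 1 slave
    vppctl create interface memif id 2 slave
    vppctl set interface state memif0/1 up
    vppctl set interface state memif0/2 up

    # Cross-connect the two memif interfaces so packets received on one
    # are transmitted on the other (L2XC in both directions).
    vppctl set interface l2 xconnect memif0/1 memif0/2
    vppctl set interface l2 xconnect memif0/2 memif0/1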
References
-----------
+~~~~~~~~~~

.. [lxc] `Linux Containers `_
.. [lxcnamespace] `Resource management: Linux kernel Namespaces and cgroups `_.
.. [stgraber] `LXC 1.0: Blog post series `_.
.. [lxcsecurity] `Linux Containers Security `_.
.. [capabilities] `Linux manual - capabilities - overview of Linux capabilities <http://man7.org/linux/man-pages/man7/capabilities.7.html>`_.
.. [cgroup1] `Linux kernel documentation: cgroups `_.
.. [cgroup2] `Linux kernel documentation: Control Group v2 `_.
.. [selinux] `SELinux Project Wiki `_.
.. [lxcsecfeatures] `LXC 1.0: Security features `_.
.. [lxcsource] `Linux Containers source `_.
.. [apparmor] `Ubuntu AppArmor `_.
.. [seccomp] `SECure COMPuting with filters `_.
.. [docker] `Docker `_.
-.. [k8s-doc] `Kubernetes documentation `_.
-.. [ligato] `Ligato `_.
-.. [govpp] `FD.io goVPP project `_.
-.. [vpp-agent] `Ligato vpp-agent `_.
-.. [image-var] Image parameter is required in initial commit version. There is
-   plan to implement container build class to build Docker/LXC image.
+.. [imagevar] Image parameter is required in the initial commit version. There
+   is a plan to implement a container build class to build the Docker/LXC image.
+.. [k8sdoc] `Kubernetes documentation `_.