.. _container_orchestration_in_csit:

Container Orchestration in CSIT
===============================

Linux Containers
----------------
Linux Containers is an OS-level virtualization method for running
multiple isolated Linux systems (containers) on a compute host using a
single Linux kernel. Containers rely on Linux kernel cgroups
functionality for controlling usage of shared system resources (i.e.
CPU, memory, block I/O, network) and for namespace isolation. The latter
enables complete isolation of applications' view of the operating
environment, including process trees, networking, user IDs and mounted
file systems.

:abbr:`LXC (Linux Containers)` combines the kernel's cgroups and support
for isolated namespaces to provide an isolated environment for
applications. Docker uses LXC as one of its execution drivers, enabling
image management and providing deployment services. More information in
[lxc]_ and [lxcnamespace]_.
Linux containers can be of two kinds: privileged containers and
unprivileged containers.

Unprivileged Containers
~~~~~~~~~~~~~~~~~~~~~~~
Running unprivileged containers is the safest way to run containers in a
production environment. From LXC 1.0 one can start a full system
container entirely as a user, allowing a range of UIDs on the host to be
mapped into a namespace inside which a user with UID 0 can exist again.
In other words, an unprivileged container masks the user ID from the
host, making it impossible to gain root access on the host even if a
user gets root in the container. With unprivileged containers, non-root
users can create containers; the user appears as root inside the
container, but as a non-zero user ID on the host. Unprivileged
containers are also better suited to supporting multi-tenancy operating
environments. More information in [lxcsecurity]_ and [stgraber]_.
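For illustration, a host-side UID mapping for an unprivileged LXC
container might look as follows. The user name and the
``100000:65536`` range are conventional defaults, not CSIT-specific
values; ``lxc.idmap`` is the LXC 2.1+ configuration key (older releases
use ``lxc.id_map``).

.. code-block:: none

   # /etc/subuid and /etc/subgid: allow user 'csit' to use 65536
   # subordinate IDs starting at 100000.
   csit:100000:65536

   # Container config: map container UIDs/GIDs 0-65535 to host
   # IDs 100000-165535, so container root is unprivileged on the host.
   lxc.idmap = u 0 100000 65536
   lxc.idmap = g 0 100000 65536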
Privileged Containers
~~~~~~~~~~~~~~~~~~~~~

Privileged containers do not mask UIDs, and container UID 0 is mapped to
the host UID 0. Security and isolation are controlled by a good
configuration of cgroup access, an extensive AppArmor profile preventing
known attacks, container capabilities and SELinux. Here is a list of
applicable security control mechanisms:

- Capabilities - keep (whitelist) or drop (blacklist) Linux
  capabilities, [capabilities]_.
- Control groups - cgroups, resource bean counting, resource quotas,
  access restrictions, [cgroup1]_, [cgroup2]_.
- AppArmor - AppArmor profiles aim to prevent any of the known ways of
  escaping a container or causing harm to the host, [apparmor]_.
- SELinux - Security Enhanced Linux is a Linux kernel security module
  that provides a similar function to AppArmor, supporting access
  control security policies, including United States Department of
  Defense-style mandatory access controls. Mandatory access controls
  allow an administrator of a system to define how applications and
  users can access different resources such as files, devices, networks
  and inter-process communication, [selinux]_.
- Seccomp - secure computing mode, enables filtering of system calls,
  [seccomp]_.

More information in [lxcsecurity]_ and [lxcsecfeatures]_.
**Linux Containers in CSIT**

CSIT uses privileged containers, as ``sysfs`` needs to be mounted with
read/write access; VPP requires write access to
:command:`/sys/bus/pci/drivers/uio_pci_generic/unbind`. This is not
possible in unprivileged containers, where ``sysfs`` is mounted as
read-only.
Orchestrating Container Lifecycle Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following Linux container lifecycle events need to be addressed by
an orchestration system:

1. Acquire - acquiring/downloading existing container images via
   :command:`docker pull` or :command:`lxc-create -t download`.

2. Build - building a container image from scratch or from another
   container image via :command:`docker build <dockerfile/composefile>`
   or by customizing LXC templates in
   `GitHub <https://github.com/lxc/lxc/tree/master/templates>`_.

3. (Re-)Create - creating a running instance of a container application
   from anew, or re-creating one that failed. A.k.a. (re-)deploy via
   :command:`docker run` or :command:`lxc-start`.

4. Execute - executing system operations within the container by
   attaching to a running container, via :command:`lxc-attach` or
   :command:`docker exec`.

5. Distribute - distributing pre-built container images to the compute
   nodes. Currently not implemented in CSIT.
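Steps 1-4 above map directly onto CLI calls; for illustration, a Docker
flavour of the sequence could look like this (the image and container
names are placeholders, not CSIT values):

.. code-block:: console

   $ docker pull ubuntu:22.04                          # 1. Acquire
   $ docker build -t csit/nf:local .                   # 2. Build
   $ docker run -d --name DUT1_CNF1 csit/nf:local      # 3. (Re-)Create
   $ docker exec DUT1_CNF1 uname -a                    # 4. Execute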
Container Orchestration Systems Used in CSIT
--------------------------------------------

The current CSIT testing framework integrates the following Linux
container orchestration mechanisms:

- LXC/Docker for complete VPP container lifecycle control.
LXC
~~~

LXC is the well-known and heavily tested low-level Linux container
runtime [lxcsource]_, which provides a userspace interface for the Linux
kernel containment features. With a powerful API and simple tools, LXC
enables Linux users to easily create and manage system or application
containers. LXC uses the following kernel features to contain processes:

- Kernel namespaces: ipc, uts, mount, pid, network and user.
- AppArmor and SELinux security profiles.
- Seccomp policies.
- Chroot (using pivot_root).
- Kernel capabilities.
- CGroups (control groups).

CSIT uses the LXC runtime and LXC usertools to test VPP data plane
performance in a range of virtual networking topologies.

**Known Issues**

- Current CSIT restriction: only a single instance of the LXC runtime,
  due to the cgroup policies used in CSIT. There is a plan to add the
  capability to create cgroups per container instance to address this
  issue. This sort of functionality is better supported in LXC 2.1, but
  can be done in the current version as well.

- CSIT code currently uses cgroups to control the range of CPU cores the
  LXC container runs on. VPP thread pinning is defined in the VPP
  startup.conf.
Docker
~~~~~~

Docker builds on top of Linux kernel containment features and offers a
high-level tool for wrapping processes, maintaining and executing them
in containers [docker]_. Currently it uses *runc*, a CLI tool for
spawning and running containers according to the
`OCI specification <https://www.opencontainers.org/>`_.

A Docker container image is a lightweight, stand-alone, executable
package that includes everything needed to run the container: code,
runtime, system tools, system libraries and settings.

CSIT uses Docker to manage the maintenance and execution of
containerized applications used in CSIT performance tests.

- Data plane thread pinning to CPU cores - the Docker CLI and/or the
  Docker configuration file controls the range of CPU cores the Docker
  image must run on. VPP thread pinning is defined in the VPP
  startup.conf.
Implementation
--------------

CSIT container orchestration is implemented in CSIT Level-1 keyword
Python libraries following the Builder design pattern. The Builder
design pattern separates the construction of a complex object from its
representation, so that the same construction process can create
different representations, e.g. LXC, Docker, or other.

CSIT Robot Framework keywords are then responsible for higher-level
lifecycle control of the named container groups. One can have multiple
named groups, with 1..N containers in a group performing different
roles/functionality, e.g. NFs, Switch, Kafka bus, ETCD datastore, etc.
The ContainerManager class acts as a Director and uses the
ContainerEngine class that encapsulates container control.
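The Director/Builder split described above can be sketched in a few
lines of Python. This is an illustrative toy, not the actual CSIT code
(the real classes live in ``resources/libraries/python/``); the class
and method names mirror the description above, while the command strings
returned here are invented purely for illustration.

.. code-block:: python

   class ContainerEngine:
       """Builder base class: encapsulates low-level container control."""

       def __init__(self):
           self.container = None  # parameters of the container being built

       def acquire(self, force=True):
           raise NotImplementedError

       def create(self):
           raise NotImplementedError


   class LXC(ContainerEngine):
       """LXC representation of the same construction process."""

       def acquire(self, force=True):
           return f"lxc-create -t download -n {self.container['name']}"

       def create(self):
           return f"lxc-start -n {self.container['name']}"


   class Docker(ContainerEngine):
       """Docker representation of the same construction process."""

       def acquire(self, force=True):
           return f"docker pull {self.container['image']}"

       def create(self):
           return (f"docker run --name {self.container['name']} "
                   f"{self.container['image']}")


   class ContainerManager:
       """Director: drives any engine through the same lifecycle."""

       def __init__(self, engine):
           self.engine = {"LXC": LXC, "Docker": Docker}[engine]()
           self.containers = {}

       def construct_container(self, **kwargs):
           self.containers[kwargs["name"]] = kwargs

       def create_all_containers(self):
           commands = []
           for params in self.containers.values():
               self.engine.container = params
               commands.append(self.engine.create())
           return commands


   manager = ContainerManager(engine="Docker")
   manager.construct_container(name="DUT1_CNF1", image="ubuntu:22.04")
   print(manager.create_all_containers()[0])

Swapping ``engine="Docker"`` for ``engine="LXC"`` changes the
representation without touching the Director, which is the point of the
pattern.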
The current CSIT implementation is illustrated using the UML class
diagram below:

::

 +-----------------------------------------------------------------------+
 | RF Keywords (high level lifecycle control)                            |
 +-----------------------------------------------------------------------+
 | Construct VNF containers on all DUTs                                  |
 | Acquire all '${group}' containers                                     |
 | Create all '${group}' containers                                      |
 | Install all '${group}' containers                                     |
 | Configure all '${group}' containers                                   |
 | Stop all '${group}' containers                                        |
 | Destroy all '${group}' containers                                     |
 +-----------------+-----------------------------------------------------+
                   |
 +-----------------v-----------------+        +--------------------------+
 | ContainerManager                  |        | ContainerEngine          |
 +-----------------------------------+        +--------------------------+
 | __init()__                        |        | __init(node)__           |
 | construct_container()             |        | acquire(force)           |
 | construct_containers()            |        | create()                 |
 | acquire_all_containers()          |        | stop()                   |
 | create_all_containers()           | 1    1 | destroy()                |
 | execute_on_container()            <>-------| info()                   |
 | execute_on_all_containers()       |        | execute(command)         |
 | install_vpp_in_all_containers()   |        | system_info()            |
 | configure_vpp_in_all_containers() |        | install_supervisor()     |
 | stop_all_containers()             |        | install_vpp()            |
 | destroy_all_containers()          |        | restart_vpp()            |
 +-----------------------------------+        | create_vpp_exec_config() |
                                              | create_vpp_startup_config|
                                              | is_container_running()   |
                                              | is_container_present()   |
                                              | _configure_cgroup()      |
                                              +-------------^------------+
                                                            |
                                                 +----------+---------+
                                                 |                    |
                                          +------+------+      +------+------+
                                          | LXC         |      | Docker      |
                                          +-------------+      +-------------+
                                          | (inherited) |      | (inherited) |
                                          +------+------+      +------+------+
                                                 |                    |
                                                 +----------+---------+
                                                            |
                                                  +---------v---------+
                                                  | Container         |
                                                  +-------------------+
                                                  | __getattr__(a)    |
                                                  | __setattr__(a, v) |
                                                  +-------------------+
The sequence diagram below illustrates the creation of a single
container.

::

 e  = engine [Docker|LXC]
 .. = kwargs (variable number of keyword arguments)

 +-------+                      +------------------+         +-----------------+
 | RF KW |                      | ContainerManager |         | ContainerEngine |
 +---+---+                      +--------+---------+         +--------+--------+
     |                                   |                            |
     | 1: new ContainerManager(e)        |                            |
     +-+-------------------------------->+-+                          |
     |-|                                 |-| 2: new ContainerEngine   |
     |-|                                 |-+------------------------->+-+
     |-|                                 |-|<-------------------------+-|
     |-|                                 |-|                          |
     |-| 3: construct_container(..)      |-|                          |
     |-+-------------------------------->+-+                          |
     |-|                                 |-+------------------------->+-+
     |-|                                 |-|                          |-| 5: new  +-------------+
     |-|                                 |-|                          |-+-------->| Container A |
     |-|                                 |-|                          |-|         +-------------+
     |-|                                 |-|<-------------------------+-|
     |-|                                 |-|                          |
     |-| 6: acquire_all_containers()     |-|                          |
     |-+-------------------------------->+-+                          |
     |-|                                 |-| 7: acquire()             |
     |-|                                 |-+------------------------->+-+
     |-|                                 |-|                          |-|--+
     |-|                                 |-|                          |-|  | 8: is_container_present()
     |-|                                 |-|               True/False |-|<-+
 +--------------------------------------------------------------------------------------------+
 |   |-|  ALT [isRunning & force]        |-|                          |-|--+                   |
 |   |-|                                 |-|                          |-|  | 8a: destroy()     |
 +--------------------------------------------------------------------------------------------+
     |-|                                 |-|                          |
     |-| 9: create_all_containers()      |-|                          |
     |-+-------------------------------->+-+                          |
     |-|                                 |-| 10: create()             |
     |-|                                 |-+------------------------->+-+
     |-|                                 |-|                          |-|--+
     |-|                                 |-|                          |-|  | 11: wait('RUNNING')
 +--------------------------------------------------------------------------------------------+
 |   |-|  (install_vpp, configure_vpp)   |-|                          |-|                      |
 +--------------------------------------------------------------------------------------------+
     |-|                                 |-|                          |
     |-| 12: destroy_all_containers()    |-|                          |
     |-+-------------------------------->+-+                          |
     |-|                                 |-| 13: destroy()            |
     |-|                                 |-+------------------------->+-+
     |-|                                 |-|                          |
Container Data Structure
~~~~~~~~~~~~~~~~~~~~~~~~

A container is represented in the Python L1 library as a separate class
with instance variables and no methods except the overridden
``__getattr__`` and ``__setattr__``. Instance variables are assigned to
the container dynamically during the ``construct_container(**kwargs)``
call and are passed down from the RF keyword.

There is no parameter checking functionality. Passing the correct
arguments is the responsibility of the caller.
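A minimal sketch of such a data structure is shown below. It is
illustrative only; in particular, returning ``None`` for unset
attributes is an assumption here, and the actual CSIT class may behave
differently.

.. code-block:: python

   class Container:
       """Attribute bag: no methods except the two overridden hooks."""

       def __getattr__(self, attr):
           # Called only when normal attribute lookup fails; instead of
           # raising AttributeError, report the attribute as unset.
           return None

       def __setattr__(self, attr, value):
           # Store every attribute directly in the instance dictionary,
           # so attributes can be attached dynamically.
           self.__dict__[attr] = value


   def construct_container(**kwargs):
       """Assign all keyword arguments to a new container, unchecked."""
       container = Container()
       for key, value in kwargs.items():
           setattr(container, key, value)
       return container


   c = construct_container(name="DUT1_CNF1", engine="Docker",
                           cpuset_cpus=[2, 3])
   print(c.name)    # attribute set dynamically -> DUT1_CNF1
   print(c.image)   # never set -> None; checking it is the caller's job

Because no parameters are validated, a typo in a keyword argument
silently creates a new attribute, which is why correctness is left to
the caller.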
This section contains a high-level example of multiple initialization
steps via ContainerManager; it is taken from actual CSIT code, with
non-code lines (comments, Documentation) removed for brevity.

.. code-block:: robotframework

  | Start containers for test
  | | [Arguments] | ${dut}=${None} | ${nf_chains}=${1} | ${nf_nodes}=${1}
  | | ... | ${auto_scale}=${True} | ${pinning}=${True}
  | | Set Test Variable | @{container_groups} | @{EMPTY}
  | | Set Test Variable | ${container_group} | CNF
  | | Set Test Variable | ${nf_nodes}
  | | Import Library | resources.libraries.python.ContainerUtils.ContainerManager
  | | ... | engine=${container_engine} | WITH NAME | ${container_group}
  | | Construct chains of containers
  | | ... | dut=${dut} | nf_chains=${nf_chains} | nf_nodes=${nf_nodes}
  | | ... | auto_scale=${auto_scale} | pinning=${pinning}
  | | Acquire all '${container_group}' containers
  | | Create all '${container_group}' containers
  | | Configure VPP in all '${container_group}' containers
  | | Start VPP in all '${container_group}' containers
  | | Append To List | ${container_groups} | ${container_group}
Kubernetes
----------

For future use, Kubernetes [k8sdoc]_ is implemented as a separate
library, ``KubernetesUtils.py``, with a class of the same name. This
utility provides an API for L2 Robot keywords to control ``kubectl``,
installed on each of the DUTs. A one-time initialization script,
``resources/libraries/bash/k8s_setup.sh``, resets/initializes kubectl
and initializes the ``csit`` namespace. The CSIT namespace is required
in order not to interfere with existing setups, and it further
simplifies apply/get/delete Pod/ConfigMap operations on SUTs.

The Kubernetes utility is based on YAML templates, which avoids crafting
huge YAML configuration files that would lower the readability of the
code and require complicated algorithms.
Two types of YAML templates are defined:

- Static - do not change between deployments; these are infrastructure
  containers like Kafka, Calico and ETCD.

- Dynamic - per test suite/case topology YAML files.

Making our own Python wrapper library for ``kubectl``, instead of using
the official Python package, allows us to control and deploy the
environment over the SSH library without the need for an isolated driver
running on each of the DUTs.
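The template-filling step can be sketched with Python's standard
``string.Template``. The pod spec and the placeholder names below are
invented for illustration; they are not the actual CSIT templates.

.. code-block:: python

   # Fill a pod YAML template with per-test values, keeping the bulky
   # YAML out of the code.
   from string import Template

   POD_TEMPLATE = Template("""\
   apiVersion: v1
   kind: Pod
   metadata:
     name: $name
     namespace: $namespace
   spec:
     containers:
     - name: $name
       image: $image
   """)

   yaml_text = POD_TEMPLATE.substitute(
       name="csit-vpp", namespace="csit", image="fdio/vpp:latest")
   print(yaml_text)

The rendered text can then be written to the SUT over SSH and applied
with :command:`kubectl apply -f`, keeping only the small set of
per-test values in Python.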
Tested Topologies
-----------------

Listed CSIT container networking test topologies are defined with a DUT
containerized VPP switch forwarding packets between NF containers. Each
NF container runs its own instance of VPP in L2XC configuration.

The following container networking topologies are tested in
|csit-release|:

- LXC topologies:

  - eth-l2xcbase-eth-2memif-1lxc.
  - eth-l2bdbasemaclrn-eth-2memif-1lxc.

- Docker topologies:

  - eth-l2xcbase-eth-2memif-1docker.
  - eth-l2xcbase-eth-1memif-1docker.
References
----------

.. [lxc] `Linux Containers <https://linuxcontainers.org/>`_.
.. [lxcnamespace] `Resource management: Linux kernel Namespaces and cgroups <https://www.cs.ucsb.edu/~rich/class/cs293b-cloud/papers/lxc-namespace.pdf>`_.
.. [stgraber] `LXC 1.0: Blog post series <https://stgraber.org/2013/12/20/lxc-1-0-blog-post-series/>`_.
.. [lxcsecurity] `Linux Containers Security <https://linuxcontainers.org/lxc/security/>`_.
.. [capabilities] `Linux manual - capabilities - overview of Linux capabilities <http://man7.org/linux/man-pages/man7/capabilities.7.html>`_.
.. [cgroup1] `Linux kernel documentation: cgroups <https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt>`_.
.. [cgroup2] `Linux kernel documentation: Control Group v2 <https://www.kernel.org/doc/Documentation/cgroup-v2.txt>`_.
.. [selinux] `SELinux Project Wiki <http://selinuxproject.org/page/Main_Page>`_.
.. [lxcsecfeatures] `LXC 1.0: Security features <https://stgraber.org/2014/01/01/lxc-1-0-security-features/>`_.
.. [lxcsource] `Linux Containers source <https://github.com/lxc/lxc>`_.
.. [apparmor] `Ubuntu AppArmor <https://wiki.ubuntu.com/AppArmor>`_.
.. [seccomp] `SECure COMPuting with filters <https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt>`_.
.. [docker] `Docker <https://www.docker.com/what-docker>`_.
.. [k8sdoc] `Kubernetes documentation <https://kubernetes.io/docs/home/>`_.