
.. _containter_orchestration_in_csit:

Container Orchestration in CSIT
===============================

Overview
--------

Linux Containers
~~~~~~~~~~~~~~~~

Linux Containers is an OS-level virtualization method for running
multiple isolated Linux systems (containers) on a compute host using a
single Linux kernel. Containers rely on Linux kernel cgroups
functionality for controlling usage of shared system resources (e.g.
CPU, memory, block I/O, network) and on kernel namespaces for
isolation. The latter enables complete isolation of an application's
view of the operating environment, including process trees, networking,
user IDs and mounted file systems.

:abbr:`LXC (Linux Containers)` combines the kernel's cgroups and support
for isolated namespaces to provide an isolated environment for
applications. Docker uses LXC as one of its execution drivers, enabling
image management and providing deployment services. More information in
[lxc]_, [lxc-namespace]_ and [stgraber]_.

Linux containers can be of two kinds: privileged containers and
unprivileged containers.

Unprivileged Containers
~~~~~~~~~~~~~~~~~~~~~~~

Running unprivileged containers is the safest way to run containers in
a production environment. Since LXC 1.0 one can start a full system
container entirely as a regular user, by mapping a range of host UIDs
into a namespace inside of which a user with UID 0 can exist again. In
other words, an unprivileged container masks the user IDs from the
host, making it impossible to gain root access on the host even if a
user gets root in the container. With unprivileged containers, non-root
users can create containers and appear inside the container as root,
while appearing as a non-zero user ID on the host. Unprivileged
containers are also better suited to multi-tenancy operating
environments. More information in [lxc-security]_ and [stgraber]_.

Privileged Containers
~~~~~~~~~~~~~~~~~~~~~

Privileged containers do not mask UIDs, and container UID 0 is mapped
to host UID 0. Security and isolation are controlled by a good
configuration of cgroup access, an extensive AppArmor profile
preventing the known attacks, as well as container capabilities and
SELinux. Here is a list of applicable security control mechanisms:

- Capabilities - keep (whitelist) or drop (blacklist) Linux
  capabilities, [capabilities]_.
- Control groups - cgroups, resource bean counting, resource quotas,
  access restrictions, [cgroup1]_, [cgroup2]_.
- AppArmor - AppArmor profiles aim to prevent any of the known ways of
  escaping a container or causing harm to the host, [apparmor]_.
- SELinux - Security-Enhanced Linux is a Linux kernel security module
  that provides a similar function to AppArmor, supporting access
  control security policies including United States Department of
  Defense-style mandatory access controls. Mandatory access controls
  allow an administrator of a system to define how applications and
  users can access different resources such as files, devices, networks
  and inter-process communication, [selinux]_.
- Seccomp - secure computing mode, enables filtering of system calls,
  [seccomp]_.

More information in [lxc-security]_ and [lxc-sec-features]_.

**Linux Containers in CSIT**

CSIT uses privileged containers, as ``sysfs`` is mounted with RW
access. Sysfs is required to be mounted as RW because VPP accesses
:command:`/sys/bus/pci/drivers/uio_pci_generic/unbind`. This is not the
case with unprivileged containers, where ``sysfs`` is mounted as
read-only.


Orchestrating Container Lifecycle Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following Linux container lifecycle events need to be addressed by
an orchestration system:

1. Acquire - acquiring/downloading existing container images via
   :command:`docker pull` or :command:`lxc-create -t download`.

2. Build - building a container image from scratch or from another
   container image via :command:`docker build <dockerfile/composefile>`
   or by customizing LXC templates in
   https://github.com/lxc/lxc/tree/master/templates.

3. (Re-)Create - creating a running instance of a container application
   anew, or re-creating one that failed, a.k.a. (re-)deploy via
   :command:`docker run` or :command:`lxc-start`.

4. Execute - executing system operations within the container by
   attaching to the running container via :command:`lxc-attach` or
   :command:`docker exec`.

5. Distribute - distributing pre-built container images to the compute
   nodes. Currently not implemented in CSIT.
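
The acquire, create and execute events above can be sketched as engine
CLI invocations. The mapping below is illustrative: commands are
composed as argument lists only (nothing is executed), and the image
and container names are hypothetical.

```python
def lifecycle_commands(engine, image, name):
    """Map container lifecycle events to runtime CLI argument lists.

    Illustrative sketch only; the commands are the ones named in the
    lifecycle list above.
    """
    if engine == "Docker":
        return {
            "acquire": ["docker", "pull", image],
            "create": ["docker", "run", "--name", name, "-d", image],
            "execute": ["docker", "exec", name, "hostname"],
        }
    if engine == "LXC":
        return {
            "acquire": ["lxc-create", "-t", "download", "-n", name],
            "create": ["lxc-start", "-n", name],
            "execute": ["lxc-attach", "-n", name, "--", "hostname"],
        }
    raise ValueError("unknown engine: {}".format(engine))
```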

Container Orchestration Systems Used in CSIT
--------------------------------------------

The current CSIT testing framework integrates the following Linux
container orchestration mechanisms:

- LXC/Docker for complete VPP container lifecycle control.
- Combination of Kubernetes (container orchestration), Docker
  (container images) and Ligato (container networking).

LXC
~~~

LXC is a well-known and heavily tested low-level Linux container
runtime [lxc-source]_ that provides a userspace interface for the Linux
kernel containment features. With a powerful API and simple tools, LXC
enables Linux users to easily create and manage system or application
containers. LXC uses the following kernel features to contain
processes:

- Kernel namespaces: ipc, uts, mount, pid, network and user.
- AppArmor and SELinux security profiles.
- Seccomp policies.
- Chroot.
- Cgroups.

CSIT uses the LXC runtime and LXC usertools to test VPP data plane
performance in a range of virtual networking topologies.

**Known Issues**

- Current CSIT restriction: only a single instance of the LXC runtime
  is supported, due to the cgroup policies used in CSIT. There is a
  plan to add the capability to create cgroups per container instance
  to address this issue. This sort of functionality is better supported
  in LXC 2.1, but can be done in the current version as well.

**Open Questions**

- CSIT code currently uses cgroups to pin the LXC data plane threads to
  CPU cores after the LXC container is created. A more universal way to
  do this may be found in the future.
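
The cgroup-based pinning mentioned above can be sketched with a small
helper that writes the allowed core list into the container's cpuset
cgroup. This is a hypothetical illustration assuming a cgroup v1 cpuset
hierarchy: ``cpuset.cpus`` is the standard control file, but the
directory layout and helper name are assumptions, not CSIT's actual
keyword code.

```python
import os


def pin_container_cpus(name, cpus, cgroup_root="/sys/fs/cgroup/cpuset/lxc"):
    """Write the allowed CPU list into the container's cpuset cgroup.

    Hypothetical sketch: cgroup_root and the per-container directory
    layout are assumptions; cpuset.cpus is the standard control file.
    """
    cpuset_dir = os.path.join(cgroup_root, name)
    if not os.path.isdir(cpuset_dir):
        os.makedirs(cpuset_dir)
    path = os.path.join(cpuset_dir, "cpuset.cpus")
    with open(path, "w") as f:
        # The cpuset controller accepts a comma-separated core list.
        f.write(",".join(str(c) for c in cpus))
    return path
```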

Docker
~~~~~~

Docker builds on top of Linux kernel containment features, and offers a
high-level tool for wrapping processes, maintaining and executing them
in containers [docker]_. It currently uses *runc*, a CLI tool for
spawning and running containers according to the `OCI specification
<https://www.opencontainers.org/>`_.

A Docker container image is a lightweight, stand-alone, executable
package of a piece of software that includes everything needed to run
it: code, runtime, system tools, system libraries, settings.

CSIT uses Docker to manage the maintenance and execution of
containerized applications used in CSIT performance tests.

- Data plane thread pinning to CPU cores - the Docker CLI and/or Docker
  configuration file controls the range of CPU cores the Docker image
  must run on. VPP thread pinning is defined in the VPP startup.conf.
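
As a sketch, the CPU range can be constrained directly on the Docker
CLI. ``--cpuset-cpus`` and ``--cpuset-mems`` are standard ``docker run``
options; the helper and the example name/image values are illustrative.

```python
def docker_run_command(name, image, cpuset_cpus, cpuset_mems):
    """Build a `docker run` argument list pinning the container's CPUs.

    --cpuset-cpus / --cpuset-mems are standard Docker CLI options;
    the name and image values are illustrative.
    """
    return [
        "docker", "run", "-d",
        "--name", name,
        "--cpuset-cpus", cpuset_cpus,       # core range, e.g. "2-5"
        "--cpuset-mems", str(cpuset_mems),  # NUMA node, e.g. 0
        image,
    ]
```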


Kubernetes
~~~~~~~~~~

Kubernetes [k8s-doc]_, or K8s, is a production-grade container
orchestration platform for automating the deployment, scaling and
operation of application containers. Kubernetes groups containers that
make up an application into logical units, pods, for easy management
and discovery. K8s pod definitions, including compute resource
allocation, are provided in .yaml files.

CSIT uses K8s and its infrastructure components like etcd to control
all phases of container-based virtualized network topologies.

**Known Issues**

- Unable to properly pin K8s pods and containers to CPU cores. This
  will be addressed in Kubernetes 1.8+, where CPU pinning is in alpha
  testing.

**Open Questions**

- What functions are provided by Contiv and Calico in the Ligato
  system?

Ligato
~~~~~~

Ligato [ligato]_ is an open-source project developing a set of
cloud-native tools for orchestrating container networking. Ligato
integrates with FD.io VPP using goVPP [govpp]_ and vpp-agent
[vpp-agent]_.

**Known Issues**

**Open Questions**

- CSIT currently uses a separate LF Jenkins job for building
  CSIT-centric vpp_agent Docker images, as opposed to the
  dockerhub/ligato ones.

Implementation
--------------

CSIT container orchestration is implemented in CSIT Level-1 keyword
Python libraries following the Builder design pattern. The Builder
design pattern separates the construction of a complex object from its
representation, so that the same construction process can create
different representations, e.g. LXC, Docker, other.

CSIT Robot Framework keywords are then responsible for higher-level
lifecycle control of the named container groups. One can have multiple
named groups, with 1..N containers in a group performing different
roles/functionalities, e.g. NFs, Switch, Kafka bus, ETCD datastore,
etc. The ContainerManager class acts as a Director and uses the
ContainerEngine class, which encapsulates container control.

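The Director/Builder split described above can be sketched as follows.
Class and method names follow the UML class diagram in this section;
the method bodies are illustrative stubs, not CSIT's implementation.

```python
class ContainerEngine:
    """Builder: encapsulates container control for one engine."""

    def __init__(self, node):
        self.node = node
        self.log = []            # records lifecycle calls (illustration only)

    def acquire(self, force=False):
        self.log.append("acquire")

    def create(self):
        self.log.append("create")

    def destroy(self):
        self.log.append("destroy")


class LXC(ContainerEngine):
    pass                         # engine-specific overrides would go here


class Docker(ContainerEngine):
    pass                         # engine-specific overrides would go here


class ContainerManager:
    """Director: one construction process, many representations."""

    def __init__(self, engine):
        self.engine = engine

    def acquire_all_containers(self):
        self.engine.acquire()

    def create_all_containers(self):
        self.engine.create()

    def destroy_all_containers(self):
        self.engine.destroy()
```

The same ContainerManager sequence then drives either engine, e.g.
``ContainerManager(LXC(node))`` or ``ContainerManager(Docker(node))``.
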
The current CSIT implementation covers the following lifecycle events
and is illustrated in the UML class diagram below:

1. Acquire
2. Build
3. (Re-)Create
4. Execute

::

 +-----------------------------------------------------------------------+
 |              RF Keywords (high level lifecycle control)               |
 +-----------------------------------------------------------------------+
 | Construct VNF containers on all DUTs                                  |
 | Acquire all '${group}' containers                                     |
 | Create all '${group}' containers                                      |
 | Install all '${group}' containers                                     |
 | Configure all '${group}' containers                                   |
 | Stop all '${group}' containers                                        |
 | Destroy all '${group}' containers                                     |
 +-----------------+-----------------------------------------------------+
                   |  1
                   |
                   |  1..N
 +-----------------v-----------------+        +--------------------------+
 |          ContainerManager         |        |  ContainerEngine         |
 +-----------------------------------+        +--------------------------+
 | __init()__                        |        | __init(node)__           |
 | construct_container()             |        | acquire(force)           |
 | construct_containers()            |        | create()                 |
 | acquire_all_containers()          |        | stop()                   |
 | create_all_containers()           | 1    1 | destroy()                |
 | execute_on_container()            <>-------| info()                   |
 | execute_on_all_containers()       |        | execute(command)         |
 | install_vpp_in_all_containers()   |        | system_info()            |
 | configure_vpp_in_all_containers() |        | install_supervisor()     |
 | stop_all_containers()             |        | install_vpp()            |
 | destroy_all_containers()          |        | restart_vpp()            |
 +-----------------------------------+        | create_vpp_exec_config() |
                                              | create_vpp_startup_config|
                                              | is_container_running()   |
                                              | is_container_present()   |
                                              | _configure_cgroup()      |
                                              +-------------^------------+
                                                            |
                                                            |
                                                            |
                                                 +----------+---------+
                                                 |                    |
                                          +------+-------+     +------+-------+
                                          |     LXC      |     |    Docker    |
                                          +--------------+     +--------------+
                                          | (inherited)  |     | (inherited)  |
                                          +------+-------+     +------+-------+
                                                  |                   |
                                                  +---------+---------+
                                                            |
                                                            | constructs
                                                            |
                                                  +---------v---------+
                                                  |     Container     |
                                                  +-------------------+
                                                  | __getattr__(a)    |
                                                  | __setattr__(a, v) |
                                                  +-------------------+

The following sequence diagram illustrates the creation of a single
container ("RF KW" denotes a Robot Framework keyword):

::

 Legend:
    e  = engine [Docker|LXC]
    .. = kwargs (variable number of keyword arguments)

 +-------+                  +------------------+       +-----------------+
 | RF KW |                  | ContainerManager |       | ContainerEngine |
 +---+---+                  +--------+---------+       +--------+--------+
     |                               |                          |
     |  1: new ContainerManager(e)   |                          |
    +-+---------------------------->+-+                         |
    |-|                             |-| 2: new ContainerEngine  |
    |-|                             |-+----------------------->+-+
    |-|                             |-|                        |-|
    |-|                             +-+                        +-+
    |-|                              |                          |
    |-| 3: construct_container(..)   |                          |
    |-+---------------------------->+-+                         |
    |-|                             |-| 4: init()               |
    |-|                             |-+----------------------->+-+
    |-|                             |-|                        |-| 5: new  +-------------+
    |-|                             |-|                        |-+-------->| Container A |
    |-|                             |-|                        |-|         +-------------+
    |-|                             |-|<-----------------------+-|
    |-|                             +-+                        +-+
    |-|                              |                          |
    |-| 6: acquire_all_containers()  |                          |
    |-+---------------------------->+-+                         |
    |-|                             |-| 7: acquire()            |
    |-|                             |-+----------------------->+-+
    |-|                             |-|                        |-|
    |-|                             |-|                        |-+--+
    |-|                             |-|                        |-|  | 8: is_container_present()
    |-|                             |-|             True/False |-|<-+
    |-|                             |-|                        |-|
    |-|                             |-|                        |-|
 +---------------------------------------------------------------------------------------------+
 |  |-| ALT [isRunning & force]     |-|                        |-|--+                          |
 |  |-|                             |-|                        |-|  | 8a: destroy()            |
 |  |-|                             |-|                        |-<--+                          |
 +---------------------------------------------------------------------------------------------+
    |-|                             |-|                        |-|
    |-|                             +-+                        +-+
    |-|                              |                          |
    |-| 9: create_all_containers()   |                          |
    |-+---------------------------->+-+                         |
    |-|                             |-| 10: create()            |
    |-|                             |-+----------------------->+-+
    |-|                             |-|                        |-+--+
    |-|                             |-|                        |-|  | 11: wait('RUNNING')
    |-|                             |-|                        |-<--+
    |-|                             +-+                        +-+
    |-|                              |                          |
 +---------------------------------------------------------------------------------------------+
 |  |-| ALT                          |                          |                              |
 |  |-| (install_vpp, configure_vpp) |                          |                              |
 |  |-|                              |                          |                              |
 +---------------------------------------------------------------------------------------------+
    |-|                              |                          |
    |-| 12: destroy_all_containers() |                          |
    |-+---------------------------->+-+                         |
    |-|                             |-| 13: destroy()           |
    |-|                             |-+----------------------->+-+
    |-|                             |-|                        |-|
    |-|                             +-+                        +-+
    |-|                              |                          |
    +++                              |                          |
     |                               |                          |
     +                               +                          +

Container Data Structure
~~~~~~~~~~~~~~~~~~~~~~~~

A container is represented in the Python L1 library as a separate class
with instance variables and no methods except the overridden
``__getattr__`` and ``__setattr__``. Instance variables are assigned to
the container dynamically during the ``construct_container(**kwargs)``
call and are passed down from the RF keyword.

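A minimal sketch of such a class follows. The dunder overrides match
the UML class diagram above; the error message and the standalone
helper are illustrative, not CSIT's exact code.

```python
class Container:
    """Attribute bag populated dynamically from RF keyword kwargs."""

    def __getattr__(self, attr):
        # Called only when normal attribute lookup fails.
        raise AttributeError("Attribute not set: {}".format(attr))

    def __setattr__(self, attr, value):
        # Store every assignment directly in the instance dictionary.
        self.__dict__[attr] = value


def construct_container(**kwargs):
    """Assign all keyword arguments as instance variables (sketch)."""
    container = Container()
    for key, value in kwargs.items():
        setattr(container, key, value)
    return container
```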
Usage example:

.. code-block:: robotframework

  | Construct VNF containers on all DUTs
  | | [Arguments] | ${technology} | ${image} | ${cpu_count}=${1} | ${count}=${1}
  | | ...
  | | ${group}= | Set Variable | VNF
  | | ${guest_dir}= | Set Variable | /mnt/host
  | | ${host_dir}= | Set Variable | /tmp
  | | ${skip_cpus}= | Evaluate | ${vpp_cpus}+${system_cpus}
  | | Import Library | resources.libraries.python.ContainerUtils.ContainerManager
  | | ... | engine=${technology} | WITH NAME | ${group}
  | | ${duts}= | Get Matches | ${nodes} | DUT*
  | | :FOR | ${dut} | IN | @{duts}
  | | | ${env}= | Create List | LC_ALL="en_US.UTF-8"
  | | | ... | DEBIAN_FRONTEND=noninteractive | ETCDV3_ENDPOINTS=172.17.0.1:2379
  | | | ${cpu_node}= | Get interfaces numa node | ${nodes['${dut}']}
  | | | ... | ${dut1_if1} | ${dut1_if2}
  | | | Run Keyword | ${group}.Construct containers
  | | | ... | name=${dut}_${group}
  | | | ... | node=${nodes['${dut}']}
  | | | ... | host_dir=${host_dir}
  | | | ... | guest_dir=${guest_dir}
  | | | ... | image=${image}
  | | | ... | cpu_count=${cpu_count}
  | | | ... | cpu_skip=${skip_cpus}
  | | | ... | smt_used=${False}
  | | | ... | cpuset_mems=${cpu_node}
  | | | ... | cpu_shared=${False}
  | | | ... | env=${env}

Mandatory parameters to create a standalone container are: ``node``,
``name``, ``image`` [image-var]_, ``cpu_count``, ``cpu_skip``,
``smt_used``, ``cpuset_mems``, ``cpu_shared``.

There is no parameter-checking functionality; passing the required
arguments is the responsibility of the caller. All of the above
parameters are required to calculate the correct CPU placement. See the
documentation for the full reference.
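
To illustrate how these parameters interact, a simplified placement
calculation is sketched below. The skip/count arithmetic and the SMT
sibling offset are assumptions for illustration, not CSIT's exact
algorithm.

```python
def cpu_placement(numa_cpus, cpu_skip, cpu_count, smt_used):
    """Select cpu_count cores from numa_cpus after skipping cpu_skip cores.

    Illustrative only: assumes SMT siblings are offset by half the CPU
    count, one common (but not universal) numbering scheme.
    """
    cpus = numa_cpus[cpu_skip:cpu_skip + cpu_count]
    if smt_used:
        offset = len(numa_cpus) // 2
        cpus = cpus + [c + offset for c in cpus]
    return cpus
```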

Kubernetes
~~~~~~~~~~

Kubernetes is implemented as a separate library, ``KubernetesUtils.py``,
with a class of the same name. This utility provides an API for L2
Robot keywords to control ``kubectl`` installed on each of the DUTs. A
one-time initialization script, ``resources/libraries/bash/k8s_setup.sh``,
resets/initializes kubectl, applies Calico v2.4.1 and initializes the
``csit`` namespace. The CSIT namespace is required in order not to
interfere with existing setups, and it further simplifies
apply/get/delete Pod/ConfigMap operations on SUTs.

The Kubernetes utility is based on YAML templates, avoiding the
crafting of huge YAML configuration files, which would lower code
readability and require complicated algorithms. The templates can be
found in ``resources/templates/kubernetes`` and can be leveraged in the
future for other separate tasks.
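
The template approach can be sketched as simple placeholder
substitution. The pod fields below follow the generic Kubernetes pod
manifest shape; the template text itself is illustrative rather than a
file from the CSIT tree.

```python
# Illustrative pod template with placeholders; the real CSIT templates
# live under resources/templates/kubernetes.
POD_TEMPLATE = """\
apiVersion: v1
kind: Pod
metadata:
  name: {name}
  namespace: csit
spec:
  containers:
  - name: {name}
    image: {image}
"""


def render_pod(name, image):
    """Fill the template placeholders for one test-case deployment."""
    return POD_TEMPLATE.format(name=name, image=image)
```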

Two types of YAML templates are defined:

- Static - do not change between deployments; that is, infrastructure
  containers like Kafka, Calico, ETCD.

- Dynamic - per test suite/case topology YAML files, e.g.
  SFC_controller, VNF, VSWITCH.

Writing our own Python wrapper library for ``kubectl``, instead of
using the official Python package, allows the environment to be
controlled and deployed over the SSH library without the need for an
isolated driver running on each of the DUTs.
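
Controlling ``kubectl`` over SSH then reduces to composing command
strings and executing them remotely. A hypothetical sketch: the command
flags are standard ``kubectl`` options, while the helper name is an
assumption.

```python
def kubectl_apply_command(yaml_path, namespace="csit"):
    """Compose the kubectl command to be run on a DUT over SSH (sketch)."""
    return "kubectl apply -f {} --namespace={}".format(yaml_path, namespace)
```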

Ligato
~~~~~~

Ligato integration requires compiling the ``vpp-agent`` tool and
building the bundled Docker image. Compilation of ``vpp-agent`` depends
on a specific VPP version. The ``ligato/vpp-agent`` repository provides
well-prepared scripts for building the Docker image, which is possible
via the following series of commands:

::

  git clone https://github.com/ligato/vpp-agent
  cd vpp-agent/docker/dev_vpp_agent
  sudo docker build -t dev_vpp_agent --build-arg AGENT_COMMIT=<agent commit id>\
      --build-arg VPP_COMMIT=<vpp commit id> --no-cache .
  sudo ./shrink.sh
  cd ../prod_vpp_agent
  sudo ./build.sh
  sudo ./shrink.sh

CSIT requires the Docker image to include the desired VPP version (per
patch testing, nightly testing, on demand testing).

The entire process of building the ``dev_vpp_agent`` image heavily
depends on internet connectivity and takes a significant amount of time
(~1-1.5h depending on internet bandwidth and allocated resources). The
optimal solution would be to build the image on a Jenkins slave,
transfer the Docker image to the DUTs and execute a separate suite of
tests.

To address the amount of time required to build the ``dev_vpp_agent``
image, we can pull an existing specific version of ``dev_vpp_agent``
and extract ``vpp-agent`` from it.

We created separate sets of Jenkins jobs that execute the following
steps:

1. Clone the latest CSIT and Ligato repositories.
2. Pull a specific version of the ``dev_vpp_agent`` image from
   Dockerhub.
3. Build the ``prod_vpp_agent`` Docker image from the ``dev_vpp_agent``
   image.
4. Shrink the image using the ``docker/dev_vpp_agent/shrink.sh``
   script.
5. Transfer the ``prod_vpp_agent_shrink`` image to the DUTs.
6. Execute a subset of performance tests designed for Ligato testing.

::

 +-----------------------------------------------+
 |                  ubuntu:16.04                 <-----| Base image on Dockerhub
 +------------------------^----------------------+
                          |
                          |
 +------------------------+----------------------+
 |               ligato/dev_vpp_agent            <------| Pull this image from
 +------------------------^----------------------+      | Dockerhub ligato/dev_vpp_agent:<version>
                          |
                          | Extract agent.tar.gz from dev_vpp_agent
 +------------------------+----------------------+
 |                 prod_vpp_agent                <------| Build by passing own
 +-----------------------------------------------+      | vpp.tar.gz (from nexus
                                                        | or built by JJB) and
                                                        | agent.tar.gz extracted
                                                        | from ligato/dev_vpp_agent

The approximate sizes of the vpp-agent Docker images are:

::

  REPOSITORY            TAG       IMAGE ID        CREATED        SIZE
  dev_vpp_agent         latest    442771972e4a    8 hours ago    3.57 GB
  dev_vpp_agent_shrink  latest    bd2e76980236    8 hours ago    1.68 GB
  prod_vpp_agent        latest    e33a5551b504    2 days ago     404 MB
  prod_vpp_agent_shrink latest    446b271cce26    2 days ago     257 MB

In CSIT we need to create a separate performance suite under
``tests/kubernetes/perf``, which contains a modified suite setup
compared to the standard perf tests. This is because VPP will act as a
vswitch inside a Docker image and not as a standalone installed
service.

Tested Topologies
~~~~~~~~~~~~~~~~~

The listed CSIT container networking test topologies are defined with a
DUT containerized VPP switch forwarding packets between NF containers.
Each NF container runs its own instance of VPP in L2XC configuration.

The following container networking topologies are tested in CSIT
|release|:

- LXC topologies:

  - eth-l2xcbase-eth-2memif-1lxc.
  - eth-l2bdbasemaclrn-eth-2memif-1lxc.

- Docker topologies:

  - eth-l2xcbase-eth-2memif-1docker.

- Kubernetes/Ligato topologies:

  - eth-1drcl2xcbase-eth-2memif-1drcl2xc.
  - eth-1drcl2xcbase-eth-4memif-2drcl2xc.
  - eth-1drcl2bdbasemaclrn-eth-2memif-1drcl2xc.
  - eth-1drcl2bdbasemaclrn-eth-4memif-2drcl2xc.


References
----------

.. [lxc] `Linux Containers <https://linuxcontainers.org/>`_.
.. [lxc-namespace] `Resource management: Linux kernel Namespaces and cgroups <https://www.cs.ucsb.edu/~rich/class/cs293b-cloud/papers/lxc-namespace.pdf>`_.
.. [stgraber] `LXC 1.0: Blog post series <https://stgraber.org/2013/12/20/lxc-1-0-blog-post-series/>`_.
.. [lxc-security] `Linux Containers Security <https://linuxcontainers.org/lxc/security/>`_.
.. [capabilities] `Linux manual - capabilities - overview of Linux capabilities <http://man7.org/linux/man-pages/man7/capabilities.7.html>`_.
.. [cgroup1] `Linux kernel documentation: cgroups <https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt>`_.
.. [cgroup2] `Linux kernel documentation: Control Group v2 <https://www.kernel.org/doc/Documentation/cgroup-v2.txt>`_.
.. [selinux] `SELinux Project Wiki <http://selinuxproject.org/page/Main_Page>`_.
.. [lxc-sec-features] `LXC 1.0: Security features <https://stgraber.org/2014/01/01/lxc-1-0-security-features/>`_.
.. [lxc-source] `Linux Containers source <https://github.com/lxc/lxc>`_.
.. [apparmor] `Ubuntu AppArmor <https://wiki.ubuntu.com/AppArmor>`_.
.. [seccomp] `SECure COMPuting with filters <https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt>`_.
.. [docker] `Docker <https://www.docker.com/what-docker>`_.
.. [k8s-doc] `Kubernetes documentation <https://kubernetes.io/docs/home/>`_.
.. [ligato] `Ligato <https://github.com/ligato>`_.
.. [govpp] `FD.io goVPP project <https://wiki.fd.io/view/GoVPP>`_.
.. [vpp-agent] `Ligato vpp-agent <https://github.com/ligato/vpp-agent>`_.
.. [image-var] The image parameter is required in the initial commit version. There is a plan to implement a container build class to build Docker/LXC images.
