-.. _containter_orchestration_in_csit:
+.. _container_orchestration_in_csit:
Container Orchestration in CSIT
===============================
of functionality is better supported in LXC 2.1 but can be done in the current
version as well.
-**Open Questions**
-
-- CSIT code is currently using cgroup to pin lxc data plane thread to
- cpu cores after lxc container is created. In the future may find a
- more universal way to do it.
+- CSIT code is currently using cgroup to control the range of CPU cores the
+  LXC container runs on. VPP thread pinning is defined in the VPP startup.conf.
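
For illustration, a minimal sketch of both mechanisms; the container name and
core numbers below are examples, not CSIT defaults::

  $ sudo lxc-cgroup --name DUT1_VNF1 cpuset.cpus 2-3

  # excerpt from the container's vpp startup.conf
  cpu {
      main-core 2
      corelist-workers 3
  }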
Docker
~~~~~~
configuration file controls the range of CPU cores the Docker container
runs on. VPP thread pinning is defined in the VPP startup.conf.
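
For example, a hedged sketch of such a Docker invocation; the container name,
image name and core list are illustrative only::

  $ docker run --name DUT1_VNF1 --cpuset-cpus 2-3 --cpuset-mems 0 \
      -itd ubuntu:16.04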
-
Kubernetes
~~~~~~~~~~
CSIT uses K8s and its infrastructure components like etcd to control all
phases of container-based virtualized network topologies.
-**Known Issues**
-
-- Unable to properly pin k8s pods and containers to cpu cores. This will be
- addressed in Kubernetes 1.8+ in alpha testing.
-
-**Open Questions**
-
-- Clarify the functions provided by Contiv and Calico in Ligato system?
-
Ligato
~~~~~~
**Known Issues**
-**Open Questions**
-
- Currently using a separate LF Jenkins job for building csit-centric
- vpp_agent docker images vs. dockerhub/ligato ones.
+ prod_vpp_agent docker images vs. dockerhub/ligato ones.
Implementation
--------------
Sequence diagram that illustrates the creation of a single container.
-.. mk: what "RF KW" is meant below?
-.. mk: the flow sequence should adhere to the lifecycle events listed earlier in this doc.
-
::
Legend:
| | [Arguments] | ${container_engine} | ${container_image} | ${container_cpus}=${1} | ${container_count}=${1}
| | ...
| | ${group}= | Set Variable | VNF
- | | ${guest_dir}= | Set Variable | /mnt/host
- | | ${host_dir}= | Set Variable | /tmp
| | ${skip_cpus}= | Evaluate | ${vpp_cpus}+${system_cpus}
| | Import Library | resources.libraries.python.ContainerUtils.ContainerManager
- | | ... | engine=${technology} | WITH NAME | ${group}
+ | | ... | engine=${container_engine} | WITH NAME | ${group}
| | ${duts}= | Get Matches | ${nodes} | DUT*
| | :FOR | ${dut} | IN | @{duts}
- | | | {env}= | Create List | LC_ALL="en_US.UTF-8"
- | | | ... | DEBIAN_FRONTEND=noninteractive | ETCDV3_ENDPOINTS=172.17.0.1:2379
+ | | | ${env}= | Create List | DEBIAN_FRONTEND=noninteractive
+ | | | ${mnt}= | Create List | /tmp:/mnt/host | /dev:/dev
| | | ${cpu_node}= | Get interfaces numa node | ${nodes['${dut}']}
| | | ... | ${dut1_if1} | ${dut1_if2}
| | | Run Keyword | ${group}.Construct containers
- | | | ... | name=${dut}_${group}
- | | | ... | node=${nodes['${dut}']}
- | | | ... | host_dir=${host_dir}
- | | | ... | guest_dir=${guest_dir}
- | | | ... | image=${image}
- | | | ... | cpu_count=${cpu_count}
- | | | ... | cpu_skip=${skip_cpus}
- | | | ... | smt_used=${False}
- | | | ... | cpuset_mems=${cpu_node}
- | | | ... | cpu_shared=${False}
- | | | ... | env=${env}
+ | | | ... | name=${dut}_${group} | node=${nodes['${dut}']} | mnt=${mnt}
+ | | | ... | image=${container_image} | cpu_count=${container_cpus}
+ | | | ... | cpu_skip=${skip_cpus} | cpuset_mems=${cpu_node}
+ | | | ... | cpu_shared=${False} | env=${env} | count=${container_count}
+ | | | ... | install_dkms=${container_install_dkms}
+ | | Append To List | ${container_groups} | ${group}
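
A hypothetical invocation of the keyword above from a test case; the keyword
name is assumed here, as its definition line is not shown, and the engine and
image values are illustrative only::

  | | Construct VNF containers on all DUTs | Docker | ubuntu:16.04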
Mandatory parameters to create a standalone container are: ``node``, ``name``,
-``image`` [image-var]_, ``cpu_count``, ``cpu_skip``, ``smt_used``,
-``cpuset_mems``, ``cpu_shared``.
+``image`` [image-var]_, ``cpu_count``, ``cpu_skip``, ``cpuset_mems``,
+``cpu_shared``.
There is no parameter check functionality; passing the required arguments is
the responsibility of the caller. All the above parameters are required to
calculate the
with a class of the same name. This utility provides an API for L2
Robot Keywords to control ``kubectl`` installed on each of the DUTs. A
one-time initialization script, ``resources/libraries/bash/k8s_setup.sh``,
-does reset/init kubectl, applies Calico v2.4.1 and initializes the
+does reset/init kubectl, applies Calico v2.6.3 and initializes the
``csit`` namespace. The CSIT namespace is required to avoid interfering with
existing setups and it further simplifies the apply/get/delete
Pod/ConfigMap operations on SUTs.
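
The namespace-scoped operations then reduce to plain ``kubectl`` calls, for
example (pod name and YAML file below are illustrative)::

  $ kubectl create namespace csit        # performed once by k8s_setup.sh
  $ kubectl apply -n csit -f pod.yaml
  $ kubectl get pods -n csit
  $ kubectl delete -n csit pod <name>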
1. Clone the latest CSIT and Ligato repositories.
2. Pull specific version of ``dev_vpp_agent`` image from Dockerhub.
-3. Build ``prod_vpp_image`` Docker image from ``dev_vpp_agent`` image.
-4. Shrink image using ``docker/dev_vpp_agent/shrink.sh`` script.
-5. Transfer ``prod_vpp_agent_shrink`` image to DUTs.
-6. Execute subset of performance tests designed for Ligato testing.
+3. Extract the VPP API (from the ``.deb`` package) and copy it into the
+   ``dev_vpp_agent`` image.
+4. Rebuild the vpp-agent and extract ``agent.tar.gz`` from the image.
+5. Build the ``prod_vpp_agent`` Docker image from the ``dev_vpp_agent`` image.
+6. Transfer the ``prod_vpp_agent`` image to DUTs.
+7. Execute a subset of performance tests designed for Ligato testing.
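
A condensed shell sketch of steps 2. to 6.; the image tag, paths and build
context are illustrative, not the exact CSIT build scripts::

  $ docker pull ligato/dev-vpp-agent:<version>
  $ docker run -itd --name dev ligato/dev-vpp-agent:<version>
  $ docker cp vpp-api/ dev:/opt/vpp-agent/        # inject extracted VPP API
  $ docker exec dev make build                    # rebuild vpp-agent
  $ docker cp dev:/root/agent.tar.gz .            # extract rebuilt agent
  $ docker build -t prod_vpp_agent .              # build production image
  $ docker save prod_vpp_agent | gzip > prod_vpp_agent.tar.gz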
::
| ligato/dev_vpp_agent <------| Pull this image from
+------------------------^----------------------+ | Dockerhub ligato/dev_vpp_agent:<version>
|
- | Extract agent.tar.gz from dev_vpp_agent
+ | Rebuild and extract agent.tar.gz from dev_vpp_agent
+------------------------+----------------------+
| prod_vpp_agent <------| Build by passing own
+-----------------------------------------------+ | vpp.tar.gz (from nexus
::
REPOSITORY TAG IMAGE ID CREATED SIZE
- dev_vpp_agent latest 442771972e4a 8 hours ago 3.57 GB
- dev_vpp_agent_shrink latest bd2e76980236 8 hours ago 1.68 GB
- prod_vpp_agent latest e33a5551b504 2 days ago 404 MB
- prod_vpp_agent_shrink latest 446b271cce26 2 days ago 257 MB
+ dev-vpp-agent latest 78c53bd57e2 6 weeks ago 9.79GB
+ prod_vpp_agent latest f68af5afe601 5 weeks ago 443MB
In CSIT we need to create a separate performance suite under
``tests/kubernetes/perf``, which contains a modified Suite setup in comparison
containerized VPP switch forwarding packets between NF containers. Each
NF container runs its own instance of VPP in L2XC configuration.
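
As an illustration, inside each NF container the L2XC setup amounts to
cross-connecting the two memif interfaces in the VPP CLI; the interface names
below assume two memif interfaces were already created::

  set interface state memif1/0 up
  set interface state memif2/0 up
  set interface l2 xconnect memif1/0 memif2/0
  set interface l2 xconnect memif2/0 memif1/0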
-Following container networking topologies are tested in CSIT |release|:
+Following container networking topologies are tested in |csit-release|:
- LXC topologies:
- Docker topologies:
- eth-l2xcbase-eth-2memif-1docker.
+ - eth-l2xcbase-eth-1memif-1docker.
- Kubernetes/Ligato topologies:
- - eth-1drcl2xcbase-eth-2memif-1drcl2xc.
- - eth-1drcl2xcbase-eth-4memif-2drcl2xc.
- - eth-1drcl2bdbasemaclrn-eth-2memif-1drcl2xc.
- - eth-1drcl2bdbasemaclrn-eth-4memif-2drcl2xc.
-
+ - eth-1drcl2bdbasemaclrn-eth-2memif-1drcl2xc-1paral.
+ - eth-1drcl2bdbasemaclrn-eth-2memif-2drcl2xc-1horiz.
+ - eth-1drcl2bdbasemaclrn-eth-2memif-4drcl2xc-1horiz.
+ - eth-1drcl2bdbasemaclrn-eth-4memif-2drcl2xc-1chain.
+ - eth-1drcl2bdbasemaclrn-eth-8memif-4drcl2xc-1chain.
+ - eth-1drcl2xcbase-eth-2memif-1drcl2xc-1paral.
+ - eth-1drcl2xcbase-eth-2memif-2drcl2xc-1horiz.
+ - eth-1drcl2xcbase-eth-2memif-4drcl2xc-1horiz.
+ - eth-1drcl2xcbase-eth-4memif-2drcl2xc-1chain.
+ - eth-1drcl2xcbase-eth-8memif-4drcl2xc-1chain.
References
-----------
+~~~~~~~~~~
.. [lxc] `Linux Containers <https://linuxcontainers.org/>`_.
.. [lxc-namespace] `Resource management: Linux kernel Namespaces and cgroups <https://www.cs.ucsb.edu/~rich/class/cs293b-cloud/papers/lxc-namespace.pdf>`_.