This document describes how to clone the Contiv repository and then use
`kubeadm <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>`__
to manually install Kubernetes with Contiv-VPP networking on one or more
bare-metal or VM hosts.

Clone the Contiv Repository
---------------------------

To clone the Contiv repository, enter the following command:

git clone https://github.com/contiv/vpp.git <repository-name>

**Note:** Replace ``<repository-name>`` with the name you want assigned to your
cloned repository.

The cloned repository contains several important folders whose content is
referenced throughout this Contiv documentation; those folders are shown in
the listing below:

build       build-root  doxygen  gmod       LICENSE      Makefile   RELEASE.md   src
build-data  docs        extras   INFO.yaml  MAINTAINERS  README.md  sphinx_venv  test

Host-specific Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- **VMware VMs**: the vmxnet3 driver is required on each interface that
  will be used by VPP. Please see
  `here <https://github.com/contiv/vpp/tree/master/docs/VMWARE_FUSION_HOST.md>`__
  for instructions on how to install the vmxnet3 driver on VMware Fusion.

Setting up Network Adapter(s)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DPDK setup must be completed **on each node** as follows:

- Load the PCI UIO driver:

  $ sudo modprobe uio_pci_generic

- Verify that the PCI UIO driver has loaded successfully:

  $ lsmod | grep uio
  uio_pci_generic       16384  0
  uio                   20480  1 uio_pci_generic

Please note that this driver needs to be loaded upon each server
bootup, so you may want to add ``uio_pci_generic`` into the
``/etc/modules`` file, or a file in the ``/etc/modules-load.d/``
directory. For example, the ``/etc/modules`` file could look as
follows:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
uio_pci_generic
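
On a systemd-based distribution, you can persist the module with a one-liner
instead of editing ``/etc/modules`` by hand (the file name under
``/etc/modules-load.d/`` below is only an example):

.. code:: bash

   # Persist the uio_pci_generic module across reboots
   # (any *.conf file name under /etc/modules-load.d/ works)
   echo uio_pci_generic | sudo tee /etc/modules-load.d/uio_pci_generic.conf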

.. rubric:: Determining Network Adapter PCI Addresses
   :name: determining-network-adapter-pci-addresses

You need the PCI address of the network interface that VPP will use
for the multi-node pod interconnect. On Debian-based distributions,
you can use ``lshw``:

$ sudo lshw -class network -businfo
Bus info          Device  Class    Description
=========================================================
pci@0000:00:03.0  ens3    network  Virtio network device
pci@0000:00:04.0  ens4    network  Virtio network device

**Note:** On CentOS/RedHat/Fedora distributions, ``lshw`` may not be
available by default; install it by issuing the following command:
``yum -y install lshw``
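
If ``lshw`` is not an option, ``lspci`` (from the pciutils package) lists the
same PCI addresses; this is a generic alternative, not a Contiv-specific tool:

.. code:: bash

   # List Ethernet-class PCI devices together with their PCI addresses
   lspci -nn | grep -i ethernet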

Configuring vswitch to Use Network Adapters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Finally, you need to set up the vswitch to use the network adapters:

- `Setup on a node with a single
  NIC <https://github.com/contiv/vpp/tree/master/docs/SINGLE_NIC_SETUP.md>`__
- `Setup on a node with multiple
  NICs <https://github.com/contiv/vpp/tree/master/docs/MULTI_NIC_SETUP.md>`__
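
The linked guides are authoritative; as a rough sketch of what they amount to,
VPP's startup configuration is pointed at the PCI address determined above. The
file path and stanzas below are illustrative assumptions (they may differ
between Contiv-VPP releases), not a verbatim copy of the guides:

.. code:: bash

   # Illustrative only -- follow the single-/multi-NIC setup guides above.
   # Tell VPP's DPDK driver which NIC to take over (PCI address from lshw).
   cat <<'EOF' | sudo tee /etc/vpp/contiv-vswitch.conf
   unix {
      nodaemon
      cli-listen /run/vpp/cli.sock
   }
   dpdk {
      dev 0000:00:03.0
   }
   EOF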

Using a Node Setup Script
~~~~~~~~~~~~~~~~~~~~~~~~~

You can perform the above steps using the `node setup
script <https://github.com/contiv/vpp/tree/master/k8s/README.md#setup-node-sh>`__.

Installing Kubernetes with Contiv-VPP CNI plugin
------------------------------------------------

After the nodes you will be using in your K8s cluster are prepared, you
can install the cluster using
`kubeadm <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>`__.

(1/4) Installing Kubeadm on Your Hosts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For first-time installation, see `Installing
kubeadm <https://kubernetes.io/docs/setup/independent/install-kubeadm/>`__.
To update an existing installation, run
``apt-get update && apt-get upgrade`` or ``yum update`` to get the
latest version of kubeadm.

On each host with multiple NICs, if the NIC that will be used for
Kubernetes management traffic is not the one pointed to by the default
route out of the host, a `custom management
network <https://github.com/contiv/vpp/tree/master/docs/CUSTOM_MGMT_NETWORK.md>`__
for Kubernetes must be configured.

Using Kubernetes 1.10 and Above
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

K8s 1.10 introduced support for huge pages in pods. For now, this
feature must either be disabled, or a memory limit must be defined for
the vswitch container.

To disable huge pages, perform the following steps as root:

- Using your favorite editor, disable huge pages in the kubelet
  configuration file (``/etc/systemd/system/kubelet.service.d/10-kubeadm.conf``
  or ``/etc/default/kubelet`` for version 1.11+):

  Environment="KUBELET_EXTRA_ARGS=--feature-gates HugePages=false"

- Restart the kubelet daemon:

  systemctl daemon-reload
  systemctl restart kubelet

To define the memory limit, append the following snippet to the vswitch
container in the deployment YAML file:

resources:
  limits:
    hugepages-2Mi: 1024Mi
    memory: 1024Mi

Alternatively, set ``contiv.vswitch.defineMemoryLimits`` to ``true`` in the `helm
values <https://github.com/contiv/vpp/blob/master/k8s/contiv-vpp/README.md>`__.
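
As a sketch of the Helm route (the exact chart path and Helm invocation depend
on your Helm version and checkout; the linked README has the authoritative
steps), the value could be set when rendering the chart from the cloned repo:

.. code:: bash

   # Hypothetical invocation -- render the chart with memory limits enabled,
   # then apply the generated manifest.
   helm template k8s/contiv-vpp --set contiv.vswitch.defineMemoryLimits=true > contiv-vpp.yaml
   kubectl apply -f contiv-vpp.yaml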

(2/4) Initializing Your Master
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before initializing the master, you may want to
`remove <#tearing-down-kubernetes>`__ any previously installed K8s
components. Then, proceed with master initialization as described in the
`kubeadm
manual <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#initializing-your-master>`__.
Execute the following command as root:

kubeadm init --token-ttl 0 --pod-network-cidr=10.1.0.0/16

**Note:** ``kubeadm init`` will autodetect the network interface to
advertise the master on as the interface with the default gateway. If
you want to use a different interface (e.g., a custom management network
setup), specify the ``--apiserver-advertise-address=<ip-address>``
argument to ``kubeadm init``. For example:

kubeadm init --token-ttl 0 --pod-network-cidr=10.1.0.0/16 --apiserver-advertise-address=192.168.56.106

**Note:** The CIDR specified with the flag ``--pod-network-cidr`` is
used by kube-proxy, and it **must include** the ``PodSubnetCIDR`` from
the ``IPAMConfig`` section in the Contiv-VPP config map in Contiv-VPP's
deployment file
`contiv-vpp.yaml <https://github.com/contiv/vpp/blob/master/k8s/contiv-vpp/values.yaml>`__.
Pods in the host network namespace are a special case; they share their
respective interfaces and IP addresses with the host. For proxying to
work properly, it is therefore required that services with backends
running on the host also **include the node management IP** within
the ``--pod-network-cidr`` subnet. For example, with the default
``PodSubnetCIDR=10.1.0.0/16`` and ``PodIfIPCIDR=10.2.1.0/24``, the
subnet ``10.3.0.0/16`` could be allocated for the management network and
``--pod-network-cidr`` could be defined as ``10.0.0.0/8``, so as to
include the IP addresses of all pods in all network namespaces:

kubeadm init --token-ttl 0 --pod-network-cidr=10.0.0.0/8 --apiserver-advertise-address=10.3.1.1
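
If you want to double-check that your chosen ``--pod-network-cidr`` really
covers the pod and management subnets, a generic containment check (not part of
the Contiv tooling; ``subnet_of`` requires Python 3.7+) looks like this:

.. code:: bash

   # Should print True: 10.0.0.0/8 contains the pod, pod-interface and management subnets
   python3 -c "import ipaddress; outer = ipaddress.ip_network('10.0.0.0/8'); print(all(ipaddress.ip_network(s).subnet_of(outer) for s in ('10.1.0.0/16', '10.2.1.0/24', '10.3.0.0/16')))"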

If Kubernetes was initialized successfully, it prints out this message:

Your Kubernetes master has initialized successfully!

After successful initialization, don’t forget to set up your .kube
directory as a regular user (as instructed by ``kubeadm``):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
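
At this point, ``kubectl`` should be able to talk to the new cluster as the
regular user; a quick sanity check:

.. code:: bash

   # The master node should be listed (it stays NotReady until a pod network is installed)
   kubectl get nodes -o wide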

(3/4) Installing the Contiv-VPP Pod Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you have already used the Contiv-VPP plugin before, you may need to
pull the most recent Docker images on each node:

bash <(curl -s https://raw.githubusercontent.com/contiv/vpp/master/k8s/pull-images.sh)

Install the Contiv-VPP network for your cluster as follows:

- If you do not use the STN feature, install Contiv-VPP as follows:

  kubectl apply -f https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml

- If you use the STN feature, download the ``contiv-vpp.yaml`` file:

  wget https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml

  Then edit the STN configuration as described
  `here <https://github.com/contiv/vpp/tree/master/docs/SINGLE_NIC_SETUP.md#configuring-stn-in-contiv-vpp-k8s-deployment-files>`__.
  Finally, create the Contiv-VPP deployment from the edited file:

  kubectl apply -f ./contiv-vpp.yaml

Beware: contiv-etcd data is persisted in ``/var/etcd`` by default. It has
to be cleaned up manually after ``kubeadm reset``; otherwise, outdated
data will be loaded by a subsequent deployment.

Alternatively, you can store the data in a randomly generated subfolder:

curl --silent https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml | sed "s/\/var\/etcd\/contiv-data/\/var\/etcd\/contiv-data\/$RANDOM/g" | kubectl apply -f -

Deployment Verification
^^^^^^^^^^^^^^^^^^^^^^^

After some time, all contiv containers should enter the running state:

root@cvpp:/home/jan# kubectl get pods -n kube-system -o wide | grep contiv
NAME                   READY  STATUS   RESTARTS  AGE  IP              NODE
contiv-etcd-gwc84      1/1    Running  0         14h  192.168.56.106  cvpp
contiv-ksr-5c2vk       1/1    Running  2         14h  192.168.56.106  cvpp
contiv-vswitch-l59nv   2/2    Running  0         14h  192.168.56.106  cvpp

In particular, make sure that the Contiv-VPP pod IP addresses are the
same as the IP address specified in the
``--apiserver-advertise-address=<ip-address>`` argument to ``kubeadm init``.

Verify that VPP successfully grabbed the network interface specified
in the VPP startup config (``GigabitEthernet0/4/0`` in our case), e.g. by
running ``show interface`` in the VPP debug CLI:

Name                  Idx  State  Counter     Count
GigabitEthernet0/4/0  1    up     rx packets  1294
host-40df9b44c3d42f4  3    up     rx packets  126601
host-vppv2            2    up     rx packets  132162

You should also see the interface to kube-dns (``host-40df9b44c3d42f4``)
and to the node’s IP stack (``host-vppv2``).

Master Isolation (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, your cluster will not schedule pods on the master for
security reasons. If you want to be able to schedule pods on the master
(e.g., for a single-machine Kubernetes cluster for development), run:

kubectl taint nodes --all node-role.kubernetes.io/master-

More details about installing the pod network can be found in the
`kubeadm
manual <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network>`__.

(4/4) Joining Your Nodes
~~~~~~~~~~~~~~~~~~~~~~~~

To add a new node to your cluster, run as root the command that was
output by ``kubeadm init``. For example:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
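
If you no longer have that command (e.g. the bootstrap token has expired), a
fresh join command can be generated on the master:

.. code:: bash

   # Run on the master: creates a new bootstrap token and prints the full join command
   kubeadm token create --print-join-command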

More details can be found in the `kubeadm
manual <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#joining-your-nodes>`__.

.. _deployment-verification-1:

Deployment Verification
^^^^^^^^^^^^^^^^^^^^^^^

After some time, all contiv containers should enter the running state:

root@cvpp:/home/jan# kubectl get pods -n kube-system -o wide
NAME                          READY  STATUS   RESTARTS  AGE  IP              NODE
contiv-etcd-gwc84             1/1    Running  0         14h  192.168.56.106  cvpp
contiv-ksr-5c2vk              1/1    Running  2         14h  192.168.56.106  cvpp
contiv-vswitch-h6759          2/2    Running  0         14h  192.168.56.105  cvpp-slave2
contiv-vswitch-l59nv          2/2    Running  0         14h  192.168.56.106  cvpp
etcd-cvpp                     1/1    Running  0         14h  192.168.56.106  cvpp
kube-apiserver-cvpp           1/1    Running  0         14h  192.168.56.106  cvpp
kube-controller-manager-cvpp  1/1    Running  0         14h  192.168.56.106  cvpp
kube-dns-545bc4bfd4-fr6j9     3/3    Running  0         14h  10.1.134.2      cvpp
kube-proxy-q8sv2              1/1    Running  0         14h  192.168.56.106  cvpp
kube-proxy-s8kv9              1/1    Running  0         14h  192.168.56.105  cvpp-slave2
kube-scheduler-cvpp           1/1    Running  0         14h  192.168.56.106  cvpp

In particular, verify that a vswitch pod and a kube-proxy pod are running
on each joined node, as shown above.
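
To check a single node at a time, the pod listing can be filtered by node name
(``cvpp-slave2`` below is the example worker from the output above):

.. code:: bash

   # List kube-system pods scheduled on one particular node
   kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=cvpp-slave2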

On each joined node, verify that VPP successfully grabbed the
network interface specified in the VPP startup config
(``GigabitEthernet0/4/0`` in our case):

Name                  Idx  State  Counter  Count
GigabitEthernet0/4/0  1    up

From the VPP CLI on a joined node, you can also ping kube-dns to verify
node-to-node connectivity. For example:

vpp# ping 10.1.134.2
64 bytes from 10.1.134.2: icmp_seq=1 ttl=64 time=.1557 ms
64 bytes from 10.1.134.2: icmp_seq=2 ttl=64 time=.1339 ms
64 bytes from 10.1.134.2: icmp_seq=3 ttl=64 time=.1295 ms
64 bytes from 10.1.134.2: icmp_seq=4 ttl=64 time=.1714 ms
64 bytes from 10.1.134.2: icmp_seq=5 ttl=64 time=.1317 ms

Statistics: 5 sent, 5 received, 0% packet loss

Deploying Example Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can go ahead and create a simple deployment:

$ kubectl run nginx --image=nginx --replicas=2
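
Note that on recent kubectl releases ``kubectl run`` only creates a single pod;
if the command above does not create a Deployment for you, an equivalent is:

.. code:: bash

   # Equivalent on kubectl 1.19+ where `kubectl run` no longer creates Deployments
   kubectl create deployment nginx --image=nginx --replicas=2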

Use ``kubectl describe pod`` to get the IP address of a pod, e.g.:

$ kubectl describe pod nginx | grep IP

You should see two IP addresses, for example:

You can check the pods’ connectivity in one of the following ways:

- Connect to the VPP debug CLI and ping any pod:

- Start busybox and ping any pod:

  kubectl run busybox --rm -ti --image=busybox /bin/sh
  If you don't see a command prompt, try pressing enter.

- You should be able to ping any pod from the host:

Deploying Pods on Different Nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To enable pod deployment on the master, untaint the master first:

kubectl taint nodes --all node-role.kubernetes.io/master-

In order to verify inter-node pod connectivity, we need to tell
Kubernetes to deploy one pod on the master node and one pod on the
worker. For this, we can use node selectors.

In your deployment YAMLs, add a ``nodeSelector`` section that refers
to the preferred node hostname, e.g.:

nodeSelector:
  kubernetes.io/hostname: vm5

In the complete pod definitions, ``nginx1`` is pinned to node ``vm5``
(``kubernetes.io/hostname: vm5``) and ``nginx2`` to node ``vm6``
(``kubernetes.io/hostname: vm6``); a sketch follows below.
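
A minimal sketch of the two pod definitions (the node hostnames ``vm5``/``vm6``
and the plain ``nginx`` image are assumptions matching the verification output
below; adjust them to your cluster):

.. code:: bash

   # Two nginx pods, each pinned to a different node via nodeSelector
   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Pod
   metadata:
     name: nginx1
   spec:
     containers:
     - name: nginx
       image: nginx
     nodeSelector:
       kubernetes.io/hostname: vm5
   ---
   apiVersion: v1
   kind: Pod
   metadata:
     name: nginx2
   spec:
     containers:
     - name: nginx
       image: nginx
     nodeSelector:
       kubernetes.io/hostname: vm6
   EOF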

After deploying the pod definitions, verify that they were deployed on
different hosts:

$ kubectl get pods -o wide
NAME    READY  STATUS   RESTARTS  AGE  IP          NODE
nginx1  1/1    Running  0         13m  10.1.36.2   vm5
nginx2  1/1    Running  0         13m  10.1.219.3  vm6

Now you can verify the connectivity to both nginx pods from a busybox pod:

kubectl run busybox --rm -it --image=busybox /bin/sh

/ # wget 10.1.36.2
Connecting to 10.1.36.2 (10.1.36.2:80)
index.html           100% |***********************************************|   612  0:00:00 ETA

/ # wget 10.1.219.3
Connecting to 10.1.219.3 (10.1.219.3:80)
index.html           100% |***********************************************|   612  0:00:00 ETA

Uninstalling Contiv-VPP
~~~~~~~~~~~~~~~~~~~~~~~

To uninstall the network plugin itself, use ``kubectl``:

kubectl delete -f https://raw.githubusercontent.com/contiv/vpp/master/k8s/contiv-vpp.yaml
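
As noted earlier, contiv-etcd data persisted under ``/var/etcd`` is not removed
automatically; for a completely clean slate, remove it manually on each node:

.. code:: bash

   # Optional cleanup of the persisted contiv-etcd data (default location)
   sudo rm -rf /var/etcd/contiv-data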

Tearing down Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~

- First, drain the node and make sure that the node is empty before
  shutting it down:

  kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
  kubectl delete node <node name>

- Next, on the node being removed, reset all kubeadm installed state:

  kubeadm reset

- If you added environment variable definitions into
  ``/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`` (as described in
  the `Custom Management Network
  file <https://github.com/contiv/vpp/blob/master/docs/CUSTOM_MGMT_NETWORK.md#setting-up-a-custom-management-network-on-multi-homed-nodes>`__),
  remove those definitions now.

Some of the issues that can occur during the installation are:

- Forgetting to create and initialize the ``.kube`` directory in your
  home directory (as instructed by ``kubeadm init --token-ttl 0``).
  This can manifest itself as the following error:

  W1017 09:25:43.403159 2233 factory_object_mapping.go:423] Failed to download OpenAPI (Get https://192.168.209.128:6443/swagger-2.0.0.pb-v1: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")), falling back to swagger
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

- Previous installation lingering on the file system.
  ``kubeadm init --token-ttl 0`` fails to initialize kubelet with one
  or more of the following error messages:

  [kubelet-check] It seems like the kubelet isn't running or healthy.
  [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.

If you run into any of the above issues, try to clean up and reinstall
Kubernetes:

rm -rf /var/etcd/contiv-data
rm -rf /var/bolt/bolt.db
kubeadm init --token-ttl 0

Contiv-specific kubeadm installation on Aarch64
-----------------------------------------------

Supplemental instructions apply when using Contiv-VPP on Aarch64. Most
installation steps for Aarch64 are the same as those described earlier in
this chapter, so you should read this chapter first before starting the
installation on the Aarch64 platform.

Use the `Aarch64-specific kubeadm install
instructions <https://github.com/contiv/vpp/blob/master/docs/arm64/MANUAL_INSTALL_ARM64.md>`__
to manually install Kubernetes with Contiv-VPP networking on one or more
bare-metal Aarch64 hosts.