Contiv-VPP Vagrant Installation
===============================
Prerequisites
-------------

The following items are prerequisites before installing Vagrant:

- Vagrant 2.0.1 or later
- Hypervisors:

  - VirtualBox 5.2.8 or later
  - VMware Fusion 10.1.0 or later, or VMware Workstation 14
  - For VMware Fusion, you will need the `Vagrant VMware Fusion
    plugin <https://www.vagrantup.com/vmware/index.html>`__

- Laptop or server with at least 4 CPU cores and 16 GB of RAM
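
Before provisioning, you can quickly confirm that the host meets these
requirements. The commands below are a minimal sketch using the standard
Vagrant and VirtualBox CLIs plus common Linux utilities; adjust them for
your hypervisor and host OS:

::

   vagrant --version       # expect 2.0.1 or later
   VBoxManage --version    # expect 5.2.8 or later (VirtualBox hosts)
   nproc                   # expect at least 4 CPU cores
   free -g                 # expect at least 16 GB of RAM (Linux hosts)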
Creating / Shutting Down / Destroying the Cluster
-------------------------------------------------
This folder contains the Vagrantfile that is used to create a single-node
or multi-node Kubernetes cluster using Contiv-VPP as the network plugin.

The folder is organized into two subfolders:
- ``config`` - contains the files that share cluster information, which
  are used during the provisioning stage (master IP address,
  certificates, hash keys). **CAUTION:** Editing is not recommended!
- ``vagrant`` - contains scripts that are used for creating, destroying,
  rebooting and shutting down the VMs that host the K8s cluster.
To create and run a K8s cluster with the *contiv-vpp CNI* plugin, run the
``vagrant-start`` script, located in the `vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__. The
``vagrant-start`` script prompts the user to select the number of worker
nodes for the Kubernetes cluster. Zero (0) worker nodes means that a
single-node cluster (with one Kubernetes master node) will be deployed.
Next, the user is prompted to select either the *production environment*
or the *development environment*. Instructions on how to build the
development *contiv/vpp-vswitch* image can be found below in the
`development
environment <#building-and-deploying-the-dev-contiv-vswitch-image>`__
section.
The last option asks the user to select either *Without StealTheNIC* or
*With StealTheNIC*. With the *With StealTheNIC* option, the plugin
“steals” interfaces owned by Linux and uses their configuration in VPP.
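
If you plan to choose *With StealTheNIC*, it can help to first look at
which interfaces Linux currently owns inside the VM, since one of them is
what the plugin will take over. A short sketch using standard Linux tools
(the interface name ``enp0s8`` is illustrative only):

::

   ip link show          # list interfaces currently owned by Linux
   ip addr show enp0s8   # inspect the configuration VPP would inherit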
For the production environment, enter the following commands:

::

   $ ./vagrant-start
   Please provide the number of workers for the Kubernetes cluster (0-50) or enter [Q/q] to exit: 1

   Please choose Kubernetes environment:
   1) Production environment
   2) Development environment
   3) Quit
   --> 1
   You chose Production environment

   Please choose deployment scenario:
   1) Without StealTheNIC
   2) With StealTheNIC
   3) Quit
   --> 1
   You chose deployment without StealTheNIC

   Creating a production environment, without STN and 1 worker node(s)
For the development environment, enter the following commands:

::

   $ ./vagrant-start
   Please provide the number of workers for the Kubernetes cluster (0-50) or enter [Q/q] to exit: 1

   Please choose Kubernetes environment:
   1) Production environment
   2) Development environment
   3) Quit
   --> 2
   You chose Development environment

   Please choose deployment scenario:
   1) Without StealTheNIC
   2) With StealTheNIC
   3) Quit
   --> 1
   You chose deployment without StealTheNIC

   Creating a development environment, without STN and 1 worker node(s)
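
Once provisioning completes, you can confirm that the VMs are up with the
standard Vagrant CLI; each node (e.g. ``k8s-master``, ``k8s-worker1``)
should be reported as ``running``:

::

   vagrant status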
To destroy and clean up the cluster, run the *vagrant-cleanup* script,
located `inside the vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__:

::

   ./vagrant-cleanup
To shut down the cluster, run the *vagrant-shutdown* script, located
`inside the vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__:

::

   ./vagrant-shutdown
- To reboot the cluster, run the *vagrant-reload* script, located
  `inside the vagrant
  folder <https://github.com/contiv/vpp/tree/master/vagrant>`__:

  ::

     ./vagrant-reload
- From a suspended state, or after a reboot of the host machine, the
  cluster can be brought up by running the *vagrant-up* script.
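
The scripts above appear to wrap the standard Vagrant lifecycle commands;
if you prefer, the equivalents can be run directly from the vagrant
folder (this mapping is an assumption about the scripts' behavior, not a
documented guarantee):

::

   vagrant halt        # shut down all VMs (vagrant-shutdown)
   vagrant reload      # reboot all VMs (vagrant-reload)
   vagrant up          # bring halted/suspended VMs back up (vagrant-up)
   vagrant destroy -f  # destroy all VMs (vagrant-cleanup)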
Building and Deploying the dev-contiv-vswitch Image
---------------------------------------------------
If you chose the development environment deployment option, perform the
following steps to build a modified *contivvpp/vswitch* image:
- Make sure changes in the code have been saved. From the k8s-master
  node, build the new *contivvpp/vswitch* image (run as sudo):

  ::

     vagrant ssh k8s-master
     sudo ./save-dev-image
- The newly built *contivvpp/vswitch* image is now tagged as *latest*.
  Verify the build with ``sudo docker images``; the *contivvpp/vswitch*
  image should have been created a few seconds ago. The new image with
  all the changes must become available to all the nodes in the K8s
  cluster. To make the changes available to all nodes, load the Docker
  image into each running worker node (run as sudo):

  ::

     vagrant ssh k8s-worker1
     sudo ./load-dev-image
- Verify with ``sudo docker images``; the old *contivvpp/vswitch* image
  should now be tagged as ``<none>``, and the latest tagged
  *contivvpp/vswitch* image should have been created a few seconds ago.
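
As a quick check on any node, ``docker images`` accepts a repository
filter, so you can list just the vswitch images and compare their tags
and creation times:

::

   sudo docker images contivvpp/vswitch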
Exploring the Cluster
---------------------
Once the cluster is up, perform the following steps:

- Log into the master node:

  ::

     vagrant ssh k8s-master

     Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)

      * Documentation:  https://help.ubuntu.com/
     vagrant@k8s-master:~$
- Verify the Kubernetes/Contiv-VPP installation. First, verify the
  nodes in the cluster:

  ::

     vagrant@k8s-master:~$ kubectl get nodes -o wide

     NAME          STATUS   ROLES    AGE   VERSION   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
     k8s-master    Ready    master   22m   v1.9.2    <none>        Ubuntu 16.04 LTS   4.4.0-21-generic   docker://17.12.0-ce
     k8s-worker1   Ready    <none>   15m   v1.9.2    <none>        Ubuntu 16.04 LTS   4.4.0-21-generic   docker://17.12.0-ce
- Next, verify that all pods are running correctly:

  ::

     vagrant@k8s-master:~$ kubectl get pods -n kube-system -o wide

     NAME                                 READY   STATUS             RESTARTS   AGE   IP             NODE
     contiv-etcd-2ngdc                    1/1     Running            0          17m   192.169.1.10   k8s-master
     contiv-ksr-x7gsq                     1/1     Running            3          17m   192.169.1.10   k8s-master
     contiv-vswitch-9bql6                 2/2     Running            0          17m   192.169.1.10   k8s-master
     contiv-vswitch-hpt2x                 2/2     Running            0          10m   192.169.1.11   k8s-worker1
     etcd-k8s-master                      1/1     Running            0          16m   192.169.1.10   k8s-master
     kube-apiserver-k8s-master            1/1     Running            0          16m   192.169.1.10   k8s-master
     kube-controller-manager-k8s-master   1/1     Running            0          15m   192.169.1.10   k8s-master
     kube-dns-6f4fd4bdf-62rv4             2/3     CrashLoopBackOff   14         17m   10.1.1.2       k8s-master
     kube-proxy-bvr74                     1/1     Running            0          10m   192.169.1.11   k8s-worker1
     kube-proxy-v4fzq                     1/1     Running            0          17m   192.169.1.10   k8s-master
     kube-scheduler-k8s-master            1/1     Running            0          16m   192.169.1.10   k8s-master
- If you want your pods to be scheduled on both the master and the
  workers, you have to untaint the master node:

  ::

     vagrant@k8s-master:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
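
  You can confirm that the taint was removed by inspecting the node
  description (a quick grep-based check; the ``Taints`` field should now
  be ``<none>``):

  ::

     vagrant@k8s-master:~$ kubectl describe node k8s-master | grep Taints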
- Check VPP and its interfaces:

  ::

     vagrant@k8s-master:~$ sudo vppctl
         _______    _        _   _____  ___
      __/ __/ _ \  (_)__    | | / / _ \/ _ \
      _/ _// // / / / _ \   | |/ / ___/ ___/
      /_/ /____(_)_/\___/   |___/_/  /_/

     vpp# show interface
                   Name               Idx       State          Counter          Count
     GigabitEthernet0/8/0              1         up       rx packets                  14

- Make sure that ``GigabitEthernet0/8/0`` is listed and that its state
  is *up*.
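
  From the same ``vppctl`` prompt you can inspect further; the commands
  below are standard VPP CLI commands, shown here as a sketch with
  output omitted:

  ::

     vpp# show interface address    # L3 addresses on VPP interfaces
     vpp# show ip fib               # VPP's IPv4 forwarding table
     vpp# quit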
- Next, create an example deployment of nginx pods:

  ::

     vagrant@k8s-master:~$ kubectl run nginx --image=nginx --replicas=2
     deployment "nginx" created
- Check the status of the deployment:

  ::

     vagrant@k8s-master:~$ kubectl get deploy -o wide

     NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES    SELECTOR
     nginx     2         2         2            2           2h        nginx        nginx     run=nginx
- Verify that the pods in the deployment are up and running:

  ::

     vagrant@k8s-master:~$ kubectl get pods -o wide

     NAME                   READY     STATUS    RESTARTS   AGE       IP         NODE
     nginx-8586cf59-6kx2m   1/1       Running   1          1h        10.1.2.3   k8s-worker1
     nginx-8586cf59-j5vf9   1/1       Running   1          1h        10.1.2.2   k8s-worker1
- Issue an HTTP GET request to a pod in the deployment:

  ::

     vagrant@k8s-master:~$ wget 10.1.2.2

     --2018-01-19 12:34:08--  http://10.1.2.2/
     Connecting to 10.1.2.2:80... connected.
     HTTP request sent, awaiting response... 200 OK
     Length: 612 [text/html]
     Saving to: ‘index.html.1’

     index.html.1      100%[=========================================>]     612  --.-KB/s    in 0s

     2018-01-19 12:34:08 (1.78 MB/s) - ‘index.html.1’ saved [612/612]
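
- Pod IPs change when pods are rescheduled, so for anything beyond a
  quick check you would normally front the deployment with a Service. A
  minimal sketch using standard ``kubectl`` commands (the resulting
  service name and its cluster IP are illustrative, not part of the
  deployment above):

  ::

     vagrant@k8s-master:~$ kubectl expose deployment nginx --port=80
     vagrant@k8s-master:~$ kubectl get svc nginx    # note the CLUSTER-IP, then wget it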
How to SSH into a k8s Worker Node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To SSH into a k8s worker node, run the following from the `vagrant
folder <https://github.com/contiv/vpp/tree/master/vagrant>`__:

::

   vagrant ssh k8s-worker1
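
Once on the worker, you can repeat the same checks shown above for the
master, e.g. confirm that the VPP vswitch is running there too:

::

   vagrant@k8s-worker1:~$ sudo vppctl show interface
   vagrant@k8s-worker1:~$ sudo docker images contivvpp/vswitch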