## Contiv-VPP Vagrant Installation
The following items are prerequisites for the Vagrant installation:
- Vagrant 2.0.1 or later
- Hypervisor:
  - VirtualBox 5.2.8 or later, or
  - VMware Fusion 10.1.0 or later, or VMware Workstation 14
    - For VMware Fusion, you will also need the [Vagrant VMware Fusion plugin](https://www.vagrantup.com/vmware/index.html)
- Laptop or server with at least 4 CPU cores and 16 GB of RAM
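As a quick sanity check before you start, you can confirm the tool versions and host resources. A minimal sketch for a Linux host with VirtualBox (command names differ on macOS and for VMware):

```
vagrant --version      # expect Vagrant 2.0.1 or later
VBoxManage --version   # expect VirtualBox 5.2.8 or later
nproc                  # number of CPU cores (need at least 4)
free -g                # total RAM in GB (need at least 16)
```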
### Creating / Shutting Down / Destroying the Cluster
This folder contains the Vagrantfile used to create a single-node or multi-node
Kubernetes cluster with Contiv-VPP as the network plugin.

The folder is organized into two subfolders:
- (config) - contains the files that share cluster information, which are used
  during the provisioning stage (master IP address, certificates, hash keys).
  **CAUTION:** Editing is not recommended!
- (vagrant) - contains scripts that are used for creating, destroying, rebooting
  and shutting down the VMs that host the K8s cluster.
To create and run a K8s cluster with the *contiv-vpp CNI* plugin, run the
`vagrant-start` script, located in the [vagrant folder](https://github.com/contiv/vpp/tree/master/vagrant). The `vagrant-start`
script prompts the user to select the number of worker nodes for the Kubernetes cluster.
Zero (0) worker nodes means that a single-node cluster (with one Kubernetes master node) will be deployed.

Next, the user is prompted to select either the *production environment* or the *development environment*.
Instructions on how to build the development *contivvpp/vswitch* image can be found below in the
[development environment](#building-and-deploying-the-dev-contiv-vswitch-image) section.

The last option asks the user to select either *Without StealTheNIC* or *With StealTheNIC*.
With the *With StealTheNIC* option, the plugin "steals" interfaces owned by Linux and reuses their configuration in VPP.
For the production environment, the interaction looks like this:

```
$ ./vagrant-start
Please provide the number of workers for the Kubernetes cluster (0-50) or enter [Q/q] to exit: 1

Please choose Kubernetes environment:
1) Production
2) Development
You chose Production environment

Please choose deployment scenario:
1) Without StealTheNIC
2) With StealTheNIC
You chose deployment without StealTheNIC

Creating a production environment, without STN and 1 worker node(s)
```
For the development environment, the interaction looks like this:

```
$ ./vagrant-start
Please provide the number of workers for the Kubernetes cluster (0-50) or enter [Q/q] to exit: 1

Please choose Kubernetes environment:
1) Production
2) Development
You chose Development environment

Please choose deployment scenario:
1) Without StealTheNIC
2) With StealTheNIC
You chose deployment without StealTheNIC

Creating a development environment, without STN and 1 worker node(s)
```
To destroy and clean up the cluster, run the *vagrant-cleanup* script, located
[inside the vagrant folder](https://github.com/contiv/vpp/tree/master/vagrant):
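```
# run from the vagrant folder
./vagrant-cleanup
```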
To shut down the cluster, run the *vagrant-shutdown* script, located [inside the vagrant folder](https://github.com/contiv/vpp/tree/master/vagrant):
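```
# run from the vagrant folder
./vagrant-shutdown
```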
To reboot the cluster, run the *vagrant-reload* script, located [inside the vagrant folder](https://github.com/contiv/vpp/tree/master/vagrant):
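```
# run from the vagrant folder
./vagrant-reload
```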
From a suspended state, or after a reboot of the host machine, the cluster
can be brought up by running the *vagrant-up* script:
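```
# run from the vagrant folder
./vagrant-up
```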
### Building and Deploying the dev-contiv-vswitch Image
If you chose the development environment, perform the following steps to build
a modified *contivvpp/vswitch* image:
- Make sure your code changes have been saved. From the k8s-master node,
  build the new *contivvpp/vswitch* image (run as sudo):

```
vagrant ssh k8s-master
sudo ./save-dev-image
```
- The newly built *contivvpp/vswitch* image is now tagged as *latest*. Verify the
  build with `sudo docker images`; the *contivvpp/vswitch* image should have been
  created just a few seconds ago. The new image with all the changes must become
  available to all nodes in the K8s cluster, so load the Docker image into each
  running worker node (run as sudo):
```
vagrant ssh k8s-worker1
sudo ./load-dev-image
```
- Verify with `sudo docker images`; the old *contivvpp/vswitch* image should now be
  tagged as `<none>` and the latest tagged *contivvpp/vswitch* image should have been
  created a few seconds ago.
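For a quick check, you can filter the image list for the vswitch image (the image name here matches the tag used above):

```
sudo docker images | grep vswitch
```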
### Exploring the Cluster
Once the cluster is up, perform the following steps:
- Log into the master:
```
vagrant ssh k8s-master

Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
vagrant@k8s-master:~$
```
- Verify the Kubernetes/Contiv-VPP installation. First, verify the nodes in the cluster:
```
vagrant@k8s-master:~$ kubectl get nodes -o wide

NAME          STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
k8s-master    Ready     master    22m       v1.9.2    <none>        Ubuntu 16.04 LTS   4.4.0-21-generic   docker://17.12.0-ce
k8s-worker1   Ready     <none>    15m       v1.9.2    <none>        Ubuntu 16.04 LTS   4.4.0-21-generic   docker://17.12.0-ce
```
- Next, verify that all pods are running correctly:
```
vagrant@k8s-master:~$ kubectl get pods -n kube-system -o wide

NAME                                 READY     STATUS             RESTARTS   AGE       IP             NODE
contiv-etcd-2ngdc                    1/1       Running            0          17m       192.169.1.10   k8s-master
contiv-ksr-x7gsq                     1/1       Running            3          17m       192.169.1.10   k8s-master
contiv-vswitch-9bql6                 2/2       Running            0          17m       192.169.1.10   k8s-master
contiv-vswitch-hpt2x                 2/2       Running            0          10m       192.169.1.11   k8s-worker1
etcd-k8s-master                      1/1       Running            0          16m       192.169.1.10   k8s-master
kube-apiserver-k8s-master            1/1       Running            0          16m       192.169.1.10   k8s-master
kube-controller-manager-k8s-master   1/1       Running            0          15m       192.169.1.10   k8s-master
kube-dns-6f4fd4bdf-62rv4             2/3       CrashLoopBackOff   14         17m       10.1.1.2       k8s-master
kube-proxy-bvr74                     1/1       Running            0          10m       192.169.1.11   k8s-worker1
kube-proxy-v4fzq                     1/1       Running            0          17m       192.169.1.10   k8s-master
kube-scheduler-k8s-master            1/1       Running            0          16m       192.169.1.10   k8s-master
```
- If you want your pods to be scheduled on both the master and the workers,
  you have to untaint the master node:
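```
vagrant@k8s-master:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
```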
- Check VPP and its interfaces:

```
vagrant@k8s-master:~$ sudo vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp# show interface
              Name               Idx       State          Counter          Count
GigabitEthernet0/8/0              1         up       rx packets                    14
```
- Make sure that `GigabitEthernet0/8/0` is listed and that its status is `up`.
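vppctl can also run a single CLI command non-interactively, which is handy for quick checks. For example, a couple of standard VPP CLI commands (output depends on your setup):

```
vagrant@k8s-master:~$ sudo vppctl show interface address
vagrant@k8s-master:~$ sudo vppctl show ip fib
```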
- Next, create an example deployment of nginx pods:

```
vagrant@k8s-master:~$ kubectl run nginx --image=nginx --replicas=2
deployment "nginx" created
```
- Check the status of the deployment:

```
vagrant@k8s-master:~$ kubectl get deploy -o wide

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES    SELECTOR
nginx     2         2         2            2           2h        nginx        nginx     run=nginx
```
- Verify that the pods in the deployment are up and running:

```
vagrant@k8s-master:~$ kubectl get pods -o wide

NAME                   READY     STATUS    RESTARTS   AGE       IP         NODE
nginx-8586cf59-6kx2m   1/1       Running   1          1h        10.1.2.3   k8s-worker1
nginx-8586cf59-j5vf9   1/1       Running   1          1h        10.1.2.2   k8s-worker1
```
- Issue an HTTP GET request to a pod in the deployment:

```
vagrant@k8s-master:~$ wget 10.1.2.2

--2018-01-19 12:34:08--  http://10.1.2.2/
Connecting to 10.1.2.2:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: ‘index.html.1’

index.html.1      100%[=========================================>]     612  --.-KB/s    in 0s

2018-01-19 12:34:08 (1.78 MB/s) - ‘index.html.1’ saved [612/612]
```
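If curl is installed on the VM, the same check can be done without saving a file (the pod IP comes from the `kubectl get pods -o wide` output above):

```
vagrant@k8s-master:~$ curl -s 10.1.2.2 | head -n 4
```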
#### How to SSH into a k8s Worker Node
To SSH into a k8s worker node, run `vagrant ssh` with the node name from the
[vagrant folder](https://github.com/contiv/vpp/tree/master/vagrant):

```
vagrant ssh k8s-worker1
```
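Once logged in, the same checks used on the master should work on the worker as well, assuming VPP is reachable there in the same way; for example:

```
vagrant@k8s-worker1:~$ sudo vppctl show interface
```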