^^^^^^^^^^^^^^^^^
SSH into the newly created box:

.. code-block:: console

   $ vagrant ssh <id>

Become root with:

.. code-block:: console

   $ sudo bash

For Ubuntu systems:

.. code-block:: console

   # dpkg -i *.deb

For CentOS systems:

.. code-block:: console

   # rpm -Uvh *.rpm

Since VPP is now installed, you can start running VPP with:

.. code-block:: console

   # service vpp start

Once you're satisfied with your *Vagrantfile*, boot the box with:

.. code-block:: console

   $ vagrant up

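
This guide assumes you already have a *Vagrantfile* in the current directory. As a minimal sketch only (the box name *ubuntu/xenial64* below is an illustrative assumption, not necessarily the box this guide uses), such a file might look like:

.. code-block:: console

   $ cat Vagrantfile
   # Minimal sketch: boots a single Ubuntu 16.04 box with default settings.
   Vagrant.configure("2") do |config|
     config.vm.box = "ubuntu/xenial64"
   end

Any box name from the public Vagrant catalog can be substituted for *ubuntu/xenial64*.
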
To power off your VM, type:

.. code-block:: console

   $ vagrant halt <id>

To resume a suspended VM, type:

.. code-block:: console

   $ vagrant resume <id>

To destroy your VM, type:

.. code-block:: console

   $ vagrant destroy <id>

If you're on Ubuntu, perform:

.. code-block:: console

   $ sudo apt-get install virtualbox

Here we are on a 64-bit version of CentOS, downloading and installing Vagrant 2.1.2:

.. code-block:: console

   $ sudo yum -y install https://releases.hashicorp.com/vagrant/2.1.2/vagrant_2.1.2_x86_64.rpm

The procedure is similar on a 64-bit version of Debian, except that **apt-get** cannot install directly from a URL, so the package is downloaded first and then installed with **dpkg**:

.. code-block:: console

   $ wget https://releases.hashicorp.com/vagrant/2.1.2/vagrant_2.1.2_x86_64.deb
   $ sudo dpkg -i vagrant_2.1.2_x86_64.deb

Once you're finished with the *env.sh* script, and you are in the directory containing it, run the script to set the environment variables:

.. code-block:: console

   $ source ./env.sh

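
The contents of *env.sh* are not shown in this guide. As an illustration only (the variable names below are hypothetical, not the script's actual contents), such a script simply exports environment variables that the Vagrant provisioning scripts can read:

.. code-block:: console

   $ cat env.sh
   # Hypothetical example: export variables consumed during provisioning.
   export VPP_VAGRANT_DISTRO="ubuntu1604"
   export VPP_VAGRANT_NICS=2

Because the script is *sourced* rather than executed, the exported variables persist in your current shell.
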
Enter container *cone*, and check the current network configuration:

.. code-block:: console

   root@cone:/# ip -o a
   1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever

Check if the interfaces are down or up:

.. code-block:: console

   root@cone:/# ip link
   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1

Make sure your loopback interface is up, then assign an IP and gateway to *veth_link1* (the gateway, 172.16.1.1, is the address VPP will hold on the other end of this link):

.. code-block:: console

   root@cone:/# ip link set dev lo up
   root@cone:/# ip addr add 172.16.1.2/24 dev veth_link1
   root@cone:/# ip link set dev veth_link1 up
   root@cone:/# ip route add default via 172.16.1.1

Run some commands to verify the changes:

.. code-block:: console

   root@cone:/# ip -o a
   1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever

After that's done for *both* containers, exit from the container if you're in one:

.. code-block:: console

   root@ctwo:/# exit
   exit

On the machine running the containers, run **ip link** to see the host *veth* network interfaces and their links to the corresponding *veth* interfaces inside the containers:

.. code-block:: console

   root@localhost:~# ip link
   1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN mode DEFAULT group default qlen 1

With VPP in the host machine, show current VPP interfaces:

.. code-block:: console

   root@localhost:~# vppctl show inter
   Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count

Based on the names of the network interfaces discussed previously (they are specific to each system), create the corresponding VPP host-interfaces:

.. code-block:: console

   root@localhost:~# vppctl create host-interface name vethQL7K0C
   root@localhost:~# vppctl create host-interface name veth8NA72P

Verify they have been set up properly:

.. code-block:: console

   root@localhost:~# vppctl show inter
   Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count

Set their state to up:

.. code-block:: console

   root@localhost:~# vppctl set interface state host-vethQL7K0C up
   root@localhost:~# vppctl set interface state host-veth8NA72P up

Verify they are now up:

.. code-block:: console

   root@localhost:~# vppctl show inter
   Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count

Add IP addresses for the other end of each veth link:

.. code-block:: console

   root@localhost:~# vppctl set interface ip address host-vethQL7K0C 172.16.1.1/24
   root@localhost:~# vppctl set interface ip address host-veth8NA72P 172.16.2.1/24

Verify the addresses are set properly by looking at the L3 table:

.. code-block:: console

   root@localhost:~# vppctl show inter addr
   host-vethQL7K0C (up):

Or by looking at the FIB:

.. code-block:: console

   root@localhost:~# vppctl show ip fib
   ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:plugin-hi:2, src:default-route:1, ]

At long last you probably want to see some pings:

.. code-block:: console

   root@localhost:~# lxc-attach -n cone -- ping -c3 172.16.2.2
   PING 172.16.2.2 (172.16.2.2) 56(84) bytes of data.

First, make sure you have root privileges:

.. code-block:: console

   $ sudo bash

Then install packages for containers such as lxc:

.. code-block:: console

   # apt-get install bridge-utils lxc

Look at the contents of *default.conf*, which should initially look like this:

.. code-block:: console

   # cat /etc/lxc/default.conf
   lxc.network.type = veth
   lxc.network.link = lxcbr0
   lxc.network.flags = up

You can do this by piping *echo* output into *tee*, where each line is separated with a newline character *\\n* as shown below. Alternatively, you can manually add to this file with a text editor such as **vi**, but make sure you have root privileges.

.. code-block:: console

   # echo -e "lxc.network.name = veth0\nlxc.network.type = veth\nlxc.network.name = veth_link1" | sudo tee -a /etc/lxc/default.conf

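
Note that **echo -e** depends on the shell to expand the *\\n* escapes, which not every shell's **echo** supports. **printf** expands the escapes itself and behaves the same in any POSIX shell, so an equivalent command writing the same three lines would be:

.. code-block:: console

   # printf 'lxc.network.name = veth0\nlxc.network.type = veth\nlxc.network.name = veth_link1\n' | sudo tee -a /etc/lxc/default.conf
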
Inspect the contents again to verify the file was indeed modified:

.. code-block:: console

   # cat /etc/lxc/default.conf
   lxc.network.type = veth
   lxc.network.link = lxcbr0
   lxc.network.flags = up

Create an Ubuntu Xenial container named "cone":

.. code-block:: console

   # lxc-create -t download -n cone -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80

Make another container named "ctwo":

.. code-block:: console

   # lxc-create -t download -n ctwo -- --dist ubuntu --release xenial --arch amd64 --keyserver hkp://p80.pool.sks-keyservers.net:80

List your containers to verify they exist:

.. code-block:: console

   # lxc-ls
   cone ctwo

Start the first container:

.. code-block:: console

   # lxc-start --name cone

And verify it's running:

.. code-block:: console

   # lxc-ls --fancy
   NAME STATE AUTOSTART GROUPS IPV4 IPV6

Here are some `lxc container commands <https://help.ubuntu.com/lts/serverguide/lxc.html.en-GB#lxc-basic-usage>`_ you may find useful:

.. code-block:: console

   sudo lxc-ls --fancy
   sudo lxc-start --name u1 --daemon

To enter our container via the shell, type:

.. code-block:: console

   # lxc-attach -n cone
   root@cone:/#

Run the Linux DHCP setup and install VPP:

.. code-block:: console

   root@cone:/# resolvconf -d eth0
   root@cone:/# dhclient

After this is done, start VPP in this container:

.. code-block:: console

   root@cone:/# service vpp start

Exit this container with the **exit** command (you *may* need to run **exit** twice):

.. code-block:: console

   root@cone:/# exit
   exit
