Simulating networks with VPP
============================
The “make test” framework provides a good way to test individual
features. However, when testing several features at once - or validating
nontrivial configurations - it may prove difficult or impossible to use
the unit-test framework.
This note explains how to set up lxc/lxd, and a 5-container testbed to
test a split-tunnel nat + ikev2 + ipsec + ipv6 prefix-delegation
scenario.
OS / Distro test results
------------------------
This setup has been tested on an Ubuntu 18.04 LTS system. If you’re
feeling adventurous, the same scenario also worked on a recent Ubuntu
20.04 “preview” daily build.

Other distros may work fine, or not at all.
If you need to use a proxy server e.g. from a lab system, you’ll
probably need to set HTTP_PROXY, HTTPS_PROXY, http_proxy and https_proxy
in /etc/environment. Directly setting the variables in your shell
environment doesn’t work: the lxd snap *daemon* needs the proxy
settings, not the shell from which you run lxc commands. For example::

   HTTP_PROXY=http://my.proxy.server:8080
   HTTPS_PROXY=http://my.proxy.server:4333
   http_proxy=http://my.proxy.server:8080
   https_proxy=http://my.proxy.server:4333
Install and configure lxd
-------------------------
Install the lxd snap. The snap version is kept up to date, unlike the
results of “sudo apt-get install lxd”.
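The standard install-and-init sequence, assuming snapd is already
present on the host, looks like this::

   $ sudo snap install lxd
   $ sudo lxd init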
“lxd init” asks several questions. With the exception of the storage
pool, take the defaults. To match the configs shown below, create a
storage pool named “vpp.” Storage pools of type “zfs” and “files” have
been tested successfully.
zfs is more space-efficient, and “lxc copy” is dramatically faster with
zfs. The path for the zfs storage pool is under /var. Do not replace it
with a symbolic link, unless you want to rebuild all of your containers
from scratch. Ask me how I know that.
Create three network segments
-----------------------------
::

   # lxc network create respond
   # lxc network create internet
   # lxc network create initiate
We’ll explain the test topology in a bit. Stay tuned.
Set up the default container profile
------------------------------------
Execute “lxc profile edit default”, and install the following
configuration. Note that the “shared” directory should mount your vpp
workspaces. With that trick, you can edit code from any of the
containers, run vpp without installing it, etc.
   description: Default LXD profile
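As a sketch - not the author’s exact profile - a default profile with a
bridged management NIC and a “shared” disk device could look like the
following; the source path is an example, point it at your own vpp
workspace directory::

   config: {}
   description: Default LXD profile
   devices:
     eth0:
       name: eth0
       network: lxdbr0
       type: nic
     shared:
       path: /scratch
       source: /home/myusername/shared
       type: disk
   name: default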
Set up the network configurations
---------------------------------
Edit the fake “internet” backbone::

   # lxc network edit internet

Install the ip addresses shown below, to avoid having to rebuild the vpp
and host configuration:
   ipv4.address: 10.26.68.1/24
   ipv4.dhcp.ranges: 10.26.68.10-10.26.68.50
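For orientation, these two keys live under the “config” stanza of the
bridge network. A complete configuration in that shape - the keys beyond
the two addresses above are typical lxd defaults, not taken from the
original - might look like::

   config:
     ipv4.address: 10.26.68.1/24
     ipv4.dhcp.ranges: 10.26.68.10-10.26.68.50
     ipv4.nat: "true"
   description: ""
   name: internet
   type: bridge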
Repeat the process with the “respond” and “initiate” networks, using
these configurations:
respond network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   ipv4.address: 10.166.14.1/24
   ipv4.dhcp.ranges: 10.166.14.10-10.166.14.50
initiate network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   ipv4.address: 10.219.188.1/24
   ipv4.dhcp.ranges: 10.219.188.10-10.219.188.50
Create a “master” container image
---------------------------------
The master container image should be set up so that you can build vpp,
ssh into the container, edit source code, run gdb, etc.

Make sure that e.g. public key auth ssh works.
::

   # lxc launch ubuntu:18.04 respond
   # lxc exec respond bash
   respond# cd /scratch/my-vpp-workspace
   respond# apt-get install make ssh
   respond# make install-dep
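One way to arrange public-key ssh access - assuming an existing keypair
on the host; the paths here are examples - is to push your public key
into the container::

   # lxc file push ~/.ssh/id_rsa.pub respond/home/ubuntu/.ssh/authorized_keys

Fix the file’s ownership and mode inside the container if sshd
complains.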
Mark the container image privileged. If you forget this step, you’ll
trip over a netlink error (-11) aka EAGAIN when you try to roll in the
interface configurations::

   # lxc config set respond security.privileged "true"
Duplicate the “master” container image
--------------------------------------
To avoid having to configure N containers, be sure that the master
container image is fully set up before you help it have children::

   # lxc copy respond respondhost
   # lxc copy respond initiate
   # lxc copy respond initiatehost
   # lxc copy respond dhcpserver # optional, to test ipv6 prefix delegation
See below for a handy script which executes lxc commands across the
current set of running containers. I call it “lxc-foreach,” feel free to
call the script Ishmael if you like.
<issues "lxc start" for each container in the list>
After a few seconds, use this one to open an ssh connection to each
container. The ssh command parses the output of “lxc info,” which
displays container ip addresses.
::

   #!/bin/bash

   export containers="respond respondhost initiate initiatehost dhcpserver"

   if [ "x$1" = "x" ] ; then
       echo "usage: lxc-foreach <lxc-command | ssh>"
       exit 1
   fi

   if [ "$1" = "ssh" ] ; then
       for c in $containers
       do
           inet=`lxc info $c | grep eth0 | grep -v inet6 | head -1 | cut -f 3`
           if [ "x$inet" = "x" ] ; then
               echo "$c has no ip address yet, skipping..."
               continue
           fi
           gnome-terminal --command "/usr/bin/ssh $inet"
       done
       exit 0
   fi

   for c in $containers
   do
       lxc $1 $c
   done
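With the script on your path as “lxc-foreach” (or Ishmael), typical
invocations are::

   $ lxc-foreach start
   $ lxc-foreach ssh

Any first argument other than “ssh” is passed through to lxc, so “stop,”
“restart,” and friends work the same way.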
Finally, we’re ready to describe a test topology. First, a picture::
   ===+======== management lan/bridge lxdbr0 (dhcp) ===========+===
      | eth0                                               eth0 |
   +---------+ eth1                                  eth1 +----------+
   | respond | 10.26.68.100 <= internet bridge => 10.26.68.101 | initiate |
   +---------+                                            +----------+
   eth2 / bvi0 10.166.14.2                     10.219.188.2 eth3 / bvi0
      |                                                          |
      | ("respond" bridge)                  ("initiate" bridge)  |
      |                                                          |
      eth2 10.166.14.3                          eth3 10.219.188.3
   +-------------+                               +--------------+
   | respondhost |                               | initiatehost |
   +-------------+                               +--------------+
   eth0 (management lan) <========+========> eth0 (management lan)
Test topology discussion
~~~~~~~~~~~~~~~~~~~~~~~~
This topology is suitable for testing almost any tunnel encap/decap
scenario. The two containers “respondhost” and “initiatehost” are
end-stations connected to two vpp instances running on “respond” and
“initiate.”
We leverage the Linux end-station network stacks to generate traffic of
all sorts.
The so-called “internet” bridge models the public internet. The
“respond” and “initiate” bridges connect the vpp instances to their
local hosts.
The end-station Linux configurations set up the eth2 and eth3 ip
addresses shown above, and add tunnel routes to the opposite end-station
network.
respondhost configuration
~~~~~~~~~~~~~~~~~~~~~~~~~
::

   ifconfig eth2 10.166.14.3/24 up
   route add -net 10.219.188.0/24 gw 10.166.14.2
initiatehost configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~
::

   sudo ifconfig eth3 10.219.188.3/24 up
   sudo route add -net 10.166.14.0/24 gw 10.219.188.2
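Once the vpp configurations below are installed and the ikev2 tunnel
comes up, a quick sanity check is to ping the opposite end-station
across the tunnel, e.g. from “initiatehost” (addresses per the diagram
above)::

   $ ping -c 3 10.166.14.3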
Split nat44 / ikev2 + ipsec tunneling, with ipv6 prefix delegation in
the “respond” config.
respond configuration
~~~~~~~~~~~~~~~~~~~~~
::

   comment { "internet" }
   create host-interface name eth1
   set int ip address host-eth1 10.26.68.100/24
   set int ip6 table host-eth1 0
   set int state host-eth1 up

   comment { default route via initiate }
   ip route add 0.0.0.0/0 via 10.26.68.101

   comment { "respond-private-net" }
   create host-interface name eth2
   bvi create instance 0
   set int l2 bridge bvi0 1 bvi
   set int ip address bvi0 10.166.14.2/24
   set int state bvi0 up
   set int l2 bridge host-eth2 1
   set int state host-eth2 up

   nat44 add interface address host-eth1
   set interface nat44 in host-eth2 out host-eth1
   nat44 add identity mapping external host-eth1 udp 500
   nat44 add identity mapping external host-eth1 udp 4500
   comment { nat44 untranslated subnet 10.219.188.0/24 }

   comment { responder profile }
   ikev2 profile add initiate
   ikev2 profile set initiate udp-encap
   ikev2 profile set initiate auth rsa-sig cert-file /scratch/setups/respondcert.pem
   set ikev2 local key /scratch/setups/initiatekey.pem
   ikev2 profile set initiate id local fqdn initiator.my.net
   ikev2 profile set initiate id remote fqdn responder.my.net
   ikev2 profile set initiate traffic-selector remote ip-range 10.219.188.0 - 10.219.188.255 port-range 0 - 65535 protocol 0
   ikev2 profile set initiate traffic-selector local ip-range 10.166.14.0 - 10.166.14.255 port-range 0 - 65535 protocol 0
   create ipip tunnel src 10.26.68.100 dst 10.26.68.101
   ikev2 profile set initiate tunnel ipip0

   comment { ipv6 prefix delegation }
   ip6 nd address autoconfig host-eth1 default-route
   dhcp6 client host-eth1
   dhcp6 pd client host-eth1 prefix group hgw
   set ip6 address bvi0 prefix group hgw ::2/56
   ip6 nd address autoconfig bvi0 default-route
   ip6 nd bvi0 ra-interval 5 3 ra-lifetime 180

   set int mtu packet 1390 ipip0
   set int unnum ipip0 use host-eth1
   ip route add 10.219.188.0/24 via ipip0
initiate configuration
~~~~~~~~~~~~~~~~~~~~~~
::

   comment { "internet" }
   create host-interface name eth1
   comment { set dhcp client intfc host-eth1 hostname initiate }
   set int ip address host-eth1 10.26.68.101/24
   set int state host-eth1 up

   comment { default route via "internet gateway" }
   comment { ip route add 0.0.0.0/0 via 10.26.68.1 }

   comment { "initiate-private-net" }
   create host-interface name eth3
   bvi create instance 0
   set int l2 bridge bvi0 1 bvi
   set int ip address bvi0 10.219.188.2/24
   set int state bvi0 up
   set int l2 bridge host-eth3 1
   set int state host-eth3 up

   nat44 add interface address host-eth1
   set interface nat44 in bvi0 out host-eth1
   nat44 add identity mapping external host-eth1 udp 500
   nat44 add identity mapping external host-eth1 udp 4500
   comment { nat44 untranslated subnet 10.166.14.0/24 }

   comment { initiator profile }
   ikev2 profile add respond
   ikev2 profile set respond udp-encap
   ikev2 profile set respond auth rsa-sig cert-file /scratch/setups/initiatecert.pem
   set ikev2 local key /scratch/setups/respondkey.pem
   ikev2 profile set respond id local fqdn responder.my.net
   ikev2 profile set respond id remote fqdn initiator.my.net

   ikev2 profile set respond traffic-selector remote ip-range 10.166.14.0 - 10.166.14.255 port-range 0 - 65535 protocol 0
   ikev2 profile set respond traffic-selector local ip-range 10.219.188.0 - 10.219.188.255 port-range 0 - 65535 protocol 0

   ikev2 profile set respond responder host-eth1 10.26.68.100
   ikev2 profile set respond ike-crypto-alg aes-cbc 256 ike-integ-alg sha1-96 ike-dh modp-2048
   ikev2 profile set respond esp-crypto-alg aes-cbc 256 esp-integ-alg sha1-96 esp-dh ecp-256
   ikev2 profile set respond sa-lifetime 3600 10 5 0

   create ipip tunnel src 10.26.68.101 dst 10.26.68.100
   ikev2 profile set respond tunnel ipip0
   ikev2 initiate sa-init respond

   set int mtu packet 1390 ipip0
   set int unnum ipip0 use host-eth1
   ip route add 10.166.14.0/24 via ipip0
IKEv2 certificate setup
-----------------------
In both of the vpp configurations, you’ll see “/scratch/setups/xxx.pem”
mentioned. These certificates are used in the ikev2 key exchange.

Here’s how to generate the certificates::
   openssl req -x509 -nodes -newkey rsa:4096 -keyout respondkey.pem -out respondcert.pem -days 3560
   openssl x509 -text -noout -in respondcert.pem
   openssl req -x509 -nodes -newkey rsa:4096 -keyout initiatekey.pem -out initiatecert.pem -days 3560
   openssl x509 -text -noout -in initiatecert.pem
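As written, “openssl req” prompts interactively for the certificate
subject fields. To script the generation, add a -subj argument; the CN
value here is an example matching the fqdns used above::

   openssl req -x509 -nodes -newkey rsa:4096 -subj "/CN=responder.my.net" \
       -keyout respondkey.pem -out respondcert.pem -days 3560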
Make sure that the “respond” and “initiate” configurations point to the
correct certificate and key files.
If you need an ipv6 dhcp server to test ipv6 prefix delegation, create
the “dhcpserver” container as shown above.
Install the “isc-dhcp-server” Debian package::

   sudo apt-get install isc-dhcp-server
/etc/dhcp/dhcpd6.conf
~~~~~~~~~~~~~~~~~~~~~
Edit the dhcpv6 configuration and add an ipv6 subnet with prefix
delegation. For example::

   subnet6 2001:db01:0:1::/64 {
       range6 2001:db01:0:1::1 2001:db01:0:1::9;
       prefix6 2001:db01:0:100:: 2001:db01:0:200::/56;
   }
Add an ipv6 address on eth1, which is connected to the “internet”
bridge, and start the dhcp server. I use the following trivial bash
script, which runs the dhcp6 server in the foreground and produces dhcp
logging output::

   #!/bin/bash

   ifconfig eth1 inet6 add 2001:db01:0:1::10/64 || true
   dhcpd -6 -d -cf /etc/dhcp/dhcpd6.conf
The “|| true” bit keeps the script going if eth1 already has the
indicated ipv6 address.
Container / Host Interoperation
-------------------------------
Host / container interoperation is highly desirable. If the host and a
set of containers don’t run the same distro *and distro version*, it’s
reasonably likely that the glibc versions won’t match. That, in turn,
makes vpp binaries built in one environment fail in the other.
Trying to install multiple versions of glibc - especially at the host
level - often ends very badly and is *not recommended*. It’s not just
glibc, either. The dynamic loader ld-linux-xxx.so.2 is glibc version
specific.
Fortunately, it’s reasonably easy to build lxd container images based on
specific Ubuntu or Debian versions.
Create a custom root filesystem image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, install the “debootstrap” tool::

   sudo apt-get install debootstrap
Make a temp directory, and use debootstrap to populate it. In this
example, we create an Ubuntu 20.04 (focal fossa) base image::

   # mkdir /tmp/myroot
   # debootstrap focal /tmp/myroot http://archive.ubuntu.com/ubuntu
To tinker with the base image (if desired):
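One way to do that is to chroot into the image, make whatever changes
you like, and exit::

   # chroot /tmp/myroot /bin/bash
   # exit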
Make a compressed tarball of the base image::

   # tar zcf /tmp/rootfs.tar.gz -C /tmp/myroot .
Create a “metadata.yaml” file which describes the base image::
   architecture: "x86_64"
   # To get the current date in Unix time, use the `date +%s` command
   creation_date: 1458040200
   properties:
     architecture: "x86_64"
     description: "My custom Focal Fossa image"
Make a compressed tarball of metadata.yaml::

   # tar zcf metadata.tar.gz metadata.yaml
Import the image into lxc / lxd::

   $ lxc image import metadata.tar.gz rootfs.tar.gz --alias focal-base
Create a container which uses the customized base image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::

   $ lxc launch focal-base focaltest
   $ lxc exec focaltest bash
The next several steps should be executed in the container, in the bash
shell spun up by “lxc exec…”
Configure container networking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the container, create /etc/netplan/50-cloud-init.yaml:
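A minimal dhcp-on-eth0 configuration in standard netplan syntax - adjust
the interface name if your container NIC differs - looks like::

   network:
     version: 2
     ethernets:
       eth0:
         dhcp4: true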
Use “cat > /etc/netplan/50-cloud-init.yaml”, and cut-’n-paste if your
favorite text editor is AWOL.
Apply the configuration by running “netplan apply”.
At this point, eth0 should have an ip address, and you should see a
default route with “route -n”.
Again, in the container, set up /etc/apt/sources.list via cut-’n-paste
from a recently updated “focal fossa” host. Something like so::
   deb http://us.archive.ubuntu.com/ubuntu/ focal main restricted
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main restricted
   deb http://us.archive.ubuntu.com/ubuntu/ focal universe
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates universe
   deb http://us.archive.ubuntu.com/ubuntu/ focal multiverse
   deb http://us.archive.ubuntu.com/ubuntu/ focal-updates multiverse
   deb http://us.archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse
   deb http://security.ubuntu.com/ubuntu focal-security main restricted
   deb http://security.ubuntu.com/ubuntu focal-security universe
   deb http://security.ubuntu.com/ubuntu focal-security multiverse
“apt-get update” and “apt-get install” should produce reasonable
results. Suggest “apt-get install make git”.
At this point, you can use the “/scratch” sharepoint (or similar) to
execute “make install-dep install-ext-deps” to set up the container with
the vpp toolchain; proceed as desired.