Simulating networks with VPP
============================

The “make test” framework provides a good way to test individual
features. However, when testing several features at once - or validating
nontrivial configurations - it may prove difficult or impossible to use
the unit-test framework.

This note explains how to set up lxc/lxd, and a 5-container testbed to
test a split-tunnel nat + ikev2 + ipsec + ipv6 prefix-delegation
scenario.

OS / Distro test results
------------------------

This setup has been tested on Ubuntu 18.04 - 22.04 LTS systems. Other
distros may work fine, or not at all.

Proxy server setup
------------------

If you need to use a proxy server e.g. from a lab system, you’ll
probably need to set HTTP_PROXY, HTTPS_PROXY, http_proxy and https_proxy
in /etc/environment. Directly setting variables in the environment
doesn’t work. The lxd snap *daemon* needs the proxy settings, not the
shell environment. For example:

HTTP_PROXY=http://my.proxy.server:8080
HTTPS_PROXY=http://my.proxy.server:4333
http_proxy=http://my.proxy.server:8080
https_proxy=http://my.proxy.server:4333

Install and configure lxd
-------------------------

Install the lxd snap. The lxd snap is kept up to date, unlike the
package installed by “sudo apt-get install lxd”.
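
A typical sequence, assuming the snap-packaged lxd:

```console
# snap install lxd
# lxd init
```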

“lxd init” asks several questions. With the exception of the storage
pool, take the defaults. To match the configs shown below, create a
storage pool named “vpp.” Storage pools of type “zfs” and “files” have
been tested successfully.

zfs is more space-efficient, and “lxc copy” is vastly faster with zfs.
The path for the zfs storage pool is under /var. Do not replace it with
a symbolic link, unless you want to rebuild all of your containers from
scratch. Ask me how I know that.
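
If you took all the defaults and need to add the pool after the fact, a
zfs-backed pool named “vpp” can be created directly; a sketch:

```console
# lxc storage create vpp zfs
```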

Create three network segments
-----------------------------

# lxc network create respond
# lxc network create internet
# lxc network create initiate

We’ll explain the test topology in a bit. Stay tuned.

Set up the default container profile
------------------------------------

Execute “lxc profile edit default”, and install the following
configuration. Note that the “shared” directory should mount your vpp
workspaces. With that trick, you can edit code from any of the
containers, run vpp without installing it, etc.

description: Default LXD profile
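
The full profile is longer than the description line above; a sketch of
what a working default profile can look like. The “shared” device and
both /scratch paths are illustrative assumptions; point “source” at your
actual workspace directory:

```yaml
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: vpp          # the storage pool created during "lxd init"
    type: disk
  shared:
    path: /scratch     # assumption: where containers see your vpp workspaces
    source: /scratch   # assumption: host-side workspace directory
    type: disk
```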

Set up the network configurations
---------------------------------

Edit the fake “internet” backbone:

# lxc network edit internet

Install the ip addresses shown below, to avoid having to rebuild the vpp
and host configuration:

ipv4.address: 10.26.68.1/24
ipv4.dhcp.ranges: 10.26.68.10-10.26.68.50

Repeat the process with the “respond” and “initiate” networks, using
these configurations:

respond network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ipv4.address: 10.166.14.1/24
ipv4.dhcp.ranges: 10.166.14.10-10.166.14.50

initiate network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ipv4.address: 10.219.188.1/24
ipv4.dhcp.ranges: 10.219.188.10-10.219.188.50

Create a “master” container image
---------------------------------

The master container image should be set up so that you can build vpp,
ssh into the container, edit source code, run gdb, etc.

Make sure that ssh works, e.g. with public-key authentication:

# lxc launch ubuntu:22.04 respond

# lxc exec respond bash
respond# cd /scratch/my-vpp-workspace
respond# apt-get install make ssh
respond# make install-dep

Mark the container image privileged. If you forget this step, you’ll
trip over a netlink error (-11) aka EAGAIN when you try to roll in the
vpp configurations:

# lxc config set respond security.privileged "true"

Duplicate the “master” container image
--------------------------------------

To avoid having to configure N containers, be sure that the master
container image is fully set up before you help it have children:

# lxc copy respond respondhost
# lxc copy respond initiate
# lxc copy respond initiatehost
# lxc copy respond dhcpserver # optional, to test ipv6 prefix delegation

See below for a handy host script which executes lxc commands across
the current set of running containers. I call it “lxc-foreach,” feel
free to call the script Ishmael if you like.

# lxc-foreach start
<issues "lxc start" for each container in the list>

After a few seconds, use “lxc-foreach ssh” to open an ssh connection to
each container. The ssh command parses the output of “lxc info,” which
displays container ip addresses.

Here’s the script:

#!/bin/bash

export containers="respond initiate initiatehost respondhost"

if [ x$1 = "x" ] ; then
    echo "usage: lxc-foreach start|stop|ssh|status"
    exit 1
fi

if [ $1 = "ssh" ] ; then
    for c in $containers
    do
        inet=`lxc info $c | grep 10.38.33 | sed "s/.*inet://" | sed "s/\/24.*//" | tr -d " "`
        if [ x$inet != "x" ] ; then
            gnome-terminal --title "$c(ssh)" --command "/usr/bin/ssh -Y root@$inet"
        fi
    done
    exit 0
fi

for c in $containers
do
    inet=`lxc info $c | grep 10.38.33 | sed "s/.*inet://" | sed "s/\/24.*//" | tr -d " "`
    if [ x$1 = "xstart" ] ; then
        lxc start $c
    elif [ x$1 = "xstop" ] ; then
        lxc stop $c
    elif [ x$inet != "x" ] ; then
        echo $c $inet
    fi
done
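
The address extraction in the script is plain text surgery; a standalone
sketch of the same pipeline, using a hypothetical “lxc info” output
line:

```shell
# Hypothetical address line from "lxc info", for a container on the
# 10.38.33.0/24 management network:
line='  inet:    10.38.33.21/24 (global)'

# Strip everything up to "inet:", drop the /24 suffix and trailing
# text, then remove whitespace:
ip=$(echo "$line" | sed "s/.*inet://" | sed "s/\/24.*//" | tr -d " \t")
echo "$ip"    # 10.38.33.21
```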

Finally, we’re ready to describe a test topology. First, a picture:

===+======== management lan/bridge lxdbr0 (dhcp) ==============+===
   |                                                           |
+---------+ eth1                                   eth1     +----------+
| respond | 10.26.68.100 <= internet bridge => 10.26.68.101 | initiate |
+---------+                                                 +----------+
 eth2 / bvi0 10.166.14.2        |               10.219.188.2 eth3 / bvi0
   |   ("respond" bridge)       | ("initiate" bridge)               |
 eth2 10.166.14.3               |                      eth3 10.219.188.3
+-------------+                 |                       +--------------+
| respondhost |                 |                       | initiatehost |
+-------------+                 |                       +--------------+
 eth0 (management lan) <========+========> eth0 (management lan)

Test topology discussion
~~~~~~~~~~~~~~~~~~~~~~~~

This topology is suitable for testing almost any tunnel encap/decap
scenario. The two containers “respondhost” and “initiatehost” are
end-stations connected to two vpp instances running on “respond” and
“initiate,” respectively.

We leverage the Linux end-station network stacks to generate traffic of
all sorts.

The so-called “internet” bridge models the public internet. The
“respond” and “initiate” bridges connect vpp instances to local hosts.

The end-station Linux configurations set up the eth2 and eth3 ip
addresses shown above, and add tunnel routes to the opposite end-station
subnet via the local vpp instance.

respondhost configuration
~~~~~~~~~~~~~~~~~~~~~~~~~

sudo ifconfig eth2 10.166.14.3/24 up
sudo route add -net 10.219.188.0/24 gw 10.166.14.2

initiatehost configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~

sudo ifconfig eth3 10.219.188.3/24 up
sudo route add -net 10.166.14.0/24 gw 10.219.188.2
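
Once the vpp configurations below are rolled in, a quick end-to-end
sanity check is to ping across the tunnel from one end-station to the
other; a sketch, run on “initiatehost”:

```console
initiatehost# ping -c 3 10.166.14.3
```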

vpp configurations
------------------

Split nat44 / ikev2 + ipsec tunneling, with ipv6 prefix delegation in
the “respond” config.

respond configuration
~~~~~~~~~~~~~~~~~~~~~

comment { "internet" }
create host-interface name eth1
set int ip address host-eth1 10.26.68.100/24
set int ip6 table host-eth1 0
set int state host-eth1 up

comment { default route via initiate }
ip route add 0.0.0.0/0 via 10.26.68.101

comment { "respond-private-net" }
create host-interface name eth2
bvi create instance 0
set int l2 bridge bvi0 1 bvi
set int ip address bvi0 10.166.14.2/24
set int state bvi0 up
set int l2 bridge host-eth2 1
set int state host-eth2 up

nat44 add interface address host-eth1
set interface nat44 in host-eth2 out host-eth1
nat44 add identity mapping external host-eth1 udp 500
nat44 add identity mapping external host-eth1 udp 4500
comment { nat44 untranslated subnet 10.219.188.0/24 }

comment { responder profile }
ikev2 profile add initiate
ikev2 profile set initiate udp-encap
ikev2 profile set initiate auth rsa-sig cert-file /scratch/setups/respondcert.pem
set ikev2 local key /scratch/setups/initiatekey.pem
ikev2 profile set initiate id local fqdn initiator.my.net
ikev2 profile set initiate id remote fqdn responder.my.net
ikev2 profile set initiate traffic-selector remote ip-range 10.219.188.0 - 10.219.188.255 port-range 0 - 65535 protocol 0
ikev2 profile set initiate traffic-selector local ip-range 10.166.14.0 - 10.166.14.255 port-range 0 - 65535 protocol 0
create ipip tunnel src 10.26.68.100 dst 10.26.68.101
ikev2 profile set initiate tunnel ipip0

comment { ipv6 prefix delegation }
ip6 nd address autoconfig host-eth1 default-route
dhcp6 client host-eth1
dhcp6 pd client host-eth1 prefix group hgw
set ip6 address bvi0 prefix group hgw ::2/56
ip6 nd address autoconfig bvi0 default-route
ip6 nd bvi0 ra-interval 5 3 ra-lifetime 180

set int mtu packet 1390 ipip0
set int unnum ipip0 use host-eth1
ip route add 10.219.188.0/24 via ipip0

initiate configuration
~~~~~~~~~~~~~~~~~~~~~~

comment { "internet" }
create host-interface name eth1
comment { set dhcp client intfc host-eth1 hostname initiate }
set int ip address host-eth1 10.26.68.101/24
set int state host-eth1 up

comment { default route via "internet gateway" }
comment { ip route add 0.0.0.0/0 via 10.26.68.1 }

comment { "initiate-private-net" }
create host-interface name eth3
bvi create instance 0
set int l2 bridge bvi0 1 bvi
set int ip address bvi0 10.219.188.2/24
set int state bvi0 up
set int l2 bridge host-eth3 1
set int state host-eth3 up

nat44 add interface address host-eth1
set interface nat44 in bvi0 out host-eth1
nat44 add identity mapping external host-eth1 udp 500
nat44 add identity mapping external host-eth1 udp 4500
comment { nat44 untranslated subnet 10.166.14.0/24 }

comment { initiator profile }
ikev2 profile add respond
ikev2 profile set respond udp-encap
ikev2 profile set respond auth rsa-sig cert-file /scratch/setups/initiatecert.pem
set ikev2 local key /scratch/setups/respondkey.pem
ikev2 profile set respond id local fqdn responder.my.net
ikev2 profile set respond id remote fqdn initiator.my.net

ikev2 profile set respond traffic-selector remote ip-range 10.166.14.0 - 10.166.14.255 port-range 0 - 65535 protocol 0
ikev2 profile set respond traffic-selector local ip-range 10.219.188.0 - 10.219.188.255 port-range 0 - 65535 protocol 0

ikev2 profile set respond responder host-eth1 10.26.68.100
ikev2 profile set respond ike-crypto-alg aes-cbc 256 ike-integ-alg sha1-96 ike-dh modp-2048
ikev2 profile set respond esp-crypto-alg aes-cbc 256 esp-integ-alg sha1-96 esp-dh ecp-256
ikev2 profile set respond sa-lifetime 3600 10 5 0

create ipip tunnel src 10.26.68.101 dst 10.26.68.100
ikev2 profile set respond tunnel ipip0
ikev2 initiate sa-init respond

set int mtu packet 1390 ipip0
set int unnum ipip0 use host-eth1
ip route add 10.166.14.0/24 via ipip0
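
Once both sides are configured and the SA negotiation has run, the state
of the exchange can be inspected from the vpp debug CLI; a sketch
(output omitted):

```console
vpp# show ikev2 sa
vpp# show ipsec sa
vpp# show interface
```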

IKEv2 certificate setup
-----------------------

In both of the vpp configurations, you’ll see “/scratch/setups/xxx.pem”
mentioned. These certificates are used in the ikev2 key exchange.

Here’s how to generate the certificates:

openssl req -x509 -nodes -newkey rsa:4096 -keyout respondkey.pem -out respondcert.pem -days 3560
openssl x509 -text -noout -in respondcert.pem
openssl req -x509 -nodes -newkey rsa:4096 -keyout initiatekey.pem -out initiatecert.pem -days 3560
openssl x509 -text -noout -in initiatecert.pem
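
If the ikev2 exchange fails to authenticate, it is worth confirming that
each private key actually matches its certificate; the RSA moduli must
be identical. A sketch, using the responder pair generated above:

```shell
# A certificate and its private key share the same RSA modulus;
# the two digests below must match.
openssl x509 -noout -modulus -in respondcert.pem | openssl md5
openssl rsa -noout -modulus -in respondkey.pem | openssl md5
```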

Make sure that the “respond” and “initiate” configurations point to the
certificates and keys you just generated.

Testing ipv6 prefix delegation
------------------------------

If you need an ipv6 dhcp server to test ipv6 prefix delegation, create
the “dhcpserver” container as shown above.

Install the “isc-dhcp-server” Debian package:

sudo apt-get install isc-dhcp-server

/etc/dhcp/dhcpd6.conf
~~~~~~~~~~~~~~~~~~~~~

Edit the dhcpv6 configuration and add an ipv6 subnet with prefix
delegation. For example:

subnet6 2001:db01:0:1::/64 {
        range6 2001:db01:0:1::1 2001:db01:0:1::9;
        prefix6 2001:db01:0:100:: 2001:db01:0:200::/56;
}

Add an ipv6 address on eth1, which is connected to the “internet”
bridge, and start the dhcp server. I use the following trivial bash
script, which runs the dhcp6 server in the foreground and produces dhcp
debug output:

#!/bin/bash
ifconfig eth1 inet6 add 2001:db01:0:1::10/64 || true
dhcpd -6 -d -cf /etc/dhcp/dhcpd6.conf

The “|| true” bit keeps the script going if eth1 already has the
indicated ipv6 address.

Container / Host Interoperation
-------------------------------

Host / container interoperation is highly desirable. If the host and a
set of containers don’t run the same distro *and distro version*, it’s
reasonably likely that the glibc versions won’t match. That, in turn,
makes vpp binaries built in one environment fail in the other.

Trying to install multiple versions of glibc - especially at the host
level - often ends very badly and is *not recommended*. It’s not just
glibc, either. The dynamic loader ld-linux-xxx.so.2 is specific to the
glibc version.

Fortunately, it’s reasonably easy to build lxd container images based on
specific Ubuntu or Debian versions.

Create a custom root filesystem image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

First, install the “debootstrap” tool:

sudo apt-get install debootstrap

Make a temp directory, and use debootstrap to populate it. In this
example, we create an Ubuntu 20.04 (focal fossa) base image:

# mkdir /tmp/myroot
# debootstrap focal /tmp/myroot http://archive.ubuntu.com/ubuntu

To tinker with the base image (if desired):
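
One way is a simple chroot; a sketch, exiting the shell when done:

```console
# chroot /tmp/myroot /bin/bash
```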

Make a compressed tarball of the base image:

# tar zcf /tmp/rootfs.tar.gz -C /tmp/myroot .

Create a “metadata.yaml” file which describes the base image:

architecture: "x86_64"
# To get current date in Unix time, use `date +%s` command
creation_date: 1458040200
properties:
  architecture: "x86_64"
  description: "My custom Focal Fossa image"

Make a compressed tarball of metadata.yaml:

# tar zcf metadata.tar.gz metadata.yaml

Import the image into lxc / lxd:

$ lxc image import metadata.tar.gz rootfs.tar.gz --alias focal-base

Create a container which uses the customized base image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

$ lxc launch focal-base focaltest
$ lxc exec focaltest bash

The next several steps should be executed in the container, in the bash
shell spun up by “lxc exec…”

Configure container networking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the container, create /etc/netplan/50-cloud-init.yaml:
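
A minimal configuration that enables dhcp on eth0 looks like this (a
sketch; adjust the interface name if yours differs):

```yaml
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
```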

Use “cat > /etc/netplan/50-cloud-init.yaml”, and cut-’n-paste if your
favorite text editor is AWOL.

Apply the configuration:
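
With netplan, applying the configuration is a single command:

```console
# netplan apply
```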

At this point, eth0 should have an ip address, and you should see a
default route with “route -n”.

Again, in the container, set up /etc/apt/sources.list via cut-’n-paste
from a recently updated “focal fossa” host. Something like so:

deb http://us.archive.ubuntu.com/ubuntu/ focal main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ focal multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu focal-security main restricted
deb http://security.ubuntu.com/ubuntu focal-security universe
deb http://security.ubuntu.com/ubuntu focal-security multiverse

“apt-get update” and “apt-get install” should produce reasonable
results. Suggest “apt-get install make git”.

At this point, you can use the “/scratch” sharepoint (or similar) to
execute “make install-dep install-ext-deps” to set up the container with
the vpp toolchain; proceed as desired.