This directory contains the *high-level* process to set up a hardware
machine as a CSIT testbed, either for use as a physical testbed host or
as a VIRL server.
Code in this directory is NOT executed as part of a regular CSIT test case
but is stored here merely for archiving and documentation purposes.
## Setting up a hardware host
The documentation below consists of brief bullet points and assumes an
understanding of PXE boot and Ansible.
This process is specific to the LF lab; both the examples given here and
the associated code assume that they are run in the LF environment. If run
elsewhere, changes to IP addresses and other parameters will be required.
The process below assumes that there is a host used for bootstrapping (referred
to as the "PXE bootstrap server" below), and that the directory containing this
README is available on the PXE bootstrap server in ~testuser/host-setup.
### Prepare the PXE bootstrap server when there is no http server
- `sudo apt-get install isc-dhcp-server tftpd-hpa nginx-light ansible`
- edit dhcpd.cfg and place it in /etc/dhcp/
- `sudo cp dhcpd.cfg /etc/dhcp/`
- `sudo service isc-dhcp-server restart`
- `cd ~testuser/host-setup`
- `wget 'http://releases.ubuntu.com/16.04.2/ubuntu-16.04.2-server-amd64.iso'`
- `sudo mkdir /mnt/cdrom`
- `sudo mount -o loop ubuntu-16.04.2-server-amd64.iso /mnt/cdrom/`
- `sudo cp -r /mnt/cdrom/install/netboot/* /var/lib/tftpboot/`
- figure out where nginx will look for files on the filesystem when
responding to HTTP requests. The configuration is in one of the
files in /etc/nginx/conf.d/, /etc/nginx/sites-enabled/ or in
/etc/nginx/nginx.conf, under the server/root directive. Save the path as NGINX_ROOT
- `sudo mkdir -p ${NGINX_ROOT}/download/ubuntu`
- `sudo cp -r /mnt/cdrom/* ${NGINX_ROOT}/download/ubuntu/`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/ldlinux.c32 /var/lib/tftpboot`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/libcom32.c32 /var/lib/tftpboot`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/libutil.c32 /var/lib/tftpboot`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/chain.c32 /var/lib/tftpboot`
- `sudo umount /mnt/cdrom`
- edit ks.cfg and replace the IP address with that of your PXE bootstrap server
- `sudo cp ks.cfg ${NGINX_ROOT}/download/ks.cfg`
- edit boot-screens_txt.cfg and replace the IP address with that of your PXE bootstrap server
- `sudo cp boot-screens_txt.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/txt.cfg`
- `sudo cp syslinux.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/syslinux.cfg`
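The steps above place files in two separately-served trees (tftp and http), which makes it easy to miss one. A hypothetical sanity-check sketch; the file list mirrors the copy steps above (`pxelinux.0` comes with the netboot tree), and both roots are passed in as arguments:

```shell
# Report which of the files a PXE client will request are missing.
# Usage: check_pxe_files <tftp root> <nginx root>
check_pxe_files() {
    tftp_root=$1
    nginx_root=$2
    for f in "${tftp_root}/pxelinux.0" \
             "${tftp_root}/ldlinux.c32" \
             "${tftp_root}/ubuntu-installer/amd64/boot-screens/txt.cfg" \
             "${nginx_root}/download/ks.cfg" ; do
        if [ -e "$f" ] ; then
            echo "ok:      $f"
        else
            echo "missing: $f"
        fi
    done
}

# Default root is an assumption; use your saved NGINX_ROOT.
check_pxe_files /var/lib/tftpboot "${NGINX_ROOT:-/var/www}"
```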
### PREFERRED: Prepare the PXE bootstrap server when an http server is already configured
- `sudo apt-get install isc-dhcp-server tftpd-hpa ansible`
- edit dhcpd.cfg and place it in /etc/dhcp/
- `sudo cp dhcpd.cfg /etc/dhcp/`
- `sudo service isc-dhcp-server restart`
- `cd ~testuser/host-setup`
- `wget 'http://releases.ubuntu.com/16.04.2/ubuntu-16.04.2-server-amd64.iso'`
- `sudo mkdir /mnt/cdrom`
- `sudo mount -o loop ubuntu-16.04.2-server-amd64.iso /mnt/cdrom/`
- `sudo cp -r /mnt/cdrom/install/netboot/* /var/lib/tftpboot/`
- `sudo mkdir -p /var/www/download/ubuntu`
- `sudo cp -r /mnt/cdrom/* /var/www/download/ubuntu/`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/ldlinux.c32 /var/lib/tftpboot`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/libcom32.c32 /var/lib/tftpboot`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/libutil.c32 /var/lib/tftpboot`
- `sudo cp /mnt/cdrom/ubuntu/isolinux/chain.c32 /var/lib/tftpboot`
- `sudo umount /mnt/cdrom`
- edit ks.cfg and replace the IP address with that of your PXE bootstrap server and the subdirectory in /var/www (in this case, /download)
- `sudo cp ks.cfg /var/www/download/ks.cfg`
- edit boot-screens_txt.cfg and replace the IP address with that of your PXE bootstrap server and the subdirectory in /var/www (in this case, /download)
- `sudo cp boot-screens_txt.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/txt.cfg`
- `sudo cp syslinux.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/syslinux.cfg`
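Once ks.cfg and the cfg files are in place, one quick check is to build the same URL the installer will use and try to fetch the kickstart file over HTTP; a hedged sketch (the address below is a placeholder for your PXE bootstrap server, and `/download` is the subdirectory used above):

```shell
# Fetch ks.cfg from the URL the edited cfg files point at.
# Usage: check_ks_url <server ip> <subdirectory>
check_ks_url() {
    url="http://$1$2/ks.cfg"
    echo "checking ${url}"
    if curl -fsS --max-time 5 -o /dev/null "$url" ; then
        echo "reachable"
    else
        echo "NOT reachable - check nginx and the edited cfg files"
    fi
}

check_ks_url 10.30.51.209 /download   # placeholder address
```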
### New testbed host - manual preparation

- set CIMC username, password and hostname
- set IPMI username, password and hostname

### Bootstrap the host
Optional: from the PXE bootstrap server, when installing a Haswell machine:
- `cd resources/tools/testbed-setup/cimc`
- Initialize args.ip: power off, reset BIOS defaults, enable console redirection, get the LOM MAC address
- `./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -i`
- Adjust BIOS settings
- `./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -s '<biosVfIntelHyperThreadingTech rn="Intel-HyperThreading-Tech" vpIntelHyperThreadingTech="disabled" />' -s '<biosVfEnhancedIntelSpeedStepTech rn="Enhanced-Intel-SpeedStep-Tech" vpEnhancedIntelSpeedStepTech="disabled" />' -s '<biosVfIntelTurboBoostTech rn="Intel-Turbo-Boost-Tech" vpIntelTurboBoostTech="disabled" />'`
- add the MAC address to DHCP (/etc/dhcp/dhcpd.conf)
- Reboot the server with boot from PXE (restart immediately)
- `./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -pxe`
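The "add the MAC address to DHCP" step can be scripted; a hypothetical helper that emits a dhcpd.conf host stanza (hostname, MAC and IP below are placeholders; append the output to /etc/dhcp/dhcpd.conf and restart isc-dhcp-server):

```shell
# Emit a dhcpd.conf host stanza so a new host can PXE-boot to a fixed address.
# Usage: print_dhcp_host <hostname> <mac> <ip>
print_dhcp_host() {
    cat <<EOF
host $1 {
    hardware ethernet $2;
    fixed-address $3;
}
EOF
}

print_dhcp_host t1-sut1 00:11:22:33:44:55 10.30.51.44   # placeholders
```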
Optional: if RAID has not been created on Haswells, execute the following while the Ubuntu install is running:

- create the RAID array; reboot if needed
- `./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d --wipe`
- `./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -r -rl 1 -rs <disk size> -rd '[1,2]'`

Alternatively, create the RAID array manually.
- Set the next boot from HDD (without restart)
- `./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -hdd`
Optional: if installing a Skylake machine, connect to IPMI and boot from PXE via F12.
When installation is finished:
- Copy ssh keys for password-less access: `ssh-copy-id 10.30.51.x`
- Clone the current CSIT repo: `git clone https://gerrit.fd.io/r/csit`
- Go to the ansible directory: `cd csit/resources/tools/testbed-setup/ansible`
- Edit the production file, uncomment the servers that are to be installed, and verify the selection: `ansible-playbook --ask-become-pass --inventory production site.yaml --list-hosts`
- Run ansible on the selected hosts: `ansible-playbook --ask-become-pass --inventory production site.yaml`
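As an alternative to editing the production file, ansible's `--limit` option can restrict the run to just the newly installed hosts. A sketch that only prints the resulting command for review (the addresses are placeholders; drop the `echo` to actually run it):

```shell
# Build (and print) the site.yaml invocation limited to the given hosts.
# Usage: site_cmd <comma-separated host pattern>
site_cmd() {
    echo ansible-playbook --ask-become-pass --inventory production \
        site.yaml --limit "$1"
}

site_cmd "10.30.51.44,10.30.51.45"   # placeholder addresses
```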
For non-VIRL hosts, stop here.

### VIRL installation

After the host has rebooted:

- `ansible-playbook 02-virl-bootstrap.yaml`
- `cd virl-bootstrap`
- `./virl-bootstrap-wrapper`
This command will error out the first time it is run, as the VIRL host is not yet licensed.
Make sure we can contact all three VIRL SALT masters:

- `for a in 1 2 4 ; do sudo salt-call --master us-${a}.virl.info test.ping ; done`
- Contact the VIRL team, provide the hostname and domain (linuxfoundation.org), and ask them to accept the key.
- After the key has been accepted, verify that connectivity with the SALT master is now OK:

`for a in 1 2 4 ; do sudo salt-call --master us-${a}.virl.info test.ping ; done`

- `./virl-bootstrap-wrapper`
After reboot, ssh to the host again

- as the VIRL user, NOT AS ROOT:

After reboot, ssh to the host again

- `sudo salt-call state.sls virl.routervms.all`
- `sudo salt-call state.sls virl.vmm.vmmall`
Back on the PXE bootstrap server:

- obtain the current server disk image and place it into
`files/virl-server-image/` as `server.qcow2`

TO-DO: Need to find a place to store this image

- `ansible-playbook 03-virl-post-install.yaml`
- Run the following command ONLY ONCE. Otherwise it will create
duplicates of the VIRL disk image:

`ansible-playbook 04-disk-image.yaml`
The VIRL host should now be operational. Test it, and when ready, create a
~jenkins-in/status file with the appropriate status.