This directory contains the *high-level* process to set up a hardware machine
as a CSIT testbed, either for use as a physical performance testbed host or as
a vpp_device host.

Code in this directory is NOT executed as part of a regular CSIT test case
but is stored here for ad-hoc installation of HW, archiving and documentation
purposes.
Setting up a hardware host
--------------------------
The documentation below is a step-by-step tutorial and assumes an understanding
of PXE boot, Ansible, and managing physical hardware via CIMC or IPMI.
This process is not specific to the LF lab, but the associated files and code
are based on the assumption that they run in the LF environment. If run
elsewhere, changes will be required to the following files:
#. Inventory directory: `ansible/inventories/sample_inventory/`
#. Inventory files: `ansible/inventories/sample_inventory/hosts`
#. Kickseed file: `pxe/ks.cfg`
#. DHCPD file: `pxe/dhcpd.conf`
#. Bootscreen file: `boot-screens_txt.cfg`
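
For orientation, the DHCPD change amounts to pointing PXE clients at your own
bootstrap server. A minimal hypothetical subnet declaration for `dhcpd.conf`
(all addresses below are placeholders, not LF values) might look like:

```
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.200 192.0.2.250;
  option routers 192.0.2.1;
  # Address of the PXE bootstrap server (TFTP) and the boot file it serves.
  next-server 192.0.2.100;
  filename "pxelinux.0";
}
```

The `next-server` and `filename` options are what the PXE firmware uses to
fetch the bootloader over TFTP; the rest is ordinary DHCP pool configuration.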
The process below assumes that there is a host used for bootstrapping (referred
to as "PXE bootstrap server" below).
Prepare the PXE bootstrap server when there is no HTTP server (AMD64)
``````````````````````````````````````````````````````````````````````
#. Clone the csit repo::

      git clone https://gerrit.fd.io/r/csit
      cd csit/resources/tools/testbed-setup/pxe
#. Set up the prerequisites (isc-dhcp-server tftpd-hpa nginx-light ansible)::

      sudo apt-get install isc-dhcp-server tftpd-hpa nginx-light ansible
      sudo cp dhcpd.conf /etc/dhcp/
      sudo service isc-dhcp-server restart
#. Download Ubuntu 18.04 LTS - X86_64::

      wget http://cdimage.ubuntu.com/ubuntu/releases/18.04/release/ubuntu-18.04-server-amd64.iso
      sudo mount -o loop ubuntu-18.04-server-amd64.iso /mnt/cdrom/
      sudo cp -r /mnt/cdrom/install/netboot/* /var/lib/tftpboot/

      # Figure out the root folder for the NGINX webserver. The configuration
      # is in one of the files in /etc/nginx/conf.d/, /etc/nginx/sites-enabled/
      # or in /etc/nginx/nginx.conf under section server/root. Save the path
      # into the WWW_ROOT variable.
      sudo mkdir -p ${WWW_ROOT}/download/ubuntu
      sudo cp -r /mnt/cdrom/* ${WWW_ROOT}/download/ubuntu/
      sudo cp /mnt/cdrom/ubuntu/isolinux/ldlinux.c32 /var/lib/tftpboot
      sudo cp /mnt/cdrom/ubuntu/isolinux/libcom32.c32 /var/lib/tftpboot
      sudo cp /mnt/cdrom/ubuntu/isolinux/libutil.c32 /var/lib/tftpboot
      sudo cp /mnt/cdrom/ubuntu/isolinux/chain.c32 /var/lib/tftpboot
      sudo umount /mnt/cdrom
#. Edit ks.cfg and replace the IP address of the PXE bootstrap server and the
   subdir in `/var/www` (in this case `/var/www/download`)::

      sudo cp ks.cfg ${WWW_ROOT}/download/ks.cfg
#. Edit boot-screens_txt.cfg and replace the IP address of the PXE bootstrap
   server and the subdir in `/var/www` (in this case `/var/www/download`)::

      sudo cp boot-screens_txt.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/txt.cfg
      sudo cp syslinux.cfg /var/lib/tftpboot/ubuntu-installer/amd64/boot-screens/syslinux.cfg
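
The edits in the two steps above boil down to the `append` line of one boot
menu entry: the kernel fetches `ks.cfg` from the bootstrap server's HTTP root.
A hypothetical entry (the IP address and paths are placeholders) could look
like:

```
label install
  menu label ^Install Ubuntu Server
  kernel ubuntu-installer/amd64/linux
  append vga=788 initrd=ubuntu-installer/amd64/initrd.gz ks=http://192.0.2.100/download/ks.cfg --- quiet
```

The `ks=` URL is the piece that must match your server's address and the
subdir chosen under the webserver root.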
New testbed host - manual preparation
`````````````````````````````````````
Set the CIMC/IPMI address, username, password and hostname in the BIOS.
Optional: CIMC - From PXE bootstrap server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Initialize args.ip: Power-Off, reset BIOS defaults, Enable console redir,
   get the MAC address::

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -i
#. Adjust BIOS settings::

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d \
          -s '<biosVfIntelHyperThreadingTech rn="Intel-HyperThreading-Tech" vpIntelHyperThreadingTech="disabled" />' \
          -s '<biosVfEnhancedIntelSpeedStepTech rn="Enhanced-Intel-SpeedStep-Tech" vpEnhancedIntelSpeedStepTech="disabled" />' \
          -s '<biosVfIntelTurboBoostTech rn="Intel-Turbo-Boost-Tech" vpIntelTurboBoostTech="disabled" />'
#. If the RAID array is not yet created in CIMC, create it and reboot::

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d --wipe
      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -r -rl 1 -rs <disk size> -rd '[1,2]'
#. Reboot the server with boot from PXE (restarts immediately)::

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -pxe
#. Set the next boot from HDD (without restart). Execute while the Ubuntu
   installation is still running::

      ./cimc.py -u admin -p Cisco1234 $CIMC_ADDRESS -d -hdd
Optional: IPMI - From PXE bootstrap server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Get the MAC address of LAN0::

      ipmitool -U ADMIN -H $HOST_ADDRESS raw 0x30 0x21 | tail -c 18
#. Reboot into PXE for the next boot only::

      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN chassis bootdev pxe
      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN power reset
#. For live watching of the SOL (Serial-over-LAN) console::

      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN sol activate
      ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN sol deactivate
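
When driving several hosts this way it helps to confirm state between steps.
A sketch using standard `ipmitool` chassis subcommands (same `$HOST_ADDRESS`
and credentials as above; output format varies by BMC vendor):

```shell
# Check the current power state before forcing a reset.
ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN chassis power status

# Show the currently configured boot flags (parameter 5 holds the
# boot device override, so this confirms "bootdev pxe" took effect).
ipmitool -I lanplus -H $HOST_ADDRESS -U ADMIN chassis bootparam get 5
```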
Prerequisites for running Ansible
..................................
- Ansible can run on any machine that has direct SSH connectivity to the target
  machines that will be provisioned (it does not need to be the PXE server).
- User `testuser` with password `Csit1234` is created with the home folder
  initialized on all target machines that will be provisioned.
- An inventory directory is created with the same or similar content as
  `inventories/lf_inventory` in the `inventories/` directory (`sample_inventory`
  can be used as a starting point).
- Group variables in `ansible/inventories/<inventory>/group_vars/all.yaml` are
  adjusted per environment. Pay special attention to the `proxy_env` variable.
- Host variables in `ansible/inventories/<inventory>/host_vars/x.x.x.x.yaml` are
  adjusted per environment.
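
For orientation, a host_vars file is plain Ansible YAML keyed by connection
address. A minimal hypothetical `host_vars/1.1.1.1.yaml` using only standard
Ansible connection variables (the real CSIT-specific variable set lives in
`lf_inventory`) could look like:

```yaml
ansible_host: 1.1.1.1           # Address Ansible connects to.
ansible_user: testuser          # User created during manual preparation.
ansible_python_interpreter: /usr/bin/python3
```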
Ansible defines the roles `TG` (Traffic Generator), `SUT` (System Under Test),
`VPP_DEVICE` (vpp_device host for functional testing) and `COMMON` (applicable
to all servers in the inventory).
Each host has a corresponding Ansible role mapped to it, and a role is applied
only if a host with that role is present in the inventory file. As an
optimization, the role `common` contains Ansible tasks applied to all hosts.
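
The per-role mapping described above is typically wired up in the main
playbook by importing the per-role playbooks. A hedged sketch of what
`site.yaml` could contain (the actual playbook contents may differ):

```yaml
# site.yaml - top-level playbook importing the per-role playbooks.
- import_playbook: tg.yaml
- import_playbook: sut.yaml
- import_playbook: vpp_device.yaml
```

Running `site.yaml` then covers every role, while the individual playbooks
remain usable on their own.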
You may see `[WARNING]: Could not match supplied host pattern, ignoring:
<role>` in case you have not defined hosts for that particular role.
The Ansible structure is described below::
   ├── inventories                # Contains all inventories.
   │   ├── sample_inventory       # Sample, free for edits outside of LF.
   │   │   ├── group_vars         # Variables applied to all hosts.
   │   │   │   └── all.yaml       # Sample group variables.
   │   │   ├── hosts              # Inventory list with sample hosts.
   │   │   └── host_vars          # Variables applied to a single host only.
   │   │       └── 1.1.1.1.yaml   # Sample host with IP 1.1.1.1.
   │   └── lf_inventory           # Linux Foundation inventory.
   ├── roles                      # CSIT roles.
   │   ├── common                 # Role applied to all hosts.
   │   ├── sut                    # Role applied to all SUTs only.
   │   ├── tg                     # Role applied to all TGs only.
   │   ├── tg_sut                 # Role applied to TGs and SUTs only.
   │   └── vpp_device             # Role applied to vpp_device only.
   ├── site.yaml                  # Main playbook.
   ├── sut.yaml                   # SUT playbook.
   ├── tg.yaml                    # TG playbook.
   ├── vault_pass                 # Main password for vault.
   ├── vault.yml                  # Ansible vault storage.
   └── vpp_device.yaml            # vpp_device playbook.
Every task, handler, role and playbook is tagged with self-explanatory tags
that can be used to limit which objects are applied to target systems.
You can see which tags are applied to tasks, roles, and static imports by
running `ansible-playbook` with the `--list-tasks` option. You can display all
tags applied to the tasks with the `--list-tags` option.
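
For example, to inspect and then restrict a run by tag (substitute your own
inventory path; the tag name `common` follows the role names used in this
repository and is an assumption here):

```shell
# List every task (with its tags) that a full run would execute.
ansible-playbook --list-tasks --inventory <inventory_file> site.yaml

# List only the tags themselves.
ansible-playbook --list-tags --inventory <inventory_file> site.yaml

# Apply only the tasks tagged 'common'.
ansible-playbook --tags common --inventory <inventory_file> site.yaml
```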
#. Go to the ansible directory: `cd csit/resources/tools/testbed-setup/ansible`
#. Run Ansible on the selected hosts:
   `ansible-playbook --vault-password-file=vault_pass --extra-vars '@vault.yml'
   --inventory <inventory_file> site.yaml --limit x.x.x.x`
In case you want to provision only a particular role, you can use the tags
`tg`, `sut` or `vpp_device`.

Manually reboot the hosts after Ansible provisioning has succeeded.