# Automated Building Of FD.io CI Executor Docker Images
This collection of bash scripts and libraries is used to automate the process
of building FD.io docker 'builder' images (aka Nomad executors). The goal is
to create a completely automated CI/CD pipeline. The bash code is designed to
be run in a regular Linux bash shell in order to bootstrap the CI/CD pipeline,
as well as in a docker 'builder' image started by a ci-management jenkins job.
The Dockerfile is generated prior to executing 'docker build', based on the
specified OS parameter. The project git repos are also copied into the docker
container and retained to optimize git object retrieval by the Jenkins jobs
running the CI/CD tasks.
## Image Builder Algorithm

The general algorithm to automate the generation of the docker images, such
that the downloadable requirements for each project are pre-installed or
cached in the executor image, is as follows:

1. Run the docker image builder on a host of the target architecture.
   Bootstrap images will be built 'by hand' on target hosts until the CI is
   capable of executing the docker image builder scripts inside docker images
   running on Nomad instances via jenkins jobs.

2. For each OS package manager, there is a bash function which generates the
   Dockerfile for the specified OS which uses said package manager. For
   example, lib_apt.sh contains 'generate_apt_dockerfile()', which is executed
   for Ubuntu and Debian images.

3. The Dockerfiles contain the following sections:
   - a. Environment setup and copying of project workspace git repos
   - b. Installation of OS package pre-requisites
   - c. Docker install and project requirements installation (more on this below)
   - d. Working environment setup

4. The project installation section (c. above) is where all of the packages
   for each of the supported project branches are installed or cached to save
   time and bandwidth when the CI jobs are run. Each project script defines
   the branches supported for each OS and iterates over them from oldest to
   newest, using the dependency and requirements files or build targets in
   each supported project branch.

5. `docker build` is run on the generated Dockerfile.
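The top-level flow above can be sketched in bash. This is an illustrative
outline only, not the actual script: the function names `os_package_manager`
and `generate_dockerfile` are assumptions, and the real scripts source the
`lib_*.sh` generator functions rather than echoing their names.

```shell
#!/usr/bin/env bash
# Sketch of the image-builder dispatch: choose the Dockerfile generator
# for the target OS's package manager, then hand off to 'docker build'.
set -euo pipefail

# Map an OS name to its package manager (only apt is shown here; a real
# script would also cover dnf/yum based distros).
os_package_manager() {
    case "$1" in
        ubuntu-*|debian-*) echo "apt" ;;
        *) echo "unknown"; return 1 ;;
    esac
}

generate_dockerfile() {
    local os="$1"
    local pkg_mgr
    pkg_mgr="$(os_package_manager "$os")"
    # The real scripts would now call generate_${pkg_mgr}_dockerfile()
    # from lib_${pkg_mgr}.sh; here we only show which one is selected.
    echo "generate_${pkg_mgr}_dockerfile $os"
}

generate_dockerfile "ubuntu-20.04"
# A real driver would then run: docker build -f Dockerfile .
```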
## Bash Libraries (lib_*.sh)

The bash libraries are designed to be sourced both inside of the docker build
environment (e.g. from a script invoked in a Dockerfile RUN statement) and in
a normal Linux shell. These scripts create environment variables and bash
functions for use by the operational scripts.

- `lib_apt.sh`: Dockerfile generation functions for the apt package manager.
- `lib_common.sh`: Common utility functions and environment variables.
- `lib_csit.sh`: CSIT-specific functions and environment variables.
- `lib_vpp.sh`: VPP-specific functions and environment variables.
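A common pattern that makes libraries like these safe to source from either a
Dockerfile RUN step or an interactive shell is a guard variable that dependent
scripts can check. The sketch below is illustrative only; the variable and
function names are assumptions, not the actual library code.

```shell
# Illustrative guard idiom for sourceable bash libraries.

# A line like this would sit at the top of lib_common.sh when sourced:
export LIB_COMMON_SOURCED=true

# Dependent scripts verify the library was sourced before using it:
require_lib_common() {
    if [ "${LIB_COMMON_SOURCED:-}" != "true" ]; then
        echo "ERROR: source lib_common.sh first!" >&2
        return 1
    fi
}

require_lib_common && echo "lib_common.sh functions available"
```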
## Bash Scripts

There are two types of bash scripts: those intended to be run solely inside
the docker build execution environment, and those that can be run either
inside or outside of it.
### Docker Build (dbld_*.sh) Scripts

These scripts, which run inside the 'docker build' environment, are either
per-project scripts that install OS and python packages or scripts that
install other docker image runtime requirements.
Python packages are not retained because they are typically installed in
virtual environments. However, installing the python packages in the Docker
Build scripts populates the pip/http caches, so during CI job execution the
packages are installed from the cache files instead of being downloaded from
the Internet.
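The cache-warming idea can be illustrated as a dry-run sketch: packages are
installed into a throwaway virtual environment purely to populate pip's cache,
then the environment is discarded. The function and path names below are
assumptions for illustration, not the actual script contents.

```shell
# Dry-run sketch of the pip cache-warming pattern used conceptually by
# the dbld_* scripts: emit the commands a real script would execute.

warm_pip_cache() {
    local venv_dir="$1" requirements="$2"
    echo "python3 -m venv $venv_dir"
    echo "$venv_dir/bin/pip install -r $requirements"  # fills ~/.cache/pip
    echo "rm -rf $venv_dir"  # the venv is discarded, the cache is retained
}

warm_pip_cache /tmp/warm-venv requirements.txt
```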
- `dbld_csit_find_ansible_packages.sh`: Find OS packages installed by ansible.

- `dbld_csit_install_packages.sh`: Install OS and python packages for CSIT
  branches.

- `dbld_dump_build_logs.sh`: Find warnings/errors in the build logs and dump
  the build_executor_docker_image.sh execution log.

- `dbld_install_docker.sh`: Install docker ce.

- `dbld_lfit_requirements.sh`: Install requirements for LFIT global-jjb.

- `dbld_vpp_install_packages.sh`: Install OS and python packages for VPP
  branches.
### Executor Docker Image Management Bash Scripts

These scripts are used to build executor docker images, inspect the results,
and manage the docker image tags in the Docker Hub fdiotools repositories.

- `build_executor_docker_image.sh`: Build script to create one or more
  executor docker images.

- `update_dockerhub_prod_tags.sh`: Inspect/promote/revert production docker
  tags in the Docker Hub fdiotools repositories.
## Running The Scripts

### Bootstrapping The Builder Images

The following commands are useful to build the initial builder images:

`cd <ci-management repository directory>`

`sudo ./docker/scripts/build_executor_docker_image.sh ubuntu-20.04 2>&1 | tee u2004-$(uname -m).log | grep -ve '^+'`

`sudo ./docker/scripts/build_executor_docker_image.sh -apr sandbox 2>&1 | tee all-sandbox-$(uname -m).log | grep -ve '^+'`
Note: The initial population of a Docker Hub repository is performed manually
by tagging and pushing the verified sandbox image as 'prod-<arch>' and
'prod-prev-<arch>', since the update_dockerhub_prod_tags.sh script assumes
that both labels exist in the repo. After the initial images have been pushed
to the Docker Hub repository, the update script is used to prevent
inadvertently applying the wrong tags to images in the repository.
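The one-time manual bootstrap described above amounts to tagging the verified
sandbox image with both production labels and pushing them. The sketch below
is a hypothetical helper, shown in dry-run form (it prints the docker
commands rather than executing them); the repository name is just an example
from the nomenclature section.

```shell
# Dry-run sketch of the initial manual tag bootstrap. Set DRY_RUN=false
# to actually execute the docker commands (requires push privileges).

DRY_RUN=${DRY_RUN:-true}
run() { if [ "$DRY_RUN" = "true" ]; then echo "$*"; else "$@"; fi; }

bootstrap_prod_tags() {
    local repo="$1" arch="$2"
    run docker tag "$repo:sandbox-$arch" "$repo:prod-$arch"
    run docker tag "$repo:sandbox-$arch" "$repo:prod-prev-$arch"
    run docker push "$repo:prod-$arch"
    run docker push "$repo:prod-prev-$arch"
}

bootstrap_prod_tags fdiotools/builder-ubuntu2004 "$(uname -m)"
```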
### Building in a Builder Image

By running the docker image with the docker socket mounted in the container,
the docker build environment runs on the host's docker daemon. This avoids
the pitfalls encountered with Docker-In-Docker environments:

`sudo docker run -it -v /var/run/docker.sock:/var/run/docker.sock <docker-image>`

The environment in the docker shell contains all of the necessary environment
variable definitions, so the docker scripts can be run directly on the cli.
Here is an example command that would be used in a CI job to automate the
generation and testing of a new ubuntu-20.04 docker image and push it to
Docker Hub as fdiotools/builder-ubuntu2004:test-<arch>:

`build_executor_docker_image.sh -pr test ubuntu-20.04`

In the future, a fully automated CI/CD pipeline may be created for production
docker images.
# Docker Image Script Workflow

This section describes the current workflow used for managing the CI/CD
pipeline for the Docker Images used by the FD.io CI Jobs.

Note: all operations that push images or image tags to Docker Hub require an
account with management privileges of the fdiotools repositories.
## Update Production Docker Images

Note: Presently only the 'builder' class executor docker images are supported.
The others will be supported in the near future.
### Build Docker Images and Push to Docker Hub with Sandbox CI Tag

For each hardware architecture, the build_executor_docker_image.sh script is
used to build all variants of each executor class:

1. `git clone https://gerrit.fd.io/r/ci-management && cd ci-management`

2. `sudo ./docker/scripts/build_executor_docker_image.sh -p -r sandbox -a | tee builder-all-sandbox-$(uname -m).log | grep -ve '^+'`

3. Inspect the build log for errors and other build anomalies.
This step will take a very long time, so it is best done overnight. There is
currently no option to automatically run builds in parallel, so if optimizing
build times is important, run the jobs in separate shells, one per OS. The
aarch64 builds are particularly slow and may benefit from being run on
separate hosts in parallel.
Note: the 'prod' role is disallowed in the build script to prevent accidental
deployment of untested docker images to production.
### Test Docker Images in the Jenkins Sandbox

In the future, this step will be automated using the role 'test' and
associated tags, but for now testing is a manual operation.

1. `git clone https://gerrit.fd.io/r/vpp ../vpp && source ../vpp/extras/bash/functions.sh`

2. Edit jjb/vpp/vpp.yaml (or another project yaml file) and replace '-prod-'
   with '-sandbox-' for all of the docker images.

3. `jjb-sandbox-env` # bash function which sets up the sandbox environment
   and creates the jjsb-* helper functions

4. For each job using one of the docker images:

   a. `jjsb-update <job name(s)>` # bash function created by jjb-sandbox-env
      to push the job to the sandbox

   b. Manually run the job in https://jenkins.fd.io/sandbox

   c. Inspect the console output of each job for unnecessary downloads and
      errors.
### Promote Docker Images to Production

Once all of the docker images have been tested, promote each one to production:

`sudo ./docker/scripts/update_dockerhub_prod_tags.sh promote <image name>`
Note: this script currently requires human acceptance via the terminal to
ensure that the correct images are promoted. It pulls all tags from the
Docker Hub repos, does an Inspect action (displaying the current state of the
'prod' & 'prod-prev' tags) and a local Promotion action (i.e. tags local
images with 'prod-<arch>' and 'prod-prev-<arch>'), with a required
confirmation to continue the promotion by pushing the tags to Docker Hub. If
'no' is specified, it restores the previous local tags so they match the
state of Docker Hub and does a new Inspect action for verification. If 'yes'
is specified, it prints out the command to use to restore the existing state
of the production tags on Docker Hub in case the script is terminated prior
to completion. If necessary, the restore command can be repeated multiple
times until it completes successfully, since it promotes the
'prod-prev-<arch>' image, then the 'prod-<arch>' image in succession.
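The tag rotation performed by the promote action can be modeled in a few lines
of shell. This is a simplified model of the logic only (plain variables
standing in for docker tags); the real script manipulates local docker tags
and pushes them to Docker Hub.

```shell
# Simplified model of the promote step's tag rotation: the current
# 'prod' image becomes 'prod-prev' (kept for revert), and the newly
# tested candidate becomes 'prod'.

promote() {
    local candidate="$1"
    PROD_PREV="$PROD"   # old production image retained for revert
    PROD="$candidate"   # newly tested image promoted to production
}

PROD="image-v2"; PROD_PREV="image-v1"
promote "image-v3"
echo "prod=$PROD prod-prev=$PROD_PREV"
```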
## Other Docker Hub Operations

### Inspect Production Docker Image Tags

Inspect the current production docker image tags:

`sudo ./docker/scripts/update_dockerhub_prod_tags.sh inspect fdiotools/<class>-<os name>:prod-$(uname -m)`

### Revert Production Docker Image To Previous Docker Image

Revert the current production docker image to the previous production image:

`sudo ./docker/scripts/update_dockerhub_prod_tags.sh revert fdiotools/<class>-<os name>:prod-$(uname -m)`
### Restoring Previous Production Image State

Assuming that the images still exist in the Docker Hub repository, any
previous state of the production image tags can be restored by executing the
'restore command' as output by the update_dockerhub_prod_tags.sh script. This
script writes a copy of all of the terminal output to a log file in
/tmp/update_dockerhub_prod_tags.sh.<date>.log, thus providing a history of
the restore commands. When the building of executor docker images is
performed by a CI job, the logging can be removed since the job execution
will be captured in the Jenkins console output log.
### Docker Image Garbage Collection

Presently, cleaning up the Docker Hub repositories of old images/tags is a
manual process using the Docker Hub WebUI. In the future, a garbage
collection script will be written to automate the process.
# DockerHub Repository & Docker Image Tag Nomenclature

## DockerHub Repositories

- fdiotools/builder-debian11
- fdiotools/builder-ubuntu2004
- fdiotools/builder-ubuntu2204
- fdiotools/csit_dut-ubuntu2004
- fdiotools/csit_shim-ubuntu2004
## Docker Image Tags

- prod-x86_64: Tag used to select the x86_64 production image by the
  associated Jenkins-Nomad Label.
- prod-prev-x86_64: Tag of the previous x86_64 production image used to
  revert a production image to the previous image used in production.
- prod-aarch64: Tag used to select the aarch64 production image by the
  associated Jenkins-Nomad Label.
- prod-prev-aarch64: Tag of the previous aarch64 production image used to
  revert a production image to the previous image used in production.
- sandbox-x86_64: Tag used to select the x86_64 sandbox image by the
  associated Jenkins-Nomad Label.
- sandbox-aarch64: Tag used to select the aarch64 sandbox image by the
  associated Jenkins-Nomad Label.
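The nomenclature above composes a full image reference as
fdiotools/&lt;class&gt;-&lt;os&gt;:&lt;role&gt;-&lt;arch&gt;. A small helper can make the
convention explicit; the function name below is an assumption for
illustration, not part of the real scripts.

```shell
# Compose a Docker Hub image reference from the FD.io naming convention:
# fdiotools/<class>-<os>:<role>-<arch>

image_ref() {
    local class="$1" os="$2" role="$3" arch="$4"
    echo "fdiotools/${class}-${os}:${role}-${arch}"
}

image_ref builder ubuntu2004 prod x86_64    # fdiotools/builder-ubuntu2004:prod-x86_64
image_ref csit_dut ubuntu2004 sandbox aarch64
```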