Commit 5954f5c1 authored by Emanuele Aina's avatar Emanuele Aina

Release unpublished documents

A few documents written in the past have not been released publicly.

Let's push them out now.
Signed-off-by: Emanuele Aina <>
parent 9bb7cb5b
---
title: Jenkins and Docker
short-description: Standardizing on Docker as the environment for Jenkins jobs
authors:
  - name: Emanuele Aina
---
# Jenkins and Docker
This document provides a high-level overview of the reasons to adopt Docker for
the Jenkins jobs used by the Apertis infrastructure and covers the steps needed
to transition existing non-Docker jobs.
## What is Jenkins
Jenkins is the automation server that ties all the components of the Apertis
infrastructure together.
It is responsible for:
* building source packages from git repositories and submitting them to OBS
* building ospacks and images
* submitting test jobs to LAVA
* rendering documentation from Markdown to HTML and PDF and publishing it
* building sample app-bundles
* bundling test helpers
## What is Docker
Docker is the leading system to build, manage and run server applications in
a containerized environment.
It simplifies reproducibility by:
* providing an easy way to build container images
* providing a registry for already built container images
* isolating the applications using the container images from the host system
## Why Docker with Jenkins
Running Jenkins jobs directly on a worker machine has several drawbacks:
* all the jobs share the same work environment which can cause unwanted interactions
* the work environment has to be provisioned manually by installing packages on
the machine and hand-tweaking the configuration
* the work environment has to be kept up-to-date manually
* reproducing the same work environment on different workers is very error
prone as it relies on manual action
* customizing the work environment needs privileged operations
* the work environment can't be reproduced on developers' machines
* conflicting requirements (for instance, building against different releases)
cannot be fulfilled as the work environment is shared
* scaling is complex
Jenkins jobs can instead be configured to use Docker containers as their
environment, which brings the advantages below:
* each job runs in a separate container, giving more control over resource usage
* Docker containers are instantiated automatically by Jenkins
* rebuilding Docker containers from scratch to get the latest updates can be
done with a single click
* the containers provide a reproducible environment across workers
* Docker container images are built from `Dockerfiles` controlled by developers
using the normal review workflow with no special privileges
* the same container images used on the Jenkins workers can be used to
reproduce the work environment on developers' machines
* containers are ephemeral: a job changing the work environment affects neither
other jobs nor subsequent runs of the same job
* containers are isolated from each other, making it possible to address
conflicting requirements using different images
* several service providers offer Docker support which can be used for scaling
## Apertis jobs using Docker
Apertis already uses Docker containers for a few key jobs: in particular the
transition to Debos has been done by targeting Docker from the start,
greatly simplifying setup and maintenance compared to the previous mechanism.
### Image recipes
The [jobs building ospacks and images](
use the [image-builder](
Docker container, based on Debian `stretch`.
A special requirement for those jobs is that `/dev/kvm` must be made accessible
inside the container: particular care must then be taken for the worker
machines that will run these jobs, ruling out incompatible virtualization
mechanisms (for instance VirtualBox) and service providers that cannot provide
access to the KVM device.
Developers can retrieve and launch the same environment used by Jenkins with a
single command (the registry and image name below are illustrative, since the
original reference is truncated):

```
$ docker run -i -t registry.example.org/image-builder
```
### Documentation
The [jobs building the designs and development websites](
use the [documentation-builder](
Docker container, based on Debian `stretch`.
Unlike the containers used to build images, the documentation builder does not
have any special requirement.
### Docker images
The Docker images used are generated and kept up-to-date
[through a dedicated Jenkins job](
that checks out the [docker-images](
repository, uses Docker to build all the needed images and pushes them to
[our Docker registry]( to make them
available to Jenkins and to developers.
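The job's flow can be sketched with plain `docker` commands; the git URL, image
name and registry host below are placeholders, not the actual Apertis ones:

```
$ git clone https://gitlab.example.org/infrastructure/docker-images.git
$ cd docker-images
$ docker build -t registry.example.org/image-builder image-builder/
$ docker push registry.example.org/image-builder
```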
## Converting the remaining jobs to Docker
All the older jobs still run directly on a specifically configured worker
machine. By converting them to use Docker we would get the benefits listed
above and we would also be able to repurpose the special worker machine to
become another Docker host, doubling the number of jobs that can be run
in parallel.
The affected jobs are:
* [`packaging/*`](
* [`packages/*`](
* [`samples/*`](
* [`apertis-check-commit`](
* [`apertis-master-build-snapshot`](
* [`apertis-build-package-all-masters`](
### Creating a new Docker image
The first step is to create a new Docker image to reproduce the work
environment needed by the jobs.
A new [package-builder](
recipe is introduced.
Unlike other images so far, this one is based on Apertis itself rather than
Debian. This means that a minimal Apertis ospack is produced during the build
and is then used to seed the `Dockerfile`, which installs all the needed
packages on top of it.
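One possible way to realize such seeding, sketched with placeholder tarball and
image names:

```
$ docker import ospack-minimal-amd64.tar.gz apertis/package-builder-base
$ docker build -t apertis/package-builder package-builder/
```

Here `package-builder/Dockerfile` would start with
`FROM apertis/package-builder-base` and install the needed packages on top.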
### Converting the packaging jobs
All the `packages/*` and `packaging/*` jobs are similar: they involve checking
out the git tree of a package, launching `build-snapshot` to build it against
the work environment, and submitting the resulting source package to OBS.
Once all the dependencies have been made available in the work environment again,
[converting the job templates](
only requires minor changes.
### Converting the sample app-bundle jobs
The jobs building the sample applications need `ade` and the dependencies of the app-bundles themselves.
The changes required to
[switch the job template to use Docker](
are pretty similar to the ones required by the packaging jobs.
### Converting the build-package-all-masters job
This job's purpose is to check that no API breakage is introduced in
the application framework and HMI packages by building them from sources
in sequence.
The changes required to
[switch the job template to use Docker](
are pretty similar to the ones required by the packaging jobs.
### Converting the Phabricator jobs
While the plan is to officially switch to GitLab for all the code reviews, the
jobs used to validate the patches submitted to Phabricator need to be ported to
avoid regressions.
The changes [to port them to Docker](
are similar to the ones for the other jobs, but
additional fixes are needed to ensure they work smoothly in ephemeral Docker containers,
[relaxing ssh host keys checking]( and
[avoiding the interactive behavior of git-phab](
## Steps to be taken by downstreams
Downstreams are already likely to have a Docker-capable worker machine for their
Jenkins instance in order to run the Debos-based jobs.
By merging the latest changes in the
repository a new `package-builder` image should be available in their
Docker registry.
The updates to the templates in the
repository can then be merged and deployed to Jenkins to make use of the new
Docker image.
---
title: Maintaining workspace across SDK updates
authors:
  - name: Andre Moreira Magalhaes
---
# Background
The SDK is distributed as a VirtualBox image, and developers make changes to
adjust the SDK to their needs. These changes include installing tools, libraries,
changing system configuration files, and adding content to their workspace.
There is one VirtualBox image for each version of the SDK, and currently a
version upgrade requires each developer to manually migrate their SDK
customization to the new version. This migration is time consuming,
and involves frustrating and repetitive work.
One additional problem is the need some product teams have to support different
versions of the SDK at the same time. The main challenge in this scenario is
the synchronization of the developer’s customizations between multiple
VirtualBox images.
The goal of this document is to define a model that decouples developer
customizations from SDK images, thus giving the developer persistence of
workspace, configuration, and binary packages (libraries and tools) across
different SDK images.
# Use cases
* SDK developer wants to share the workspace among different SDK images
with minimal effort. In particular, the user doesn't want to have to
rely on manually copying the workspace across SDK images in order to keep
them in sync.
* SDK developer wants a simple way to share custom system configuration
(i.e. changes to `/etc`) across SDK images.
* SDK developer wants to keep tools and libraries selection in
sync over different SDK images.
# Solution
To address workspace persistence, and to partially address the synchronization
of tools and libraries across different SDK images, the following options
were considered:
* Use [VirtualBox shared folders] as mount points for `/home` and `/opt`
* Ship a preconfigured second disk as part of the SDK images using
the [OVF format] (`.ova` files)
* Use a second (optional) disk with partitions for `/home` and `/opt` directories
and leave it to the developer to setup the disk. Helper scripts
would then be provided to help the developer setting up the disk
(e.g. setup partitions, mountpoints, copy existing content of `/home` and
`/opt` directories, etc)
The use of shared folders would be ideal here given that the setup would be
simpler while also allowing the developer to easily share data between the host
and guest (SDK).
[The problem with shared folders][VirtualBox shared folders and symlinks] is
that they don't support the creation of symlinks, which is essential for
development given that they are frequently used when configuring a source tree
to build.
However, the issue with symlinks is nonexistent when using a secondary disk,
as the disk can be partitioned and formatted using a filesystem that
supports them, making it a viable option here.
While the option to ship a preconfigured second disk as part of the SDK images
(using the OVF format) seems like a better approach at first, it brings some
drawbacks:
* The disk/partitions size would be limited to what is preconfigured during
image build
* Although some workarounds exist for VirtualBox to use `.vdi` images
(native VirtualBox image format) on `.ova` files, this is not officially
supported and VirtualBox will even convert any `.vdi` file to `.vmdk` format
when exporting an appliance using the OVF format
* In order to allow the same disk to be used by multiple virtual machines at
the same time (concurrently), VirtualBox requires the disk to be made
[shareable][VirtualBox image write modes], which in turn requires fixed size
disks (not dynamically allocated). While this may not be a common use case,
some developers may still want it to be supported, in which case the SDK
images would see a huge increase in size
That said, we recommend the use of a second disk configured by the developers
themselves. This gives the developer more flexibility, while avoiding the
limitations of using the OVF format.
Helper scripts could also be provided to ease the work of setting up the
second disk.
Another advantage of this solution is that current SDK users can also rely on
it the same way as new users would.
However it is important to note that using this option would also impact QA,
as it would need to support the two different setups (with and without a second
disk) for proper testing.
It is also important to note that while this solution partially addresses the
synchronization of tools and libraries among different SDK images, it won't
cover tools/libraries installed outside the developer workspace or the `/opt`
directory.
Supporting that for arbitrary tools/libraries, regardless of where they are
installed, would be quite complex and not practically viable, among other
reasons because `dpkg` uses a single database for installed packages.
For that reason, we recommend that developers who want to keep their package
installation fully in sync among different images do so manually.
To address synchronization of system configuration changes (i.e. `/etc`)
the following options were considered:
* Use OverlayFS on top of /etc
* Use symlinks in the second disk (e.g. on `/opt/etc`) for each configuration
file changed
Although the use of an [OverlayFS] seems simpler at first, it has some
drawbacks, such as the fact that after an update, changes stored in the
developer customization layer are likely to cause dependency issues and to hide
changes to the content and structure of the base system.
For example, if a developer upgrades an existing SDK image (or downloads a new
one) and sets up the second disk/partition as an overlay for this image's
`/etc`, any changes the new image makes to configuration files also present in
the overlay would simply be ignored, and that would be hard for the user to
notice.
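For reference, the OverlayFS option considered above would be set up roughly as
follows; the paths on the second disk are illustrative:

```
$ sudo mkdir -p /opt/etc-overlay/upper /opt/etc-overlay/work
$ sudo mount -t overlay overlay \
    -o lowerdir=/etc,upperdir=/opt/etc-overlay/upper,workdir=/opt/etc-overlay/work \
    /etc
```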
The other option would be to use symlinks in the second disk for each changed
configuration file. While this requires a bit more effort to set up, it gives
the user more control and flexibility over which configuration files get used,
and also makes it easier to notice changes in the default image configuration,
given that the user is likely to check the original system configuration files
before replacing them with a symlink.
With this option the user would still have to manually create the symlinks in
every SDK image that should share the configuration, but that process could be
eased with helper scripts to create and set up the symlinks.
Note that this approach may also cause some issues, such as the fact that some specific
software may not work with symlinked configuration files or that early boot could
potentially break if there are symlinks to e.g. `/opt`.
Given that the most common use cases for customizing system configuration would be
to setup things like a system proxy (e.g. `cntlm`) and that not many customizations
are expected, the recommended approach would be to use symlinks, as it would allow
the user to have more control over the changes.
As mentioned above, no single solution would work for all use cases and the
developers/users should evaluate the best approach based on their requirements.
# Implementation notes
To set up a new second disk, the following would be required:
* Create a new empty disk image
* Add the disk to the SDK image in question using the VirtualBox UI
* Partition and format the disk accordingly
* Set up mountpoints (i.e. `/etc/fstab`) such that the disk is mounted during boot
* Copy existing content of `/home` and `/opt` to the respective new disk
partitions - such that things like the default user files/folders are
properly populated on the new disk
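A helper script automating the steps above could run roughly the following
commands; the device name, sizes and filesystem choices are illustrative:

```
$ sudo parted --script /dev/sdb mklabel gpt \
    mkpart home ext4 1MiB 50% \
    mkpart opt ext4 50% 100%
$ sudo mkfs.ext4 /dev/sdb1 && sudo mkfs.ext4 /dev/sdb2
$ sudo mount /dev/sdb1 /mnt && sudo cp -a /home/. /mnt/ && sudo umount /mnt
$ sudo mount /dev/sdb2 /mnt && sudo cp -a /opt/. /mnt/ && sudo umount /mnt
$ echo '/dev/sdb1 /home ext4 defaults 0 2' | sudo tee -a /etc/fstab
$ echo '/dev/sdb2 /opt  ext4 defaults 0 2' | sudo tee -a /etc/fstab
```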
Optionally, if the developer plans to use the same disk across multiple SDK
instances at the same time, the disk above must be created as fixed size
and marked as `shareable` using the VirtualBox UI.
To set up an existing disk on a new SDK image, the following would be required:
* Add the existing disk to the SDK image in question using the VirtualBox UI
* Set up mountpoints (i.e. `/etc/fstab`) such that the disk is mounted during boot
As mentioned above, helper scripts could be provided to ease this work.
A script could for example do all the work of partitioning/formatting the
disk, setting up the mountpoints and copying existing content over to the new
partitions when in setup mode, or only set up the mountpoints otherwise.
It could also allow the user to optionally specify the partitions size and
other configuration options.
For system configuration changes, considering the recommended approach,
the same or another script could also be used to setup the symlinks based
on the content of `/opt/etc` when setting up the disk.
It is recommended that the content of `/opt/etc` mimics the dir structure
and filenames of the original files in `/etc`, such that a script could walk
through all dirs/files in `/opt/etc` to create the symlinks on `/etc`.
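A minimal sketch of such a helper, assuming the `/opt/etc` layout described
above; the function name and paths are hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: mirror every file under a source tree (e.g. /opt/etc
# on the second disk) into a target tree (e.g. /etc) as symlinks, recreating
# the directory structure along the way.
link_tree() {
    src=$1
    dst=$2
    find "$src" -type f | while IFS= read -r f; do
        rel="${f#"$src"/}"
        mkdir -p "$dst/$(dirname "$rel")"
        ln -sf "$f" "$dst/$rel"
    done
}

# Typical invocation on the SDK (writing to /etc requires root):
# link_tree /opt/etc /etc
```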
The user would still have to manually install the packages living outside
`/opt` or the user workspace, but that can be easily done by retrieving the
list of installed packages in one image (e.g. using `dpkg --get-selections`)
and using that to install the packages in other images.
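This last step can be done with standard `dpkg`/`apt` tooling:

```
$ dpkg --get-selections > selections.txt
# then, on the other SDK image:
$ sudo dpkg --set-selections < selections.txt
$ sudo apt-get -y dselect-upgrade
```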
[VirtualBox shared folders]:
[VirtualBox shared folders and symlinks]:
[VirtualBox image write modes]:
[OVF format]:
---
title: Test case dependencies on immutable rootfs
short-description: Ship test case dependencies avoiding changes to the rootfs images.
authors:
  - name: Denis Pynkin
  - name: Emanuele Aina
  - name: Frederic Dalleau
---
# Test case dependencies on immutable rootfs
## Overview
Immutable root filesystems have several security and maintainability advantages,
and avoiding changes to them increases the value of testing as the system under
test would closely match the production setup.
This is fundamental for setups that don't have the ability to install packages
at runtime, like OSTree-based deployments, but it's largely beneficial for
package based setups as well.
To achieve that, tests should then ship their own dependencies in a
self-contained way and not rely on package installation at runtime.
## Possible solutions
For adding binaries to an OSTree-based system, the following approaches are possible:
- Build the tests separately on Jenkins and have them run from `/var/lib/tests`;
- Create a Jenkins job to extract tests from their `.deb` packages
shipped on OBS and to publish the results, so they can be run from `/var/lib/tests`;
- Use a layered filesystem to install binaries on top of the testing image;
- Publish a separate OSTree branch for tests, created at build time from the
same ospack as the image to test;
- Produce OSTree static deltas at build time from the same ospack as the
image to test, with additional packages/binaries installed;
- Create a mechanism for `dpkg`, similar to the RPM-OStree project*, to allow
installation of additional packages in the same manner as we have now.
\* Creating a `dpkg-ostree` project would take a lot of time and human
resources, due to the changes required in the `dpkg` and `apt` system utilities.
## Overview of applicable approach
### Rework tests to ship their dependencies in `/var/lib/tests`
Build the tests separately and have them run from `/var/lib/tests`, or
create a Jenkins job to extract tests from their `.deb` packages to `/var/lib/tests`.
#### Pros:
- 'clean' testing environment -- the image is not polluted by additions, so
tests and dependencies have no influence on the software installed on the image
- possibility to install additional packages/binaries at runtime
#### Cons:
- some binaries/scripts expect to find their dependencies in standard places --
additional changes are needed to create the directory with the relocated test
tools installed
- we need to be sure that software from packages works well from a relocated
directory
- additional effort is needed to maintain 2 versions of some packages, and/or
packaging for some binaries/libraries might be tricky
- additional packages can't be installed without some preparation at build time
(saving the dpkg/apt-related infrastructure or creating a tarball from
pre-installed software)
- possible version mismatches between the software installed in the testing
image and the software from the tests directory
- problems in the installation of dependencies are only detected at runtime
### OStree branch or static deltas usage
Both approaches are based on the native OSTree upgrade/rollback mechanism --
only the transport differs.
#### Pros:
- the OSTree upgrade mechanism is itself exercised as part of the test
- it is easy to create and maintain branches for different groups of tests -- so
only the software needed for the group is installed during the tests
- developers can obtain the same environment as used in LAVA with a few `ostree` commands
- problems with the installation of test dependencies are detected at build time
- the original image does not need to have `wget`, `curl` or any other download
tool -- the `ostree` tool has its own mechanism to download the needed commit
- with OSTree static deltas we are able to test 'offline' upgrades without
network access
- a lot of disk space is saved for the infrastructure thanks to the use of an
OSTree repository
#### Cons:
- 'dirty' testing environment -- the list of packages is not the same as in the
testing image; e.g. system locations for binaries and libraries are used by the
additionally installed packages, and changes in the system configuration might
occur (the same behavior as in the current test system, which installs
additional packages via `apt`)
- it is not possible to install additional packages at runtime
- additional branch(es) have to be created at build time
- a reboot is needed to apply the test environment
- in the case of OSTree static deltas, creating a delta is an expensive
operation in terms of time and resource usage
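As noted in the pros above, a developer can fetch such a test branch with a few
`ostree` commands; a rough sketch, with the remote URL and branch name as
placeholders:

```
$ ostree remote add --no-gpg-verify tests https://images.example.org/repo
$ ostree pull tests tests/amd64
$ ostree admin switch tests:tests/amd64
$ systemctl reboot
```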
### OStree overlay
Overlay is a native option provided by the `ostree` project, re-mounting the
`/usr` directory read-write on top of `overlayfs`. This makes it possible to
add any software to `/usr`, but the changes disappear after reboot.
#### Pros:
- limited possibility to install additional packages at runtime (with the
`dpkg` and `apt` state preserved) -- a merged `/usr` is desirable
- possibility to copy/unpack prepared binaries directly into the `/usr` directory
- the OSTree pull/checkout mechanism can be used to apply the overlay
#### Cons:
- 'dirty' testing environment -- the list of packages is not the same as in the
testing image
- if used, the OSTree branch should contain only `/usr`; otherwise methods
foreign to OSTree are needed to store binaries and/or the filesystem tree
- additional software can't be applied without some preparation at build time
(saving the dpkg/apt-related infrastructure, creating a tarball from
pre-installed software, or creating an OSTree branch)
- possible version mismatches between the software installed in the testing
image and the software from the tests directory
- problems in the installation of dependencies are only detected at runtime
## Overall proposal
The proposal consists of a transition from a fully apt-based test mechanism to
a more independent one.
Each test will be pulled out of `apertis-tests` and moved to its own git
repository. During the move, the test will be made relocatable, and its
dependencies will be reduced.
Dependencies that cannot be removed will be added to the test itself.
At any time, it would still be possible to run the old tests on the non-OSTree
platform. The new tests that have already been transitioned can run on both
OSTree and apt platforms.
The following steps are envisioned.
### Create separate git repository for each test
In order to run the tests on LAVA, the use of git is recommended.
LAVA is already able to pull test definitions from git, but it can pull only one
git repository for each test.
To satisfy this constraint, each test definition, scripts, and
dependencies must be grouped in a single git repository.
In order to run the tests manually, GitLab is able to dynamically build a
tarball with the content of a git repository at any time. The tarball can be
retrieved at a specific URL.
By specifying a branch other than master, a release-specific test can be
retrieved.
download the test tarball from a host, and copy it to the device under test
using `scp`.
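For instance, using GitLab's on-demand archive URLs (host, project and branch
names are placeholders):

```
$ wget https://gitlab.example.org/tests/connectivity/-/archive/master/connectivity-master.tar.gz
$ tar -xf connectivity-master.tar.gz
# or, when the device under test has no direct network access:
$ scp connectivity-master.tar.gz user@device:/var/lib/tests/
```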
### Reduce dependencies
To minimize the impact of test dependencies on the target environment, some
dependencies need to be dropped. For example, Python itself requires several
megabytes of binaries and dependencies, so all the Python scripts will need to
be rewritten as POSIX shell scripts or compiled binaries.
For tests using data files, the data should be integrated in the git repository.
### Make test relocatable
Most of the tests rely on static paths to find binaries. It is straightforward
to modify a test to use a custom `PATH` instead of a static one. This custom
`PATH` would point to a subdirectory in the test repository itself.
This applies to dependencies which could be relocated, such as statically
linked binaries, scripts, and media files.
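A test entry point could apply this in a couple of lines; the `bin/`
subdirectory name is an assumption about the test repository layout:

```shell
#!/bin/sh
# Resolve the directory containing this script, then prepend its bundled
# bin/ directory to PATH so relocated tools shadow the system ones.
TEST_DIR=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
PATH="$TEST_DIR/bin:$PATH"
export PATH
```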
For the test components that might not be ported easily, such as AppArmor
profiles that are designed to work on binaries at fixed locations, a
case-by-case approach needs to be taken.