@@ -2,9 +2,9 @@
This is the source for the main Apertis website. It is generated using
[Hugo](https://gohugo.io/) with a modified
[Beautiful Hugo theme](https://github.com/halogenica/beautifulhugo), changing
the look, implementing search and allowing for the generation of PDFs. The page
is served from GitLab pages.
---
@@ -17,7 +17,8 @@ layout guidelines:
- Documenting procedures and rules
- Minimal requirements for project involvement
- Concepts:
- Topics that have been researched and/or planned but which haven't yet been
  implemented
- Architecture:
- Description of project infrastructure
- Details of technologies and software used by Apertis
@@ -26,12 +27,30 @@ layout guidelines:
- Worked examples of expected project workflows
- QA:
- Test reports
- Test procedures (realistically, a description of the testing performed and
  a pointer to qa.apertis.org)
- Releases:
- Release notes
- Release schedules
## Spelling
In order to provide some consistency and quality to the website, we would like
to ensure that all documents have been spellchecked. In the mid-term we would
like these checks to be performed as part of the website's CI/CD, using Aspell
to check the spelling. As it is likely that many of the documents use words not
in Aspell's dictionaries, we are starting with a manual approach to build up a
personal dictionary of additional words that the Apertis project is happy with.
When making changes or adding documents, please run:
    aspell --personal="./dictionary.aspell" --lang="en_us" --mode=markdown check <document>
Any issues caught should either be rectified or added to the `dictionary.aspell`
file (maintaining its alphabetical order).
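As a sketch of what the mid-term automated check could look like, a GitLab CI
job along the following lines could run the same check non-interactively; the
job name, image, content path and use of Aspell's `list` mode are assumptions
rather than existing project configuration:

```
spellcheck:
  image: debian:bullseye
  script:
    - apt-get update && apt-get install --yes aspell aspell-en
    # Collect unknown words from every Markdown page and fail if any are found
    - |
      found=0
      for doc in $(find content -name '*.md'); do
        words=$(aspell --personal="./dictionary.aspell" --lang="en_us" --mode=markdown list < "$doc" | sort -u)
        if [ -n "$words" ]; then
          echo "$doc:" $words
          found=1
        fi
      done
      exit "$found"
```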
## Use of Hugo `ref` Shortcode
Hugo provides the `ref` shortcode to aid with
[creating links between documents](https://gohugo.io/content-management/cross-references/).
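For example, a cross-reference in a page's Markdown source looks roughly like
this (the target file name is illustrative):

```
See the [release flow]({{< ref "release-flow.md" >}}) for details on how releases are staged.
```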
@@ -44,17 +63,17 @@ provided on the website.
In order to generate PDFs, we are getting Hugo to create simplified HTML
pages. Not every page should generate a PDF; to have a page generated as a PDF
(and HTML at the same time), add the following to the front matter of the page:
```
outputs = ["html", "pdf-in"]
```
This will result in the simplified HTML being produced in a file called
`index.pdf-in` in the relevant directory. The CI is configured to look for
these files once Hugo has generated the site and create PDFs of them. For the
page `www.apertis.org/concepts/foo/`, a PDF will be available as
`www.apertis.org/concepts/foo/foo.pdf`.
## GitLab CI
@@ -76,6 +95,6 @@ Read more at Hugo's [documentation](https://gohugo.io/overview/introduction/).
### Preview your site
If you clone or download this project to your local computer and run
`hugo server`, your site can be accessed under `localhost:1313/hugo/`.
@@ -317,6 +317,18 @@ by the [Apertis GitLab instance](https://gitlab.apertis.org/):
## Image creation
Image creation is the point where a set of standard packages are combined to
build a solution for a specific use case. This goal is accomplished thanks to
[Debos](https://github.com/go-debos/debos), a flexible tool to configure the
build of Debian-based operating systems. Debos uses tools like `debootstrap`
already present in the environment and relies on virtualisation to securely do
privileged operations without requiring root access.
Additionally, at this stage customizations can be applied by using overlays.
This process allows the default content of the packages to be combined with
custom modifications to provide the desired solution. A common case is to
apply overlays to change some default system settings found in `/etc`, such as
the default hostname.
Ospacks and how they should be processed to generate images are defined
through YAML files.
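As an illustration, a minimal Debos recipe could look roughly like the sketch
below; the suite, overlay path and output name are placeholders rather than an
actual Apertis recipe:

```
architecture: arm64

actions:
  # Bootstrap a minimal Debian-based rootfs
  - action: debootstrap
    suite: bullseye
    components: [ main ]
    mirror: https://deb.debian.org/debian
    variant: minbase

  # Apply local customizations (e.g. a default hostname under /etc) on top of the packaged defaults
  - action: overlay
    source: overlays/default-hostname

  # Pack the resulting rootfs so that later steps can turn it into an image
  - action: pack
    file: ospack.tar.gz
    compression: gz
```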
@@ -51,11 +51,16 @@ how would that be named so that version-revision will not conflict with other
changes? What if we want to backport the version in development, to build
against the stable branch? What about us, as a downstream of the Debian package,
how should we version the package if we want to apply some changes on top of
Debian package? The convention, when modifying the package for security
updates, backports and downstream modification, is to append to the end of the
existing Debian version number. As a result of this policy, packages in
Apertis bear the addition `coX`, where `X` is an incremented number, which
shows the number of modifications made to the package by Collabora for Apertis.
The `co0` suffix means that the only difference between the upstream package
from Debian and the package in Apertis is the metadata under `debian/apertis/`
and the changelog entry itself. This is to highlight the fact that this metadata
ends up in the generated source package, so this source package carries a
small delta against the corresponding Debian package.
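For example, `dpkg --compare-versions` can be used to confirm how such
revisions sort; the version strings below are made up purely for illustration:

```
$ dpkg --compare-versions 1.2-3co1 gt 1.2-3 && echo "the Apertis revision sorts after the Debian one"
the Apertis revision sorts after the Debian one
$ dpkg --compare-versions 1.2-3co2 gt 1.2-3co1 && echo "co2 supersedes co1"
co2 supersedes co1
```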
Additionally, there are a number of symbols that are used to separate these
portions of the revision. The symbol `~` is used to imply "less", and `+` for
@@ -212,6 +212,70 @@ Based on the above comparison the best option is `uutils-coreutils`, since it is
The risks enumerated here will be handled by the testing and migration in order to provide a reliable approach.
As it has been mentioned, the license used is MIT, and detailed information about its dependencies can be found in the
[FOSSA analysis](https://app.fossa.io/projects/git%2Bgithub.com%2Fuutils%2Fcoreutils?ref=badge_large). Unfortunately, this report is not reliable since it shows several incorrect dependencies.
The following list shows the dependencies as reported by `cargo`:
| Package | License |
|---------------------|----------------------|
|ansi_term | MIT |
|arrayvec | MIT/Apache-2.0 |
|autocfg | Apache-2.0/MIT |
|backtrace-sys | MIT/Apache-2.0 |
|bitflags | MIT/Apache-2.0 |
|bit-set | MIT/Apache-2.0 |
|bit-vec | MIT/Apache-2.0 |
|blake2-rfc | MIT OR Apache-2.0 |
|byteorder | Unlicense OR MIT |
|cfg-if | MIT/Apache-2.0 |
|chrono | MIT/Apache-2.0 |
|constant_time_eq | CC0-1.0 |
|data-encoding | MIT |
|dunce | CC0-1.0 |
|either | MIT/Apache-2.0 |
|failure | MIT OR Apache-2.0 |
|fake-simd | MIT/Apache-2.0 |
|fnv | Apache-2.0 / MIT |
|fs_extra | MIT |
|glob | MIT/Apache-2.0 |
|half | MIT/Apache-2.0 |
|hex | MIT OR Apache-2.0 |
|ioctl-sys | MIT OR Apache-2.0 |
|isatty | MIT/Apache-2.0 |
|maybe-uninit | Apache-2.0 OR MIT |
|md5 | Apache-2.0/MIT |
|num-integer | MIT OR Apache-2.0 |
|onig | MIT |
|onig_sys | MIT |
|pkg-config | MIT/Apache-2.0 |
|platform-info | MIT |
|ppv-lite86 | MIT/Apache-2.0 |
|rand_chacha | MIT OR Apache-2.0 |
|rand_pcg | MIT OR Apache-2.0 |
|rust-ini | MIT |
|semver | MIT/Apache-2.0 |
|semver-parser | MIT/Apache-2.0 |
|sha1 | BSD-3-Clause |
|sha2 | MIT/Apache-2.0 |
|sha3 | MIT/Apache-2.0 |
|smallvec | MIT/Apache-2.0 |
|strsim | MIT |
|syn | MIT OR Apache-2.0 |
|synom | MIT/Apache-2.0 |
|synstructure | MIT |
|tempfile | MIT OR Apache-2.0 |
|term_grid | MIT |
|termsize | MIT |
|term_size | MIT/Apache-2.0 |
|thread_local | Apache-2.0/MIT |
|typenum | MIT/Apache-2.0 |
|unix_socket | MIT/Apache-2.0 |
|vec_map | MIT/Apache-2.0 |
|wild | Apache-2.0 OR MIT |
|winapi-util | Unlicense/MIT |
|xattr | MIT/Apache-2.0 |
# Testing
In order to confirm which features/commands required by Apertis are missing from `uutils-coreutils`, testing needs to be performed. The proposed steps are:
@@ -220,7 +284,7 @@ In order to confirm the missing features/commands in the `uutils-coreutils` whic
- Test installing/removing packages
- Run current `coreutils-gplv2` test plan with `uutils-coreutils`
- Run `uutils-coreutils` as default on development environments
- Make `uutils-coreutils` and all the Rust crates it depends on available in Debian
- Provide long-term maintenance of the new packages in Debian as well
Note that some effort is being driven by the `uutils-coreutils` community to use the `coreutils` test suite to generate a report of the still missing features. This would be a nice-to-have, but it is more than is actually required at this stage.
@@ -242,5 +306,6 @@ The following guidelines will be followed to assure a smooth transition minimizi
- Determine the list of tools supported and successfully tested provided by `uutils-coreutils`.
- Create a new package based on `uutils-coreutils` named `coreutils-uutils` with all the tools that are supported and successfully tested.
- For missing tools a replacement will be provided on a case-by-case basis.
- Generate APT and OSTree based images for target and minimal configuration.
Due to the [Apertis release flow]( {{< ref "release-flow.md" >}} ), this process will start on development releases, allowing any potential issue to be addressed before a stable point release, with the possibility of switching back to `coreutils-gplv2` if a proper fix cannot be implemented in time.
+++
title = "GPL-3-free replacements of GnuPG"
weight = 100
outputs = [ "html", "pdf-in",]
date = "2021-01-28"
+++
# Introduction
In accordance with its [Open Source License Expectations]({{< ref "policies/license-expectations.md" >}}), Apertis currently ships a very old version of `GnuPG`, the last one released under the `GPL-2.0` terms before the upstream project switched to `GPL-3.0`.
This is problematic in the long term: the purpose of this document is to investigate alternative implementations with licensing conditions that are suitable for Apertis target devices.
The use cases for Apertis target images only depend on GnuPG for verification purposes, not for signing or encrypting. This is usually done through the `gpgv` tool or through the `libgpgme` library which invokes the `gpg` tool and interacts with it via the [`--with-colons` machine parsable mode](https://github.com/gpg/gnupg/blob/master/doc/DETAILS) or the [Assuan](https://www.gnupg.org/documentation/manuals/assuan/index.html) IPC protocol.
Newer, `GPL-3.0`-licensed versions of GnuPG can be provided in the `development` package repository for any additional needs that do not affect targets.
# Terminology and concepts
- **OpenPGP**: The OpenPGP protocol defines standard formats for encrypted messages, signatures, and certificates for exchanging public keys.
- **GnuPG**: GnuPG is a complete and free implementation of the OpenPGP standard.
# Use cases
- A developer wants to install an additional package on the Apertis APT-based image flashed on their device, and relies on OpenPGP signatures to assert trust in the remote package repositories.
- A user wants to install a Flatpak application from Flathub, which only provides OpenPGP signatures to assert trust on the provided application bundles.
# Non-use cases
- Sending emails encrypted with OpenPGP
- Creating OpenPGP signatures
# Requirements
The chosen approach to replace GnuPG on targets must:
* have a license that matches the Apertis [Open Source License Expectations]({{< ref "policies/license-expectations.md" >}}), including its dependencies
* provide OpenPGP signature verification support
* require minimal changes in tools currently depending on GnuPG
* require minimal non-upstreamable changes
* have an active upstream community
* have a good code quality track record
# Depending components
`GnuPG` and the related components are currently used in Apertis for the following packages (based on `apt-rdepends` results):
| component | dependent package | source | repository
| ----------------- | ----------------------- | ----------- | ----------
| **gnupg** | flatpak-tests | flatpak | target
| | libgpgme11 | gpgme1.0 | target
| | libvolume-key1 | volume-key | target
| | ostree-tests | ostree | target
| | python-apt | | development
| | devscripts | | development
| | gnupg2 | | development
| | jetring | | development
| **libgpgme11** | flatpak | flatpak | target
| | flatpak-tests | flatpak | target
| | libflatpak0 | flatpak | target
| | gmime-bin | gmime | target
| | libgmime-3.0-0 | gmime | target
| | libgpgmepp6 | gpgme1.0 | target
| | libvolume-key1 | volume-key | target
| | samba-dsdb-modules | samba | development
| **gpgv** | apertis-archive-keyring | | target
| | apt | | target
| | gnupg | | target
| | devscripts | | development
| | gpgv2 | | development
Current packages using `GnuPG` or `gpgv` are:
component | dependencies
------------------------| ------------
apertis-archive-keyring | gpgv
apt | gpgv
flatpak | gnupg, libgpgme11
gmime | libgpgme11
ostree | gnupg, libgpgme11(1)
volume-key | gnupg, libgpgme11
(1) Currently `OSTree` in Apertis does not depend on `GnuPG` as it exclusively uses `Ed25519` signatures. However, the reintroduction of OpenPGP signature verification support may be requested in the future to be able to install applications from third-party Flatpak repositories that only provide OpenPGP signatures.
## apertis-archive-keyring
This package contains the GnuPG public keys with which all Apertis archives are signed.
The runtime dependency on `gpgv` can be removed with no ill effect.
## APT
`gpgv` is used by `APT`:
- to assert trust on remote package repository indexes
- by `apt-key` which [is deprecated](https://manpages.debian.org/testing/apt/apt-key.8.en.html) and will be removed
- in build-time tests
Calls to `gpgv` are encapsulated in the `ExecGPGV` function located in `apt-pkg/contrib/gpgv.cc`.
At the time this document is written, there's a discussion on the Debian mailing list [regarding ideas to replace gpgv with sqv](https://lists.debian.org/deity/2021/01/msg00088.html). The emerging long-term idea is to have the `APT` code link to the Sequoia cryptographic library underlying `sqv`, rather than the current approach of invoking an external process.
## Flatpak
The Flatpak application and library both use `libgpgme11` and `libostree`.
`GnuPG` is used by `Flatpak`:
- during development to sign the package and summaries,
- and on target to verify the signatures.
Apertis is currently adding `Ed25519` support to `Flatpak`.
## gmime
`GnuPG` is used by `gmime` to encrypt, decrypt, sign and verify messages with `Multipurpose Internet Mail Extension`.
## OSTree
`GnuPG` is used by `OSTree`:
- during development to sign the commits,
- and on target to verify the commits.
The current version of `OSTree` in Apertis is also able to use `Ed25519` cryptography.
## volume-key
See [Debian manpage](https://manpages.debian.org/buster/volume-key/volume_key.8.en.html).
`GnuPG` is used by `volume-key` to encrypt or decrypt the file used to store extracted "secrets" used for volume encryption (for example keys or passphrases).
# Approach
The following alternative replacements have been considered:
library | License | language | comment
---------------------------- | ---------------------------------------- | -------- | ----
RNP | BSD-2-Clause + BSD-3-Clause + Apache-2.0 | C++
rpgp | Apache-2.0 or MIT | Rust
Sequoia | GPL-2+ | Rust | uses Nettle/GMP but with the GPL-2 licensing it should match the Apertis license expectations
golang.org/x/crypto/openpgp | BSD-3-Clause | Golang
gpgrv | Apache-2.0 or MIT | Rust | only provides gpgv
## RNP
https://github.com/rnpgp/rnp
Started in 2017.
RNP originated as an attempt to modernize the NetPGP codebase originally created by Alistair Crooks of NetBSD in 2016. RNP has been heavily rewritten, and carries minimal if any code from the original codebase.
Version | # commits | # contributors | CI | gpgv replacement | C API
:-----: | --------: | -------------: | :-: | :--------------: | -----
0.14 | 2700 | 31 | yes | yes | yes
Used by:
- Thunderbird
- [EnMail](https://github.com/riboseinc/enmail) ruby gem
## rpgp
https://github.com/rpgp/rpgp
Started in 2017.
> rPGP is the only full Rust implementation of OpenPGP, following RFC4880
> and RFC2440. It offers a minimal low-level API and does not prescribe
> trust schemes or key management policies. It fully supports all
> functionality required by the Autocrypt 1.1 e-mail encryption specification.
>
> …
>
> rPGP and its RSA dependency got a first independent security review
> mid 2019. No critical flaws were found. We have fixed and are fixing some high,
> medium and low risk ones. We will soon publish the full review report.
>
> Further independent security reviews are upcoming.
>
> …
>
> How is rPGP different from Sequoia?
>
> Some key differences:
>
> * rPGP has a more libre license than Sequoia that allows a broader usage
> * rPGP is a library with a well-defined, relatively small feature-set where Sequoia also tries to be a replacement for the GPG command line tool
> * All crypto used in rPGP is implemented in pure Rust, whereas sequoia uses Nettle, which is implemented in C.
Version | # commits | # contributors | CI | gpgv replacement | C API
:-----: | --------: | -------------: | :-: | :--------------: | -----
0.7.1 | 334 | 12 | no | no | no, but possible via a Rust shim
Used by:
- [Delta Chat, the e-mail based messenger app suite](https://delta.chat/)
## Sequoia
https://sequoia-pgp.org/
https://gitlab.com/sequoia-pgp/sequoia
Started in 2017.
Project status:
> The low-level API is quite feature-complete and can be used encrypt,
> decrypt, sign, and verify messages. It can create, inspect, and
> manipulate OpenPGP data on a very low-level.
>
> The high-level API is effectively non-existent, though there is some
> functionality related to key servers and key stores.
>
> The foreign function interface provides a C API for some of Sequoia's
> low- and high-level interfaces, but it is incomplete.
>
> There is a mostly feature-complete command-line verification tool for
> detached messages called 'sqv'.
`Sequoia` uses [Nettle](https://git.lysator.liu.se/nettle/nettle) which is dual licensed LGPL-3.0 and GPL-2.0. This is compliant with the Apertis [Open Source License Expectations]({{< ref "policies/license-expectations.md" >}}) since Sequoia itself is licensed under the GPL-2.0 terms.
Version | # commits | # contributors | CI | gpgv replacement | C API
------- | --------: | -------------: | :-: | :--------------: | -----
library: 1.0.0<BR>other: 0.23.0 | 3948 | 33 | yes | yes | yes
Used by:
- Pijul, KIPA, Radicle, see https://sequoia-pgp.org/projects/
`Sequoia` is already packaged for Debian bullseye.
## golang.org/x/crypto/openpgp
https://pkg.go.dev/golang.org/x/crypto/openpgp
https://github.com/golang/crypto/tree/master/openpgp
This package is part of the Go crypto package.
Version | # commits | # contributors | CI | gpgv replacement | C API
:-----: | --------: | -------------: | :-: | :--------------: | -----
v0.0.0-20201221181555-eec23a3978ad | | | no | no | no
Used by:
- Imported by a lot of Go projects, see https://pkg.go.dev/golang.org/x/crypto/openpgp?tab=importedby
## gpgrv
https://github.com/FauxFaux/gpgrv
Started in 2017.
`gpgrv` is a Rust library for verifying some types of GPG signatures.
It is currently able to verify RSA, SHA1, SHA256 and SHA512 signatures.
Version | # commits | # contributors | CI | gpgv replacement | C API
:-----: | --------: | -------------: | :-: | :--------------: | -----
[0.3.0](https://crates.io/crates/gpgrv/0.3.0) | 109 | 2 | no | yes | NA
Used by:
- APT
# Evaluation Report
The `golang.org/x/crypto/openpgp` package only provides a Go interface and would then require substantial effort to be integrated in other places.
`gpgrv` doesn't seem to be actively developed, with the last commit being in August 2020.
`RNP` and `Sequoia` provide C interfaces and CLI interfaces to encrypt, decrypt, sign or verify files. They have both received a lot of commits, and have many contributors.
`rpgp` does not provide any CLI interface and a C interface would require a Rust shim, but its licensing terms are much more flexible than the Sequoia ones. It is actively developed, but it has fewer commits and contributors than Sequoia.
Red Hat removed the OpenPGP support from Thunderbird in Red Hat Enterprise Linux (RHEL), which uses `RNP`, due to not wanting to distribute [Botan](https://botan.randombit.net/), which has inadequate side-channel protection, see Red Hat bugs [1837512](https://bugzilla.redhat.com/show_bug.cgi?id=1837512) and [1886958](https://bugzilla.redhat.com/show_bug.cgi?id=1886958).
## Debian upstream discussion
The Debian APT maintainers are discussing and planning the removal of the dependency on `gpgv` and potentially on OpenPGP as a whole.
For the replacement of `gpgv`, Debian will likely not use `RNP` due to its Apache License (see [here](https://lists.debian.org/deity/2021/02/msg00011.html)), and has expressed some interest in [linking directly to Sequoia](https://lists.debian.org/deity/2021/02/msg00004.html).
However, the Debian APT maintainers expressed concrete interest in [moving away from OpenPGP altogether](https://lists.debian.org/deity/2021/02/msg00023.html), by changing the [signature mechanism to use Ed25519 instead](https://wiki.debian.org/Teams/Apt/Spec/AptSign).
Adopting a solution which is aligned to the upstream goals would save maintenance effort in the long term.
# Recommendations
The problems to be addressed are:
1. the use of GnuPG via `gpgv` on the target reference images
1. the use of GnuPG via `libgpgme` on the target reference images
For `gpgv` there are two possible approaches:
1. use `sqv` from Sequoia to replace `gpgv` with basically no changes in the depending components
1. for GPL-2.0 applications, link to Sequoia directly as the APT maintainers suggested
For `libgpgme` the situation is more complex because the API surface is way bigger and there are no drop-in replacements.
In addition Sequoia, by being GPL-2.0 licensed, is not suitable to be directly linked from `GMime`, `OSTree` and `Flatpak` which are LGPL-2.1 and provide libraries that are meant to be linked by applications that may be released under licenses incompatible with the GPL-2.0 or even proprietary.
`rpgp` may be a better choice in this regard.
The approach could then be:
1. ship `sqv` on target images and symlink it as `gpgv` so that it gets transparently picked up by APT (see the sketch after this list)
1. patch `apertis-archive-keyring` to install the .asc directly, avoiding any build-dependency on GnuPG
1. disable OpenPGP support from `OSTree`, replacing it with the use of Ed25519 signatures
1. disable OpenPGP support from `Flatpak`, replacing it with the use of Ed25519 signatures
1. disable OpenPGP support from `GMime`
1. disable key escrow support from `libblockdev` so we can drop the `volume-key` package as a whole with its dependency on `libgpgme`
1. move the `gpgme` source package to the `development` package repository
1. move the `gnupg` source package to the `development` package repository
1. re-align the `gnupg` source package to Debian
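To make the first step concrete, the idea is roughly the following; note that
this is an illustrative sketch only, and whether `sqv` accepts the exact
arguments APT passes to `gpgv` is an assumption that would need to be verified
against APT's `ExecGPGV` code path:

```
# Ship Sequoia's verifier and expose it under the name APT invokes
ln -sf /usr/bin/sqv /usr/bin/gpgv
```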
With the steps above it would be possible to stop shipping an outdated GnuPG version with limited effort and limited regressions.
In particular, disabling OpenPGP support from Flatpak means that it would not be possible to verify the provenance of applications shipped by third-party stores which use OpenPGP like Flathub, and disabling it from GMime would mean that it could not verify or decrypt OpenPGP emails: both regressions have a very limited impact on the Apertis use-cases.
In the longer term, other activities can be undertaken to get rid of the downstream delta introduced above:
1. engage with the APT upstream maintainers to help them [move away from OpenPGP signatures](https://wiki.debian.org/Teams/Apt/Spec/AptSign)
1. engage with OSTree and Flatpak upstream maintainers to dynamically load `libgpgme` so that it can be picked up on the SDK, where installing GPL-3.0 components is not an issue and where it can be useful to install applications from third-party stores like Flathub
1. fully re-enable OpenPGP support in the components where it has been disabled by either:
1. porting them to use `rpgp` by engaging with the upstream maintainers about implementing minimal Rust shims
1. implementing a `libgpgme` backend that invokes Sequoia externally to avoid licensing issues, either by engaging with the `libgpgme` maintainers or the Sequoia maintainers by providing compatibility with the [`--with-colons` machine parsable mode](https://github.com/gpg/gnupg/blob/master/doc/DETAILS)
# Risks
Drop-in reimplementations may not be 100% compatible and thus may cause subtle issues.
The split between `rpgp` (more permissive license, more limited goals) and Sequoia (more active, GPL-2.0 only) is unfortunate since `rpgp` would be more suitable for us but is also more risky regarding long term maintenance, with Sequoia being more promising in this regard.
+++
date = "2021-01-06"
weight = 100
title = "Preparing hawkBit for Production Use"
outputs = ["html", "pdf-in"]
+++
# Introduction
The Apertis project has been experimenting with the use of
[Eclipse hawkBit](https://www.eclipse.org/hawkbit/) as a mechanism for the
deployment of [system updates]({{< ref "system-updates-and-rollback.md" >}})
and [applications]({{< ref "application-framework.md#the-app-store" >}}) to
target devices in the field. The current emphasis is being placed on system
updates, though hawkBit can also be used to address different software
distribution use cases, such as distributing system software, updates and even
apps from an app store.
Apertis has recently deployed a [hawkBit instance](https://hawkbit.apertis.org)
into which the
[image build pipelines](https://gitlab.apertis.org/infrastructure/apertis-image-recipes/-/pipelines)
are uploading builds. The
[apertis-hawkBit-agent](https://gitlab.apertis.org/pkg/apertis-hawkbit-agent)
has been added to OSTree based images and a guide produced detailing how this
can be used to
[deploy updates to an Apertis target]({{< ref "deployment-management.md" >}}).
The current instance is proving valuable for gaining insight into how hawkBit
can be used as part of the broader Apertis project. hawkBit is already in use
elsewhere, notably by
[Bosch as part of its IoT infrastructure](https://docs.bosch-iot-rollouts.com/documentation/index.html),
however more work is required to reach the point where the Apertis
infrastructure (or a deployment based on the Apertis infrastructure) would be
ready for production use. In this document we will describe the steps we feel
need to be taken to provide a reference deployment that could be more
readily suitable for production.
# Evaluation Report
## Server configuration
The current hawkBit deployment is hosted on Collabora's infrastructure. The
example
[Docker Compose configuration file](https://github.com/eclipse/hawkbit/blob/master/hawkbit-runtime/docker/docker-compose-stack.yml)
has been modified to improve stability and security, and to add a reverse
proxy providing SSL encryption. This has been wrapped with
[Chef](https://www.chef.io/) configuration to improve maintainability. Whilst
this configuration has limitations (that will be discussed later), it provides
a better starting point for the deployment of a production system. These
configuration files are currently stored in Collabora's private infrastructure
repository and thus not visible to 3rd parties.
## Considering the production workflow
The currently enabled process for the enrollment and configuration of a target
device into the hawkBit deployment infrastructure requires the following steps:
- Install Apertis OSTree based image on the target device.
- Define or determine the `controllerid` for the device. This ID needs to be unique on
the hawkBit instance as it is used to identify the target.
- Enroll the target on the hawkBit instance, either via the
[UI](https://www.eclipse.org/hawkbit/ui/#deployment-management) or
[API](https://www.eclipse.org/hawkbit/rest-api/targets-api-guide/#_post_rest_v1_targets).
- If adding via the UI, hawkBit creates a security token; if adding via the
API, the security token can be generated outside of hawkBit.
- Modify the configuration file for `apertis-hawkbit-agent` to contain the
correct URL for the hawkBit instance, the target's `controllerid` and the
generated security token (a sketch of such a file is shown after this list).
This configuration file is `/etc/apertis-hawkbit-agent.ini`. Without these
options being set, the target will be unable to find and access the deployment
server to discover updates.
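As a rough illustration, the resulting configuration could look like the sketch
below; the section and key names are assumptions made for the example, not the
actual schema of `apertis-hawkbit-agent`:

```
# /etc/apertis-hawkbit-agent.ini (illustrative only; key names are assumptions)
[server]
url = https://hawkbit.apertis.org
controller_id = target-0001
auth_token = 0123456789abcdef0123456789abcdef
```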
This workflow presents a number of points that could prove contentious in a
production environment:
- A need for access to the hawkBit deployment server (that may be hosted on
external cloud infrastructure) from the production environment to register
the `controllerid` and security token.
- The requirement to have a mechanism to add configuration to the device post
software load.
The security token based mechanism is one of a
[number of options](https://www.eclipse.org/hawkbit/concepts/authentication/)
available for authentication via the DDI API. The security token must be shared
between the target and the hawkBit server. This approach has a number of
downsides:
- The token needs to be added to the hawkBit server and tied to the target
device's `controllerid`. This may necessitate a link between the production
environment and an external network to access the hawkBit server.
- The need for the shared token to be registered with the server for
authentication would make it impossible to use the "plug n' play"
enrollment of the target devices supported by hawkBit.
hawkBit allows for a certificate based authentication mechanism (using a
reverse proxy before the hawkBit server to perform authentication) which would
remove the need to share a security token with the server. Utilizing signed
keys would allow authentication to be achieved independently from enrollment,
thus allowing enrollment to be carried out at a later date and would remove the
need to store per-device data in the hawkBit server from the production environment.
hawkBit allows for
"[plug'n play](https://gitter.im/eclipse/hawkbit/archives/2016/07/27)"
enrollment, that is, enrollment of the device when it is first seen by hawkBit.
When using certificate based authentication, the device could thus potentially
be enrolled once the end user has switched on the device and successfully
connected it to a network for the first time.
For many devices it would not be practical or desired to have remote access
into the production firmware to add device specific configuration, such as a
security token or device specific signed key. `apertis-hawkbit-agent` currently
expects such configuration to be saved in `/etc/apertis-hawkbit-agent.ini`. An
option that this presents is for the image programmed onto the target to
provide 2 OSTree commits, one with the software expected on the device when
shipped and the other for factory use, with boot defaulting to the latter.
OSTree will attempt to merge any local changes made to the configuration when
updating the image. The factory image could be used to perform any testing
and factory configuration tasks required before switching the device to the
shipping software load. Customizations to the configuration made in the
factory should then be merged as part of the switch to the shipping load,
and the factory commit can be removed from the device.
Such an approach could provide some remote access to the target as part of the
factory commit, but not the shipping commit, thus avoiding remote access being
present in the field.
As previously mentioned, a unique `controllerid` is needed by hawkBit to identify
the device and needs to be stored in the configuration file. An alternative
approach may be to generate this ID from other unique data provided by the
device, such as a MAC address or unique ID provided by the SoC used in the
device.
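For instance, such an ID could be derived along the following lines; this is a
sketch only, and the interface name and hashing scheme are illustrative rather
than what the agent actually implements:

```
# Derive a stable controller ID from the first Ethernet interface's MAC address
mac=$(cat /sys/class/net/eth0/address)
controllerid="apertis-$(printf '%s' "$mac" | sha256sum | cut -c1-16)"
echo "$controllerid"
```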
## Management UI access
We currently have a number of static users defined with passwords available to
trusted maintainers. Such a scheme is not going to scale in a production
environment, nor provide an adequate level of security for a production
deployment. hawkBit provides the ability to configure authentication using a
provider implementing the OpenID Connect standard, which would allow for much
greater flexibility in authenticating users.
## Enabling device filtering
hawkBit provides functionality to perform update rollouts in a controlled way,
allowing a subset of the deployed base to get an update and only moving on to
more devices when a target percentage of devices have received the update and
with a configurable error rate. When rolling out updates, in an environment
where more than one hardware platform or revision of hardware is present, it
will be necessary to be able to ensure the correct updates are targeted towards
the correct devices. For example, two revisions of a gadget could use different
SoCs with different architectures, each requiring a different build of the
update, and different versions of a device may need to be updated with different
streams of updates. In order to cater for such scenarios, it is important for
hawkBit to be able to accurately distinguish between differing hardware.
Support to achieve this is provided via hawkBit's ability to store attributes.
These attributes can be set by the target device via the DDI interface once
enrolled and used by hawkBit to filter target devices into groups. At the
moment the `apertis-hawkbit-agent` is not setting any attributes.
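To give an idea of the data involved, attributes are reported by the target
through the DDI `configData` resource as a JSON document; the attribute names
below merely illustrate the kind of information the agent could report:

```
PUT /default/controller/v1/target-0001/configData
{
  "mode": "merge",
  "data": {
    "architecture": "arm64",
    "device_type": "reference-board",
    "hardware_revision": "rev2"
  }
}
```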
## Provisioning for multiple product teams or partners
In order to use hawkBit for multiple products or partners it would be either
beneficial or necessary for each to have some isolation from each other. This
could be achieved via hawkBit's multi-tenant functionality or via the
deployment of multiple instances of hawkBit. It is likely that both of these
options would be deployed depending on the demands and requirements of the
product team or partner. It is expected that some partners may like to use
a deployment server provided by Apertis or one of its partners. In this
instance multi-tenancy would make sense. Others may wish to have their own
instance, possibly hosted by themselves, in which case providing a simple way
to deploy a hawkBit instance would be beneficial.
Deploying multiple instances of hawkBit using the Docker configuration would
be trivial. The multi-tenant configuration requires the authentication
mechanism for accessing the management API, web interface and potentially DDI
API to be multi-tenant aware.
## Life management of artifacts
The GitLab CI pipeline generally performs at least 2 builds a day, pushing
multiple artifacts for each architecture and version of Apertis. In order to
minimize the space used to store artifacts and so as not to store many defunct
artifacts, they are currently deleted after 7 days.
Whilst this approach enables the Apertis project to frequently exercise the
artifact upload path and has been adequate for Apertis during it's initial
phase, a more comprehensive strategy will be required for production use. For
shipped hardware, it is unlikely that any units will be updated as frequently.
In addition, depending on the form and function of the device, it may only poll
the infrastructure to check for updates sporadically, either due to the device
not needing to be on or not having access to a network connection capable of reaching
the deployment server. Artifacts will need to be more selectively kept to
ensure that the most up-to-date version is kept available for each device type
and hardware revision. Older artifacts that are no longer the recommended
version should be safe to delete from hawkBit as no targets should be
attempting to update to them.
## Platform scalability
hawkBit provides support for clustering to scale beyond the bandwidth that a
single deployment server could handle. The Apertis hawkBit instance is not
expected to need to handle a high level of use, though this may be important to
product teams who might quite quickly have many devices connected to hawkBit in
the field.
# Recommendation
## Server Configuration
- The improvements made to the Docker Compose configuration file should be
published in a publicly visible Apertis repository and/or submitted back to
the hawkBit project to be included in the reference Docker configuration.
## Considering the production workflow
- The hawkBit deployment should be updated to use a signed key based
security strategy.
- `apertis-hawkbit-agent` should be improved to enable authentication via
signed keys.
- `apertis-hawkbit-agent` should be improved to auto-enroll when the target
device is not already found.
- `apertis-hawkbit-agent` is currently storing its configuration in `/etc`;
this should be extended to look under `/var` and the default configuration
should be moved there.
- A mechanism should be added to `apertis-hawkbit-agent` to enable the
`controllerid` to be generated from supported hardware sources.
## Management UI access
- The Apertis hawkBit instance should be configured to use the OpenID
authentication mechanism, ideally using the same SSO used to authenticate
users for other Apertis resources.
## Enabling device filtering
- Update `apertis-hawkbit-agent` to set attributes based on information known
about the target device. This should include (where possible):
- Device Architecture
- Device Type
- Device Revision
## Provisioning for multiple product teams or partners
- Apertis does not have a direct need for a multi-tenant deployment nor for
multiple deployments. Investigate and document what's involved in setting up
a multi-tenanted installation.
## Life management of artifacts
- Apertis is developing a base platform to be used by production teams and thus
the images it produces for its reference hardware need a subtly
[different scheme]({{< ref "long-term-reproducibility.md" >}}) from that
which would be anticipated to be needed by a production team. It is
therefore recommended that the process removing old artifacts should adhere
to the following rules:
- Retain all point releases for current Apertis releases
- Retain 7 days of daily development builds
- Delete all artifacts for versions of Apertis no longer supported
## Platform scalability
- At this current point in time we do not feel that investigating platform
scalability has immediate value.
+++
title = "Status Page Review"
weight = 100
outputs = [ "html", "pdf-in",]
date = "2021-02-15"
+++
# Introduction
As interest in and use of Apertis grows, it is becoming increasingly important to
show the health of the Apertis infrastructure. This enables users to
proactively discover the health of the resources provided by Apertis and
determine if any issues they may be having are due to Apertis or their
infrastructure.
# Terminology and concepts
- **Hosted**: Service provided by an external provider that can typically be
accessed over the internet.
- **Self-hosted**: Service installed and run from computing resources directly
owned by the user.
# Use cases
- A developer is releasing a new version of a package they maintain, but
the upload to OBS is failing and they need to find out if it is a
misconfiguration on their part or if the OBS service is actually down.
# Non-use cases
- Providing the Apertis system administrators with a granular overview of the
infrastructure state.
# Requirements
- An automated system monitoring status of user accessible resources provided
by the Apertis platform.
- The system displays a simple indication of the availability of the
resources.
- The chosen system appears to be actively maintained:
- Hosted services have activity on their website in the last six months
- Self-hosted projects show signs of activity in the last six months
- (Optional) The system is hosted on a distinct infrastructure to reduce shared
infrastructure that could lead to inaccurate results.
# Existing systems
Numerous externally hosted services and open source projects are available
which provide the functionality required to show a status page.
## Self-hosted
The self-hosted options fall into 2 categories:
- **Static**: The status page is generated as static HTML pages, stored on a web
server which then serves the latest status page when requested.
- **Dynamic**: The page is generated via a web scripting language on the server
and served to the user per request.
These include the following options:
### Static
- [Statusfy](https://marquez.co/statusfy)
- [ClearStatus](https://github.com/weeblrpress/clearstatus/)
- [CState](https://github.com/cstate/cstate)
- [status.sh](https://github.com/Cyclenerd/static_status)
- [upptime](https://upptime.js.org/)
### Dynamic
- [Cachet](http://cachethq.io/)
- [Gatus](https://github.com/TwinProduction/gatus)
## Hosted
Many of the hosted services understandably charge a fee to provide a status
page. A small number have free options which provide a basic service. As we are
looking for a simple option and as a self-hosted option is expected to cost us
very little once set up, we will only be considering the free services. The
following options have been found:
- [Better Uptime](https://betteruptime.com/status-page)
- [Freshstatus](https://www.freshworks.com/status-page/)
- [HetrixTools](https://hetrixtools.com/pricing/uptime-monitor/)
- [Instatus](https://instatus.com/)
- [Nixstats](https://nixstats.com/)
- [Pagefate](https://pagefate.com/)
- [Squadcast](https://www.squadcast.com/)
- [StatusKit](https://statuskit.com/)
- [StatusCake](https://www.statuscake.com/features/uptime/)
- [UptimeRobot](https://uptimerobot.com/status-page/)
# Approach
As there are an abundance of tools and services available which provide status
page functionality, choosing from these existing solutions will be preferred
over a home grown solution, assuming that one can be found to fit our
requirements, with a home grown solution only considered if none of the
existing solutions are appropriate. Our approach is to:
- Determine services that need to be monitored; this will be critical to
discount some of the free services that limit the number of services that can
be monitored.
- Each option will be evaluated against the following criteria:
- Tool provides automated updates to the status of monitored services
- Tool can be used to monitor all services that we wish to monitor
(preferably with some capacity to monitor more in the future if desired).
- Simple interface, providing clear picture of status.
- The tool is actively maintained, either appearing to have active contributions
or, in the case of services, activity on its website.
# Evaluation Report
## Monitored services
The following services could be monitored to gauge the status of the Apertis
project:
- **GitLab**: This is the main service used by Apertis developers which hosts
the source code used and developed as part of the project.
- **Website**: This is the main site at www.apertis.org. This is hosted by
GitLab Pages, which is distinct from the main GitLab service.
- **APT repositories**: This service hosts the `.deb` packages that are built
by the Apertis project. This is required in order to build images or
update/extend existing apt based installations.
- **Artifacts hosting**: This is where the images built by Apertis are stored
along with the OSTree repositories. This service is therefore important for
anyone wanting to install a fresh copy of Apertis or update one based on
OSTree.
- **OBS**: Apertis utilizes Collabora's instance of the Open Build Service.
This performs compilation of the source into `.deb` packages. Whilst this
will not be directly interacted with by most users, it is required to be
available for updates to be generated when releases are made to packages in
GitLab and there may be some cases where advanced users may need access to
OBS.
- **LAVA**: Apertis utilizes Collabora's instance of LAVA. This is primarily
used to test images built by Apertis and is thus a critical part of the
automated QA infrastructure.
- **lavaphabbridge**: This records the outcome of LAVA runs and displays the
test cases used for QA.
- **hawkBit**: This is a deployment management system that is being integrated
into Apertis. It provides both a web UI and a REST API. Both of these should be
monitored.
- **docs**: This holds the generated documentation for some packages. It is not
as important as some of the other pages, but wouldn't necessarily get noticed
quickly if it wasn't working.
Whilst this list could arguably be reduced a little to just target core
services, it would be prudent to choose a service that would allow Apertis room
to grow and add services that need monitoring.
## Tool comparison
The following table was created whilst evaluating the options listed under
existing systems. To save time, where it was apparent that the option was not
going to meet the initial criteria, no further attempt was made to evaluate
later criteria, hence the lack of answers on less suitable options.
| Tool | Hosting | Automated | 8+ Services? | Simplicity | Activity |
| ---- | ------- | --------- | ------------ | ---------- | -------- |
| [UptimeRobot](https://uptimerobot.com/status-page/) | Service | Yes | Yes - 50 | Simple | Active |
| [status.sh](https://github.com/Cyclenerd/static_status) | Self | Yes | Yes - Unlimited | Simple | Active |
| [Gatus](https://github.com/TwinProduction/gatus) | Self | Yes | Yes - Unlimited | Simple | Active |
| [Better Uptime](https://betteruptime.com/status-page) | Service | Yes | Yes - 10 | Moderate | Active |
| [upptime](https://upptime.js.org/) | Self | Yes | Yes - Unlimited | Moderate | Active |
| [HetrixTools](https://hetrixtools.com/uptime-monitor/) | Service | Yes | Yes - 15 | Complex | ? |
| [StatusCake](https://www.statuscake.com/features/uptime/) | Service | Yes | Yes - 10 | ? | Active |
| [Pagefate](https://pagefate.com/) | Service | ? | ? | - | - |
| [Nixstats](https://nixstats.com/) | Service | ? | No - 5 | - | - |
| [Statusfy](https://marquez.co/statusfy) | Self | No | Yes - Unlimited | - | - |
| [ClearStatus](https://github.com/weeblrpress/clearstatus/) | Self | No | Yes - Unlimited | - | - |
| [CState](https://github.com/cstate/cstate) | Self | No | Yes - Unlimited | - | - |
| [Cachet](http://cachethq.io/) | Self | No | yes - Unlimited | - | - |
| [Freshstatus](https://www.freshworks.com/status-page/) | Service | No - Requires freshping | - | - | - |
| [Instatus](https://instatus.com/) | Service | No - Requires extra service | - | - | - |
| [Squadcast](https://www.squadcast.com/) | Service | No | ? | - | - |
| [StatusKit](https://statuskit.com/) | Service | No | ? | - | - |
# Recommendation
Based on the above evaluation, the top 4 options would appear to be:
- Better Uptime
- Gatus
- status.sh
- UptimeRobot
The choice can be further slimmed by making a decision between a service and a
self-hosted solution.
A self-hosted solution has the advantage that it will remain available
long-term, not being reliant on an outside provider, however it will also
require maintenance and upkeep. An externally provided service has the advantage
that it is hosted on distinct infrastructure from that hosting the other
Apertis services and thus less likely to be made unavailable by a fault
affecting the whole platform. An external service is also likely to provide a
more independent and reliable evaluation of the platform status.
Based on this our recommendation would be to utilise UptimeRobot to provide a
status page for Apertis.
# Risks
- UptimeRobot stops providing free service: In the event that the free service
ceases to be offered or changes such that it is no longer suitable to
Apertis, it would appear to be fairly trivial to migrate to an alternative
service or decide to self-host.
@@ -1337,7 +1337,7 @@ Resource usage here refers to the limitation and prioritization of
hardware resources usage. Common resources to limit usage of are CPU,
memory, network, disk I/O and IPC.
The proposed solution is Control Groups ([cgroup-v1], [cgroup-v2]), which is a
Linux kernel feature to limit, account, isolate and prioritize resource
usage of process groups. It protects the platform from resource
exhaustion and DoS attacks. The groups of processes can be dynamically
@@ -1833,11 +1833,8 @@ environment.
## The IMA Linux Integrity Subsystem
The goal of the Integrity Measurement Architecture ([IMA])
subsystem is to make sure that a given set
of files have not been altered and are authentic – in other words,
provided by a trusted source. The mechanism used to provide these two
features is essentially keeping a database of file hashes and RSA
@@ -2113,25 +2110,27 @@ from iterating on an implementation.
[smack-embedded-tv]: http://www.embeddedalley.com/pdfs/Smack_for_DigitalTV.pdf
[cgroup-v1]: https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt
[cgroup-v2]: https://www.kernel.org/doc/Documentation/cgroup-v2.txt
[blkio-doc]: https://www.kernel.org/doc/Documentation/cgroup-v1/blkio-controller.txt
[udev]: http://en.wikipedia.org/wiki/Udev
[clone]: https://man7.org/linux/man-pages/man2/clone.2.html
[man-in-the-middle]: https://en.wikipedia.org/wiki/Man-in-the-middle_attack
[cross-site scripting]: https://en.wikipedia.org/wiki/Cross-site_scripting
[omnibox]: https://chrome.googleblog.com/2010/10/understanding-omnibox-for-better.html
[Secure APT]: https://wiki.debian.org/SecureApt
[Release file]: https://wiki.debian.org/SecureApt#Secure_apt_groundwork:_checksums
[Secrets D-Bus service]: https://specifications.freedesktop.org/secret-service/latest/re01.html
[GNOME-secret-service]: https://wiki.gnome.org/Projects/GnomeKeyring
@@ -2139,24 +2138,22 @@
[SSP]: https://wiki.ubuntu.com/GccSsp
[LXC]: https://linuxcontainers.org/
[dbus-tcp]: https://www.freedesktop.org/wiki/Software/DBusRemote/
[Virtual GL]: https://virtualgl.org/
[Flatpak]: https://flatpak.org/
[IMA]: https://sourceforge.net/p/linux-ima/wiki/Home/
[IMA LPC]: https://blog.linuxplumbersconf.org/2009/slides/David-Stafford-IMA_LPC.pdf
[EVM]: https://sourceforge.net/p/linux-ima/wiki/Home/#linux-extended-verification-module-evm
[kernel-EVM]: https://kernelnewbies.org/Linux_3.2#head-03576b924303bb0fad19cabb35efcbd33eeed084
[Seccomp]: https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt
[libseccomp]: https://lwn.net/Articles/494252/
@@ -148,6 +148,23 @@ be customizable. For instance, some products may chose to only roll back the
base OS and keep applications untouched, some other products may choose to roll
applications back as well.
Rollbacks can be misused to perform
[downgrade attacks](https://en.wikipedia.org/wiki/Downgrade_attack) where the
attacker purposefully initiates a rollback to an older version to leverage
vulnerabilities fixed in the currently deployed version.
For this reason care needs to be taken about the conditions under which a rollback
is to be initiated. For instance, if the system is not explicitly in the
process of performing an upgrade, rollback should never be initiated even in
case of boot failure as those are likely due to external reasons and rolling
back to a previous version would not produce any benefit. Relatedly, once
a specific version has been booted successfully, the system should never
roll back to earlier versions. This also simplifies how applications have to
deal with base OS updates: since the version of the successfully booted
deployment can only monotonically increase, user applications that get launched
after the successful system boot has been confirmed will never have to deal
with downgrades.
### Reset to clean state
The user must be able to restore his device to a clean state, destroying
@@ -158,6 +175,14 @@ all user data and all device-specific system configuration.
An interface must be provided by the updates and rollback mechanism to allow
HMI to query the current update status, and trigger updates and rollback.
### Handling settings and data
System upgrades should keep both settings and data safe and intact as this
process should be as transparent as possible to the end user. As described in
[preferences and persistence]( {{< ref "preferences-and-persistence.md" >}} )
settings have a default value, which can change on upgrade; this results in
the required solution being more complex than it might initially seem.
## Existing system update mechanisms
### Debian tools
@@ -173,6 +198,28 @@ management is not required for final users of Apertis. For example:
way. This can be an error-prone manual process and might not be accomplished
cleanly.
In relation to system settings as defined in
[preferences and persistence]( {{< ref "preferences-and-persistence.md" >}} ),
Debian tools use a very simple approach. On package upgrades, `dpkg`
will perform a check taking into account:
- the current version's default configuration file
- the new version's default configuration file
- the current configuration file on disk
Different scenarios arise depending on whether the user has applied changes to
the configuration file. If the current version's default configuration file is
the same as the file on disk, then the user hasn't changed it, which implies
that it can be safely upgraded (if required).
However, if the current default configuration file is different from the one on
disk, the user has applied some changes, so it can't be upgraded silently. In this
case `dpkg` asks the user to choose the version to use. This approach is not
suitable for automated upgrades where there is no user interaction.
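
For completeness, plain Debian tooling can be told up front how to resolve such
conflicts so that unattended upgrades do not stop at the prompt; a minimal
sketch (not something the Apertis update mechanism relies on):

```
# Keep locally modified configuration files and take the package default only
# where no local change exists:
apt-get --assume-yes \
        -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold" \
        dist-upgrade
```
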
To overcome some of these limitations, modern systems tend to use overlays,
combining a read-only partition holding the default values with an upper layer
holding the custom values.
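
A minimal sketch of that overlay idea using the Linux `overlayfs` filesystem;
the paths are purely illustrative:

```
# /usr/etc holds the read-only defaults; local modifications land in the upper
# layer; the merged view is what gets mounted on /etc.
mkdir -p /var/etc-overlay/upper /var/etc-overlay/work
mount -t overlay overlay \
      -o lowerdir=/usr/etc,upperdir=/var/etc-overlay/upper,workdir=/var/etc-overlay/work \
      /etc
```
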
### ChromeOS
ChromeOS uses an A/B parallel partition approach. Instead of upgrading the system
......@@ -740,6 +787,36 @@ image for decompression.
The content of the update file is extracted into the temporary directory
and the signature is checked for the extracted commit tree.
### Settings
As described in
[preferences and persistence]( {{< ref "preferences-and-persistence.md" >}} )
there are different types of settings which should be preserved across updates.
Each setting should either be kept intact or updated to reflect the new logic
of the application.
When using `OSTree`, most of the file system is read-only. Since system
settings need write support, the `/etc` and `/var` partitions
are configured to be read-write. This also applies to the `/home`
partition, with it being configured as read-write so user data and
settings can be preserved.
During an `OSTree` upgrade, a new commit is applied on the `OSTree` repo;
this provides the new content that will be used for the read-only portions of
the rootfs, but does not modify the read-write parts.
To handle the upgrade of system settings stored in `/etc`, a copy of its
default values is kept in `/usr/etc`, which is updated with the new commit.
Thanks to this information, `OSTree` can detect the files that have been changed
and apply a 3-way merge to update `/etc`.
This process updates settings to the new defaults for files that were not
modified, while keeping intact those that were.
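
On a running `OSTree` system, the set of locally modified files that would take
part in this merge can be inspected with `ostree admin config-diff`, for
example:

```
# List the files under /etc that differ from the defaults shipped in /usr/etc
# (the files OSTree will 3-way merge on the next upgrade):
ostree admin config-diff
```
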
Applications are encouraged to handle the adaptation of their settings to new
versions following the guidelines described in
[user and user data management]( {{< ref "#user-and-user-data-management" >}} )
and [preferences and persistence]( {{< ref "preferences-and-persistence.md" >}} ).
### Error handling
If for any reason the update process fails to complete, the update will
......
......@@ -7,8 +7,9 @@ title = "Apertis Development Guide"
# Apertis Packaging CI
Apertis stores the source of all the shipped packages in GitLab and uses a
[GitLab CI pipeline](https://gitlab.apertis.org/infrastructure/ci-package-builder/-/blob/master/ci-package-builder.yml)
to manage the workflows to:
* land updated sources to OBS which will then build the binary outputs
* pull updates from upstream distributions like Debian 10 Buster
......@@ -106,19 +107,21 @@ entries are kept up-to-date when the commit messages get changed via rebase.
## Pulling updates or security fixes from upstream distributions
Updates coming from upstream can be pulled in by triggering a CI pipeline on
branches like `debian/buster` or `debian/bullseye`.
The pipeline will check the Debian archive for updates, pull them into the
`debian/$RELEASE`-like branch (for instance, `debian/bullseye` or
`debian/bullseye-security`), try to merge the new contents with the matching
`apertis/*` branches and, if successful, push a
proposed updates branch while creating a Merge Request for each `apertis/*`
branch it should be landed on.
The upstream update pipeline is usually triggered from
[the infrastructure dashboard](https://infrastructure.pages.apertis.org/dashboard/)
but can be manually triggered from the GitLab web UI by selecting the
`Run Pipeline` button in the `Pipelines` page of each repository under `pkg/*`
and selecting the `debian/bullseye` branch as the reference.
![Run Pipeline button](/images/run-pipeline-button.png)
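
Where scripting the trigger is more convenient than the web UI, the same
pipeline can also be started through the standard GitLab API; a sketch using a
personal access token, with the project ID and token as placeholders:

```
curl --request POST \
     --header "PRIVATE-TOKEN: <your-access-token>" \
     "https://gitlab.apertis.org/api/v4/projects/<project-id>/pipeline?ref=debian/bullseye"
```
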
......@@ -189,7 +192,7 @@ would lead to errors difficult to diagnose.
When targeting a specific release, `~${RELEASE}.${COUNTER}` needs to be
appended to the version identifier after the local build suffix:
* `0.42` → append `co0~v2020pre.0``0.42co0~v2020pre.0`
* `0.42co3` → bump to `co4` and append `~v2020pre.0``0.42co4~v2020pre.0`
* `0.42co4~v2020pre.0` → increase the release-specific counter → `0.42co4~v2020pre.1`
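
For instance, the last bullet could be applied with a `dch` invocation along
these lines (the changelog message is illustrative):

```
# Bump only the release-specific counter when rebuilding for v2020pre:
dch --newversion "0.42co4~v2020pre.1" --distribution apertis \
    "Rebuild for the v2020pre release"
```
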
......@@ -211,28 +214,24 @@ This is the process to import a new package from Debian to Apertis:
* invoke `import-debian-package` from the [packaging-tools
repository](https://gitlab.apertis.org/infrastructure/packaging-tools/)
to populate the local git repository:
* fetch a specific version: `import-debian-package --upstream buster --downstream apertis/v2020dev0 --component target --create-ci-branches --package hello --version 2.10-2`
* fetch the latest version: `import-debian-package --upstream buster --downstream apertis/v2020dev0 --component target --create-ci-branches --package hello`
* the argument to `--component` reflects the repository component it is part of (for instance, `target`); it will be stored in `debian/apertis/component`
* multiple downstream branches can be specified, in which case all of them
will be updated to point to the newly imported package version
* the Apertis version of the package will have a local suffix (`co0`) appended
* don't use `import-debian-package` on existing repositories: it does not
attempt to merge `apertis/*` branches and instead resets them to new
branches based on the freshly imported Debian package
* create an empty project on GitLab under the `pkg/*` namespaces (for instance, `pkg/hello`)
* configure the origin remote on your local git: `git remote add origin git@gitlab.apertis.org:pkg/hello`
* push your local git contents to the newly created GitLab project: `git push --all --follow-tags origin`
* set it up with `gitlab-rulez apply rulez.yaml --filter pkg/hello` from
the [gitlab-rulez repository](https://gitlab.apertis.org/infrastructure/gitlab-rulez)
* sets the CI config path to `ci-package-builder.yml@infrastructure/ci-package-builder`
* changes the merge request settings:
* only allow fast-forward merges
* ensure merges are only allowed if pipelines succeed
* adds a schedule on the `debian/buster-gitlab-update-job` branch to run weekly
* marks the `apertis/*` and `debian/*` branches as protected
* follow the process described in the [section about landing downstream changes
to the main archive](#landing-downstream-changes-to-the-main-archive) above to
......@@ -365,8 +364,8 @@ is not released into any of the Debian releases. In such a case, we can try:
* Generate a source package out of the packaging repository using `gbp buildpackage -S`
* If successful, this will give us a proper *libgpiod source package*.
* Clone the Apertis libgpiod git packaging repository
* Use the `pristine-lfs` tool to import the source package generated from the Debian repository into the Apertis packaging repository, e.g. `pristine-lfs import-dsc libgpiod-1.4.2-1.dsc`
* Note: The `import-dsc` subcommand imports the new tarball into the git repository and commits it to the `pristine-lfs` branch. While a user can commit to the branch by hand, we recommend using `import-dsc` to import new tarballs and commit them to the packaging repository
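
Put together, the steps above might look like the following sketch; the
repository locations, package name and version are illustrative:

```
# Build a source package from the Debian packaging repository...
git clone https://salsa.debian.org/debian/libgpiod.git
cd libgpiod
gbp buildpackage -S -us -uc        # the .dsc ends up in the parent directory
cd ..
# ...and import it into the Apertis packaging repository:
git clone git@gitlab.apertis.org:pkg/libgpiod.git apertis-libgpiod
cd apertis-libgpiod
pristine-lfs import-dsc ../libgpiod_1.4.2-1.dsc
```
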
## License scans
......@@ -418,19 +417,31 @@ under `debian/apertis/copyright`, updating the merge request when necessary.
[Dpkg::Copyright::Scanner]: https://manpages.debian.org/testing/libconfig-model-dpkg-perl/Dpkg::Copyright::Scanner.3pm.en.html
[gitignore]: https://manpages.debian.org/testing/git-man/gitignore.5.en.html
## Custom pipelines
When using the packaging pipeline, developers cannot put their CI/CD automation
in `.gitlab-ci.yml` anymore, as the CI config path points to the
ci-package-builder definition.
However, developers can put their jobs in the
`debian/apertis/local-gitlab-ci.yml` file and have them executed in a child
pipeline whenever the main packaging pipeline is executed. This is especially
handy for running tests before the actual packaging process begins.
# Internals
Main components:
* [`ci-package-builder`](https://gitlab.apertis.org/infrastructure/ci-package-builder):
centralized location of the GitLab-to-OBS and Debian-to-GitLab pipeline definitions
* [`debian/apertis/gitlab-ci.yaml`](https://gitlab.apertis.org/pkg/target/base-files/blob/apertis/v2019/debian/apertis/gitlab-ci.yml):
imports the `ci-package-builder` pipelines from each packaging repository
* [`apertis-package-source-builder`](https://gitlab.apertis.org/infrastructure/apertis-docker-images/tree/apertis/v2019/apertis-package-source-builder):
Docker environment for the GitLab pipelines
* [`pristine-lfs`](https://salsa.debian.org/andrewsh/pristine-lfs): stores
upstream original tarballs and packaging source tarballs using Git-LFS, as a
more robust replacement for `pristine-tar`
![DEP-14 in Apertis](/images/apertis-dep-14-gitlab-curves.svg)
Branches:
* `pristine-lfs`: stores references to the Git-LFS-hosted original tarballs
* `debian/$DEBIAN_RELEASE` (for instance, `debian/buster`): contains the extracted
......@@ -438,18 +449,15 @@ Branches:
* `pristine-lfs-source`: stores references to the Git-LFS-hosted packaging
tarballs, mainly to ensure that each (package, version) tuple is built only
once and no conflicts can arise
* `apertis/$APERTIS_RELEASE` (for instance, `apertis/v2020dev0`): contains the
extracted upstream sources and possibly patched packaging information for
Apertis, including the `debian/apertis/gitlab-ci.yaml` to set up the
GitLab-to-OBS pipeline
* `apertis/$APERTIS_RELEASE-security` and `apertis/$APERTIS_RELEASE-updates`
(for instance, `apertis/v2019-updates`): similar to `apertis/$APERTIS_RELEASE`
but respectively target the Security and Updates repositories for published
stable releases as described in [Process after a product
release]( {{< ref "release-flow.md#process-after-a-product-release" >}} )
* `debian/$DEBIAN_RELEASE-gitlab-update-job` (for instance,
`debian/buster-gitlab-update-job`): hosts the `debian/apertis/gitlab-ci.yaml`
file to configure the Debian-to-GitLab pipeline
Tags:
* `debian/*`: tags for Debian releases in the `debian/*` branches
......
......@@ -101,7 +101,7 @@ is as big as it needs to be.
Once the profile is working as required, add it to the relevant package
(typically in the `debian/apparmor.d` directory) and
[submit it for review]({{< ref "upstreaming.md" >}}).
# External links
......
......@@ -42,17 +42,6 @@ Clone the forked repository
* Every commit must have an appropriate
[`Signed-off-by:` tag]( {{< ref "contributions.md#sign-offs" >}} ) in
the commit message.
* Add a `Fixes: APERTIS-<task_id>` tag for each task in the proposed commit
messages (as explained in the section "Automatically closing tasks" below or
in the envelope message of the merge request itself) in order to link the
merge request to one or more tasks in Phabricator.
* Note: The tag will ensure that Phabricator tasks are kept up-to-date with
regard to the status of related merge requests, through the creation of a new
comment with the link to the merge request every time a merge request is
created/updated/merged. This syntax has been chosen for the tag because it is
already
[supported by gitlab](https://docs.gitlab.com/ce/user/project/integrations/custom_issue_tracker.html).
## Merge Request
......@@ -70,12 +59,12 @@ Clone the forked repository
[good practices for code review](https://mtlynch.io/code-review-love/) help
the process to go as smoothly as possible:
1. Review your own code first
2. Write a clear change list description
3. Automate the easy stuff
4. Answer questions with the code itself
5. Narrowly scope changes
6. Separate functional and non-functional changes
7. Break up large change lists
8. Respond graciously to critiques
9. Be patient when your reviewer is wrong
10. Communicate your responses explicitly
......
......@@ -5,6 +5,7 @@ weight = 100
title = "GitLab-based Packaging Workflow"
aliases = [
"/old-wiki/Guidelines/Git-based_packaging_workflow",
"/old-wiki/Guidelines/Gitlab-based_packaging_workflow"
]
+++
......@@ -19,6 +20,8 @@ The packaging git repositories follow the
[DEP-14](http://dep.debian.net/deps/dep14/) Git layout specification with the
following conventions:
![DEP-14 in Apertis](/images/apertis-dep-14-gitlab-curves.svg)
- `upstream/${UPSTREAM_DISTRIBUTION}` branches with the unpacked upstream
project code from the Debian package (e.g. `upstream/buster`)
- `debian/${UPSTREAM_DISTRIBUTION}` branches with the Debian changes on top of
......@@ -31,10 +34,6 @@ following conventions:
tarballs stored on Git-LFS via
[pristine-lfs](https://gitlab.apertis.org/infrastructure/pristine-lfs)
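
As an illustration, cloning a packaging repository and listing its remote
branches typically shows this layout (the package name is an example, and the
exact set of branches varies per package and release):

```
git clone git@gitlab.apertis.org:pkg/dbus.git
cd dbus
git branch -r
# expect branches along the lines of:
#   origin/apertis/v2021
#   origin/debian/buster
#   origin/pristine-lfs
#   origin/pristine-lfs-source
#   origin/upstream/buster
```
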
In addition, all the packaging repositories contain a
`debian/apertis/gitlab-ci.yml` file to enable the CI pipeline which generates
source packages and uploads them to OBS.
# Development environment
All the instructions below assume an Apertis development environment: either
......@@ -49,8 +48,9 @@ or enter the `apertis-*-package-source-builder` Docker container:
# How to manually sync an Apertis package with a new version
Upstream updates are usually handled automatically by the
[`ci-package-builder.yml`](https://gitlab.apertis.org/infrastructure/ci-package-builder/)
Continuous Integration pipeline, which
[fetches upstream packages, merges them]({{< ref "apertis_packaging_guide.md#pulling-updates-or-security-fixes-from-upstream-distributions" >}})
with the Apertis contents and directly creates Merge Requests to be reviewed by
[maintainers]({{< ref "contributions.md#the-role-of-maintainers" >}}).
......@@ -129,10 +129,10 @@ back](https://honk.sigxcpu.org/projects/git-buildpackage/manual-html/gbp.patches
# How to issue a release
The process for landing downstream changes is documented in the
[ci-package-builder documentation]({{< ref "apertis_packaging_guide.md#landing-downstream-changes-to-the-main-archive" >}}).
# How to add a new packaging repository
The process for adding new packages from Debian is documented in the
[ci-package-builder documentation]({{< ref "apertis_packaging_guide.md#adding-new-packages-from-debian" >}}).
......@@ -71,6 +71,13 @@ project such as `dbus`, the git repositories follow these rules:
based, should be imported onto `debian/${UPSTREAM_DISTRIBUTION}` (e.g.
`debian/buster`). The required process is covered in the
[ci-package-builder documentation](https://gitlab.apertis.org/infrastructure/ci-package-builder#adding-new-packages-from-debian).
- Each version of an Apertis package derived from a Debian package must
have a local version suffix. If the Apertis package has no functional
changes and the only difference from the upstream is metadata under
`debian/apertis/`, the suffix should be `co0`. Due to how Debian packaging
tools work, this version change must be reflected by adding a new entry
at the top of `debian/changelog` with the distribution field set to
`apertis`.
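
A hypothetical example of adding such an entry with `dch` (package name,
version and message are illustrative):

```
# Record the co0 revision in debian/changelog, targeting the apertis
# distribution:
dch --newversion "2.10-2co0" --distribution apertis \
    "Import into Apertis; only metadata under debian/apertis/ changed"
```
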
# Guidelines for making commits
......
+++
date = "2020-06-09"
lastmod = "2021-02-03"
weight = 100
title = "VirtualBox"
......@@ -54,16 +55,34 @@ Technology is enabled in BIOS settings.
## Software
- Windows OS
- Oracle VirtualBox. See supported version and installation instructions below.
### VirtualBox supported version
{{% notice warning %}}
While you can use VirtualBox in other environments, and even use other
virtualization solutions, the supported setup is to run VirtualBox on
Microsoft Windows.
{{% /notice %}}
The following table contains the supported versions of VirtualBox and the VirtualBox Guest Additions for each release of Apertis:
| Apertis release | VirtualBox version | VirtualBox Guest Additions version |
| ------ | ------ | ----- |
| v2019 | [6.1.12 r139181 (Qt5.6.2)](https://download.virtualbox.org/virtualbox/6.1.12/VirtualBox-6.1.12-139181-Win.exe) | [6.1.12](https://download.virtualbox.org/virtualbox/6.1.12/VBoxGuestAdditions_6.1.12.iso) |
| v2020 | [6.1.12 r139181 (Qt5.6.2)](https://download.virtualbox.org/virtualbox/6.1.12/VirtualBox-6.1.12-139181-Win.exe) | [6.1.12](https://download.virtualbox.org/virtualbox/6.1.12/VBoxGuestAdditions_6.1.12.iso) |
| v2021 | [6.1.12 r139181 (Qt5.6.2)](https://download.virtualbox.org/virtualbox/6.1.12/VirtualBox-6.1.12-139181-Win.exe) | [6.1.12](https://download.virtualbox.org/virtualbox/6.1.12/VBoxGuestAdditions_6.1.12.iso) |
| v2022 | [6.1.12 r139181 (Qt5.6.2)](https://download.virtualbox.org/virtualbox/6.1.12/VirtualBox-6.1.12-139181-Win.exe) | [6.1.12](https://download.virtualbox.org/virtualbox/6.1.12/VBoxGuestAdditions_6.1.12.iso) |
# Installing VirtualBox
If you have not yet installed Oracle VM VirtualBox, please follow these steps to install it:
- [Download](https://www.virtualbox.org/wiki/Downloads) the required version of
the VirtualBox installation file for your host platform. Check the table of
supported versions above to determine which version of VirtualBox is supported
for the Apertis Release you want to use.
- Follow the installation procedure provided in the
[VirtualBox installation guide](https://www.virtualbox.org/manual/ch02.html)
......
......@@ -34,7 +34,8 @@ specific to other APIs are covered on their respective pages.
## Summary
* [Align the happy path to the left edge]( {{< ref "#code-formatting" >}} ) and
when programming in the C language use the GLib coding style, with vim modelines.
* [Consistently namespace files]( {{< ref "#namespacing" >}} ), functions and types.
* [Always design code to be modular]( {{< ref "#modularity" >}} ), encapsulated and loosely coupled.
* Especially by keeping object member variables inside the object’s private structure.
......@@ -55,6 +56,10 @@ Using a consistent code formatting style eases maintenance of code, meaning
contributors only have to learn one coding style for all modules, rather than
one per module.
Regardless of the programming language, a good guideline for the organization
of the control flow is
[aligning the happy path to the left edge](https://medium.com/@matryer/line-of-sight-in-code-186dd7cdea88).
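
As a purely illustrative sketch (written in shell, since the guideline is
language-agnostic), guard clauses handle the error cases first so the happy
path stays at the lowest indentation level; the function and paths below are
hypothetical:

```
install_profile() {
    profile="$1"

    # Guard clauses: deal with the error cases first and return early...
    if [ -z "$profile" ]; then
        echo "no profile given" >&2
        return 1
    fi
    if [ ! -f "$profile" ]; then
        echo "$profile does not exist" >&2
        return 1
    fi

    # ...so the happy path stays on the left edge, not nested in conditionals.
    cp "$profile" /etc/apparmor.d/
}
```
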
The coding style in use is the popular
[GLib coding style](https://developer.gnome.org/programming-guidelines/unstable/c-coding-style.html.en),
which is a slightly modified version of the
......