- May 10, 2022
Dylan Aïssi authored
using the debian-security repository before being available in the main Debian repository through a point release.

Signed-off-by: Dylan Aïssi <dylan.aissi@collabora.com>
-
- Dec 22, 2021
Emanuele Aina authored
Rather than using plaintext error messages, use readable codes and structured metadata for errors and updates to make them easier to process. This will be particularly useful for filtering: for instance, we preserve the branch information rather than muddling it into the error message.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
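A minimal sketch of what such structured error records might look like and why they ease filtering. The `code` field name is an assumption based on the "readable codes" wording; `msg` and `branch` match the YAML example in the later restructuring commit.

```python
# Hypothetical structured error records: metadata such as the branch is kept
# as separate fields instead of being embedded in the message text.
errors = [
    {"code": "missing-branch", "msg": "aaargh"},
    {"code": "stale-branch", "msg": "eeeww", "branch": "debian/buster"},
]

def errors_for_branch(errors, branch):
    """Filter errors by the branch they were recorded against."""
    return [e for e in errors if e.get("branch") == branch]

print(errors_for_branch(errors, "debian/buster"))
```

With plaintext messages this filter would have required fragile string matching.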
-
- Jul 04, 2021
Emanuele Aina authored
The current code was not reporting updates for destinations like `debian/buster-updates` unless the branch already existed. Tweak the code to check all the sources even if only a subset of the matching branches exists, using a new `base` key in the source definition to group them.

For instance, if a repository has only the `debian/buster-updates` branch, the `base` key points to `debian/buster`, which in turn matches the `debian/buster`, `debian/buster-security`, and `debian/buster-updates` sources. The availability of updates will be checked for each of them.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
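A sketch of the `base` grouping described above; the exact shape of the source definitions is an assumption, only the `base` key itself comes from the message.

```python
# Hypothetical source definitions: each source carries a `base` key so that
# the -security and -updates variants are grouped with their base branch.
sources = {
    "debian/buster":          {"base": "debian/buster"},
    "debian/buster-security": {"base": "debian/buster"},
    "debian/buster-updates":  {"base": "debian/buster"},
}

def sources_to_check(sources, existing_branches):
    """Check every source whose base matches the base of an existing branch."""
    bases = {sources[b]["base"] for b in existing_branches if b in sources}
    return sorted(s for s, d in sources.items() if d["base"] in bases)

# Even if only debian/buster-updates exists, all three sources get checked.
print(sources_to_check(sources, ["debian/buster-updates"]))
```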
-
Emanuele Aina authored
The `packaging-updates` tool operates exclusively on local data and does not need a connection to the GitLab API, so drop the relevant code.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Jun 22, 2021
Add a 'source' key entry for source data. This will allow adding a 'binaries' key entry later.

Signed-off-by: Frédéric Danis <frederic.danis@collabora.com>
-
- Feb 20, 2021
Emanuele Aina authored
When clicking the new update links generated by the dashboard, everything gets recomputed from scratch: this wastes resources but, more importantly, slows down the developers who triggered the updates. To avoid that, point the pipeline triggering the updates to the previously computed data and skip the jobs that are now superfluous.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Feb 16, 2021
Emanuele Aina authored
Add a job that, if `TRIGGER_UPDATES` is set, triggers all the matching update pipelines. Set `TRIGGER_UPDATES` when manually triggering the pipeline to actually initiate the updates:

* use `*` to match everything
* use `dash` to only process the `dash` package

If `TRIGGER_UPDATES` is left empty, do a dry run (this is the default).

For instance: https://gitlab.apertis.org/infrastructure/dashboard/-/pipelines/new?var[TRIGGER_UPDATES]=*

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
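The matching behavior could be sketched as shell-style globbing; the use of `fnmatch` here is an assumption based on the `*` and `dash` examples in the message, not the tool's documented implementation.

```python
# Sketch of TRIGGER_UPDATES matching: an empty pattern means dry run,
# otherwise the pattern is applied as a shell-style glob to package names.
from fnmatch import fnmatch

def pipelines_to_trigger(pattern, packages):
    """Return the packages whose update pipeline should be triggered."""
    if not pattern:
        return []  # dry run: report only, trigger nothing
    return [p for p in packages if fnmatch(p, pattern)]

packages = ["dash", "bash", "glib2.0"]
print(pipelines_to_trigger("*", packages))     # match everything
print(pipelines_to_trigger("dash", packages))  # only the dash package
print(pipelines_to_trigger("", packages))      # dry run
```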
-
- Jan 11, 2021
Emanuele Aina authored
With the switch to the external gitlab-ci pipeline definition in the packaging repositories, the `debian/buster-gitlab-update-job` branches are no longer needed: they were only used to host the upstream pull pipeline definition, which could not reside on the `debian/` branches. Now we can kick off the pipeline on the `debian/` branches directly using the external config, so let's do that.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Jul 29, 2020
Rather than indexing by repository name, use the package name as the main key, since it is the common concept that ties GitLab, OBS and upstream sources together. This simplifies some parts of the code, as all the information is available from a single object instead of being spread across multiple data sources.

Error reporting is also largely simplified by having a single `errors:` array on each package and by making each error an object rather than a single string: iterating over every error is thus much simpler, and the information about the error itself is now explicit rather than implicit in its surrounding context (for instance, whether it was located on a branch, on the git project, or on the OBS package entry).

The YAML structure went from:

    obs:
      packages:
        aalib:
          entries:
            apertis:v2020:target:
              name: aalib
              errors:
              - "ooops"
    projects:
      pkg/target/aalib:
        branches:
          debian/buster:
            name: debian/buster
            errors:
            - "eeeww"
        errors:
        - "aaargh"
    sources:
      debian/buster:
        packages:
          aalib: [...]

to:

    packages:
      aalib:
        obs:
          entries:
            apertis:v2020:target: {...}
        git:
          branches:
            debian/buster: {...}
        upstreams:
          debian/buster: [...]
        errors:
        - msg: "aaargh"
        - msg: "eeeww"
          branch: debian/buster
        - msg: "ooops"
          projects: [ "apertis:v2020:target" ]

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
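The error aggregation described above could be sketched roughly as follows; the function name and the input/output shapes are illustrative, modeled on the YAML example in the message.

```python
# Sketch: gather plain-string errors scattered over the git project, its
# branches and the OBS entries into one list of per-package error objects.
def gather_errors(package):
    errors = [{"msg": m} for m in package.get("git", {}).get("errors", [])]
    for branch, data in package.get("git", {}).get("branches", {}).items():
        errors += [{"msg": m, "branch": branch} for m in data.get("errors", [])]
    for entry, data in package.get("obs", {}).get("entries", {}).items():
        errors += [{"msg": m, "projects": [entry]} for m in data.get("errors", [])]
    return errors

package = {
    "obs": {"entries": {"apertis:v2020:target": {"errors": ["ooops"]}}},
    "git": {
        "errors": ["aaargh"],
        "branches": {"debian/buster": {"errors": ["eeeww"]}},
    },
}
print(gather_errors(package))
```

The context that was previously implicit (which branch, which OBS project) becomes an explicit field on each error object.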
-
- Jul 13, 2020
Emanuele Aina authored
Go through the `apertis/*` branches and complain if the `debian/*` branches for their base distribution have not been merged. This ensures that all the Debian updates get packaged for Apertis.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
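A sketch of the merge check described above. The mapping from `apertis/*` branches to their Debian base branch is hypothetical, as is the shape of the `merged` input.

```python
# Hypothetical mapping of apertis/* branches to their base distribution.
base_of = {"apertis/v2021": "debian/buster"}

def unmerged_bases(branches, merged):
    """Report apertis/* branches whose Debian base branch is not merged in.

    `merged` maps each branch to the set of branches merged into it.
    """
    complaints = []
    for branch in branches:
        base = base_of.get(branch)
        if base and base not in merged.get(branch, set()):
            complaints.append((branch, base))
    return complaints

print(unmerged_bases(["apertis/v2021"], {"apertis/v2021": set()}))
```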
-
- Jul 08, 2020
Emanuele Aina authored
When an update is available for `buster-security`, we currently attempt to trigger a pipeline on the `debian/buster-security-gitlab-update-job` branch, which does not exist. We instead want to trigger it on `debian/buster-gitlab-update-job`, so trim everything after the first `-` from the branch name.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
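The trimming step could be sketched as below; the re-appending of the `-gitlab-update-job` suffix is inferred from the branch names in the message, not spelled out there.

```python
# Drop everything after the first `-` in the branch name, then append the
# update-job suffix (suffixing step assumed from the example branch names).
def update_job_branch(branch):
    trimmed = branch.split("-", 1)[0]
    return trimmed + "-gitlab-update-job"

print(update_job_branch("debian/buster-security"))  # debian/buster-gitlab-update-job
print(update_job_branch("debian/buster"))           # debian/buster-gitlab-update-job
```

Both `buster` and `buster-security` updates thus land on the same existing branch.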
-
- May 15, 2020
Emanuele Aina authored
Introduce a pipeline to fetch data from multiple sources, cross-check the retrieved information and trigger actions. Each step emits YAML data that can be consumed by later steps and then merged again to render a dashboard, with the goal of easing the addition of more data sources and checks as much as possible. The current steps are:

* packaging-data-fetch-upstream: grab package listings from the configured upstream sources
* packaging-data-fetch-downstream: scan GitLab to collect data about the packaging repositories and branches
* yaml-merge: dedicated tool to merge data from multiple sources
* packaging-sanity-check: verify some invariants and report mismatches
* packaging-updates: compute which packages have a newer upstream and trigger the pipeline to pull them in
* dashboard: render a basic dashboard listing the identified errors

By triggering only the pipelines where there's a known update pending, we avoid the issues with the previous approach of running the pipeline on each of the 4000+ repositories every week, which ended up overwhelming GitLab.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
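The merge step at the heart of the pipeline could be sketched as a recursive mapping merge; the actual semantics of the yaml-merge tool are assumed here, not documented in the message.

```python
# Minimal deep-merge sketch: mappings are merged recursively, any other
# value in `b` overrides the corresponding value in `a`.
def deep_merge(a, b):
    merged = dict(a)
    for key, value in b.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Data from two fetch steps is combined under the same package key.
upstream = {"packages": {"aalib": {"upstreams": {"debian/buster": "1.4"}}}}
downstream = {"packages": {"aalib": {"git": {"branches": ["debian/buster"]}}}}
print(deep_merge(upstream, downstream))
```

The version string and branch data are made up for illustration; the point is that each step can emit partial YAML and the merge reassembles one tree per package.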
-