- Dec 22, 2021
Emanuele Aina authored
Rather than using plaintext error messages, use readable codes and structured metadata for errors and updates to make them easier to process. This will be particularly useful for filtering: for instance, we preserve the branch information rather than muddling it into the error message.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
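With errors carried as structured objects rather than strings, filtering no longer needs text matching. A minimal sketch of the idea, with illustrative field names (`msg`, `branch`) and made-up error data:

```python
# Hedged sketch: filter structured errors by their branch metadata
# instead of grepping plaintext messages. Field names and data are
# illustrative, not the dashboard's exact schema.
errors = [
    {"msg": "version mismatch", "branch": "debian/buster"},
    {"msg": "missing tag", "branch": "apertis/v2021"},
    {"msg": "orphan package"},  # no branch attached to this error
]

def errors_for_branch(errors, branch):
    """Return only the errors recorded against the given branch."""
    return [e for e in errors if e.get("branch") == branch]

buster_errors = errors_for_branch(errors, "debian/buster")
```

The branch stays machine-readable metadata instead of being baked into the message text.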
-
- Nov 18, 2021
Ryan Gonzalez authored
This adds a new tool, storage_stats, to gather statistics on the storage used, as well as a new dashboard page to display the statistics. Due to exceedingly poor performance when attempting to use pure Python to parse all the packaging files in over 2000 snapshots (several thousand files to parse and process in total), Rust was used for the storage_stats implementation; on my system, it determines the full extent of storage usage in under 20 minutes.

On the dashboard's side, the common HTML and styling was moved into a single shared template that both the current index/packages page *and* the new storage page can use, and navigational links were added to the pages.

https://phabricator.apertis.org/T8197

Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
-
- Oct 21, 2021
Emanuele Aina authored
Inline the link to the per-release package indices with the summary instead of just having it misaligned somewhere in the header.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Oct 01, 2021
Emanuele Aina authored
Make it easier to share a pointer to a specific report by making the anchors more discoverable with self-links that appear on hover.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Sep 30, 2021
Emanuele Aina authored
Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Sep 20, 2021
Emanuele Aina authored
The `golang-k8s-sigs-yaml` update in `apertis/v2021-updates` broke the rendering for downstream instances since it does not have a base branch:

    golang-k8s-sigs-yaml:
      git:
        path_with_namespace: pkg/golang-k8s-sigs-yaml
      updates:
      - branch:
          name: apertis/v2021-updates
          version: 1.2.0-2+apertis1
        upstream:
          component: development
          name: golang-k8s-sigs-yaml
          source: apertis/v2021-updates
          version: 1.2.0-3+apertis1

This is due to the fact that `golang-k8s-sigs-yaml` is a new dependency introduced by a package that got updated in `apertis/v2021-updates` and has not been folded into `apertis/v2021` yet: no `golang-k8s-sigs-yaml` exists in `apertis/v2021`, and for this reason there's no `base` object. In this case it is enough to trigger the pipeline directly on the `apertis/v2021-updates` branch.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
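The fallback described above can be sketched in a few lines; the data layout mirrors the YAML in the message, but the helper name and the exact shape of the update object are assumptions:

```python
# Hedged sketch: pick the branch to trigger the pipeline on. Prefer the
# base branch when the package has one; for a package that only exists
# in the -updates branch (no `base` object), fall back to the updates
# branch itself.
package = {
    "updates": [
        {"branch": {"name": "apertis/v2021-updates",
                    "version": "1.2.0-2+apertis1"}},
    ],
    # note: no "base" key, as for a new dependency not yet folded back
}

def pipeline_branch(update, base=None):
    """Return the branch name the update pipeline should run on."""
    if base is not None:
        return base["name"]
    return update["branch"]["name"]

target = pipeline_branch(package["updates"][0], package.get("base"))
```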
-
- Sep 04, 2021
Emanuele Aina authored
The `public/data.yaml` generated by the dashboard contains all the combined data to let triggers do their job; using it gives more flexibility later on.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Jul 14, 2021
Emanuele Aina authored
When an update is available on e.g. `buster-security`, try to trigger the update pipeline on the `debian/buster` branch rather than the `debian/buster-security` one, for consistency. The latter is currently buggy, but that's a bug in the rules of the infrastructure/ci-package-builder> pipeline that needs to be fixed.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Jun 09, 2021
Ritesh Raj Sarraf authored
This script extracts the packages list from the respective Apertis releases' Sources metadata files. It extracts the following fields:

* Source Package Names
* Package Version
* Repository Component

It then writes these values in TSV format to a file per release. The TSV data is stored in a folder `tsv/`, treated as an artifact by the CI. The same artifact is pushed to the rendered dashboard for serving over HTTP.

Signed-off-by: Ritesh Raj Sarraf <ritesh.sarraf@collabora.com>
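The extraction amounts to walking the RFC822-style stanzas of a Sources index and emitting one TSV row per source package. A hedged sketch with made-up input; note that a per-stanza `Component` field is an assumption for illustration (in real apt repositories the component usually comes from the repository layout):

```python
import csv
import io

# Illustrative apt-style Sources data; real files carry many more fields.
SOURCES = """\
Package: aalib
Version: 1.4p5-46
Component: target

Package: dash
Version: 0.5.10.2-5
Component: development
"""

def parse_stanzas(text):
    """Yield one {field: value} dict per blank-line-separated stanza."""
    for stanza in text.strip().split("\n\n"):
        fields = {}
        for line in stanza.splitlines():
            key, _, value = line.partition(": ")
            fields[key] = value
        yield fields

out = io.StringIO()
writer = csv.writer(out, delimiter="\t", lineterminator="\n")
for stanza in parse_stanzas(SOURCES):
    writer.writerow([stanza["Package"], stanza["Version"], stanza["Component"]])
tsv = out.getvalue()
```

In practice one row per package would be appended to the per-release file under `tsv/`.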
-
- Mar 14, 2021
Emanuele Aina authored
Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Feb 26, 2021
Emanuele Aina authored
This skips an indirection step when triggering updates on a single package. Keep the indirection for the "update all" button.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Feb 24, 2021
Help maintainers spot failed packaging pipelines where manual intervention is needed.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Feb 21, 2021
Emanuele Aina authored
Let users start a pipeline that triggers all the upstream pulls at the same time. This should hopefully reduce the number of clicks maintainers have to make on a daily basis.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Feb 20, 2021
Emanuele Aina authored
When clicking the new update links generated by the dashboard, everything gets recomputed from scratch: this wastes resources and, more importantly, slows down the developers who triggered the updates. To avoid that, point the pipeline triggering the updates to the previously computed data and skip the jobs that are now superfluous.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Feb 16, 2021
Emanuele Aina authored
Add a job that, if `TRIGGER_UPDATES` is set, triggers all the matching update pipelines. Set `TRIGGER_UPDATES` by manually triggering the pipeline to actually initiate the updates:

* use "*" to match everything
* use "dash" to only process the dash package

If `TRIGGER_UPDATES` is left empty, do a dry run (this is the default). For instance:

https://gitlab.apertis.org/infrastructure/dashboard/-/pipelines/new?var[TRIGGER_UPDATES]=*

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
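The matching semantics described above resemble shell-style globbing; a hedged sketch using Python's `fnmatch` (the actual implementation may match differently):

```python
from fnmatch import fnmatch

def should_trigger(package, trigger_updates):
    """Glob-match a package name against the TRIGGER_UPDATES variable.

    An empty/unset value means dry run: nothing is triggered.
    """
    if not trigger_updates:
        return False  # dry run (the default)
    return fnmatch(package, trigger_updates)

# "dash" selects only the dash package; "*" would select everything.
selected = [p for p in ["dash", "aalib"] if should_trigger(p, "dash")]
```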
-
- Feb 15, 2021
Emanuele Aina authored
Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
Emanuele Aina authored
Rename `--pipeline-url` to `--current-pipeline-url` so we can later add more parameters about other pipelines, for instance the URL to trigger new pipelines.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Oct 12, 2020
Emanuele Aina authored
At the moment there's no way to spot whether the dashboard is outdated due to pipeline failures without going to the CI pipeline page. Showing a timestamp should help detect issues in a timely manner.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Aug 21, 2020
Emanuele Aina authored
Check the versions tagged in the git repository and the ones on OBS to ensure they are perfectly aligned; any discrepancy is reported as an error.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
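The cross-check boils down to comparing two package-to-version mappings; a minimal sketch with illustrative data (the real code reads the versions from git tags and from OBS):

```python
# Hedged sketch: report every package whose git-tagged version and
# OBS version disagree, including packages present on only one side.
git_versions = {"aalib": "1.4p5-46", "dash": "0.5.10.2-5"}
obs_versions = {"aalib": "1.4p5-46", "dash": "0.5.10.2-4"}

mismatches = [
    (pkg, git_versions.get(pkg), obs_versions.get(pkg))
    for pkg in sorted(set(git_versions) | set(obs_versions))
    if git_versions.get(pkg) != obs_versions.get(pkg)
]
```

Each tuple names the package and the two diverging versions, ready to be turned into a structured error entry.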
-
Emanuele Aina authored
When a branch is missing the version tag we can't compute the branch version, so do not print it in these cases.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Jul 29, 2020
Rather than indexing by repository name, use the package name as the main key since it is the common concept that ties GitLab, OBS and upstream sources. This simplifies some parts of the code as all the information is available from a single object instead of being spread across multiple data sources.

Error reporting is also largely simplified by having a single `errors:` array on each package and having each error be an object rather than a single string: iterating over every error is thus much simpler, and the information about the error itself is now explicit rather than implicit based on its surrounding context (for instance, whether it was located on a branch, on the git project, or on the OBS package entry).

The YAML structure went from:

    obs:
      packages:
        aalib:
          entries:
            apertis:v2020:target:
              name: aalib
              errors:
              - "ooops"
    projects:
      pkg/target/aalib:
        branches:
          debian/buster:
            name: debian/buster
            errors:
            - "eeeww"
        errors:
        - "aaargh"
    sources:
      debian/buster:
        packages:
          aalib: [...]

to:

    packages:
      aalib:
        obs:
          entries:
            apertis:v2020:target: {...}
        git:
          branches:
            debian/buster: {...}
        upstreams:
          debian/buster: [...]
        errors:
        - msg: "aaargh"
        - msg: "eeeww"
          branch: debian/buster
        - msg: "ooops"
          projects: [ "apertis:v2020:target" ]

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
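With a single per-package `errors:` array, reporting becomes a flat loop over objects whose context is explicit metadata. A sketch using the error entries from the example above (the formatting of the report line is an assumption):

```python
# Hedged sketch: render the unified per-package error list. Each error
# carries its context explicitly (branch, OBS projects) instead of
# encoding it in the message or in its position in the tree.
package = {
    "errors": [
        {"msg": "aaargh"},
        {"msg": "eeeww", "branch": "debian/buster"},
        {"msg": "ooops", "projects": ["apertis:v2020:target"]},
    ]
}

report = []
for error in package["errors"]:
    # Use the branch if present, else the OBS projects, else the package.
    where = (error.get("branch")
             or ", ".join(error.get("projects", []))
             or "package")
    report.append(f"{where}: {error['msg']}")
```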
-
- Jul 14, 2020
Emanuele Aina authored
Get information from OBS about which files each package ships on OBS to reliably assess the version. We do not use that for checks yet, though. At the moment we only check that each GitLab project maps to a package in OBS and that there are no duplicated packages in OBS.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- May 15, 2020
Emanuele Aina authored
Introduce a pipeline to fetch data from multiple sources, cross-check the retrieved information and trigger actions. Each step emits YAML data that can be consumed by later steps and then merged again to render a dashboard, with the goal of easing the addition of more data sources and checks as much as possible.

The current steps are:

* packaging-data-fetch-upstream: grab package listings from the configured upstream sources
* packaging-data-fetch-downstream: scan GitLab to collect data about the packaging repositories and branches
* yaml-merge: dedicated tool to merge data from multiple sources
* packaging-sanity-check: verify some invariants and report mismatches
* packaging-updates: compute which packages have a newer upstream and trigger the pipeline to pull them in
* dashboard: render a basic dashboard listing the identified errors

By triggering only the pipelines where there's a known update pending we avoid the issues with the previous approach, which involved running the pipeline on each of the 4000+ repositories every week and ended up overwhelming GitLab.

Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
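The yaml-merge step above combines the trees emitted by the fetch steps; a hedged sketch of what such a merge might look like (the real tool may handle lists and conflicts differently):

```python
# Minimal sketch of a recursive mapping merge, as a yaml-merge step
# might apply to the per-step YAML outputs. Illustrative data only.
def deep_merge(base, extra):
    """Merge two nested dicts; on conflict, non-dict values from
    `extra` win, while nested dicts are merged recursively."""
    merged = dict(base)
    for key, value in extra.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

upstream = {"packages": {"aalib": {"upstreams": {"debian/buster": {}}}}}
downstream = {"packages": {"aalib": {"git": {"branches": {}}}}}
combined = deep_merge(upstream, downstream)
```

Each fetch step only needs to know its own slice of the tree; the merge reassembles the full per-package picture for the later checks and the dashboard.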
-