- Aug 11, 2023
-
-
Signed-off-by: Sagar <SagarKishore.Benani@in.bosch.com>
-
- Jul 21, 2023
-
-
Walter Lozano authored
The sanity manual test is the first test run to confirm the health of the image. For this reason, the test itself states that no other manual test should run if there is a failure. This policy aims to avoid running tests on an image that is not healthy enough, since the results would not be valid. However, the policy is too strict: a failure in one area, like WiFi, blocks the rest of the tests. To provide a better trade-off, update the policy so that other manual tests are only blocked until the issue is triaged.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Jul 19, 2023
-
-
- Jul 14, 2023
-
-
Improve the wording of the preconditions section in the secure boot test to instruct flashing the correct u-boot image based on the release under test.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Jul 12, 2023
-
-
Siva Krishna Prasad Chinthalapudi authored
Signed-off-by: SivaKrishnaPrasad <sivakrishnaprasad.chinthalapudi@in.bosch.com>
-
- Jul 11, 2023
-
-
- Jul 03, 2023
-
-
Walter Lozano authored
This test relies on a Debian LXC image being deployed on the DUT, which presents a problem in some scenarios: devices require access to the public Internet and QEMU requires LXC support. Disable this test until it is reimplemented to overcome the current limitations.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Jun 30, 2023
-
-
Apertis CI authored
Signed-off-by: Apertis CI <devel@lists.apertis.org>
-
- Jun 19, 2023
-
-
Signed-off-by: SivaKrishnaPrasad <sivakrishnaprasad.chinthalapudi@in.bosch.com>
-
- Apr 13, 2023
-
-
Andre Moreira Magalhaes authored
Signed-off-by: Andre Moreira Magalhaes <andre.magalhaes@collabora.com>
-
- Mar 01, 2023
-
-
Andre Moreira Magalhaes authored
Signed-off-by: Andre Moreira Magalhaes <andre.magalhaes@collabora.com>
-
- Feb 27, 2023
-
-
Apertis CI authored
Signed-off-by: Apertis CI <devel@lists.apertis.org>
-
- Feb 08, 2023
-
-
- Jan 25, 2023
-
-
Walter Lozano authored
In commit 5bf3c4a9 tests for the amd64 board were re-enabled after adding support for the Up Squared 6000 board and having enough boards in LAVA. After this, it was seen that tests that rely on bootcount fail and require a rework to align u-boot and UEFI. In the meantime, to avoid creating noise and masking issues, disable the tests that rely on bootcount.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Jan 23, 2023
-
-
Walter Lozano authored
The use of the git repos pointing at v2023dev1 was introduced in 6ae11598, which was merged later on v2023dev2, preventing the branching script from updating the branch name. Manually fix the issue.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Jan 20, 2023
-
-
Walter Lozano authored
In some use cases it is useful to run only some test cases, so add support for selecting them when generating the jobs.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Dec 05, 2022
-
-
Apertis CI authored
Signed-off-by: Apertis CI <devel@lists.apertis.org>
-
- Nov 29, 2022
-
-
Apertis CI authored
Signed-off-by: Apertis CI <devel@lists.apertis.org>
-
- Nov 23, 2022
-
-
Apertis CI authored
Signed-off-by: Apertis CI <devel@lists.apertis.org>
-
- Nov 14, 2022
-
-
Ryan Gonzalez authored
The previous code would only ever check if the architecture was directly listed in the test case, which meant it excluded the visibility suffixes. https://phabricator.apertis.org/T8949
-
Ryan Gonzalez authored
This was getting rendered as a messy plain-text paragraph. https://phabricator.apertis.org/T8949
-
Ryan Gonzalez authored
Because the test logs are private, the visibility is '-internal', so update the test cases to match. https://phabricator.apertis.org/T8949
Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
-
- Nov 11, 2022
-
-
Ryan Gonzalez authored
The group template was created after 9dc1fafa was merged, which removed all instances of the 'name:' key. However, this turned out to break test submission, which is why it was then reverted in 79a2b0df. Unfortunately, I did not realize that the new IoT group was missing this field, which means that 'name:' was never added back there and IoT image test submission is broken. https://phabricator.apertis.org/T8949
Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
-
- Nov 10, 2022
-
-
Ryan Gonzalez authored
The tests need credentials to be passed to them, so a new LAVA group is created that passes them down to the scripts. In order to avoid any leakage, visibility controls are added to the job generation, so that the IoT jobs can set the visibility to be internal-only. https://phabricator.apertis.org/T8949
Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
-
- Oct 31, 2022
-
-
Walter Lozano authored
qa.apertis.org is now served by the qa-report-app, so modify the LAVA callback to point to qa.apertis.org.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Oct 24, 2022
-
-
Walter Lozano authored
This reverts commit 0f2e8a1a. Commit 0f2e8a1a introduced a clean-up to remove validation warnings. Unfortunately, the tool prompts key is used by LAVA to detect when the deploy action has finished when the tool is different from dd, as described in: https://github.com/Linaro/lava/blob/c38d449719d7e4dca3d07e6b5e9bb63d5a5579a9/lava_dispatcher/actions/deploy/removable.py#L151 Revert the previous changes to allow LAVA to detect that the deploy action has finished.
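To illustrate why the prompts key matters, here is a simplified model of prompt-based completion detection, in the spirit of LAVA's removable deploy action. This is an assumption-laden sketch, not LAVA's actual implementation; the real logic lives in the `removable.py` file linked above.

```python
# Simplified, illustrative model only: a dispatcher can consider a
# deploy step done once the flashing tool prints one of its known
# completion prompts. This is NOT LAVA's real code.

def deploy_finished(output_line, prompts):
    """Return True once the tool's output matches a completion prompt.
    With no prompts configured, completion can never be observed."""
    return any(prompt in output_line for prompt in prompts)
```

With the tool prompts entry removed from the job definition, the prompts list is effectively empty, so completion is never detected, matching the behaviour this revert fixes.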
-
- Oct 20, 2022
-
-
Walter Lozano authored
This reverts commit 9dc1fafa. Revert these changes since the QA Report App tries to use this metadata and currently raises the following exception:

    File "/app/testobjects.py", line 143, in lava_jobdata_extract_results
        t['group'] = testgroup['name']
    KeyError: 'name'

This causes test results for v2023pre to not be processed. It is not clear if this metadata is used; however, it is currently not possible to easily deploy a new version of the QA Report App, so the best approach is to revert this change.
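The traceback above comes from an unconditional dict lookup. A hedged sketch of the failure mode and a defensive alternative (function names are illustrative, not the QA Report App's actual code):

```python
# Sketch of the crash described above and a tolerant variant.
# `testgroup` stands in for the job metadata the QA Report App reads;
# these names are assumptions for illustration only.

def extract_group_strict(testgroup):
    # Mirrors the failing line: raises KeyError when 'name' is absent.
    return testgroup['name']

def extract_group_safe(testgroup, default='unknown'):
    # Tolerates metadata without a 'name' key.
    return testgroup.get('name', default)
```

Since the app could not easily be redeployed with a defensive lookup like `extract_group_safe`, reverting the metadata change was the pragmatic fix.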
-
- Oct 19, 2022
-
-
The way test submission works changed significantly when we switched to using the lava_runner. In that change set, I preserved the logic around when the submit tasks ran, such that they only ran when a valid LAVA configuration existed in the job variables and when the pipeline was not for a merge request.

In the repository preceding those changes, there was a valid LAVA configuration, but test submission was given `--dry-run` so that we would run our templating but stop short of submitting the jobs, even though we required a valid LAVA configuration to exist. In the repository after these changes, there is no valid LAVA configuration (since the required variables changed), and the test generation step no longer occurs on pipelines. Moreover, if we made the LAVA configuration valid for the state we have now, it would both generate and run the tests, because the generation and run steps have identical rules governing them in CI.

Therefore, remove the checks for a valid LAVA configuration from the generation step. This means we do not need a valid LAVA configuration in order to get the same behaviour we had before: generate the tests, but do not submit them.
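The gating described above can be modelled as follows. This is a simplified sketch of the intended rules, not the repository's actual `.gitlab-ci.yml`:

```python
# Illustrative model of the CI gating after this change: generation no
# longer requires a valid LAVA configuration, while submission still
# does, and both skip merge-request pipelines. Assumptions only.

def should_generate(is_merge_request):
    return not is_merge_request

def should_submit(has_valid_lava_config, is_merge_request):
    return has_valid_lava_config and not is_merge_request
```

Under this model, a pipeline without a valid LAVA configuration still generates the tests but never submits them, which is exactly the pre-change behaviour being restored.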
-
- Oct 13, 2022
-
-
Emanuele Aina authored
Validating the job definitions when resubmitting them yields:

    Valid definition with warnings: extra keys not allowed @ data['actions[4]']['test']['definition']['name']

Get rid of those extra keys to make LAVA happier.
Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
Emanuele Aina authored
Validating the job definitions when resubmitting them yields:

    Valid definition with warnings: extra keys not allowed @ data['actions[2]']['deploy']['usb']['tool']

Get rid of the extra setting to make LAVA happier.
Signed-off-by: Emanuele Aina <emanuele.aina@collabora.com>
-
- Sep 26, 2022
-
-
Ariel D'Alessandro authored
Commit cde077ec ("lava: amd64-upsquared6000: User a newer v2022 first stage image") set the UP Squared 6000 1st stage image to a weekly build. As the weekly image may get deleted sooner rather than later, let's use a release point.
Signed-off-by: Ariel D'Alessandro <ariel.dalessandro@collabora.com>
-
- Sep 23, 2022
-
-
Detlev Casanova authored
When running the aum-rollback-blacklist test, u-boot must roll back to the older version when the bootlimit of 3 has been reached. Using just `boot` will bypass the boot count check, so we need to add the check here in the device template, as is done for the other device types.
Fixes: infrastructure/apertis-issues#129
Fixes: infrastructure/apertis-issues#130
Signed-off-by: Detlev Casanova <detlev.casanova@collabora.com>
-
- Sep 16, 2022
-
-
Walter Lozano authored
Most test cases sort supported architectures by relevance instead of alphabetically, so apply the same principle in general to keep consistency.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
Walter Lozano authored
AUM tests for amd64 were never fully enabled due to the lack of hardware in LAVA. After adding support for the new UP Squared 6000 boards this has changed, so re-enable the tests on amd64 to make testing consistent across all the supported boards.
Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
-
- Sep 08, 2022
-
-
Apertis CI authored
Signed-off-by: Apertis CI <devel@lists.apertis.org>
-
- Sep 01, 2022
-
-
Ariel D'Alessandro authored
Commit 160e8737 ("lava: amd64-upsquared6000: Use v2022 1st stage image with firmware pkgs") set the UP Squared 6000 1st stage image to a daily build as it included the required firmware pkgs. As the daily image will get deleted soon, let's use a weekly one until the next v2022 release is out.
Signed-off-by: Ariel D'Alessandro <ariel.dalessandro@collabora.com>
-
- Aug 31, 2022
-
-
These changes are made to demonstrate to client repository owners how to use the new tool for best effect. The tests from this repository are no longer run as standard, and the base configuration cannot be run because the Apertis build ids are unset. Leaving the tests in an obsolete state seems actively harmful; the alternative is simply to remove the testing infrastructure entirely, but that makes it more difficult to find a fully worked example of how all the tools this repo provides fit together.

Up until now, the approach has been to run two commands back-to-back: first generate the tests (as YAML), then submit them to LAVA with lava-submit.py. Using the new generate-test-pipeline.py tool, we generate a pipeline which will run all the generated test files. We can then define a new trigger job for every existing submit job to execute the generated pipeline.
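A trigger job of the kind described could look like the following. This is a hedged sketch using GitLab's standard parent-child pipeline syntax; the job names, stages, and artifact filename are assumptions, not this repository's actual CI configuration.

```yaml
# Illustrative only: names and paths are assumptions.
generate-tests:
  stage: test
  script:
    - ./generate-test-pipeline.py > generated-pipeline.yml
  artifacts:
    paths:
      - generated-pipeline.yml

run-generated-tests:
  stage: deploy
  trigger:
    include:
      - artifact: generated-pipeline.yml
        job: generate-tests
    strategy: depend
```

The `strategy: depend` setting makes the parent pipeline's status reflect the generated child pipeline, so failures in the generated test jobs are still visible at the top level.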
-