  1. Aug 11, 2023
  2. Jul 21, 2023
    • Update policy on sanity tests failures · d9b8a0cb
      Walter Lozano authored
      
      The sanity manual test is the first test run to confirm the health of the
      image. For this reason, the test itself states that no other manual test
      should run if there is a failure.
      
      This policy aims to avoid running tests on an image which is not healthy
      enough, since the results would not be valid. However, this policy is too
      strict: a failure in one area, like WiFi, blocks the rest of the tests.
      
      To provide a better trade-off, update the policy so that other manual
      tests are only blocked until the issue is triaged.
      
      Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
  3. Jul 19, 2023
  4. Jul 14, 2023
  5. Jul 12, 2023
  6. Jul 11, 2023
  7. Jul 03, 2023
    • Disable iptables nmap test · 006519c1
      Walter Lozano authored
      
      This test relies on a Debian LXC image being deployed on the DUT, which
      presents a problem in some scenarios: devices require access to the
      public Internet and QEMU requires LXC support.
      
      Disable this test until it is reimplemented to overcome the current
      limitations.
      
      Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
  8. Jun 30, 2023
  9. Jun 19, 2023
  10. Apr 13, 2023
  11. Mar 01, 2023
  12. Feb 27, 2023
  13. Feb 08, 2023
  14. Jan 25, 2023
    • Disable rollback tests for amd64 · 9de6e490
      Walter Lozano authored
      
      In commit 5bf3c4a9, tests for the amd64 board were re-enabled after adding
      support for the Up Squared 6000 board and having enough boards in LAVA.
      
      After this, it was seen that tests relying on bootcount fail and require
      a rework to align U-Boot and UEFI.
      
      In the meantime, to avoid creating noise and masking issues, disable the
      tests that rely on bootcount.
      
      Signed-off-by: Walter Lozano <walter.lozano@collabora.com>
  15. Jan 23, 2023
  16. Jan 20, 2023
  17. Dec 05, 2022
  18. Nov 29, 2022
  19. Nov 23, 2022
  20. Nov 14, 2022
  21. Nov 11, 2022
  22. Nov 10, 2022
  23. Oct 31, 2022
  24. Oct 24, 2022
  25. Oct 20, 2022
    • Revert "lava: Drop a bunch of spurious `name:` keys" · 79a2b0df
      Walter Lozano authored
      This reverts commit 9dc1fafa.
      
      Revert these changes since the QA Report App tries to use this metadata
      and currently raises the following exception:
      
          File "/app/testobjects.py", line 143, in lava_jobdata_extract_results
            t['group'] = testgroup['name']
          KeyError: 'name'
      
      This causes test results for v2023pre not to be processed.
      
      It is not clear whether this metadata is used; however, it is not
      currently possible to easily deploy a new version of the QA Report app,
      so the best approach is to revert this change.
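      
      For context, the metadata in question is the `name:` key carried by each
      test definition in the LAVA job templates, which the QA Report App reads
      as the test group name. A minimal sketch of such a block is shown below;
      the repository URL, path and timeout are illustrative assumptions, not
      values taken from the actual templates:
      
          # Illustrative LAVA test action; the `name:` key is the relevant part.
          - test:
              timeout:
                minutes: 15
              definitions:
                - repository: https://example.com/tests.git   # placeholder URL
                  from: git
                  path: test-cases/sanity-check.yaml          # placeholder path
                  name: sanity-check                          # read as testgroup['name']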
  26. Oct 19, 2022
    • Restore the previous dry run behaviour · 03262fe5
      Edmund Smith authored and Andre Moreira Magalhaes committed
      The way test submission works changed significantly when we switched
      to using the lava_runner. In that change set, I preserved the logic
      around when the submit tasks ran, such that they only ran when a valid
      LAVA configuration existed in the job variables, and when the pipeline
      was not for a merge request.
      
      In the repository preceding those changes, there was a valid LAVA
      configuration, but the test submission was given `--dry-run` so that
      we would run our templating, but stop short of submitting the jobs,
      even though we required a valid LAVA configuration to exist.
      
      In the repository after those changes, there is no valid LAVA
      configuration (since the required variables changed), and the test
      generation step no longer occurs on pipelines. Moreover, if we
      made the LAVA configuration valid for the state we have now, it would
      both generate and run the tests, because the generation and run steps
      have identical rules governing them in CI.
      
      Therefore, remove the checks for a valid LAVA configuration from the
      generation step. This means we do not need a valid LAVA configuration
      in order to get the same behaviour we had before: generate the tests,
      but do not submit them.
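      
      As a rough sketch of the intended rules (job names, stage names and the
      LAVA variable below are assumptions, not the repository's actual
      identifiers), generation now runs on any non-merge-request pipeline
      without checking the LAVA configuration, while submission keeps the
      stricter condition:
      
          # Sketch only: names and variables are illustrative.
          generate-tests:
            stage: generate
            rules:
              # Templating runs on every non-merge-request pipeline; a valid
              # LAVA configuration is no longer required at this step.
              - if: '$CI_PIPELINE_SOURCE != "merge_request_event"'
            script:
              - ./generate-test-pipeline.py   # arguments elided
      
          submit-tests:
            stage: submit
            rules:
              # Submission still requires a valid LAVA configuration.
              - if: '$CI_PIPELINE_SOURCE != "merge_request_event" && $LAVA_TOKEN'
            script:
              - echo "submit the generated jobs here"   # placeholder for the real submit step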
  27. Oct 13, 2022
  28. Sep 26, 2022
  29. Sep 23, 2022
  30. Sep 16, 2022
  31. Sep 08, 2022
  32. Sep 01, 2022
  33. Aug 31, 2022
    • Actually move file to reflect new naming · f4c00ce8
      Edmund Smith authored and Sjoerd Simons committed
    • Rename stages to match new behaviour · 6461c6f3
      Edmund Smith authored and Sjoerd Simons committed
    • Use the new generate-test-pipeline tool · 76de4e09
      Edmund Smith authored and Sjoerd Simons committed
      These changes are made to demonstrate to client repository owners how
      to use the new tool for best effect. The tests from this repository
      are no longer run as standard, and the base configuration cannot be run
      because the Apertis build ids are unset. Leaving the tests in an
      obsolete state seems actively harmful; the alternative is simply to
      remove the testing infrastructure entirely, but that makes it more
      difficult to find a fully worked example of how all the tools this
      repo provides fit together.
      
      Up until now, the approach has been to run two commands back-to-back:
      first generate the tests (as YAML), then submit them to LAVA with
      lava-submit.py. Using the new generate-test-pipeline.py tool, we
      generate a pipeline which will run all the generated test files. We
      can then define a new trigger job for every existing submit job to
      execute the generated pipeline.
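      
      A minimal sketch of that pattern follows; the job, stage and artifact
      names are hypothetical, not the repository's actual configuration:
      
          # Sketch only: job, stage and file names are assumptions.
          generate-test-pipeline:
            stage: generate
            script:
              - ./generate-test-pipeline.py   # arguments elided; writes the child pipeline YAML
            artifacts:
              paths:
                - generated-pipeline.yaml
      
          run-test-pipeline:
            stage: test
            trigger:
              include:
                - artifact: generated-pipeline.yaml
                  job: generate-test-pipeline
              strategy: depend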