Get a single TestResult for each testcase

Emanuele Aina requested to merge wip/em/single-result-for-testcase into master

LAVA provides results at the single test command level and does not provide any easy way to get results at the testcase level.

This often results in duplicated entries being reported in Phabricator, for instance in cases like the tasks below:

Fortunately the information needed is available in the data passed to the webhook, so it's a matter of reworking the underlying data model until it matches our expectations.

The original data is flattened to JSON by converting the YAML sections, and some useful metadata is extracted from the job definition. The bulk of the work happens in the code that selects the few entries from the 'lava' test suite that approximately match our concept of a "testcase" and then goes over the other test suites to collect the actual test results, aggregating them at the testcase level, as sketched below.
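As a rough illustration, the selection and grouping logic could look like the following sketch. The field names (metadata, path, repository, commit) and the suite-matching rule are assumptions inferred from the sample data below, not the exact implementation:

import yaml


def extract_testcases(job_data):
    # Assumption: the webhook payload carries the test suites as
    # YAML-encoded blobs that parse into lists of result entries.
    suites = {name: yaml.safe_load(blob)
              for name, blob in job_data["results"].items()}

    # Select the few entries from the 'lava' suite that approximately
    # match our concept of a "testcase": here, the ones whose metadata
    # points at a test definition in a repository.
    testcases = []
    for entry in suites.get("lava", []):
        metadata = entry.get("metadata", {})
        if "path" not in metadata:
            continue
        testcases.append({
            "id": entry["id"],
            "name": entry["name"],
            "path": metadata["path"],
            "repository": metadata.get("repository"),
            "commit_id": metadata.get("commit"),
            "results": [],
        })

    # Go over the other suites and attach their per-command results to
    # the matching testcase: in the sample data the testcase name is
    # the suite name with a numeric prefix (e.g. "0_sanity-check").
    for suite, entries in suites.items():
        if suite == "lava":
            continue
        for testcase in testcases:
            if testcase["name"].endswith("_" + suite):
                testcase["results"].extend(entries)

    return testcases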

The resulting data structure is something like this:

{
    "description": "AppArmor common tests on 19.03 Minnowboard turbot using target OStree image 20190125.0",
    "health_string": "complete",
    "id": 1450933,
    "image.arch": "amd64",
    "image.board": "uefi",
    "image.deployment": "apt",
    "image.release": "19.03",
    "image.type": "target",
    "image.url": "https://images.apertis.org/daily/19.03/20190125.0/amd64/target/apertis_ostree_19.03-target-amd64-uefi_20190125.0.img.gz",
    "image.version": "20190125.0",
    "image_deployment": "ostree",
    [...]
    "testcases": [
        {
            "id": "21739461",
            "commit_id": "458e56df1cca79da1bc2f52da94a4a284f91b693",
            "suite": "sanity-check",
            "name": "0_sanity-check",
            "path": "test-cases/sanity-check.yaml",
            "repository": "https://gitlab.apertis.org/tests/apertis-test-cases.git",
            "result": "pass",
            "results": [
                {
                    "id": "21739460",
                    "name": "user-id",
                    "result": "pass",
                    "url": "/results/testcase/21739460"
                },
                {
                    "id": "21739459",
                    "name": "system-id",
                    "result": "pass",
                    "url": "/results/testcase/21739459"
                },
    [...]
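The per-testcase result field aggregates the individual results. A plausible worst-of rule (an assumption based on the sample data, not necessarily the exact one used) would be:

def aggregate_result(results):
    # Worst-of aggregation: the testcase passes only if every
    # individual test command passed.
    return "pass" if all(r["result"] == "pass" for r in results) else "fail"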

To check the details of the full data model you can launch ./testobjects.py against one of the test job data files and look at the returned JSON:

python3 ./testobjects.py testdata/job_data_1450933

The downside of tracking results at this coarser granularity is that if more than one test is failing in a single testcase only one Phabricator task gets reported, and it's up to the developers to create the appropriate subtasks, whereas in an ideal situation the current per-command code is able to track each issue separately. That said, our reports already use the testcase granularity, so it's better to focus on that and avoid spurious duplicates.
