Mar 03, 2016

- Differential Revision: https://phabricator.apertis.org/D791
  Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>

- This means we can do them without python3 installed, such as on target images.
  Bug: https://bugs.apertis.org/show_bug.cgi?id=513
  Differential Revision: https://phabricator.apertis.org/D494
  Reviewed-by: pwith

- Differential Revision: https://phabricator.apertis.org/D493
  Reviewed-by: pwith

- Previously, we used the installed copy in chaiwala-tests, but we don't actually need that: a source directory is fine. This means we can drop the dependency on chaiwala-tests, which simplifies deployment of a new version of the test on LAVA. We can also drop the dependency on busybox, which we haven't used since moving to run-test-in-systemd.
  Differential Revision: https://phabricator.apertis.org/D492
  Reviewed-by: pwith

- Differential Revision: https://phabricator.apertis.org/D411
  Reviewed-by: xclaesse

- In many tests, we run a scenario twice, once with a fake "malicious" LD_PRELOAD and once without. Prefix the tests so we get:

      normal_test1: pass
      normal_test2: pass
      normal.expected_underlying_tests: pass
      normal.expected: pass
      malicious_test1: fail
      malicious_test2: pass
      malicious.expected_underlying_tests: fail
      malicious.expected: pass

  instead of having "duplicate" results for the underlying tests:

      test1: pass
      test2: pass
      normal.expected_underlying_tests: pass
      normal.expected: pass
      test1: fail
      test2: pass
      malicious.expected_underlying_tests: fail
      malicious.expected: pass

  Differential Revision: https://phabricator.apertis.org/D282
  Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
  Reviewed-by: xclaesse
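
  For illustration, a minimal Python sketch of this kind of result prefixing (the prefix_results helper and the "name: result" line format are assumptions for this sketch, not the project's actual harness):

      # Hypothetical sketch: prefix each "name: result" line emitted by an
      # underlying test run, so results from the "normal" and "malicious"
      # runs stay distinguishable in the combined log.
      def prefix_results(prefix, lines):
          for line in lines:
              name, sep, result = line.partition(": ")
              if sep and result in ("pass", "fail", "skip"):
                  yield "%s_%s: %s" % (prefix, name, result)
              else:
                  yield line  # pass non-result output through unchanged

      # Two runs of the same underlying tests, disambiguated by prefix:
      for out in prefix_results("normal", ["test1: pass", "test2: pass"]):
          print(out)  # normal_test1: pass / normal_test2: pass
      for out in prefix_results("malicious", ["test1: fail", "test2: pass"]):
          print(out)  # malicious_test1: fail / malicious_test2: pass
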
- Tests that cannot be debugged considered harmful.
  Differential Revision: https://phabricator.apertis.org/D280
  Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
  Reviewed-by: araujo

- Differential Revision: https://phabricator.apertis.org/D278
  Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
  Reviewed-by: xclaesse

- Before this change, if "normal" failed due to a bug in the underlying tests and "malicious" failed due to an unmet expectation, the machine-readable parts of our log would be:

      normal.expected: fail
      malicious.expected: fail

  and discovering the reasons would require reading logs. Now, we would log that situation as:

      normal.expected_underlying_tests: fail
      normal.expected: pass
      malicious.expected_underlying_tests: pass
      malicious.expected: fail

  and an appropriate developer can investigate in the right places; in this case, the "normal" failure would require someone who knows about whatever is under test, for example Tracker, while the "malicious" failure would require someone who knows about AppArmor.
  Differential Revision: https://phabricator.apertis.org/D277
  Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
  Reviewed-by: xclaesse
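
  A minimal Python sketch of the two-result reporting this describes (the report helper and its parameters are hypothetical, not the actual test code):

      # Hypothetical sketch: emit two machine-readable results per run,
      # one for the underlying test suite's own outcome and one for the
      # run's security expectation, so each failure points at the right
      # component to investigate.
      def report(run_name, underlying_tests_ok, expectation_met):
          print("%s.expected_underlying_tests: %s"
                % (run_name, "pass" if underlying_tests_ok else "fail"))
          print("%s.expected: %s"
                % (run_name, "pass" if expectation_met else "fail"))

      # The situation described above: a bug in the underlying tests during
      # the "normal" run, and an unmet expectation in the "malicious" run.
      report("normal", underlying_tests_ok=False, expectation_met=True)
      report("malicious", underlying_tests_ok=True, expectation_met=False)
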
- In nearly all cases, the underlying test is itself designed as a LAVA test with (at least partially) structured output. We want to capture and parse that output, so that the LAVA logs correctly blame one specific underlying test for failures if necessary.
  Differential Revision: https://phabricator.apertis.org/D276
  Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
  Reviewed-by: araujo
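
  A minimal Python sketch of capturing and parsing such output, assuming the "name: result" line format shown in the entries above (an illustration only, not the actual LAVA parser):

      # Hypothetical sketch: run an underlying test and pull structured
      # "name: result" lines out of its mixed stdout, so a failure can be
      # blamed on one specific underlying test case.
      import re
      import subprocess

      RESULT_RE = re.compile(r"^(?P<name>[A-Za-z0-9_.-]+): (?P<result>pass|fail|skip)$")

      def run_and_parse(argv):
          output = subprocess.run(argv, capture_output=True, text=True).stdout
          results = {}
          for line in output.splitlines():
              match = RESULT_RE.match(line.strip())
              if match:
                  results[match.group("name")] = match.group("result")
          return results

      # e.g. run_and_parse(["./underlying-test"]) might return
      # {"test1": "fail", "test2": "pass"}
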
- Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
  Differential Revision: https://phabricator.apertis.org/D275
  Reviewed-by: xclaesse