- Jul 21, 2016
Simon McVittie authored
The Tracker services are started on-demand, while Xorg might not be installed. Also continue testing if one of these assertions fails: just log it as "not OK".
Reviewed-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3771
-
Simon McVittie authored
We're effectively asserting that all these processes are running, so we should look at whether they, in fact, *are* running, and if not, why not.
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3639
-
Simon McVittie authored
If we fail to read the AppArmor profile or other required information due to a time-of-check/time-of-use difference (the process exits), then that's fine. Otherwise, it's a problem and we should fail, although we might as well continue testing and get more complete results.
Reviewed-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3770
-
Simon McVittie authored
Reviewed-by: Mathieu Duponchelle <mathieu.duponchelle@opencreed.com>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3638
-
Simon McVittie authored
In practice, the regex should always match: AppArmor "confinement strings" appear to always contain a label and mode, except in the special case "unconfined". However, if this is untrue for whatever reason, we should log it as an error, not carry on blindly.
Reviewed-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3637
-
- Jul 08, 2016
Sjoerd Simons authored
Our test should use the session environment as set up by the system (in LAVA, specifically by run-in-systemd). So remove the hardcoding of DISPLAY (deprecated on Wayland targets anyway), XDG_RUNTIME_DIR (set up by the environment in all cases) and DBUS_SESSION_BUS_ADDRESS (which should be inferred from XDG_RUNTIME_DIR by all supported D-Bus libraries).
Reviewed-by: Luis Araujo <luis.araujo@collabora.co.uk>
Signed-off-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3664
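The inference the commit above relies on can be sketched as follows; the helper name and the uid are illustrative assumptions, not part of the commit:

```shell
# Hedged sketch: when DBUS_SESSION_BUS_ADDRESS is unset, modern D-Bus
# libraries fall back to the per-user bus socket under XDG_RUNTIME_DIR,
# so exporting only XDG_RUNTIME_DIR is enough.
session_bus_address() {
    # $1: value of XDG_RUNTIME_DIR
    echo "unix:path=$1/bus"
}

session_bus_address /run/user/1000   # -> unix:path=/run/user/1000/bus
```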
-
- Jul 07, 2016
Simon McVittie authored
The more files we use from the $srcdir, the more likely it is that test fixes can be deployed to LAVA without waiting for a package rebuild.
Reviewed-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3634
-
Simon McVittie authored
This makes it easier to use uninstalled, with `make -C apparmor/libreoffice apparmor/libreoffice/libreoffice normal`.
Reviewed-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3633
-
Simon McVittie authored
Tests that suppress debug output are user-hostile, and we should stop doing that.
Reviewed-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3636
-
- Jun 24, 2016
Simon McVittie authored
This test is run from the source tree, so there's no guarantee that this query will return anything.
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3487
-
Simon McVittie authored
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3480
-
Simon McVittie authored
This typically makes logs from automated tests more useful.
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3479
-
Simon McVittie authored
Debian Policy §10.4 says "Every script should use set -e or check the exit status of every command", which is just generally good advice.
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3478
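Both options the Policy quote allows can be illustrated in a few lines; the script and helper name here are hypothetical, not from the commit:

```shell
#!/bin/sh
# With `set -e`, any command that exits nonzero aborts the script,
# so failures cannot pass silently.
set -e

# The alternative the Policy allows: check each exit status explicitly.
run_checked() {
    "$@" || { echo "command failed: $*" >&2; exit 1; }
}

run_checked true
echo "ok"
```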
-
Simon McVittie authored
We need the compiled LD_PRELOAD hack from the installed tree, but the rest can come from the source, allowing for quicker test/fix cycles.
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3477
-
- Jun 22, 2016
Simon McVittie authored
We use systemd-run to schedule the pactl process to be run under a vaguely realistic user session. However, there's a chicken-and-egg problem here: systemd-run uses either D-Bus or a private socket in XDG_RUNTIME_DIR to communicate with systemd, and without setting some environment variables we can't know either of those. This is similar to the implementation of the same concept in common/run-test-in-systemd. Unfortunately, the AppArmor tests need to reinvent that bit, because they run as root (to be able to manipulate AppArmor, which is a highly privileged action).
Bug-Apertis: https://phabricator.apertis.org/T1859
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3449
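A hedged sketch of the kind of bootstrapping that commit describes; the uid and the commented-out systemd-run invocation are assumptions for illustration, not the commit's actual code:

```shell
# Running as root, reconstruct the target user's session environment so
# that systemd-run (or a D-Bus library) can locate the user's systemd
# instance and session bus without inheriting a user login environment.
uid=1000                                  # assumed uid of the test user
export XDG_RUNTIME_DIR="/run/user/$uid"
export DBUS_SESSION_BUS_ADDRESS="unix:path=$XDG_RUNTIME_DIR/bus"
echo "$DBUS_SESSION_BUS_ADDRESS"
# systemd-run --user pactl stat           # the real invocation would follow
```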
-
Simon McVittie authored
The list of profiles and processes isn't all that long. If we're going to make assertions about this information, we should probably show it first.
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3448
-
Simon McVittie authored
We were running commands like "pactl stat" and then ignoring their nonzero exit status. I've included support for ignoring failures, but in fact we never actually run anything in this test that can legitimately fail, so it's unused.
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D3447
-
- Mar 17, 2016
Philip Withnall authored
The regexp is not bound to either end of the process name, so even though the test script changed its effective process name to ‘ofonod_’, the regexp ‘ofonod’ still matched it. The script was therefore killing itself, which caused the systemd unit it was running as to fail, and hence the overall test to fail. Tighten the pkill regexp to match at the end of the process name to avoid this.
Bug-Apertis: https://bugs.apertis.org/show_bug.cgi?id=681
Reviewed-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Signed-off-by: Philip Withnall <philip.withnall@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D2284
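The anchoring fix can be demonstrated with plain grep over candidate process names; the helper below is hypothetical, but pkill applies the same extended-regex matching to the names it inspects:

```shell
# Unanchored 'ofonod' also matches the renamed test script 'ofonod_';
# anchoring with '$' matches only names that end in 'ofonod'.
name_matches() {
    # $1: pattern, $2: candidate process name
    printf '%s\n' "$2" | grep -Eq "$1"
}

name_matches 'ofonod'  'ofonod_' && echo "unanchored pattern matches ofonod_"
name_matches 'ofonod$' 'ofonod_' || echo "anchored pattern spares ofonod_"
name_matches 'ofonod$' 'ofonod'  && echo "anchored pattern still matches ofonod"
```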
-
Philip Withnall authored
If an AppArmor malicious test is run as a systemd system job (using `run-test-in-systemd --system`), $HOME will explicitly not be set, which results in the program trying to read (null)/.bash_history rather than the expected /home/user/.bash_history. Fix that by hard-coding it to use /home/user/.bash_history if $HOME is not set. If the username changes in future, the tests should start failing, which will allow us to update it again.
Bug-Apertis: https://bugs.apertis.org/show_bug.cgi?id=681
Reviewed-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Signed-off-by: Philip Withnall <philip.withnall@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D2283
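A minimal sketch of that fallback, in shell form; the helper name is hypothetical (the actual change lives in the test program), and this version also treats an empty $HOME as unset:

```shell
# Build the history path, falling back to the fixed test user's home
# directory when $HOME is unset or empty (as under
# `run-test-in-systemd --system`).
history_path() {
    echo "${HOME:-/home/user}/.bash_history"
}

unset HOME
history_path   # -> /home/user/.bash_history
```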
-
Philip Withnall authored
This matches the command used in ofono.service (minus the --nodetach, because we want the process to detach so we can do other things in the test). This eliminates some spurious AppArmor failures caused by the RIL code, and allows ofono to work (the RIL plugin does not work on the i.MX6).
Bug-Apertis: https://bugs.apertis.org/show_bug.cgi?id=681
Reviewed-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Signed-off-by: Philip Withnall <philip.withnall@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D2278
-
- Mar 03, 2016
mc-tool (among other things) wasn't installed on the LAVA instance. This package is provided by telepathy-mission-control-5, which is correctly listed here: https://wiki.apertis.org/QA/Test_Cases/apparmor-folks As such, this commit adds it, along with other dependencies listed on the wiki but absent from the YAML file.
Reviewed-by: Luis Araujo <luis.araujo@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D2132
-
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D2117
-
See the previous commit.
Bug: https://bugs.apertis.org/show_bug.cgi?id=602
Reviewed-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Signed-off-by: Philip Withnall <philip.withnall@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D1170
-
Differential Revision: https://phabricator.apertis.org/D791
Reviewed-by: Philip Withnall <philip.withnall@collabora.co.uk>
-
This means we can do them without python3 installed, such as on target images.
Bug: https://bugs.apertis.org/show_bug.cgi?id=513
Differential Revision: https://phabricator.apertis.org/D494
Reviewed-by: pwith
-
Differential Revision: https://phabricator.apertis.org/D493
Reviewed-by: pwith
-
Previously, we used the installed copy in chaiwala-tests; but we don't actually need that, a source directory is fine. This means we can drop the dependency on chaiwala-tests, which simplifies deployment of a new version of the test on LAVA. We can also drop the dependency on busybox, which we haven't used since moving to run-test-in-systemd.
Differential Revision: https://phabricator.apertis.org/D492
Reviewed-by: pwith
-
Differential Revision: https://phabricator.apertis.org/D411
Reviewed-by: xclaesse
-
In many tests, we run a scenario twice: once with a fake "malicious" LD_PRELOAD and once without. Prefix the tests so we get

    normal_test1: pass
    normal_test2: pass
    normal.expected_underlying_tests: pass
    normal.expected: pass
    malicious_test1: fail
    malicious_test2: pass
    malicious.expected_underlying_tests: fail
    malicious.expected: pass

instead of having "duplicate" results for the underlying tests:

    test1: pass
    test2: pass
    normal.expected_underlying_tests: pass
    normal.expected: pass
    test1: fail
    test2: pass
    malicious.expected_underlying_tests: fail
    malicious.expected: pass

Differential Revision: https://phabricator.apertis.org/D282
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Reviewed-by: xclaesse
-
Tests that cannot be debugged considered harmful.
Differential Revision: https://phabricator.apertis.org/D280
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Reviewed-by: araujo
-
Differential Revision: https://phabricator.apertis.org/D278
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Reviewed-by: xclaesse
-
Before this change, if "normal" failed due to a bug in the underlying tests and "malicious" failed due to an unmet expectation, the machine-readable parts of our log would be

    normal.expected: fail
    malicious.expected: fail

and discovering the reasons would require reading logs. Now, we would log that situation as:

    normal.expected_underlying_tests: fail
    normal.expected: pass
    malicious.expected_underlying_tests: pass
    malicious.expected: fail

and an appropriate developer can investigate in the right places; in this case, the "normal" failure would require someone who knows about whatever is under test, for example Tracker, while the "malicious" failure would require someone who knows about AppArmor.
Differential Revision: https://phabricator.apertis.org/D277
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Reviewed-by: xclaesse
-
In nearly all cases, the underlying test is itself designed as a LAVA test with (at least partially) structured output. We want to capture and parse that output, so that the LAVA logs correctly blame one specific underlying test for failures if necessary.
Differential Revision: https://phabricator.apertis.org/D276
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Reviewed-by: araujo
-
Signed-off-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
Differential Revision: https://phabricator.apertis.org/D275
Reviewed-by: xclaesse
-
Sjoerd Simons authored
-