From 7976200bc76b1c30da302d10d3c9febde4ff80e7 Mon Sep 17 00:00:00 2001
From: Martyn Welch <martyn.welch@collabora.com>
Date: Tue, 21 Jul 2020 17:04:12 +0100
Subject: [PATCH] Move submission checklist from designs.a.o to policies

We have a stub in guidelines for the contribution checklist. Move the
content over from designs.a.o into policies.

Add an alias for `old-designs` to record the location used on
designs.a.o, so that it can be used once we've dropped all the contents
of designs.a.o and made it a redirect to the website.

Flag the file to be generated as a PDF, since it's a document from
designs.a.o.

Signed-off-by: Martyn Welch <martyn.welch@collabora.com>
---
 content/guidelines/checklist.md            |  14 --
 content/policies/contribution_checklist.md | 265 +++++++++++++++++++++
 2 files changed, 265 insertions(+), 14 deletions(-)
 delete mode 100644 content/guidelines/checklist.md
 create mode 100644 content/policies/contribution_checklist.md

diff --git a/content/guidelines/checklist.md b/content/guidelines/checklist.md
deleted file mode 100644
index ca50b0cea..000000000
--- a/content/guidelines/checklist.md
+++ /dev/null
@@ -1,14 +0,0 @@
-+++
-date = "2019-10-25"
-weight = 100
-
-title = "Checklist"
-
-aliases = [
-    "/old-wiki/Guidelines/Checklist"
-]
-+++
-
-Please see the [Submission
-Checklist](https://designs.apertis.org/latest/contribution-checklist.html)
-on <https://designs.apertis.org>.
diff --git a/content/policies/contribution_checklist.md b/content/policies/contribution_checklist.md
new file mode 100644
index 000000000..b377be4f5
--- /dev/null
+++ b/content/policies/contribution_checklist.md
@@ -0,0 +1,265 @@
++++
+date = "2019-10-25"
+weight = 100
+
+title = "Submission Checklist"
+
+aliases = [
+    "/old-wiki/Guidelines/Checklist",
+    "/old-designs/latest/contribution-checklist.html",
+]
+
+outputs = ["html", "pdf-in"]
++++
+
+# Summary
+
+Before submitting a large commit, go through the checklist below to ensure it
+meets all the requirements. If submitting a large patchset, submit it in parts,
+and ensure that feedback from reviews of the first few parts of the patchset is
+applied to subsequent parts.
+
+# Overall principles
+
+In order to break sets of reviews down into chunks, there are a few key
+principles we need to stick to:
+* Review patches which are as small as possible, but no smaller (see
+  [here](https://developer.gnome.org/programming-guidelines/stable/version-control.html.en#guidelines-for-making-commits),
+  [here](https://crealytics.com/blog/2010/07/09/5-reasons-keeping-git-commits-small/),
+  and [here](http://who-t.blogspot.co.uk/2009/12/on-commit-messages.html))
+* Learn from each review so the same review comments do not need to be made
+  more than once
+* Use automated tools to eliminate many of the repetitive and time-consuming
+  parts of patch review and rework
+* Do high-level API review first, ideally before implementing those APIs, to
+  avoid unnecessary rework
+
+# Order of reviewing commits
+
+Before proposing a large patchset, work out what order the patches should be
+submitted in. If multiple new classes are being submitted, they should be
+submitted depth-first, as the APIs of the root classes affect the implementation
+of everything else.
+
+The high-level API of any major new feature should be submitted first, then –
+only once that high-level API has been reviewed – its implementation. There is
+no point in
+starting to review the implementation before the high-level API, as the
+high-level review could suggest some big changes which invalidate a lot of the
+implementation review.
+
+# Revisiting earlier commits
+
+Rather than trying to get everything comprehensive first time, we should aim to
+get everything correct and minimal first time. This is especially important for
+base classes. The commit which introduces a new base class should be fairly
+minimal, and subsequent commits can add functionality to it as that
+functionality becomes needed by new class implementations.
+
+The aim here is to reduce the amount of initial review needed on base classes,
+and to ensure that the non-core parts of the API are motivated by specific needs
+in subclasses, rather than being added speculatively.
+
+# Automated tests
+
+One of the checklist items requires checking the code coverage of the automated
+tests for a class. We are explicitly not requiring that the code coverage
+reaches some target value, as the appropriateness of this value would vary
+wildly between patches and classes. Instead, we require that the code coverage
+report (`lcov` output) is checked for each patch, and the developer thinks about
+whether it would be easy to add additional automated tests to increase the
+coverage for the code in that patch.
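+
+As a sketch of the kind of automated test involved, assuming a hypothetical
+`LightwoodRoller` class with a conventional `lightwood_roller_new()`
+constructor (the real API may differ), a minimal GTest construction test
+might look like this:
+
+```c
+#include <glib.h>
+#include <glib-object.h>
+#include "lightwood-roller.h"  /* hypothetical header for the class under test */
+
+static void
+test_roller_construction (void)
+{
+  LightwoodRoller *roller = lightwood_roller_new ();
+
+  /* Constructing the object and checking the result exercises the class's
+   * init and constructed code paths, which shows up in the coverage report. */
+  g_assert_nonnull (roller);
+  g_object_unref (roller);
+}
+
+int
+main (int argc, char *argv[])
+{
+  g_test_init (&argc, &argv, NULL);
+  g_test_add_func ("/roller/construction", test_roller_construction);
+  return g_test_run ();
+}
+```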
+
+# Pre-submission checklist
+
+(A rationale for each of these points is given in the section below to avoid
+cluttering this one.)
+
+Before submitting any patch, please make sure that it passes this checklist, to
+avoid the review getting hung up on avoidable issues:
+1. All new code follows the
+   [coding guidelines](https://designs.apertis.org/latest/coding_conventions.html),
+   especially the
+   [API design guidelines]( {{< ref "api_design.md" >}} ),
+   [namespacing guidelines](https://developer.gnome.org/programming-guidelines/unstable/namespacing.html.en),
+   [memory management guidelines](https://developer.gnome.org/programming-guidelines/unstable/memory-management.html.en),
+   [pre- and post-condition guidelines](https://developer.gnome.org/programming-guidelines/unstable/preconditions.html.en),
+   and
+   [introspection guidelines](https://developer.gnome.org/programming-guidelines/unstable/introspection.html.en)
+   — some key points from these are pulled out below, but these are not the
+   only points to pay attention to.
+1. All new public API must be
+   [namespaced correctly](https://developer.gnome.org/programming-guidelines/unstable/namespacing.html.en)
+   (the sketch after this list illustrates this point and the three that
+   follow).
+1. All new public API must have a complete and useful
+   [documentation comment]( {{< ref "api_documentation.md" >}} )
+   (ignore the build system comments on that page – we use hotdoc now – the
+   guidelines about the comments themselves are all still relevant).
+1. All new public API documentation comments must have
+   [GObject Introspection annotations](https://wiki.gnome.org/Projects/GObjectIntrospection/Annotations)
+   where appropriate; `g-ir-scanner` (part of the build process) must emit no
+   warnings when run with `--warn-all --warn-error` (which should be set by
+   `$(WARN_SCANNERFLAGS)` from `AX_COMPILER_FLAGS`).
+1. All new public methods must have
+   [pre- and post-conditions](https://developer.gnome.org/programming-guidelines/unstable/preconditions.html.en)
+   to enforce constraints on the accepted parameter values.
+1. The code must compile without warnings, after ensuring that
+   `AX_COMPILER_FLAGS` is used *and enabled* in `configure.ac` (if it is
+   correctly enabled, compiling liblightwood should fail if there are any
+   compiler warnings) — remember to add `$(WARN_CFLAGS)`, `$(WARN_LDFLAGS)`
+   and `$(WARN_SCANNERFLAGS)` to new `Makefile.am` targets as appropriate.
+1. The introductory documentation comment for each new class must give a usage
+   example for that class in each of the main modes it can be used (for
+   example, if done for the roller, there would be one example for fixed mode,
+   one for variable mode, one for linked rollers, one for each animation mode,
+   etc.).
+1. All new code must be formatted as per the
+   [coding guidelines](https://designs.apertis.org/latest/coding_conventions.html#code-formatting),
+   using
+   [`clang-format`](https://designs.apertis.org/latest/coding_conventions.html#reformatting-code)
+   not GNU `indent`.
+1. There must be an example program for each new class, which can be used to
+   manually test all of the class’s main modes of operation (for example, if
+   done for the roller, there would be one example for fixed mode, one for
+   variable mode, one for linked rollers, one for each animation mode, etc.) —
+   these examples may be submitted in a separate patch from the class
+   implementation, but must be submitted at the same time as the implementation
+   in order to allow review in parallel. Example programs must be usable when
+   installed or uninstalled, so they can be used during development and on
+   production machines.
+1. There must be automated tests (using the
+   [`GTest` framework in GLib](https://developer.gnome.org/glib/stable/glib-Testing.html))
+   for construction of each new class, and for getting and setting each of its
+   properties.
+1. The code coverage of the automated tests must be checked (using
+   `make check-code-coverage`; see D3673) before submission, and if it’s
+   possible to add more automated tests (and for them to be reliable) to
+   improve the coverage, this should be done; the final code coverage figure
+   for the class should be mentioned in a comment on the diff, and it would be
+   helpful to have the `lcov` reports for the class saved somewhere for
+   analysis as part of the review.
+1. There must be no definite memory leaks reported by Valgrind when running the
+   automated tests under it (using `AX_VALGRIND_CHECK` and
+   `make check-valgrind`; see D3673).
+1. All automated tests must be installed as
+   [installed-tests](https://wiki.gnome.org/Initiatives/GnomeGoals/InstalledTests)
+   and must be
+   [run when liblightwood is built into a package](https://git.apertis.org/cgit/rhosydd.git/tree/debian/tests/gnome-desktop-testing)
+   (we can help with the initial setup of this infrastructure if needed).
+1. `build-snapshot -Is $build_machine` must pass before submission of any patch
+   (where `$build_machine` is a machine running an up-to-date copy of Apertis,
+   which may be `localhost` — this is a standard usage of `build-snapshot`).
+1. All new code has been checked to ensure it doesn’t contradict review
+   comments from previous reviews of other classes (i.e. we want to avoid
+   making the same review comments on every submitted class).
+1. Commit messages must
+   [explain *why* they make the changes they do](http://chris.beams.io/posts/git-commit/).
+1. The dependency information between Phabricator diffs must be checked after
+   submitting them, to ensure the diffs apply in the correct order.
+1. All changes must be documented, including on the wiki and the appdev
+   portal.
+1. Grammar and spelling should be checked with an automated tool for typos and
+   mistakes, where appropriate.
+
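+As a concrete illustration of points 2–5 above (namespacing, documentation
+comments, introspection annotations, and pre-conditions), here is a sketch of
+what a new public method might look like. The `LightwoodRoller` and
+`LightwoodModel` symbols are invented for the example and are not real
+liblightwood API:
+
+```c
+#include "lightwood-roller.h"  /* hypothetical header declaring the types below */
+
+/**
+ * lightwood_roller_set_model:
+ * @self: a #LightwoodRoller
+ * @model: (nullable) (transfer none): the model to display, or %NULL to clear it
+ *
+ * Sets the model whose items the roller displays.
+ */
+void
+lightwood_roller_set_model (LightwoodRoller *self,
+                            LightwoodModel  *model)
+{
+  /* Pre-conditions catch programmer errors at the API boundary. */
+  g_return_if_fail (LIGHTWOOD_IS_ROLLER (self));
+  g_return_if_fail (model == NULL || LIGHTWOOD_IS_MODEL (model));
+
+  /* … implementation … */
+}
+```
+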
+# Rationales
+
+1. Each coding guideline has its own rationale for why it’s useful, and many of
+   them significantly affect the structure of a diff, so are important to get
+   right early on.
+1. Namespacing is important for the correct functioning of a lot of the
+   developer tools (for example, GObject Introspection), and to avoid symbol
+   collisions between libraries — checking it is a very mechanical process
+   which it is best to not have to spend review time on.
+1. Documentation comments are useful to both the reviewer and to end users of
+   the API — for the reviewer, they act as an explanation of why a particular
+   API is necessary, how it is meant to be used, and can provide insight into
+   implementation choices. These are questions which the reviewer would
+   otherwise have to ask in the review, so writing them up lucidly in a
+   documentation comment saves time in the long run.
+1. GObject Introspection annotations are a requirement for the platform’s
+   language bindings (to JavaScript, for example) to work, so must be added at
+   some point. Fixing the error messages from `g-ir-scanner` is sufficient to
+   ensure that the API can be introspected.
+1. Pre- and post-conditions are a form of assertion in the code, which check
+   for programmer errors at runtime. If they are used consistently throughout
+   the code on every API entry point, they can catch programmer errors much
+   nearer their origin than otherwise, which speeds up debugging both during
+   development of the library, and when end users are using the public APIs.
+   They also act as a loose form of documentation of what each API will allow
+   as its inputs and outputs, which helps review (see the comments about
+   documentation above).
+1. The set of compiler warnings enabled by `AX_COMPILER_FLAGS` has been chosen
+   to balance
+   [false positives against false negatives](https://en.wikipedia.org/wiki/Type_I_and_type_II_errors)
+   in detecting bugs in the code. Each compiler warning typically identifies a
+   single bug in the code which would otherwise have to be fixed later in the
+   life of the library — fixing bugs later is always more expensive in terms of
+   debugging time.
+1. Usage examples are another form of documentation (as discussed above), which
+   specifically make it clearer to a reviewer how a particular class is
+   intended to be used. In writing usage examples, the author of a patch can
+   often notice awkwardness in their API design, which can then be fixed before
+   review — this is faster than them being caught in review and sent back for
+   modification.
+1. Well formatted code is a lot easier to read and review than poorly formatted
+   code. It allows the reviewer to think about the function of the code they
+   are reviewing, rather than (for example) which function call a given
+   argument actually applies to, or which block of code a statement is actually
+   part of.
+1. Example programs are a loose form of testing, and also act as usage examples
+   and documentation for the class (see above). They provide an easy way for
+   the reviewer to run the class and (for example, if it is a widget) review
+   its visual appearance and interactivity, which is very hard to do by simply
+   looking at the code in a patch. Their biggest benefit will be when the class
+   is modified in future — the example programs can be used to test changes to
+   it and ensure that its behavior changes (or does not) as expected.
+   Availability of example programs which covered each of the modes of using
+   `LightwoodRoller` would have made it easier to test changes to the roller in
+   the last two releases, and discover that they broke some modes of operation
+   (like coupling two rollers).
+1. For each unit test for a piece of code, the behavior checked by that unit
+   test can be guaranteed to be unchanged across modifications to the code in
+   future. This prevents regressions (especially as the unit tests for Apertis
+   projects are set up to be run automatically on each commit by
+   @apertis-qa-bot, more frequently than in other projects). The value
+   of unit tests when initially implementing a class is in the way they guide
+   API design to be testable in the first place. It is often the case that an
+   API will be written without unit tests, and later someone will try to add
+   unit tests and find that the API is untestable; typically because it relies
+   on internal state which the test harness cannot affect. By that point, the
+   API is stable and cannot be changed to allow testing.
+1. Looking at code coverage reports is a good way to check that unit tests are
+   actually checking what they are expected to check about the code. Code
+   coverage provides a simple, coarse-grained metric of code quality — the
+   quality of untested code is unknown.
+1. Every memory leak is a bug, and hence needs to be fixed at some point.
+   Checking for memory leaks in a code review is a very mechanical,
+   time-consuming process. If memory leaks can be detected automatically, by
+   using `valgrind` on the unit tests, this reduces the amount of time needed
+   to catch them during review. This is an area where higher code coverage
+   provides immediate benefits. Another way to avoid leaks is to use
+   [`g_autoptr()`](https://developer.gnome.org/glib/stable/glib-Miscellaneous-Macros.html#g-autoptr)
+   to automatically free memory when leaving a control block — however, as this
+   is a completely new technique to learn, we are not mandating its use yet.
+   You may find it easier, though; there is a short sketch after this list.
+1. If all automated tests are run at package build time, they will be run by
+   @apertis-qa-bot for every patch submission; and can also be run as part of
+   the system-wide integration tests, to check that liblightwood behavior
+   doesn’t change when other system libraries (for example, Clutter or
+   libthornbury) are changed. This is one of the
+   [motivations behind installed-tests](https://wiki.gnome.org/Initiatives/GnomeGoals/InstalledTests#Issues_with_.22make_check.22).
+   This is a one-time setup needed for liblightwood, and once it’s set up, does
+   not need to be done for each commit.
+1. `build-snapshot` ensures that a Debian package can be built successfully
+   from the code, which also entails running all the unit tests, and checking
+   that examples compile. This is the canonical way to ensure that liblightwood
+   remains deliverable as a Debian package, which is important, as the
+   deliverable for Apertis is essentially a bunch of Debian packages.
+1. If each patch is updated to learn from the results of previous patch
+   reviews, the amount of time spent making and explaining repeated patch
+   review comments should be significantly reduced, which saves everyone’s
+   time.
+1. Commit messages are a form of documentation of the changes being made to a
+   project. They should explain the motivation behind the changes, and clarify
+   any design decisions which the author thinks the reviewer might question. If
+   a commit message is inadequate, the reviewer is going to ask questions in
+   the review which could have been avoided otherwise.
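+
+As a short sketch of the `g_autoptr()` style (available since GLib 2.44; the
+file path here is arbitrary):
+
+```c
+#include <glib.h>
+
+/* contents and error are freed automatically on every return path, so the
+ * early return cannot leak them. */
+static gboolean
+print_file_length (const gchar *path)
+{
+  g_autofree gchar *contents = NULL;
+  g_autoptr(GError) error = NULL;
+  gsize length = 0;
+
+  if (!g_file_get_contents (path, &contents, &length, &error))
+    {
+      g_warning ("Failed to load %s: %s", path, error->message);
+      return FALSE;
+    }
+
+  g_print ("%s is %" G_GSIZE_FORMAT " bytes long\n", path, length);
+  return TRUE;
+}
+
+int
+main (void)
+{
+  return print_file_length ("/etc/os-release") ? 0 : 1;
+}
+```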
-- 
GitLab