diff --git a/content/designs/test-data-reporting.md b/content/designs/test-data-reporting.md
deleted file mode 100644
index 29a02c307c0e36e29189fbd23eba6a9cbeb2fef1..0000000000000000000000000000000000000000
--- a/content/designs/test-data-reporting.md
+++ /dev/null
@@ -1,408 +0,0 @@
-+++
-title = "Test Data Reporting"
-short-description = "Describe test data reporting and visualization."
-weight = 100
-aliases = [
-	"/old-designs/latest/test-data-reporting.html",
-	"/old-designs/v2019/test-data-reporting.html",
-	"/old-designs/v2020/test-data-reporting.html",
-	"/old-designs/v2021dev3/test-data-reporting.html",
-]
-outputs = [ "html", "pdf-in",]
-date = "2019-09-27"
-+++
-
-# Background
-
-Testing is a fundamental part of the project, but it is of limited use unless
-it is accompanied by an accurate and convenient model for reporting the results
-of that testing.
-
-Receiving timely notifications about critical issues, easily checking test
-results, and analyzing test trends across different image versions are some
-examples of test reporting that help to keep a project in a good state through
-its different phases of development.
-
-The goal of this document is to define a model for timely and accurate reporting
-of test results and issues in the project. This model should be adapted to the
-project's needs and requirements, and include support for convenient visualization
-of the test data and reports.
-
-The solution proposed in this document should fit the mechanisms available to
-process the test data in the project.
-
-# Current Issues
-
-  - Test reports are created manually and stored in the wiki.
-  - There is no convenient way to analyze test data and check test logs.
-  - There is no proper notification system for critical issues.
-  - There is no mechanism to generate statistics from the test data.
-  - There is no way to visualize test data trends.
-
-# Solution
-
-A system or mechanism with a well defined workflow must be implemented,
-fulfilling the project requirements for test reporting and visualization of all
-the test data.
-
-The solution will mainly involve designing and implementing a web application
-dashboard for visualization of test results and test cases, and a notification
-mechanism for tests issues.
-
-# Test Cases
-
-Test cases will be available from the Git repository in YAML format as explained
-in the document about [test data storage and processing][TestDataStorage].
-
-The GitLab web UI will be used to read test cases rendered in HTML format.
-
-A link to the test case page in GitLab will be added to the test results metadata
-to easily find the exact test case instructions that were used to execute the
-tests. This link will be shown from the web application dashboard and the SQUAD
-UI for convenient access of the test case for each test result.
-
-As originally proposed in the [test data storage][TestDataStorage] document, the
-test case file will be the canonical specification for the test instructions, and
-it will be executed both by the automated tests and during manual test execution.
-
-# Test Reports
-
-Test results will be available from the SQUAD backend in JSON format as explained
-in the document about [test data storage and processing][TestDataStorage].
-
-The proposal for reporting test results involves two solutions, a web application
-dashboard and a notification mechanism. Both of them will use the SQUAD API to
-access the test data.
-
-# Web Application Dashboard
-
-A web application dashboard must be developed to view test results and generate
-reports from them. This dashboard will serve as the central place for test data
-visualization and report generation for the whole project.
-
-The web application dashboard will be running as an HTTP web service, and it can
-be accessed using a web browser. Details about the specific framework and platform
-will be defined during implementation.
-
-This application should, at a minimum, allow users to do the following:
-
-  - Filter and view test results by priority, test categories, image types,
-    architecture and test type (manual or automated).
-  - Link test results to the specific test cases.
-  - Generate graphics to analyze test data trends.
-
-The web application won't process test results in any way, nor manipulate or
-change the test data in the storage backend. Its only purpose is to generate
-reports and visual statistics for the test data, so it only has a one-way
-communication channel with the data storage backend in order to fetch the test data.
-
-The application may also be progressively extended to export data in different
-formats such as spreadsheets and PDFs.
-
-This dashboard will serve as a complement to the SQUAD web UI, which is more
-suitable for developers.
-
-## Components
-
-The web application will consist of at least the following functional modules:
-
-  - Results Fetcher
-  - Filter and Search Engine
-  - Results Renderer
-  - Test Report Generator
-  - Graphics Generator
-  - Application API (Optional)
-  - Format Exporters (Optional)
-
-Each of these components or modules can be an independent tool or part of a
-single web development framework. Proper research into the most suitable model
-and framework should be done during implementation.
-
-Apart from these components, new ones might be added during implementation to
-support the above components and any other functionality required by the web
-application dashboard (for example, HTML and data rendering, allowing privileged
-operations if needed, and so on).
-
-This section will give an overview of each of the above listed components.
-
-### Results Fetcher
-
-This component will take care of fetching the test data from the storage backend.
-
-As explained in the [test data storage document][TestDataStorage], the data storage
-backend is SQUAD, so this component can use the SQUAD API to fetch the required
-test results.
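-
-As an illustration, such a fetcher could be a thin wrapper around the SQUAD
-HTTP API. The following is a minimal sketch in Python using the `requests`
-module, assuming the `tests_file` endpoint described in the
-[test data storage document][TestDataStorage] and a placeholder test run ID:
-
-```
-# Minimal sketch of a results fetcher; the SQUAD instance URL and the test run
-# ID are placeholders.
-import requests
-
-SQUAD_URL = "https://squad.apertis.org"
-
-def fetch_test_results(testrun_id):
-    # The tests_file endpoint returns the PASS/FAIL results of a test run.
-    response = requests.get(f"{SQUAD_URL}/api/testruns/{testrun_id}/tests_file/")
-    response.raise_for_status()
-    return response.json()
-```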
-
-### Filter and Search Engine
-
-This involves all the filtering and searching capabilities for test data and it can
-be implemented either using existing web application modules or extending those
-to suit the dashboard needs.
-
-This engine will only search and filter test results data and won't manipulate that
-data in any other way.
-
-### Results Renderer
-
-This component will take care of showing the test results visualization. It is
-basically the HTML renderer for the test data, with all the required elements
-for the web pages design.
-
-### Test Report Generator
-
-This includes all the functions to generate all kinds of test reports. It can also
-be split into several modules (for example, one for each type of report), and
-it should ideally offer a command line API that can be used to trigger and fetch
-test reports remotely.
-
-### Graphics Generator
-
-It comprises all the web application modules to generate graphics, charts and any
-other visual statistics, including the history view. In the same way as other
-components, it can be composed of several smaller components.
-
-### Application API
-
-Optionally, the web application can also expose an API that can be used to
-trigger certain actions remotely; generation and fetching of test reports and
-test data exporting are some of the possible features for this API.
-
-### Format Exporters
-
-This should initially be considered an optional module which will include support
-for exporting the test data into different formats, for example, PDF and spreadsheets.
-
-It can also offer a convenient API to trigger this kind of format generation
-remotely using command line tools.
-
-## History View
-
-The web application should offer a compact historical overview of all the test
-results over specific periods of time, so that important trends in the results
-can be distinguished at a glance.
-
-This history view will also be able to show results from arbitrarily chosen dates,
-so that it is possible to generate views comparing test data between
-different image cycles (daily, weekly or release images).
-
-This view should be a graphical visualization that can be generated periodically or
-at any time as needed from the web application dashboard.
-
-In a single compact view, at least the following information should be available:
-
-  - The names of all tests executed for each image.
-  - List of image versions.
-  - Platforms and image types.
-  - Number of failed, passed and total tests.
-  - A graphic showing the trend of test results across the different image
-    versions.
-
-### Graphical Mockup
-
-The following is an example of how the history view might look for test
-results:
-
-![](/images/tests_history_view.svg)
-
-## Weekly Test Report
-
-This report should be generated using the web application dashboard described in
-the previous section.
-
-The dashboard should allow generating this report weekly or at any time as needed,
-and it should offer both a web UI and a command line interface to generate the
-report.
-
-The report should contain at least the following data:
-
-  - List of images used to run the tests.
-  - List of tests executed, ordered by priority, image type, architecture and
-    category.
-  - Test results in the form: PASS, FAIL, SKIP.
-  - Image version.
-  - Date of test execution.
-
-The report could also include the historical view as explained in the
-[history view]( {{< ref "#history-view" >}} ) section, and allow exporting to all formats supported by the web
-application dashboard.
-
-## Application Layout and Behaviour
-
-The web application dashboard will only show test results and generate test
-reports.
-
-The web application will fetch the test data from SQUAD directly to generate all
-the relevant web pages, graphics and test reports once it is launched. Therefore,
-the web application won't store any test data, and all visual information will be
-generated at runtime.
-
-For the main layout, the application will show on the main page the history
-view for the last 5~10 image versions, as this gives a quick overview of the
-current status of tests for the latest images at a glance.
-
-Along with the history view on the main page, a list of links to the latest test
-reports will also be shown. These links can point to previously saved searches, or
-they can simply be convenient links to generate test reports for past image
-versions.
-
-The page should also show the relevant options for filtering and searching test
-results as explained in the [web application dashboard section]( {{< ref "#web-application-dashboard" >}} ).
-
-In summary, the minimal required layout of the main page for the web application
-dashboard will be the history view, a list of recent test reports, and the searching
-and filtering options.
-
-# Notifications
-
-A notification system must be set up, at least for critical and high priority
-test failures.
-
-This system could send emails to a mailing list and messages to the Mattermost
-chat system for greater and more timely visibility.
-
-This system will work as proposed in the [closing ci loop document][ClosingCiLoop].
-It will be a Jenkins phase that receives the previously analyzed automated test
-results and determines the critical test failures in order to send the
-notifications.
-
-For manual test results, the Jenkins phase could be manually or periodically
-triggered once all the test results are stored in the SQUAD backend.
-
-## Format
-
-The notification message should at least contain the following information:
-
-  - Test name.
-  - Test result (FAIL).
-  - Test priority.
-  - Image type and architecture.
-  - Image version.
-  - Link to the logs (if any).
-  - Link to attachments (if any).
-  - Date and time of test execution.
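-
-As an illustration, the notification phase could assemble the message from these
-fields before sending it to email or Mattermost. The following is a minimal
-Python sketch; the dictionary keys are hypothetical and only indicate the fields
-listed above:
-
-```
-# Minimal sketch of assembling a notification message; the field names used
-# here are hypothetical placeholders for the data listed above.
-def format_notification(result):
-    lines = [
-        f"Test: {result['name']} (priority: {result['priority']})",
-        f"Result: {result['result']}",
-        f"Image: {result['image_type']} {result['image_arch']} {result['image_version']}",
-        f"Executed: {result['date']}",
-    ]
-    if result.get("log_url"):
-        lines.append(f"Logs: {result['log_url']}")
-    if result.get("attachments_url"):
-        lines.append(f"Attachments: {result['attachments_url']}")
-    return "\n".join(lines)
-```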
-
-# Infrastructure
-
-The infrastructure for the web application dashboard and notification system will
-be defined during implementation, but it will be aligned with the requirements
-proposed by the document for [closing the CI loop][ClosingCiLoop], so it won't
-impose any special or resource-intensive requirements beyond the current CI loop
-proposal.
-
-# Test Results Submission
-
-For automated tests, the test case will be executed by LAVA and results will be
-submitted to the SQUAD backend as explained in the [closing ci loop document][ClosingCiLoop].
-
-For manual tests, a new tool is required to collect the test results and submit
-them to SQUAD. This can be either a command line tool or a web application that
-renders the test case pages for convenient visualization during test execution,
-or links to the test cases' GitLab pages for easy reference.
-
-The main function of this application will be to collect the manual test results,
-optionally guide the tester through the test case steps, generate a JSON file
-with the test result data, and finally send these results to the SQUAD backend.
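-
-A minimal sketch of how such a tool could collect the results and produce the
-JSON file is shown below; the test case names and the output file name are just
-examples:
-
-```
-# Minimal sketch of collecting manual test results and writing them as a JSON
-# file in the PASS/FAIL form accepted by SQUAD.
-import json
-
-def collect_manual_results(test_cases):
-    results = {}
-    for name in test_cases:
-        answer = input(f"{name} result [pass/fail/skip]: ").strip().lower()
-        results[name] = answer
-    return results
-
-if __name__ == "__main__":
-    results = collect_manual_results(["webkit2gtk-aligned-scroll"])
-    with open("manual-tests.json", "w") as f:
-        json.dump(results, f, indent=2)
-```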
-
-# SQUAD
-
-SQUAD offers a web UI frontend that allows checking test results and metadata,
-including their attachments and logs.
-
-This web frontend is very basic: it only shows the tests organized by teams and
-groups, and lists the test results for each test stored in the backend. It can be
-useful for quickly checking results and making sure the data is properly stored in
-SQUAD, but it is intended to be used mainly by developers and sometimes testers,
-as it is not a complete solution from a project management perspective.
-
-For a more complete visualization of the test data, the new web application
-dashboard should be used.
-
-# Concept Limitations
-
-The platform, framework and infrastructure for the web application are not covered
-by this document and need to be defined during implementation.
-
-# Current Implementation
-
-The [QA Test Report][QAReportApplication] is an application to save and report all
-the test results for the Apertis images.
-
-It supports both types of tests: automated test results executed by LAVA and
-manual test results submitted by a tester. It only provides static reports, with no
-analytical tools yet.
-
-## Workflow
-
-The deployment consists of two Docker images, one containing the main report
-application and the other running the PostgreSQL database. The general workflow is
-as follows:
-
-### Automated Tests
-
-1) The QA Report Application is executed and opens HTTP interfaces to receive
-   HTTP request calls and serve HTML pages on specific HTTP routes.
-
-2) Jenkins builds the images and they are pushed to the image server.
-
-3) Jenkins triggers the LAVA jobs to execute the automated tests in the published
-   images.
-
-4) Jenkins, when triggering the LAVA jobs, also registers these jobs with the QA
-   Report Application using its specific HTTP interface.
-
-5) The QA Report application adds these jobs to its internal queue and waits
-   for the LAVA test job results to be submitted via HTTP.
-
-6) Once LAVA finishes executing the test jobs, it triggers the configured HTTP
-   callback, sending all the test data to the QA Report application.
-
-7) Test data for the respective job is saved into the database.
-
-### Manual Tests
-
-1) The user authenticates with GitLab credentials using the `Login` button on the
-   main page.
-
-2) Once logged in, the user can click on the `Submit Manual Test Report` button
-   that is now available from the main page.
-
-3) The tester needs to enter the following information on the `Select Image Report`
-   page:
-
-      - Release: Image release (19.03, v2020dev0 ..)
-      - Version: The daily build identifier (20190705.0, 20190510.1 ..)
-      - Select Deployment Type (APT, OSTree)
-      - Select Image Type
-
-4) A new page is shown, listing only the valid test cases for the selected image
-   type.
-
-5) The user selects `PASS`, `FAIL` or `NOT TESTED` for each test case.
-
-6) An optional `Notes` text area is available beside each test case for the
-   user to add any extra information (e.g. task links, a brief comment about any
-   issue with the test, etc.).
-
-7) Once results have been selected for all test cases, the user should submit this
-   data using the `Submit All Results` button at the top of the page.
-
-8) The application will now save the results into the database and redirect the
-   user to a page with the following two options:
-
-      - Submit Manual Test Report: To submit test results for a new image type.
-      - Go Back to Main Page: To check the recently submitted tests results.
-
-9) If the user wants to update a report, they just repeat the above steps,
-   selecting the specific image type for the existing report and then updating the
-   results for the necessary test cases.
-
-### Reports
-
-1) Reports for the stored test results (both manual and automated) are generated
-   on the fly by the QA application, for example: https://lavaphabbridge.apertis.org/report/v2019dev0/20190401.0
-
-[TestDataStorage]: test-data-storage.md
-
-[ClosingCiLoop]: closing-ci-loop.md
-
-[QAReportApplication]: https://gitlab.apertis.org/infrastructure/lava-phab-bridge/
diff --git a/content/designs/test-data-storage.md b/content/designs/test-data-storage.md
deleted file mode 100644
index aafd0fc9e1ca1717c583dcddd961b6193b066ed9..0000000000000000000000000000000000000000
--- a/content/designs/test-data-storage.md
+++ /dev/null
@@ -1,943 +0,0 @@
-+++
-title = "Test Data Storage"
-short-description = "Describe the test data storage backend and processing."
-weight = 100
-aliases = [
-	"/old-designs/latest/test-data-storage.html",
-	"/old-designs/v2019/test-data-storage.html",
-	"/old-designs/v2020/test-data-storage.html",
-	"/old-designs/v2021dev3/test-data-storage.html",
-]
-outputs = [ "html", "pdf-in",]
-date = "2019-09-27"
-+++
-
-# Background
-
-Testing is a core part of the project, and different test data is required to
-optimise the testing process.
-
-Currently the project does not have a functional and well defined place for
-storage of the different types of test data, which creates many issues across
-the testing processes.
-
-The goal of this document is to define a single storage place for all the test
-data and build on top of it the foundation for accurate test data processing and
-reporting.
-
-# Current Issues
-
-## Test Case Issues
-
-At this time, test cases are stored in the Apertis MediaWiki instance with a
-single page for each test case. Although this offers a reasonable degree of
-visibility for the tests, the storage method is not designed to manage this
-type of data, which means that there are only some limited features available
-for handling the test cases.
-
-The wiki does not provide a convenient way to reuse this data through other
-tools or infrastructure services. For example, management functions like
-filtering or detailed searching are not available.
-
-Test cases may also come out of sync with the automated tests, since they are
-managed manually in different places: an automated test might not have a test
-case page available, or the test case could be marked as obsolete while it is
-still being executed automatically by LAVA.
-
-Another big issue is that test cases are not versioned, so there is no way to
-keep track of which specific version of a test case was executed for a specific
-image version.
-
-## Test Result Issues
-
-Automated test results are stored in the LAVA database after the tests are
-executed, while manual test results are manually added by the tester to the
-wiki page report for the weekly testing round. This means that the wiki is the
-only storage location for all test data. As with test cases, there are also
-limits to the functionality of the wiki when handling the results.
-
-LAVA does not offer a complete interface or dashboard to clearly track test
-results, and the current interface to fetch these results is not user friendly.
-
-Manual results are only available from the Apertis wiki in the Weekly Testing
-Report page, and they are not stored elsewhere.
-
-The only way to review trends between different test runs is to manually go
-through the different wiki and LAVA web pages of each report, which is
-cumbersome and time consuming.
-
-Essentially, there is no canonical place for storing all the test results for
-the project. This has major repercussions, since there is no way to keep proper
-track of the whole project's health.
-
-## Testing Process Issues
-
-The biggest issue is the lack of centralised data storage for test results and
-test cases, which creates the following issues for the testing process:
-
-  - It is not possible to easily analyse test results. For example, there is no
-    interface for tracking test result trends over a period of time or across
-    different releases.
-
-  - Test cases are not versioned, so it is not possible to know exactly which
-    test cases are executed for a specific image version.
-
-  - Test case instructions can differ from the actual test instructions being
-    executed. This issue tends to happen mainly with automated tests: for
-    example, when a test script is updated but the corresponding test case
-    misses the update.
-
-  - Test results cannot be linked to test cases because test data is located in
-    different places and test cases have no version information.
-
-# Solution
-
-A data storage backend needs to be defined to store all test cases and test
-results.
-
-The storage backend may not necessarily be the same for all the data types,
-but a well defined mechanism should be available to access this data in a
-consistent way from our current infrastructure, and one solution should not
-impose limitations or constraints on the other. For example, one backend can
-be used only for test cases and another for test results.
-
-## Data Backend Requirements
-
-The data storage backend should fulfil the following conditions at a minimum:
-
-  - Store all test cases.
-  - Store all manual and automated test results.
-  - It should make no distinction between manual and automated test cases, and
-    ideally offer a very transparent and consistent interface for both types of
-    tests.
-  - It should offer an API to access the data that can be easily integrated with
-    the rest of the services in the existing infrastructure.
-  - It should allow the execution of management operations on the data
-    (querying, filtering, searching).
-  - Ideally, it should offer a frontend to simplify management operations.
-
-# Data
-
-We are interested in storing two types of test data: test cases and test
-results.
-
-## Test Cases
-
-A test case is a specification containing the requirements, environment, inputs,
-execution steps, expected results and metadata for a specific test.
-
-The test case descriptions in the wiki include custom fields that will need to
-be defined during the design and development of the data storage solution. The
-design will also need to consider the management, maintenance and tools required
-to handle all test case data.
-
-## Test Results
-
-Test results can be of two types: manual and automated.
-
-Since test results are currently acquired in two different places depending on
-the test type, it is very inconvenient to process and analyse test data.
-
-Therefore, the data backend solution should be able to:
-
-  - Store manual test results, which will be manually entered by the tester.
-  - Store automated test results, which will be fetched from the LAVA database.
-  - Have all results in the same place and format to simplify reporting and
-    manipulation of such data.
-
-# Data Usage
-
-The two main uses for test result data will be reports and statistics.
-
-## Test Reports
-
-This shows the test results for all the applicable test cases executed on a
-specific image version.
-
-The test reports are currently created weekly. They are created manually with
-the help of some scripts and stored on the project wiki.
-
-New tools will need to be designed and developed to create reports once the
-backend solution is implemented.
-
-These tools should be able to query the test data using the backend API to
-produce reports both as needed and at regular intervals (weekly, monthly).
-
-## Test Statistics
-
-Accurate and up-to-date statistics are an important use case for the test data.
-
-Even though these statistics could be generated using different tools, there
-may still be a need to store this data somewhere. For example, for every
-release, besides the usual test report, a final `release report`
-giving a more detailed overview of the whole release's history could be
-generated.
-
-The backend should also make it possible to easily access the statistics data for
-further processing, for example, to download it and manipulate the data using a
-spreadsheet.
-
-# Data Format
-
-Test data should ideally be in a well-known standard format that can be reused
-easily by other services and tools.
-
-In this regard, data format is an important point for consideration when
-choosing the backend since it will have a major impact on the project as it will
-help to determine the infrastructure requirements and the tools which need to be
-developed to interact with such data.
-
-# Version Management
-
-Test cases and test results should be versioned.
-
-Though this is more related to the way data will be used, the backend might also
-have an impact on managing versions of this data.
-
-One of the advantages of versioning is that it will allow linking test cases to
-test results.
-
-# Data Storage Backends
-
-These sections give an overview of the different data backend systems that can
-be used to implement a solution.
-
-## SQUAD
-
-SQUAD stands for `Software Quality Dashboard` and it is an open source test
-management dashboard.
-
-It can handle tests results with metrics and metadata, but it offers no support
-for test case management.
-
-SQUAD is a database with an HTTP API to manage test result data. It uses an SQL
-database, such as MySQL or PostgreSQL, to store results. Its web frontend and API
-are written using Django.
-
-Therefore, it would not require much effort to modify our current infrastructure
-services to be able to push and fetch test results from SQUAD.
-
-Advantages:
-
-  - Simple HTTP API: POST to submit results, GET to fetch results.
-  - Easy integration with all our existing infrastructure.
-  - Test results, metrics and metadata are in JSON format.
-  - Offers support for PASS/FAIL results with metrics, if available.
-  - Supports authentication tokens for using the HTTP API.
-  - Has support for teams and projects. Each team can have multiple projects
-    and each project can have multiple builds with multiple test runs.
-  - It offers group permissions and visibility options.
-  - It offers optional backend support for LAVA.
-  - Actively developed and upstream is open to contributions.
-  - It provides a web frontend to visualise test result data with charts.
-  - It is a Django application using a stable database system like PostgreSQL.
-
-Disadvantages:
-
-  - It offers no built-in support for storing manual test results. But it
-    should be straightforward to develop a new tool or application to submit
-    these test results.
-  - It has no support for test case management. This could be either added to
-    SQUAD or a different solution could be used.
-  - The web frontend is very simple and it lacks support for many visual charts.
-    It currently only supports very simple metrics charts for test results.
-
-## Database Systems
-
-Since the problem is about storing data, a plain SQL database is also a valid
-option to be considered.
-
-A reliable DB system could be used, for example PostgreSQL or MySQL, with an
-application built on top of it to manage all test data.
-
-New database systems, such as CouchDB, can also offer more advanced features.
-CouchDB is a NoSQL database that stores data using JSON documents. It also
-offers an HTTP API that allows sending requests to manage the stored data.
-
-This database acts like a server that can interact with remote applications
-through its HTTP API.
-
-Advantages:
-
-  - Very simple solution to store data.
-  - Advanced database systems can offer an API and features to interact with
-    data remotely.
-
-Disadvantages:
-
-  - All applications to manage data need to be developed on top of the database
-    system.
-
-## Version Control Systems
-
-A version control system (VCS), like Git, could be used to store all or part of
-the test data.
-
-This approach would involve a design from scratch for all the components to
-manage the data in the VCS, but it has the advantage that the solution can be
-perfectly adapted to the project needs.
-
-A data format would need to be defined for all data and document types,
-alongside a structure for the directory hierarchy within the repository.
-
-Advantages:
-
-  - It fits the model of the project perfectly. All project members can easily
-    have access to the data and are already familiar with this kind of system.
-  - It offers good versioning and history support.
-  - It allows other tools, frameworks or infrastructure services to easily reuse
-    data.
-  - Due to its simplicity and re-usability, it can be easily adapted to other
-    projects and teams.
-
-Disadvantages:
-
-  - All applications and tools need to be developed to interact with this
-    system.
-  - Although it is a simple solution, it depends on well defined formats for
-    documents and files to keep data storage in a sane state.
-  - It does not offer the usual query capabilities found in DB systems, so this
-    would need to be added to the application logic.
-
-## ResultsDB
-
-ResultsDB is a system specifically designed for storage of test results. It can
-store results from many different test systems and types of tests.
-
-It provides an optional web frontend, but it is built to be compatible with
-different frontend applications, which can be developed to interact with the
-stored data.
-
-Advantages:
-
-  - It has an HTTP REST interface: POST to submit results, GET to fetch results.
-  - It provides a Python API for using the JSON/REST interface.
-  - It only stores test results, but it has the `concept` of test cases in the
-    form of namespaced names.
-  - It is production ready.
-
-Disadvantages:
-
-  - The web frontend is very simple. It lacks metrics graphics and groups for
-    project teams.
-  - The web frontend is optional. This could involve extra configurations and
-    maintenance efforts.
-  - It seems too tied to its upstream project system.
-
-# Proposal
-
-This section describes a solution using some of the backends discussed in the
-previous section in order to solve the test data storage problem in the Apertis
-project.
-
-This solution proposes using a different type of storage backend for each type
-of data.
-
-SQUAD will be used to store the test result data (both manual and automated),
-and a VCS (Git is recommended) will be used to store the test case data.
-This solution also involves defining data formats, and writing a tool or a
-custom web application to guide testers through entering manual test results.
-
-Advantages:
-
-  - It is a very simple solution for all data types.
-  - It can be designed to perfectly suit the project needs.
-  - It can be easily integrated with our current infrastructure. It fits very
-    well into the current CI workflow.
-  - Storing test cases in a VCS will easily allow managing test case versions in
-    a very flexible way.
-
-Disadvantages:
-
-  - Some tools and applications need to be designed and implemented from scratch.
-  - Formats and standards need to be defined for test case files.
-  - It is a solution limited to data storage only; further data processing tasks
-    will need to be done by other tools (for example, test case management
-    tasks, generating test results statistics, and so on).
-
-## Test Results
-
-SQUAD will be used as the data storage backend for all the test results.
-
-This solution for receiving and storing test results fits perfectly into the
-proposed mechanism to [close the CI loop][ClosingLoopDoc].
-
-### Automated Test Results
-
-Automated test results will be received in Jenkins from LAVA using the webhook
-plugin. These results will then be processed in Jenkins and can be pushed into
-SQUAD using the HTTP API.
-
-A tool needs to be developed to properly process the test results received from
-LAVA. This data is in JSON format, which is the same format required by
-SQUAD, so it should be very simple to write a tool to translate the
-data to the correct format accepted by SQUAD.
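-
-As a rough illustration, such a tool could map each LAVA test entry to the
-`"suite/test": "result"` entries expected by SQUAD. The sketch below assumes a
-simplified LAVA payload where each entry carries `suite`, `name` and `result`
-fields; the actual callback format may differ:
-
-```
-# Minimal sketch of converting LAVA-style results into the SQUAD tests file
-# format; the input field names (suite, name, result) are assumptions about
-# the LAVA payload and may need adjusting.
-import json
-
-def lava_to_squad(lava_results):
-    squad_tests = {}
-    for entry in lava_results:
-        key = f"{entry['suite']}/{entry['name']}"
-        squad_tests[key] = entry["result"]  # e.g. "pass" or "fail"
-    return squad_tests
-
-# json.dumps(lava_to_squad(results)) produces the tests file content shown in
-# the file format examples below.
-```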
-
-### Manual Test Results
-
-SQUAD does not offer any mechanism to input manual test results. These test
-results will need to be manually entered into SQUAD. Nevertheless, it should be
-relatively simple to develop a tool or application to submit this data.
-
-The application would need to receive the test data (for example, it can prompt
-the user in some way to input this data), and then generate a JSON file that
-will later be sent into SQUAD.
-
-The manual test results will need to be entered manually by the tester using the
-new application or tool every time a manual test is executed.
-
-### File Format
-
-All the test results will be in the standard SQUAD JSON format:
-
-  - For automated tests, Jenkins will receive the test data in the JSON format
-    sent by LAVA, then this data needs to be converted to the JSON format
-    recognised by SQUAD.
-
-  - For manual tests, a tool or application will be used to enter the test data
-    manually by the tester, and it will create a JSON format file that can also
-    be recognised by SQUAD.
-
-So it can be said that the final format for all test results will be determined
-by the SQUAD backend.
-
-The test data must be submitted to SQUAD as either file attachments, or as
-regular POST parameters.
-
-There are four types of input file formats accepted by SQUAD: tests, metrics,
-metadata and attachment files.
-
-The tests, metrics and metadata files should all be in JSON format. The
-attachment files can be in any format (txt, png, and so on).
-
-All test results, both for automated and manual tests, will use these
-file formats. Here are some examples of the different types of file formats:
-
-1) Tests file: it contains the test results in `PASS/FAIL` format.
-
-```
-{
-  "test1": "pass",
-  "test2": "pass",
-  "testsuite1/test1": "pass",
-  "testsuite1/test2": "fail",
-  "testsuite2/subgroup1/testA": "pass",
-  "testsuite2/subgroup2/testA": "pass",
-}
-```
-
-2) Metrics file: it contains the test results in metrics format.
-
-```
-{
-  "test1": 1,
-  "test2": 2.5,
-  "metric1/test1": [1.2, 2.1, 3.03],
-  "metric2/test2": [200, 109, 13],
-}
-```
-
-3) Metadata file: it contains metadata for the tests. It recognises some
-special values and also accepts new fields to extend the test data with any
-relevant information.
-
-```
-{
-  "build_url": "https://<url_build_origin>",
-  "job_id": "902",
-  "job_url": "https://<original_test_run_url>",
-  "job_status": "stable",
-  "metadata1": "metadata_value1",
-  "metadata2": "metadata_value2",
-  ....
-}
-```
-
-4) Attachment files: these are any arbitrary files that can be submitted to
-SQUAD as part of the test results. Multiple attachments can be submitted to
-SQUAD during a single POST request.
-
-### Mandatory Data Fields
-
-The following metadata fields are mandatory for every test file submitted to
-SQUAD, and must be included in the file: `source`, `image.version`,
-`image.release`, `image.arch`, `image.board`, and `image.type`.
-
-The metadata file also needs to contain the list of test cases executed for the
-test job, and their types (manual or automated). This will help to identify the
-exact test case versions that were executed.
-
-This metadata will help to identify the test data environment, and it
-essentially maps to the same metadata available in the LAVA job definitions.
-
-This data should be included both for automated and manual tests, and it can be
-extended with more fields if necessary.
-
-### Processing Test Results
-
-In the end, all test results (both manual and automated) will be stored in a single
-place, the SQUAD database, and the data will be accessed consistently using the
-appropriate tools.
-
-The SQUAD backend won't make any distinction between storing manual and automated
-test results, but they will contain their respective type in the metadata so that
-they can be appropriately distinguished by the processing tools and user
-interfaces.
-
-Further processing of all the test data can be done by other tools that can use
-the respective HTTP API to fetch this data from SQUAD.
-
-All the test result data will be processed by two main tools:
-
-  1) Automated Tests Processor
-
-     This tool will receive test results in the LAVA JSON format and convert it
-     to the JSON format recognised by SQUAD.
-
-     This should be developed as a command line tool that can be executed from
-     the Jenkins job receiving the LAVA results.
-
-  2) Manual Tests Processor
-
-     This tool will be manually executed by the tester to submit the manual
-     test results and will create a JSON file with the test data which can then
-     be submitted to SQUAD.
-
-Both tools can be written in the Python programming language, using the `json`
-module to handle the test result data and the `requests` module in order to
-submit the test data to SQUAD.
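-
-For instance, the submission step could look like the following sketch, which
-mirrors the `curl` example in the next section; the authentication token and
-the file names are placeholders:
-
-```
-# Minimal sketch of submitting a tests file and its metadata to SQUAD,
-# mirroring the curl example below; token and file paths are placeholders.
-import requests
-
-def submit_to_squad(team, project, build, environment, token):
-    url = f"https://squad.apertis.org/api/submit/{team}/{project}/{build}/{environment}"
-    files = {
-        "tests": open("common-tests.json", "rb"),
-        "metadata": open("environment.json", "rb"),
-    }
-    response = requests.post(url, headers={"Auth-Token": token}, files=files)
-    response.raise_for_status()
-    return response
-```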
-
-### Submitting Test Results
-
-Test data can be submitted to SQUAD by triggering a POST request to the specific
-HTTP API path.
-
-SQUAD works around teams and projects to group the test data, so these are
-central concepts reflected in its HTTP API. For example, the API path contains
-the team and project names in the following form:
-
-```
-/api/submit/:team/:project/:build/:environment
-```
-
-Tools can make use of this API either by using programming modules or by invoking
-command line tools like `curl` to trigger the request.
-
-An example using the `curl` command line tool to submit all the results in the
-test file `common-tests.json` for the image release 18.06 with version
-20180527.0 and including its metadata from the file `environment.json` would
-look like this:
-
-```
-$ curl \
-    --header "Auth-Token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
-    --form tests=@common-tests.json \
-    --form metadata=@environment.json \
-    https://squad.apertis.org/api/submit/apertis/18.06/20180527.0/amd64
-```
-
-### Fetching Test Results
-
-Test data can be fetched from SQUAD by triggering a GET request to the specific
-HTTP API path.
-
-Basically, all the different types of files pushed to SQUAD are accessible
-through its HTTP API. For example, to fetch the tests results contained in the
-tests file (previously submitted), a GET request call to the test file path can
-be triggered like this:
-
-```
-$ curl https://squad.apertis.org/api/testruns/5399/tests_file/
-```
-
-This retrieves the test results contained in the tests file of the test run ID
-5399. In the same way, the metadata file for this test run can be fetched
-with a call to the metadata file path like this:
-
-```
-$ curl https://squad.apertis.org/api/testruns/5399/metadata_file/
-```
-
-The test run ID is generated by SQUAD to identify a specific test run, and it can be
-obtained by triggering some query calls using the HTTP API.
-
-Tools and applications (for example, to generate test reports and project
-statistics) can conveniently use this HTTP API either from programming modules
-or command line tools to access all the test data stored in SQUAD.
-
-## Test Cases
-
-Git will be used as the data storage backend for all the test cases.
-
-A data format for test cases needs to be defined, along with other standards
-such as the directory structure in the repository; a procedure to access and
-edit this data might also be necessary.
-
-### File Format
-
-A well defined file format is required for the test cases.
-
-The proposed solution is to reuse the LAVA test definition file format for all
-test cases, both for manual and automated tests.
-
-The LAVA test definition files are YAML files that contain the instructions to
-run the automated tests in LAVA, and they are already stored in the automated
-tests Git repository.
-
-In essence, the YAML format would be extended to add all the required test case
-data to the automated test definition files, and new test definition YAML files
-would be created for manual test cases, following the same format as the
-automated test cases.
-
-In this way, all test cases, both for automated and manual tests, will be
-available in the same YAML format.
-
-The greatest advantage of this approach is that it will avoid the current issue
-of test case instructions differing from the executed steps in automated tests,
-since the test case and the definition file will be the same document.
-
-The following examples are intended to give an idea of the file format for the
-manual and automated test cases. They are not in the final format and only serve
-as an indicator of the format that will be used.
-
-An example of the automated test case file for the `librest` test. This test
-case file will be executed by LAVA automatically:
-
-```
-metadata:
-  format: "Lava-Test-Shell Test Definition 1.0"
-  name: librest
-  type: unit-tests
-  exec-type: automated
-  target: any
-  image-type: any
-  description: "Run the unit tests that ship with the library against the running system."
-  maintainer: "Luis Araujo <luis.araujo@collabora.co.uk>"
-
-  pre-conditions:
-  - "Ensure you have the development repository enabled in your sources.list and you have recently run apt-get update."
-  - "Ensure Rootfs is remounted as read/write"
-  - sudo mount -o remount,rw /
-
-install:
-  deps:
-  - librest-0.7-tests
-
-run:
-  steps:
-    - common/run-test-in-systemd --user=user --timeout=900 --name=run-test env DEBUG=2 librest/automated/run-test.sh
-
-parse:
-  pattern: ^(?P<test_case_id>[a-zA-Z0-9_\-\./]+):\s*(?P<result>pass|fail|skip|unknown)$
-
-expected:
-  - "PASSED or FAILED"
-```
-
-An example of the manual test case file for the `webkit2gtk-aligned-scroll`
-test. This test case can be manually read and executed by the tester, but
-ideally a new application should be developed to read this file and guide the
-tester through each step of the test case:
-
-```
-metadata:
-  format: "Manual Test Definition 1.0"
-  name: webkit2gtk-aligned-scroll
-  type: functional
-  exec-type: manual
-  target: any
-  image-type: any
-  description: "Test that scrolling is pinned in a given direction when started mostly towards it."
-  maintainer: "Luis Araujo <luis.araujo@collabora.co.uk>"
-
-  resources:
-  - "A touchscreen and a mouse (test with both)."
-
-  pre-conditions:
-  - "Ensure you have the development repository enabled in your sources.list and you have recently run apt-get update."
-  - "Ensure Rootfs is remounted as read/write"
-  - sudo mount -o remount,rw /
-
-install:
-  deps:
-  - webkit2gtk-testing
-
-run:
-  steps:
-    - GtkClutterLauncher -g 400x600 http://gnome.org/
-    - "Try scrolling by starting a drag diagonally"
-    - "Try scrolling by starting a drag vertically"
-    - "Try scrolling by starting a drag horizontally, ensure you can only pan the page horizontally"
-
-expected:
-  - "When the scroll is started by a diagonal drag, you should be able to pan the page freely"
-  - "When the scroll is started by a vertical drag, you should only be able to pan the page vertically,
-     regardless of if you move your finger/mouse horizontally"
-  - "When the scroll is started by a horizontal drag, you should only be able to pan the page horizontally,
-     regardless of if you move your finger/mouse vertically"
-
-example:
-  - video: https://www.apertis.org/images/Aligned-scroll.ogv
-
-notes:
-  - "Both mouse and touchscreen need to PASS for this test case to be considered a PASS.
-     If either does not pass, then the test case has failed."
-```
-
-### Mandatory Data Fields
-
-A test case file should at least contain the following data fields for both the
-automated and manual tests:
-
-```
-format: This is used to identify the format version.
-name: Name of test case.
-type: This could be used to define a series of test case types (functional, sanity,
-      system, unit-test).
-exec-type: Manual or automated test case.
-image-type: This is the image type (target, minimal, ostree, development, SDK).
-image-arch: The image architecture.
-description: Brief description of the test case.
-priority: low, medium, high, critical.
-run: Steps to execute the test.
-expected: The expected result after running the test.
-```
-
-The test case file format is very extensible and new fields can be added as
-necessary.
-
-### Git Repository Structure
-
-A single Git repository can be used to store all test case files, both for
-automated and manual tests.
-
-Currently, LAVA automated test definitions are located in the git repository
-for the project tests. This repository contains all the scripts and tools to run
-tests.
-
-All test cases could be placed inside this git repository. This has the great
-advantage that both test instructions and test tools will be located in the
-same place.
-
-The git repository will need to be cleaned and organised to adapt it to contain
-all the available test cases. A directory hierarchy can be defined to organise
-all test cases by domain and type.
-
-For example, the path `tests/networking/automated/` will contain all automated
-tests for the networking domain, the path `tests/apparmor/manual/` will contain
-all manual tests for the apparmor domain, and so on.
-
-Further tools and scripts can be developed to keep the git repository hierarchy
-structure in a sane and standard state.
-
-### Updates and Changes
-
-Since all test cases will be available from a git repository, and they are
-plain YAML files, they can be edited like any other file from that repository.
-
-At the lowest level, the tester or developer can use an editor to edit these
-files, though it is also possible to develop tools or a frontend to help with
-editing and at the same time enforce a certain standard on them.
-
-### Execution
-
-The test case files will be in the same format for both automated and manual
-tests, though the way they are executed is different.
-
-Automated test cases will continue to be automatically executed by LAVA, and for
-manual test cases a new application could be developed to assist the
-tester in going through the steps from the test definition files.
-
-This application can be a tool or a web application that, besides guiding the
-tester through each step of the manual test definition file, will also collect
-the test results and convert them to JSON format, which can then be sent to the
-SQUAD backend using the HTTP API.
-
-In this way, both types of test cases, manual and automated, would follow the
-same file format and be located in the same git repository; they would be executed
-by different applications (LAVA for automated tests, a new application for manual
-tests), and both types of test results would conveniently use the same HTTP API
-to be pushed into the SQUAD data storage backend.
-
-### Visualisation
-
-Though a Git repository offers many advantages for managing the test case files,
-it is not a friendly option for users to access and read test cases.
-
-One solution is to develop an application that renders these test case files
-from the git repository into HTML or another format and publishes them on a server
-where they can be conveniently accessed by users, testers and developers.
-
-In the same way, other tools to collect statistics, or generate other kinds of
-information about test cases can be developed to interact with the git
-repository to fetch the required data.
-
-## Test Reports
-
-A single application or different ones can be developed to generate different
-kinds of report.
-
-These applications will need to trigger a GET request to the SQUAD HTTP API to
-fetch the specific tests results (as explained in the
-[Fetching Test Results]( {{< ref "#fetching-test-results" >}} ) section) and generate the
-report pages or documents using that data.
-
-These applications can be developed as command line tools or web applications
-that can be executed periodically or as needed.
-
-## Versioning
-
-Since all the test cases, both for manual and automated tests, will be available
-as YAML files from a Git repository, these files can be versioned and linked to
-the corresponding test runs.
-
-Test case groups will be versioned using Git branches. For every image release,
-the test cases repository will be branched with the same version (for example
-18.03, 18.06, and so on). This will match the whole group of test cases against
-an image release.
-
-A specific test case can also be identified using the `HEAD` commit of the
-repository from which it is being executed. It should be relatively simple to
-retrieve the `commit` id from the git repository during test execution and add
-it to the metadata file that will be sent to SQUAD to store the test results. In
-this way, it will be possible to locate the exact test case version that was
-used for executing the test.
-
-For automated test cases, the commit version can be obtained from the LAVA
-metadata, and for manual test cases, the new tool executing the manual tests
-should take care of retrieving the commit id. Once the commit id is available,
-it should be added to the JSON metadata file that will be pushed along with the
-test result data to SQUAD.
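-
-A minimal sketch of how a tool could record the test case version, assuming it
-runs inside a checkout of the test cases repository and that the metadata field
-name is chosen at implementation time:
-
-```
-# Minimal sketch: record the test cases repository commit in the metadata that
-# will be submitted to SQUAD; the metadata field name is hypothetical.
-import json
-import subprocess
-
-def add_test_case_version(metadata_path):
-    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
-    with open(metadata_path) as f:
-        metadata = json.load(f)
-    metadata["test_cases_commit"] = commit
-    with open(metadata_path, "w") as f:
-        json.dump(metadata, f, indent=2)
-```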
-
-## SQUAD Configuration
-
-Some configuration is required to start using SQUAD in the project.
-
-Groups, teams and projects need to be created and configured with the correct
-permissions for all the users. Depending on the implementation, some of these
-values will need to be configured every quarter (for example, if new projects
-should be created for every release).
-
-Authentication tokens need to be created by the users and tools required to
-submit test results using the HTTP API.
-
-## Workflow
-
-This section describes the workflow for each of the components in the proposed
-solution.
-
-### Automated Test Results
-
-  - Automated tests are started by the Jenkins job responsible for triggering
-    tests.
-  - The Jenkins job waits for automated tests results using a webhook plugin.
-  - Once test results are received in Jenkins, these are processed with the tool
-    to convert the test data into SQUAD format.
-  - After the data is in the correct format, it is sent by Jenkins to SQUAD
-    using the HTTP API.
-
-### Manual Test Results
-
-  - The tester manually executes the application to run manual tests.
-  - This application will read the instructions from the manual test definition
-    files in the git repository and will guide the testers through the different
-    test steps.
-  - Once the test is completed, the tester enters the results into the application.
-  - A JSON file is generated with these results in the format recognised by SQUAD.
-  - This same application or a new one could be used by the tester to send the
-    test results (JSON file) into SQUAD using the HTTP API.
-
-### Test Cases
-
-  - Test case files can be edited using any text editor in the Git repository.
-  - A Jenkins job could be used to periodically generate HTML or PDF pages from
-    the test case files and make them available from a website for easy and
-    convenient access by users and testers.
-  - Test cases will be automatically versioned once a new branch is created in
-    the git repository, which is done for every release.
-
-### Test Reports
-
-  - Reports will be generated either periodically or manually by using the new
-    reporting tools.
-  - The SQUAD frontend can be used by all users to easily check test results
-    and generate simple charts showing the trend for test results.
-
-## Requirements
-
-This gives a general list of the requirements needed to implement the proposed
-solution:
-
-  - The test case file format needs to be defined.
-  - A directory hierarchy needs to be defined for the tests Git repository to
-    contain the test case files.
-  - Develop tools to help work with the test case files (for example, a syntax
-    and format checker, or a repository sanity checker).
-  - Develop a tool to convert test data from the LAVA format into the SQUAD format.
-  - Develop a tool to push test results from Jenkins into SQUAD using the HTTP API.
-  - Develop application to guide execution of manual test cases.
-  - Develop application to push manual test results into SQUAD using HTTP API
-    (this can be part of the application to guide manual test case execution).
-  - Develop tool or web application to generate weekly test report.
-
-## Deployment Impact
-
-All the additional components proposed in this document (SQUAD backend, new tools,
-web application) are not resource intensive and do not set any new or special
-requirements on the hosting infrastructure.
-
-The instructions for the deployment of all the new tools and services will be made
-available with their respective implementations.
-
-Following is a general overview of some important deployment considerations:
-
-  - SQUAD will be deployed using a Docker image. SQUAD is a Django application,
-    and using a Docker image makes the process of setting up an instance very
-    straightforward with no need for special resources, packaging all the required
-    software in a container that can be conveniently deployed by other projects.
-
-    The recommended setup is to use a Docker image for the SQUAD backend and
-    another one for its PostgreSQL database.
-
-  - The application proposed to execute manual tests and collect their results will
-    serve mainly as an extension to the SQUAD backend, therefore the requirements
-    will also be in accordance with the SQUAD deployment from an infrastructure
-    point of view.
-
-  - Other new tools proposed in this document will serve as the components to
-    integrate all the workflow of the new infrastructure, so they won't require
-    special efforts and resources beyond the infrastructure setup.
-
-# Limitations
-
-This document only describes the test data storage issues and proposes a
-solution for those issues along with the minimal test data processing required
-to implement the reporting and visualisation mechanisms on top of it. It does
-not cover any API in detail and only gives a general overview of the required
-tools to implement the proposed solution.
-
-# Links
-
-* Apertis Tests
-
-https://gitlab.apertis.org/infrastructure/apertis-tests
-
-* Weekly Tests Reports Template Page
-
-https://wiki.apertis.org/QA/WeeklyTestReport
-
-* SQUAD
-
-https://github.com/Linaro/squad
-https://squad.readthedocs.io/en/latest/
-
-* ResultsDB
-
-https://fedoraproject.org/wiki/ResultsDB
-
-* CouchDB
-
-http://couchdb.apache.org/
-
-[ClosingLoopDoc]: closing-ci-loop.md
-
diff --git a/content/qa/test-data-reporting.md b/content/qa/test-data-reporting.md
new file mode 100644
index 0000000000000000000000000000000000000000..0d8300ccb59e7d8b57f1852477bdd5a1febd5b07
--- /dev/null
+++ b/content/qa/test-data-reporting.md
@@ -0,0 +1,96 @@
++++
+title = "Test Data Reporting"
+short-description = "Describe test data reporting and visualization."
+weight = 100
+aliases = [
+	"/old-designs/latest/test-data-reporting.html",
+	"/old-designs/v2019/test-data-reporting.html",
+	"/old-designs/v2020/test-data-reporting.html",
+	"/old-designs/v2021dev3/test-data-reporting.html",
+]
+outputs = [ "html", "pdf-in",]
+date = "2019-09-27"
+lastmod = "2021-01-22"
++++
+
+Testing is a fundamental part of the project, but it is of limited use unless
+it is accompanied by an accurate and convenient model for reporting the results
+of that testing. The
+[QA Test Report](https://gitlab.apertis.org/infrastructure/lava-phab-bridge/)
+is an application that has been developed to save and report the test results
+for the Apertis images.
+
+It supports both automated test results executed by LAVA and manual test
+results submitted by a tester.
+
+# Workflow
+
+The deployment consists of two Docker images, one containing the main report
+application and the other running the PostgreSQL database. The general workflow is
+as follows:
+
+## Automated Tests
+
+1) The QA Report Application is executed and opens HTTP interfaces to receive
+   HTTP request calls and serve HTML pages on specific HTTP routes.
+
+2) GitLab CI/CD builds the images and they are pushed to the image server.
+
+3) GitLab CI/CD triggers the LAVA jobs to execute the automated tests in the
+   published images.
+
+4) GitLab CI/CD, when triggering the LAVA jobs, also registers these jobs with
+   the QA Report Application using its specific HTTP interface.
+
+5) The QA Report application adds these jobs to its internal queue and waits
+   for the LAVA test job results to be submitted via HTTP.
+
+6) Once LAVA finishes executing the test jobs, it triggers the configured HTTP
+   callback, sending all the test data to the QA Report application.
+
+7) Test data for the respective job is saved into the database.
+
+## Manual Tests
+
+1) The user authenticates with GitLab credentials using the `Login` button on the
+   main page.
+
+2) Once logged in, the user can click on the `Submit Manual Test Report` button
+   that is now available from the main page.
+
+3) The tester needs to enter the following information on the `Select Image Report`
+   page:
+
+      - Release: Image release (19.03, v2020dev0 ..)
+      - Version: The daily build identifier (20190705.0, 20190510.1 ..)
+      - Select Deployment Type (APT, OSTree)
+      - Select Image Type
+
+4) A new page is shown, listing only the valid test cases for the selected image
+   type.
+
+5) The user selects `PASS`, `FAIL` or `NOT TESTED` for each test case.
+
+6) An optional `Notes` text area is available beside each test case for the
+   user to add any extra information (e.g. task links, a brief comment about any
+   issue with the test, etc.).
+
+7) Once results have been selected for all test cases, the user should submit this
+   data using the `Submit All Results` button at the top of the page.
+
+8) The application will now save the results into the database and redirect the
+   user to a page with the following two options:
+
+      - Submit Manual Test Report: To submit test results for a new image type.
+      - Go Back to Main Page: To check the recently submitted tests results.
+
+9) If the user wants to update a report, they just repeat the above steps,
+   selecting the specific image type for the existing report and then updating the
+   results for the necessary test cases.
+
+# Reports
+
+Reports for the stored test results (both manual and automated) are generated
+on the fly by the QA report application, for example as done for the
+[v2020.3 release](https://lavaphabbridge.apertis.org/report/v2020/20201126.0).
+
diff --git a/content/qa/test-data-storage.md b/content/qa/test-data-storage.md
new file mode 100644
index 0000000000000000000000000000000000000000..905f951770ffdc604d5c25a29f539fc75ff593b9
--- /dev/null
+++ b/content/qa/test-data-storage.md
@@ -0,0 +1,55 @@
++++
+title = "Test Definitions"
+weight = 100
+aliases = [
+	"/old-designs/latest/test-data-storage.html",
+	"/old-designs/v2019/test-data-storage.html",
+	"/old-designs/v2020/test-data-storage.html",
+	"/old-designs/v2021dev3/test-data-storage.html",
+]
+outputs = [ "html", "pdf-in",]
+date = "2019-09-27"
+lastmod = "2021-01-22"
++++
+
+The test cases, both manual and automated, are written in the LAVA test
+definition file format, which stores the instructions to run the tests in
+YAML files.
+
+Git is used as the data storage backend for all the test cases. The current
+Apertis tests can be found in the
+[Apertis Test Cases](https://gitlab.apertis.org/tests/apertis-test-cases)
+repository. The test cases are versioned using Git branches to enable
+functionality changes without breaking tests for older releases.
+
+The format has been extended to add all the required test case data to the
+test definition files. A description of these changes can be found in the
+[README.md](https://gitlab.apertis.org/tests/apertis-test-cases/-/blob/apertis/v2021/README.md).
+This approach avoids the issue of test case instructions differing from the
+executed steps in automated tests, since the test case and the definition file
+are the same document. The `atc` utility is provided with the test cases to
+render them to HTML definitions. Test cases labeled as automated are run in
+LAVA; those labeled as manual should be run by hand, following the steps
+generated in the HTML definition.
+
+# Mandatory Test Definition Data Fields
+
+A test case file should at least contain the following data fields for both the
+automated and manual tests:
+
+```
+format: This is used to identify the format version.
+name: Name of test case.
+type: This could be used to define a series of test case types (functional, sanity,
+      system, unit-test).
+exec-type: Manual or automated test case.
+image-type: This is the image type (target, minimal, ostree, development, SDK)
+            and the architectures that it's supported on.
+description: Brief description of the test case.
+priority: low, medium, high, critical.
+run: Steps to execute the test.
+expected: The expected result after running the test.
+```
+
+The test case file format is very extensible and new fields can be added if
+necessary.
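+
+As an illustration, a simple check for these mandatory fields could be sketched
+as follows, using PyYAML; the split between metadata fields and top-level fields
+mirrors the existing test case files, and the script itself is just an example:
+
+```
+# Minimal sketch of checking that a test case definition carries the mandatory
+# fields listed above; uses PyYAML (yaml.safe_load).
+import sys
+import yaml
+
+MANDATORY_FIELDS = [
+    "format", "name", "type", "exec-type", "image-type",
+    "description", "priority", "run", "expected",
+]
+
+def missing_fields(path):
+    with open(path) as f:
+        definition = yaml.safe_load(f)
+    # Some fields (run, expected) live at the top level of the definition,
+    # the rest under the metadata section.
+    present = set(definition) | set(definition.get("metadata", {}))
+    return [field for field in MANDATORY_FIELDS if field not in present]
+
+if __name__ == "__main__":
+    missing = missing_fields(sys.argv[1])
+    if missing:
+        print("Missing mandatory fields:", ", ".join(missing))
+```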