diff --git a/content/designs/x86-build-infrastructure.md b/content/architecture/x86-build-infrastructure.md
similarity index 84%
rename from content/designs/x86-build-infrastructure.md
rename to content/architecture/x86-build-infrastructure.md
index 5609cb512a5ec9dbb526b3c29ec41d71e728cb47..06940627476b9049ec9f22a97ef8442c96450cc2 100644
--- a/content/designs/x86-build-infrastructure.md
+++ b/content/architecture/x86-build-infrastructure.md
@@ -25,85 +25,85 @@ The only exceptions are:
 * LAVA workers, which match the [reference hardware
   platforms]( {{< ref "/reference_hardware/_index.md" >}} )
 
-While LAVA workers are by nature meant to be hosted separatedly from the rest
+While LAVA workers are by nature meant to be hosted separately from the rest
 of the infrastructure and are handled via [geographically distributed LAVA
 dispatchers](https://gitlab.apertis.org/infrastructure/apertis-lava-docker/blob/master/apertis-lava-dispatcher/README.md),
 the constraint on the OBS workers is problematic for adopters that want to host
-a downstream Apertis infrastructure.
+downstream Apertis infrastructure.
 
-## Why hosting the whole build infrastructure on Intel x86-64
+## Why host the whole build infrastructure on Intel x86-64
 
 Being able to host the build infrastructure solely on Intel x86 64 bit
 (usually referred to as `x86-64` or `amd64`) machines enables downstream
-Apertis to be hosted on standard public or private cloud solution as these
+Apertis to be hosted on standard public or private cloud solutions, as these
 usually only offer x86-64 machines.
 
-Deploying the OBS workers on cloud providers would also allow for implementing
-elastic workload handling.
+Deploying the OBS workers on cloud providers would also allow for the
+implementation of elastic workload handling.
 
-Elastic scaling and the desire to ensure that the cloud approach is tested
-and viable for dowstream mean that the deploying the approach described in
+Elastic scaling, and the desire to ensure that the cloud approach is tested
+and viable for downstreams, means that the deployment approach described in
 this document is of interest for the main Apertis infrastructure, not just
 for downstreams.
 
-Some cloud provider like Amazon Web Services have recently started offering ARM
-64 bit servers as well so it should be always possible to adopt an hybrid
-approach mixing foreign builds on x86-64 and native ones on ARM machines.
+Some cloud providers like Amazon Web Services have recently started offering ARM
+64 bit servers as well. As a result, it should be possible to adopt a hybrid
+approach, mixing foreign builds on x86-64 and native ones on ARM machines.
 
-In particular Apertis is currently committed to maintain native workers for all
-the supported architectures, aiming for a hybrid set up where foreign packages
-get built on a mix of native and non-native Intel x86 64 bit machines.
+In particular, Apertis is currently committed to maintaining native workers for
+all the supported architectures, but is aiming for a hybrid setup where foreign
+packages get built on a mix of native and non-native Intel x86 64 bit machines.
 
 Downstreams will be able to opt for fully native, hybrid or Intel-only OBS
 worker setups.
 
 ## Why OBS workers need a native environment
 
-Development enviroment for embedded devices often rely on cross-compilation to
+Development environments for embedded devices often rely on cross-compilation to
 build software targeting a foreign architecture from x86-64 build hosts.
 
 However, pure cross-compilation prevents running the unit tests that are
 shipped with the projects being built, since the binaries produced do not match
-the current machine.
+the architecture of the build machine.
 
 In addition, supporting cross-compilation across all the projects that compose a
-Linux distribution involves a considerable effort since not all build systems
+Linux distribution involves a considerable effort, since not all build systems
 support cross-compilation, and where it is supported some features may still be
 incompatible with it.
 
-From the point of view of upstream projects, cross-compilation is in general
-a less tested path, which often lead cross-building distributors to ship a
+From the point of view of upstream projects, cross-compilation is generally
+a less tested path, which often leads cross-building distributors to ship a
 considerable amount of patches adding fixes and workarounds.
 
-For this reason all major package-based distributions like Fedora, Ubuntu, SUSE
+For this reason, all the major package-based distributions like Fedora, Ubuntu, SUSE
 and in particular Debian, the upstream distribution from which Apertis sources
 most of its packages, choose to only officially support native compilation for
 their packages.
 
 The Debian infrastructure thus hosts machines with different
-CPU architectures, since build workers must run hardware that matches the
-architecture of the binary package being built.
+CPU architectures, since the build workers must run on hardware that matches the
+architecture of the binary packages being built.
 
-Apertis inherits this requirements, and currently has build workers with
+Apertis inherits this requirement, and currently has build workers with
 Intel 64 bit, ARM 32 and 64 bit CPUs.
 
 ## CPU emulation
 
 Using the right CPU is fortunately not the only way to execute programs for
 non-Intel architectures: the [QEMU project](https://www.qemu.org/) provides
-the ability to emulate a multitude of platforms on a x86-64 machine.
+the ability to emulate a multitude of platforms on an x86-64 machine.
 
 QEMU offers two main modes:
 * system mode: emulates a full machine, including the CPU and a set of attached
   hardware devices;
 * user mode: translates CPU instructions on a running Linux system, running
-  foreign binaries as they where native.
+  foreign binaries as if they were native.
 
 The system mode is useful when running entire operating systems, but it has a
 severe performance impact.
 
 The user mode has a much lighter impact on performance as it only deals with
-translating the CPU instructions in a Linux executable, for instance running
-an ARMv7 ELF binary on top of the x86-64 kernel running on a x86-64 host.
+translating the CPU instructions in a Linux executable. For instance, running
+an ARMv7 ELF binary on top of the x86-64 kernel running on an x86-64 host.
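+
+As a minimal sketch of user mode emulation in practice, assuming the
+`qemu-user-static` package is installed on the x86-64 host and `hello-armv7` is
+a statically linked ARMv7 ELF executable (both names are illustrative):
+
+```sh
+# Run a 32 bit ARM binary directly on an x86-64 host: only the CPU
+# instructions are translated, system calls are served by the native kernel.
+qemu-arm-static ./hello-armv7
+```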
 
 ## Using emulation to target foreign architectures from x86-64
@@ -111,7 +111,7 @@ an ARMv7 ELF binary on top of the x86-64 kernel running on a x86-64 host.
 The build process on the OBS workers already involves setting up a chroot where
 the actual compilation happens. By combining it with the static variant of the
 QEMU user mode emulator it can be used to build software on a x86-64 host
-targeting a foreign architectures as it were a native build.
+targeting a foreign architecture as if it were a native build.
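+
+The following sketch illustrates the combination, assuming a Debian-style
+`armhf` chroot under `/srv/chroot/armhf` and the `qemu-user-static` package on
+the host; the exact paths and tooling used by OBS may differ:
+
+```sh
+# Copy the static emulator into the foreign chroot so that, together with the
+# binfmt_misc registration described below, ARM binaries inside the chroot run
+# transparently on the x86-64 host.
+cp /usr/bin/qemu-arm-static /srv/chroot/armhf/usr/bin/
+
+# Commands inside the chroot now behave as if they ran on ARM hardware:
+# uname reports the emulated machine architecture.
+chroot /srv/chroot/armhf uname -m   # typically prints armv7l
+```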
 
 The [binfmt_misc](https://en.wikipedia.org/wiki/Binfmt_misc) subsystem in the
 kernel can be used to make the emulation transparent so that emulation
@@ -125,12 +125,12 @@ in the OBS documentation.
 The following diagram shows how the OBS backend can distribute build jobs to
 its workers.
 
-Each CPU instruction set is marked by the codename used by OBS:
+Each CPU instruction set is marked by the code name used by OBS:
 * `x86_64`: the Intel x86 64 bit ISA, also known as `amd64` in Debian
 * `armv7hl`: the ARMv7 32 bit Hard Float ISA, also known as `armhf` in Debian
 * `aarch64`: the ARMv8 64 bit ISA, also known as `arm64` in Debian
 
-![](/images/obs-emulated-workers.svg)
+![OBS backend connected to several workers](/images/obs-emulated-workers.svg)
 
 Particularly relevant here are the `armv7hl` jobs building ARMv7 32 bit packages
 that can be dispatched to:
@@ -141,15 +141,17 @@ that can be dispatched to:
 1. the `x86_64` worker machine, which uses the `qemu-arm-static` binary
    translator to run binaries in `armv7hl` chroots via emulation.
 
-It's worth nothing that some ARM 64 bit server systems do not support the ARMv7
+{{% notice note %}}
+Some ARM 64 bit server systems do not support the ARMv7
 32 bit ISA natively, and would thus require the same emulation-based approach
 used on the x86-64 machines to execute the ARM 32 bit jobs.
+{{% /notice %}}
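+
+One way to check whether a given ARM 64 bit machine falls into this category is
+to look at the CPU op-modes reported by `lscpu` (a quick sketch; the exact
+wording depends on the `util-linux` version):
+
+```sh
+# "32-bit, 64-bit" means ARMv7 binaries can run natively;
+# "64-bit" alone means they cannot and emulation would be needed.
+lscpu | grep 'CPU op-mode'
+```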
 
 ## Mitigating the impact on performance
 
 The most obvious way to handle the performance penalty is to use faster CPUs.
 Cloud providers offer a wide range of options for x86-64 machines, and
-establishing the appropriate cost/perfomance balance is the first step.
+establishing the appropriate cost/performance balance is the first step.
 It is possible that the performance of an emulated build on a fast x86-64 CPU
-may be comparable or even faster than a native build on a older ARMv7 machine.
+may be comparable to, or even faster than, that of a native build on an older
+ARMv7 machine.
 
@@ -243,7 +245,7 @@ Apertis build infrastructure on x84-64 machines:
-There's a risk that no mitigation end up being effective on some packages so
-they keep failing in the emulated approach. In the short term those packages
-will be required to be built on the native workers in a hybrid set up, but they
-would be more problematic in a hypotetic downstream setup with no native
+There's a risk that no mitigation ends up being effective on some packages, so
+they keep failing in the emulated approach. In the short term, those packages
+will be required to be built on the native workers in a hybrid setup, but they
+would be more problematic in a hypothetical downstream setup with no native
 workers as they can't be built there. In that case, pre-built binaries coming
 from an upstream with native workers will have to be injected in the archive.
 
@@ -286,7 +288,7 @@ Azure Networking was tweaked to allow full intercommunication in-between the VMs
 
 The OBS Build setup was populated with the Apertis v2021dev3 release for the `development, target and sdk` components.
 The combined number of packages for the 3 repository components is: `4121`
-* developmet => 3237 packages
+* development => 3237 packages
 * target => 465 packages
 * sdk => 419 packages
 
@@ -296,6 +298,6 @@ The full archive-wide rebuild of Apertis v2021dev3 was completed in around 1 wee
-There weren't any build failure specific to the setup above, to the `emulated build` setup in particular.
-Some packages failed to build while running their respective build time tests.
+There weren't any build failures specific to the setup above, in particular to the `emulated build` setup.
+Some packages failed to build while running their respective build-time tests.
 
-To summazire, *Emulated Builds* worked fine with 2 caveats mentioned below
+To summarize, *Emulated Builds* worked fine with the 2 caveats mentioned below:
 * Performance: Given the emulation penalty, builds were 4-5 times slower than native.
-* Failing packages: Given the performance penalty due to emulation, some of the tests failed due to timeouts
+* Failing packages: Given the performance penalty of emulation, some of the tests failed due to timeouts.