Unverified Commit 987f5007 authored by Ritesh Raj Sarraf

Evaluation report for Emulated Builds


Signed-off-by: Ritesh Raj Sarraf <ritesh.sarraf@collabora.com>
parent f0a79202
Merge request !82: Evaluation report for Emulated Builds
Pipeline #160425 passed
@@ -252,3 +252,49 @@ by modifying the failing packages to make them buildable with a real cross-compiler
This solution requires a much higher maintenance cost as packages do not
generally support being built in that way, but it may be an option to be able to
do full builds on x86-64 in the few cases where emulation fails.
## Evaluation Report
A full archive-wide build was run on the Azure Cloud setup, using `x86-64` virtual machines.
A cloud-optimized setup was built, comprising the following major components:
* Azure-provided Linux Virtual Machines (Debian Buster)
* Docker (as provided by the Linux distribution vendor)
* Linux 4.19 and above
* binfmt-support
* QEMU Emulator (registered through `binfmt_misc`; see the verification sketch below)
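
Foreign-architecture binaries run transparently because `binfmt_misc` hands them to the registered QEMU user-mode emulator. The following minimal sketch (assuming `qemu-user-static` style registration names `qemu-arm` and `qemu-aarch64`, which are not specified in this report) checks whether a worker host is ready for emulated ARM builds:

```python
#!/usr/bin/env python3
"""Check that binfmt_misc can run ARM binaries through QEMU.

Sketch only: the registration names below are an assumption based on
qemu-user-static defaults, not something specified in this report.
"""
from pathlib import Path

BINFMT_DIR = Path("/proc/sys/fs/binfmt_misc")
# 32-bit and 64-bit ARM user-mode emulators (assumed names)
EMULATORS = ("qemu-arm", "qemu-aarch64")

def emulator_enabled(name: str) -> bool:
    entry = BINFMT_DIR / name
    if not entry.is_file():
        return False
    # The first line of a binfmt_misc entry reads "enabled" or "disabled".
    return entry.read_text().splitlines()[0] == "enabled"

if __name__ == "__main__":
    for name in EMULATORS:
        state = "enabled" if emulator_enabled(name) else "missing/disabled"
        print(f"{name}: {state}")
```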
Given the task at hand, emulating the `ARM` architectures on `x86-64`, we chose
the following cloud hardware classes for our OBS setup:
* OBS-Server VM: Standard DS14 v2 (16 vcpus, 112 GiB memory)
* Worker VM: Standard F32s_v2 (32 vcpus, 64 GiB memory)
The provisioned `OBS-Server` VM hosted all of the OBS services, dockerized to run easily and
efficiently in a cloud environment. For the workers, we provisioned 3 `Worker` VMs, each running
5 worker instances per architecture; with 3 architectures, this resulted in
15 worker instances per virtual machine.
In total, we ran 45 worker instances in our build farm: 30 instances doing emulated builds
(15 for the 32-bit ARM architecture and 15 for the 64-bit one), with the remaining 15 instances
allocated to native `x86-64` builds.
All services used Azure-provided *Premium SSD* disk storage.
Azure Networking was tweaked to allow full intercommunication between the VMs.
The OBS build setup was populated with the Apertis v2021dev3 release for the `development`, `target` and `sdk` components.
The combined number of packages across the 3 repository components is 4121:
* development => 3237 packages
* target => 465 packages
* sdk => 419 packages
Of the mentioned repositories, the `development` and `target` repositories are built for 3 architectures (`x86-64`, `armv7hl` and `aarch64`), while the `sdk` repository is built only for `x86-64`.
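
As a worked cross-check of these figures, the totals below are derived purely from the counts above (they are not taken from OBS output):

```python
# Cross-check of the package counts above (derived figures only).
packages = {"development": 3237, "target": 465, "sdk": 419}
# development and target build for 3 architectures, sdk only for x86-64
architectures = {"development": 3, "target": 3, "sdk": 1}

total_packages = sum(packages.values())
total_build_jobs = sum(packages[c] * architectures[c] for c in packages)

print(total_packages)    # 4121
print(total_build_jobs)  # (3237 + 465) * 3 + 419 = 11525
```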
With the above-mentioned setup, the full archive-wide rebuild of Apertis v2021dev3 completed in around 1 week.
There weren't any build failures specific to the setup described above, or to the emulated build setup in particular.
Some packages failed while running their build-time tests.
To summarize, *Emulated Builds* worked fine, with the 2 caveats mentioned below:
* Performance: given the emulation penalty, builds were 4-5 times slower than native builds.
* Failing packages: given the performance penalty due to emulation, some build-time tests failed due to timeouts.