+++
date = "2021-01-06"
weight = 100
title = "Preparing hawkBit for Production Use"
outputs = ["html", "pdf-in"]
+++
The Apertis project has been experimenting with the use of
[Eclipse hawkBit](https://www.eclipse.org/hawkbit/) as a mechanism for the
deployment of system updates and applications to target devices in the field,
with the current emphasis placed on system updates.
Apertis has recently deployed a [hawkBit instance](https://hawkbit.apertis.org)
into which the
[image build pipelines](https://gitlab.apertis.org/infrastructure/apertis-image-recipes/-/pipelines)
are uploading builds. The
[apertis-hawkBit-agent](https://gitlab.apertis.org/pkg/apertis-hawkbit-agent)
has been added to OSTree based images and a guide produced detailing how this
can be used to
[deploy updates to an Apertis target]({{< ref "deployment-management.md" >}}).
The current instance is proving valuable for gaining insight into how hawkBit
can be utilized as part of the broader Apertis project; however, more work is
required to reach the point where this infrastructure (or an equivalent
deployment elsewhere) would be ready for production use. In this document we
describe the steps we feel need to be taken to provide a reference deployment
more readily suitable for production.
# Server configuration
The current hawkBit deployment is hosted on Collabora's infrastructure. The
example
[Docker Compose configuration file](https://github.com/eclipse/hawkbit/blob/master/hawkbit-runtime/docker/docker-compose-stack.yml)
has been modified to improve stability and security, and to add a reverse
proxy providing SSL encryption. This has been wrapped with
[Chef](https://www.chef.io/) configuration to improve maintainability. Whilst
this configuration has limitations (that will be discussed later), it provides
a better starting point for the deployment of a production system. These
configuration files are currently stored in Collabora's private infrastructure
repository and are thus not visible to third parties.
## Recommendations
The improvements made to this configuration file should be published in a
publicly visible Apertis repository and/or submitted back to the hawkBit
project for inclusion in the reference Docker configuration.
# Considering the production workflow
The currently enabled process for the enrollment and configuration of a target
device into the hawkBit deployment infrastructure requires the following steps:
- Install Apertis OSTree based image on the target device.
- Define or determine the device's `controllerid`. This ID needs to be unique
  on the hawkBit instance as it is used to identify the target.
- Enroll the target on the hawkBit instance, either via the
[UI](https://www.eclipse.org/hawkbit/ui/#deployment-management) or
[API](https://www.eclipse.org/hawkbit/rest-api/targets-api-guide/#_post_rest_v1_targets).
- If adding via the UI, hawkBit creates a security token; if adding via the
  API, the security token can be generated outside of hawkBit (see the sketch
  after this list).
- Modify the configuration file for `apertis-hawkbit-agent`
  (`/etc/apertis-hawkbit-agent.ini`) to contain the correct URL for the
  hawkBit instance, the target's `controllerid` and the generated security
  token. Without these options set, the target will be unable to find and
  access the deployment server to discover updates.
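As an illustration of API-based enrollment, the following minimal sketch uses
the hawkBit Management API to register a target with a locally generated
security token. The instance URL, credentials and `controllerid` shown are
placeholders, and the accepted payload may vary between hawkBit versions;
consult the
[targets API guide](https://www.eclipse.org/hawkbit/rest-api/targets-api-guide/)
for the authoritative schema.

```python
import secrets
import requests

HAWKBIT_URL = "https://hawkbit.example.com"  # placeholder instance URL
AUTH = ("mgmt-user", "mgmt-password")        # placeholder management credentials

# Generate a security token outside of hawkBit, as described above.
token = secrets.token_hex(16)

# POST /rest/v1/targets accepts a list of targets to enroll.
response = requests.post(
    f"{HAWKBIT_URL}/rest/v1/targets",
    auth=AUTH,
    json=[{
        "controllerId": "device-0001",   # must be unique per instance
        "name": "Example Apertis device",
        "securityToken": token,
    }],
)
response.raise_for_status()

# The token then needs to be placed in /etc/apertis-hawkbit-agent.ini
# on the matching device.
print(f"Enrolled device-0001 with token {token}")
```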
This workflow presents a number of points that could prove contentious in a
production environment:
- A need for access to the hawkBit deployment server (that may be hosted on
external cloud infrastructure) from the production environment to register
the `controllerid` and security token.
- The requirement to have a mechanism to add configuration to the device post
software load.
The security token based mechanism is one of a
[number of options](https://www.eclipse.org/hawkbit/concepts/authentication/)
available for authentication via the DDI API. The security token must be shared
between the target and the hawkBit server. This approach has a number of
downsides:
- The token needs to be added to the hawkBit server and tied to the target
  device's `controllerid`. This may necessitate a link between the production
  environment and an external network to access the hawkBit server.
- The need for the shared token to be registered with the server for
  authentication would make it impossible to utilize the "plug n' play"
  enrollment of target devices supported by hawkBit.
hawkBit allows for a certificate based authentication mechanism (utilizing a
reverse proxy before the hawkBit server to perform authentication) which would
remove the need to share a security token with the server. Utilizing signed
keys would allow authentication to be achieved independently from enrollment,
thus allowing enrollment to be carried out at a later date, and would remove
the need to store per-device data in the hawkBit server from the production
environment. hawkBit allows for
"[plug'n play](https://gitter.im/eclipse/hawkbit/archives/2016/07/27)"
enrollment, that is, enrollment of a device the first time it is seen by
hawkBit. With certificate based authentication, a device could therefore be
enrolled once the end user has switched it on and successfully connected it to
a network for the first time.
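From the device side, certificate based authentication amounts to presenting a
client certificate when polling the server, with the reverse proxy in front of
hawkBit validating it. The minimal sketch below shows this with Python's
`requests`; the URL, tenant name and certificate paths are illustrative
assumptions, and the polling endpoint should be checked against the hawkBit
DDI API documentation.

```python
import requests

HAWKBIT_URL = "https://hawkbit.example.com"  # placeholder instance URL
TENANT = "DEFAULT"
CONTROLLER_ID = "device-0001"

# Poll the DDI base resource, authenticating with a device-specific client
# certificate instead of a shared security token. The reverse proxy in front
# of hawkBit terminates TLS, validates the certificate and forwards the
# request with the authenticated identity.
response = requests.get(
    f"{HAWKBIT_URL}/{TENANT}/controller/v1/{CONTROLLER_ID}",
    cert=("/var/lib/apertis-hawkbit-agent/client.crt",   # hypothetical paths
          "/var/lib/apertis-hawkbit-agent/client.key"),
    verify="/etc/ssl/certs/ca-certificates.crt",
)
response.raise_for_status()
print(response.json())
```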
For many devices it would not be practical or desirable to have remote access
into the production firmware to add device specific configuration, such as a
security token or device specific signed key. `apertis-hawkbit-agent` currently
expects such configuration to be saved in `/etc/apertis-hawkbit-agent.ini`. One
option this presents is for the image programmed onto the target to provide
two OSTree commits, one with the software expected on the device when shipped
and the other for factory use, with boot defaulting to the latter.
OSTree will attempt to merge any local changes made to the configuration when
updating the image. The factory image could be utilized to perform any testing
and factory configuration tasks required before switching the device to the
shipping software load. Customizations made to the factory commit's
configuration should then be merged as part of the update to the shipping load.
Such an approach could provide some remote access to the target as part of the
factory commit, but not the shipping commit, thus avoiding remote access being
present in the field.
As previously mentioned, a unique `controllerid` is needed by hawkBit to
identify the device and needs to be stored in the configuration file. An
alternative approach may be to generate this ID from other unique data provided
by the device, such as a MAC address or a unique ID provided by the SoC used in
the device, as sketched below.
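A minimal sketch of such derivation, assuming a device where a fixed network
interface provides a stable MAC address (the interface name and hashing scheme
here are illustrative choices, not part of the current agent):

```python
import hashlib
from pathlib import Path

def derive_controller_id(interface: str = "eth0") -> str:
    """Derive a stable controllerid from the MAC address of a fixed
    network interface, so no per-device configuration is required."""
    mac = Path(f"/sys/class/net/{interface}/address").read_text().strip()
    # Hash the MAC rather than using it directly, so the identifier does
    # not leak the address itself and has a uniform format.
    digest = hashlib.sha256(mac.encode("ascii")).hexdigest()
    return f"apertis-{digest[:16]}"

print(derive_controller_id())
```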
## Recommendations
- The hawkBit deployment should be updated to utilize a signed key based
security strategy.
- `apertis-hawkbit-agent` should be improved to enable authentication via
signed keys.
- `apertis-hawkbit-agent` should be improved to auto-enroll when the target
device is not already found.
- `apertis-hawkbit-agent` currently stores its configuration in `/etc`; it
  should be extended to also look under `/var`, and the default configuration
  should be moved there.
- A mechanism should be added to `apertis-hawkbit-agent` to enable the
`controllerid` to be generated from supported hardware sources.
# Management UI access
We currently have a number of static users defined, with passwords available to
trusted maintainers. Such a scheme is not going to scale in a production
environment, nor provide an adequate level of security for a production
deployment. hawkBit provides the ability to configure authentication using a
provider implementing the OpenID Connect standard, which would allow for much
greater flexibility in authenticating users.
## Recommendations
The Apertis hawkBit instance should be configured to utilize the OpenID
Connect mechanism, ideally with the same SSO used to authenticate users for
other Apertis resources.
# Enabling device filtering
hawkBit provides functionality to perform update rollouts in a controlled way,
allowing a subset of the deployed base to get an update and only moving on to
more devices once a target percentage of devices has received the update, with
a configurable error rate. When rolling out updates in an environment where
more than one hardware platform or revision of hardware is present, it will be
necessary to ensure the correct updates are targeted towards the correct
devices. For example, two revisions of a gadget could utilize different SoCs
with different architectures, each requiring a different build of the update;
likewise, different versions of a device may need to be updated with different
streams of updates. In order to cater for such scenarios, it is important for
hawkBit to be able to accurately distinguish between differing hardware.
Support to achieve this is provided via hawkBit's ability to store attributes.
These attributes can be set by the target device via the DDI interface once
enrolled, and used by hawkBit to filter target devices into groups, as
sketched below. At the moment `apertis-hawkbit-agent` is not setting any
attributes.
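As an illustration, a target can report such attributes through the DDI
`configData` resource. The sketch below assumes security token authentication
and illustrative attribute names; the exact payload shape should be checked
against the hawkBit DDI API documentation for the deployed version.

```python
import requests

HAWKBIT_URL = "https://hawkbit.example.com"  # placeholder instance URL
TENANT = "DEFAULT"
CONTROLLER_ID = "device-0001"
SECURITY_TOKEN = "..."  # token from /etc/apertis-hawkbit-agent.ini

# Report device attributes so that hawkBit can filter targets into groups.
response = requests.put(
    f"{HAWKBIT_URL}/{TENANT}/controller/v1/{CONTROLLER_ID}/configData",
    headers={"Authorization": f"TargetToken {SECURITY_TOKEN}"},
    json={
        "mode": "merge",  # merge with, rather than replace, existing attributes
        "data": {
            "architecture": "arm64",        # illustrative attribute names
            "device_type": "example-board",
            "device_revision": "2",
        },
    },
)
response.raise_for_status()
```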
## Recommendations
- Update `apertis-hawkbit-agent` to set attributes based on information known
about the target device. This should include (where possible):
- Device Architecture
- Device Type
- Device Revision
# Provisioning for multiple product teams or partners
In order to utilize hawkBit for multiple products or partners it would be
either beneficial or necessary for each to have some isolation from each other.
This could be achieved via hawkBit's multi-tenant functionality or via the
deployment of multiple instances of hawkBit. It is likely that both of these
options would be deployed depending on the demands and requirements of the
product team or partner. It is expected that some partners may like to utilize
a deployment server provided by Apertis or one of its partners. In this
instance multi-tenancy would make sense. Others may wish to have their own
instance, possibly hosted by themselves, in which case providing a simple way
to deploy a hawkBit instance would be beneficial.
Deploying instances of hawkBit utilizing the Docker configuration would
be trivial. The multi-tenant configuration requires the authentication
mechanism for accessing the management API, web interface and potentially DDI
API to be multi-tenant aware.
## Recommendations
Apertis does not have a direct need for a multi-tenant deployment, nor for
multiple deployments; however, what is involved in setting up a multi-tenanted
installation should be investigated and documented.
# Life management of artifacts
The GitLab CI pipeline generally performs at least two builds a day, pushing
multiple artifacts for each architecture and version of Apertis. In order to
minimize the space used to store artifacts and to avoid retaining many defunct
ones, artifacts are currently deleted after 7 days.
Whilst this approach enables the Apertis project to frequently exercise the
artifact upload path and has been adequate for Apertis during its initial
phase, a more comprehensive strategy will be required for production use. For
shipped hardware, it is unlikely that any units will be updated as frequently.
In addition, depending on the form and function of the device, it may only poll
the infrastructure to check for updates sporadically, either due to the device
not needing to be on or not having access to a network connection capable of
reaching the deployment server. Artifacts will need to be kept more selectively
to ensure that the most up-to-date version remains available for each device
type and hardware revision. Older artifacts that are no longer the recommended
version should be safe to delete from hawkBit, as no targets should be
attempting to update to them.
## Recommendations
Apertis is developing a base platform to be used by production teams and thus
the images it produces for its reference hardware need a subtly
[different scheme]({{< ref "long-term-reproducibility.md" >}}) from that which
would be anticipated to be needed by a production team. It is therefore
recommended that the process removing old artifacts should adhere to the
following rules (a sketch of such a cleanup pass follows the list):
- Retain all point releases for current Apertis releases
- Retain 7 days of daily development builds
- Delete all artifacts for versions of Apertis no longer supported
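A minimal sketch of such a cleanup pass using the hawkBit Management API is
shown below. The URL, credentials and the naming convention used to recognize
daily development builds are assumptions for illustration; a real
implementation would also need to encode the point-release and end-of-support
rules above, and to page through the full list of distribution sets.

```python
import time
import requests

HAWKBIT_URL = "https://hawkbit.example.com"  # placeholder instance URL
AUTH = ("mgmt-user", "mgmt-password")        # placeholder credentials
MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000         # retain 7 days of daily builds

# List distribution sets via the Management API (first page only, for brevity).
sets = requests.get(
    f"{HAWKBIT_URL}/rest/v1/distributionsets",
    auth=AUTH,
    params={"limit": 100},
).json().get("content", [])

now_ms = int(time.time() * 1000)
for ds in sets:
    # Assume daily development builds are recognizable by name; point-release
    # and supported-release checks would be added here.
    is_daily = "daily" in ds["name"]
    too_old = now_ms - ds["createdAt"] > MAX_AGE_MS
    if is_daily and too_old:
        requests.delete(
            f"{HAWKBIT_URL}/rest/v1/distributionsets/{ds['id']}",
            auth=AUTH,
        ).raise_for_status()
```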
# Platform scalability
hawkBit provides support for clustering to scale beyond the bandwidth that a
single deployment server could handle. The Apertis hawkBit instance is not
expected to need to handle a high level of use, though this may be important to
product teams who might quite quickly have many devices connected to hawkBit in
the field.
## Recommendations
At this point in time we do not feel that investigating this facet of hawkBit
has immediate value.