Commit 79bb064b authored 1 year ago by Dylan Aïssi
Improve conclusion for test-case 4

Signed-off-by: Dylan Aïssi <dylan.aissi@collabora.com>
parent a70b7c67
Pipeline #630152 passed 1 year ago (Stage: build)
Showing 1 changed file: README.md with 18 additions and 13 deletions
...
@@ -429,16 +429,19 @@ and *1.503 secs* for 21 streams (*+ 4ms*). From here, the time will increase to
*1.511 secs* for 31 streams (*+ 12ms*) and it will reach *1.524 secs* for 41
streams and 51 streams (*+ 25ms*).
TODO add some conclusions here; it's clear that the variability even for a
single instance is in the order of 30ish ms. Looks like the max of 1
instance and the overall max is also within 20 milliseconds; similar for the
minimum, those are within 20ms as well. So TL;DR really it's likely all noise
and the actual longer times could simply be due to higher system load as opposed
to the pipewire API. The main conclusion is that while there is an overall upward
trend in the measurements, it's quite small. For me a 20ms increase in
end-to-end latency on average (and 50ms or so between absolute min and max) is
enough to conclude there is no big impact wrt. the number of clients. For
measuring more detailed impacts another approach would be needed.
To summarize, we can see a variability in the order of ~ 30 ms within each set
of tests; for example, with 1 instance the time ranges from 1.49 to 1.52 secs,
and with 51 instances from 1.51 to 1.54 secs. Comparing the maximum of each set,
1.52 secs for 1 instance versus 1.54 secs for 51 instances, we see a variability
in the order of ~ 20 ms. The same ~ 20 ms variability is seen when comparing the
minimum, 1.49 secs for 1 instance versus 1.51 secs for 51 instances. This
variability is likely only noise, and the longer times could simply be due to
higher system load rather than the pipewire API. While there is an overall
upward trend in the measurements, it is quite small. With only a 20 ms increase
in end-to-end latency on average, and roughly 50 ms between absolute min and
max, we can conclude there is no big impact with regard to the number of
clients. For measuring more detailed impacts another approach would be needed.
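As a quick illustration of the comparison above, a per-set summary (min, max,
mean, spread) can be recomputed from the raw timings. The sketch below is not
part of the test scripts: the file name `raw-timings.txt` and its
`<instances> <seconds>`-per-line layout are assumptions made for the example,
not the actual format of the files in `results/test-case-4/`.

```python
#!/usr/bin/env python3
# Minimal sketch (hypothetical input format): recompute per-set min/max/mean
# from raw timings to sanity-check the ~20-30 ms variability discussed above.
from collections import defaultdict
from statistics import mean

timings = defaultdict(list)  # number of instances -> measured times in seconds
with open("results/test-case-4/raw-timings.txt") as f:  # hypothetical file
    for line in f:
        n_instances, secs = line.split()
        timings[int(n_instances)].append(float(secs))

for n in sorted(timings):
    t = timings[n]
    spread_ms = (max(t) - min(t)) * 1000
    print(f"{n:3d} instances: min={min(t):.3f}s max={max(t):.3f}s "
          f"mean={mean(t):.3f}s spread={spread_ms:.0f}ms")
```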
All raw results are available in `results/test-case-4/`, including a list of
means `test-case-4-list-means.txt`, a list of every measure
...
@@ -502,9 +505,11 @@ built-in sound card, it's a little longer with *1min06* (+6secs) to fully recove
- Starting and/or stopping inputs independently does not cause disruptions in
  the outputs, but mixing streams from different sources gives sound artefacts
  during our test. See *Test case 3: Results* for more details.
- The load caused by the number of clients is stable up to ~ 30 clients, then
  we can see a small increase in pipewire response time to start a stream from
  a new client. See *Test case 4: Results* for more details.
- The results show a variability between the different sets of tests which is
  quite small and is probably only noise resulting from differences in system
  load rather than from the pipewire API. There is no big impact with regard to
  the number of clients. For measuring more detailed impacts another approach
  would be needed. See *Test case 4: Results* for more details.
- Interestingly, both sound cards (USB and built-in) are not equally affected by
  a CPU limitation. Pipewire manages to keep sound (although of poor quality
  because chopped) on the USB sound card but not for the built-in sound card
...