When we try to retrieve benchmark results in the webhook call, we cannot
use the `head_sha` parameter since the workflow run might have a
different `head_sha` associated with it than the upstream workflow run.
This can happen when the JS repl binary workflow runs, a new commit is
then pushed to master, and the JS benchmarks workflow run that follows
ends up associated with that newer commit ID.
This extends the webhook payload to include the current run ID, which
the webhook script can eventually use to download the benchmark results
associated with this specific run.
Additionally, this changes the JS artifact download to use the upstream
run ID, which seems like the nicer thing to do anyway.
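A rough sketch of both parts in the workflow; the step names, artifact
name and payload shape are illustrative, not the exact implementation:

    - name: Build webhook payload
      run: |
        # Include the current run ID so the webhook script can later fetch
        # the benchmark results of exactly this run.
        jq -n --arg run_id "${{ github.run_id }}" '{ run_id: $run_id }' > payload.json

    - name: Download the JS repl artifact from the upstream run
      uses: actions/download-artifact@v4
      with:
        name: js-repl                                # illustrative artifact name
        run-id: ${{ github.event.workflow_run.id }}  # upstream (JS artifacts) run ID
        github-token: ${{ secrets.GITHUB_TOKEN }}    # needed across workflow runs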
For our js-benchmarks and libjs-test262 workflow runs, we already know
that the runners are provisioned with these repositories, so we can skip
adding the key and repo altogether.
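In other words, setup steps along these lines (the step names, script
and clone target are purely illustrative) can simply be dropped from
those jobs:

    # No longer needed on the provisioned self-hosted runners.
    - name: Add repository key
      run: ./add-key.sh                  # illustrative script name
    - name: Clone benchmarks repository
      run: git clone "$BENCHMARKS_REPO_URL" js-benchmarks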
We were using both wget and curl arbitrarily; use curl exclusively since
that is installed by default on our machines and containers. Fixes the
js-benchmarks workflow.
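For instance, a download previously done with wget maps to curl roughly
like this (URL and output path are illustrative):

    # Fetch a file, follow redirects, and fail loudly on HTTP errors.
    wget -O js-repl.tar.gz "$ARTIFACT_URL"           # before
    curl -fsSL -o js-repl.tar.gz "$ARTIFACT_URL"     # after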
Chaining workflows does not cause the subsequently spawned workflow
runs to use the same event; instead, each spawned run uses the latest
head SHA of the branch it runs on. This could leave the JS benchmarks
jobs unable to find artifacts (if a new JS repl workflow was started
before the previous one could finish) and/or assign the wrong commit
SHA to the benchmark results.
Since `github.event` contains information about the original workflow
run that spawned the JS benchmarks jobs, we can take the commit SHA from
there and use it to download the correct artifact.
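One way to express that, assuming a hypothetical helper script for the
actual download:

    - name: Download JS repl artifact for the upstream commit
      env:
        # Commit the upstream JS repl run was built from, not whatever
        # master happens to point at right now.
        HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
      run: |
        # 'download-artifact.sh' is illustrative, not an existing script.
        ./download-artifact.sh --commit "$HEAD_SHA"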
We had concurrency set on the JS artifacts and JS benchmarks workflows,
causing them to not run in parallel for the same combination of
(workflow, OS name). You'd expect this to create a FIFO queue in which
the jobs run sequentially, but in reality GitHub keeps only a single
prioritized job pending and cancels all others. We don't want that for
our artifacts and benchmarks: we want them to run on every push.
For example, a new push could have its workflows cancelled because
someone restarted a previously failed workflow, resulting in the
following message:
"Canceling since a higher priority waiting request for [..] exists"
By removing the concurrency setting from these workflows, we make use of
all available runners to execute the jobs and potentially run some of
them in parallel. For the benchmarks, however, we currently only have
one matching self-hosted runner per job, so they still do not run in
parallel.
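For reference, the removed setting looked roughly like the usual
job-level concurrency group (the group key is illustrative):

    jobs:
      build:
        # Removed: with this in place GitHub keeps at most one pending run
        # per (workflow, OS name) group and cancels the rest, instead of
        # queueing every push.
        concurrency:
          group: ${{ github.workflow }}-${{ matrix.os_name }}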
This introduces a matrix for the js-benchmarks workflow and runs both
the Linux x86_64 and macOS arm64 JS repl builds against our benchmarks
repository.
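A sketch of what that matrix can look like; the runner labels and key
names are illustrative:

    jobs:
      js-benchmarks:
        strategy:
          fail-fast: false
          matrix:
            include:
              - os_name: Linux
                arch: x86_64
                runner: self-hosted-linux-x86_64   # illustrative runner label
              - os_name: macOS
                arch: arm64
                runner: self-hosted-macos-arm64    # illustrative runner label
        runs-on: ${{ matrix.runner }}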
The workflow-webhook action that was being used didn't work on macOS or
machines without Docker, so let's create the payload ourselves, sign it
and send it over using plain old `curl`.
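Roughly, the signing follows GitHub's usual webhook convention of an
HMAC-SHA256 over the request body; the payload fields, header and
variable names here are illustrative:

    # Build the JSON body, sign it with the shared secret, POST it with curl.
    payload="$(jq -n --arg run_id "$GITHUB_RUN_ID" '{ run_id: $run_id }')"
    signature="$(printf '%s' "$payload" \
      | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" \
      | awk '{ print $NF }')"
    curl -fsS -X POST \
      -H 'Content-Type: application/json' \
      -H "X-Hub-Signature-256: sha256=$signature" \
      --data "$payload" \
      "$WEBHOOK_URL"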
In practice this does not make a big difference, but technically it
could happen that a second JS repl artifact is built before the first
JS benchmarks job executes. So make sure to filter on commit ID.
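One way to do that filtering via the REST API, assuming HEAD_SHA holds
the upstream commit as described above:

    # Keep only artifacts produced by a run for the expected commit.
    gh api "repos/$GITHUB_REPOSITORY/actions/artifacts" \
      --jq ".artifacts[] | select(.workflow_run.head_sha == \"$HEAD_SHA\") | .id"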
This workflow starts after a successful js-artifacts workflow, picks up
the JS repl binary and runs our js-benchmarks tool. It does not yet
publish or otherwise store the benchmark results, but it's a start!
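In workflow terms, the chaining looks roughly like this; the upstream
workflow name and the runner/step details are illustrative:

    on:
      workflow_run:
        workflows: [js-artifacts]
        types: [completed]

    jobs:
      js-benchmarks:
        # Only proceed when the upstream JS artifacts run succeeded.
        if: ${{ github.event.workflow_run.conclusion == 'success' }}
        runs-on: [self-hosted]
        steps:
          - name: Run benchmarks
            run: ./run-benchmarks.sh   # illustrative helper script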