We haven't built a universal binary in over 11 months. The name is
confusing, and actually breaks esvu on macOS. The fact that nobody
has complained suggests that this is not a common use case.
Their cache action only works on their runners. For jobs that run on
other runners, we have to use the default cache action. At least until
they update their cache product to work (or fall back) on other runners.
Everything that's not self-hosted or macOS is now pointing to
Blacksmith.sh. Nightly jobs and JS artifact builds use 8 vCPU machines,
while regular integration builds & tests use 16 vCPU machines.
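For illustration, the switch mostly comes down to the `runs-on` labels; the label names below are an assumption about Blacksmith's naming scheme, not copied from our workflow files:

```yaml
jobs:
  build-and-test:
    runs-on: blacksmith-16vcpu-ubuntu-2204   # assumed label for the 16 vCPU machines
    steps:
      - uses: actions/checkout@v4
  nightly:
    runs-on: blacksmith-8vcpu-ubuntu-2204    # assumed label for the 8 vCPU machines
    steps:
      - uses: actions/checkout@v4
```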
The people over at Blacksmith.sh have generously offered our organization
the use of their runners, so let's try to switch over some simple
workflows. The runners should be drop-in replacements.
When we try to retrieve benchmark results in the webhook call, we cannot
use the `head_sha` parameter since the workflow run might have a
different `head_sha` associated with it than the upstream workflow run.
This can happen when the JS repl binary workflow runs, a new commit is
pushed to master, and a JS benchmarks workflow run follows; that latter
run then gets associated with a different commit ID.
This extends the webhook payload to include the current run ID, which
can eventually be used by the webhook script to specifically download
the benchmark results associated with the current run.
Additionally, this changes the JS artifact download to use the upstream
run ID which seems nicer to do anyway.
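For the artifact download, `actions/download-artifact@v4` can pull from another run when given a run ID and a token; the artifact name below is a placeholder:

```yaml
- name: Download the JS repl artifact from the upstream run
  uses: actions/download-artifact@v4
  with:
    name: ladybird-js-repl              # placeholder artifact name
    run-id: ${{ github.event.workflow_run.id }}
    github-token: ${{ secrets.GITHUB_TOKEN }}
```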
This job uses the windows_ci_ninja preset to build just the
components and unit tests that are known to work with ClangCL on the
amd64-pc-windows-msvc target triple.
As a nightly job, its failures are non-blocking for any PRs, though
they should be fixed eventually or the job will get turned off by
email-annoyed maintainers.
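A rough outline of the job, assuming a scheduled trigger and that the preset drives both configure and build; the cron schedule and the build-preset name are illustrative:

```yaml
on:
  schedule:
    - cron: '0 3 * * *'   # illustrative nightly schedule
jobs:
  windows-clangcl:
    runs-on: windows-2022
    steps:
      - uses: actions/checkout@v4
      - name: Configure the known-good subset
        run: cmake --preset windows_ci_ninja
      - name: Build
        run: cmake --build --preset windows_ci_ninja   # assumes a matching build preset
```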
For our js-benchmarks and libjs-test262 workflow runs, we already know
the runners are provisioned with these repositories, so we can skip
adding the key and repo altogether.
We were using both wget and curl arbitrarily; use curl exclusively since
that is installed by default on our machines and containers. Fixes the
js-benchmarks workflow.
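The substitution itself is mechanical; for a plain download it looks something like this (the URL is a placeholder):

```yaml
- name: Download a release tarball
  run: |
    # Was: wget "$URL"
    # Now: fail on HTTP errors, be quiet, follow redirects, keep the remote filename.
    curl -fsSL -O "$URL"
  env:
    URL: https://example.com/file.tar.gz   # placeholder URL
```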
This is a workaround for the deprecation of the cache v1 REST API that
was replaced with a new protobuf RPC based API this month. vcpkg was
using the private cache backend API without the knowledge of the GitHub
actions team, and was thus broken by the deprecation.
While we wait for Microsoft to talk to Microsoft to get this feature
restored, we can use the raw actions/cache step to get almost the same
cache behavior. The only difference is that the cache will be less
fine-grained than the per-package cache that the x-gha backend of
`VCPKG_BINARY_SOURCES` was giving us before.
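A minimal sketch of that raw cache step, assuming `VCPKG_DEFAULT_BINARY_CACHE` points at a directory we control; the path and key scheme are illustrative:

```yaml
- name: Restore vcpkg binary cache
  uses: actions/cache@v4
  with:
    path: ${{ github.workspace }}/vcpkg-binary-cache   # assumed VCPKG_DEFAULT_BINARY_CACHE
    key: vcpkg-${{ runner.os }}-${{ hashFiles('vcpkg.json') }}
    restore-keys: |
      vcpkg-${{ runner.os }}-
```

Unlike x-gha, the whole directory is saved and restored as a single cache entry, which is where the loss of per-package granularity comes from.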
Chaining workflows does not cause the subsequently spawned workflow runs
to use the same event; instead, each spawned run uses the latest head SHA
of the branch it runs on. This would cause the JS benchmarks jobs to be
unable to find artifacts (if a new JS repl workflow was started before
the previous one could finish) and/or to assign the wrong commit SHA to
the benchmark results.
Since `github.event` contains information about the original workflow
run that spawned the JS benchmarks jobs, we can take the commit SHA from
there and use it to download the correct artifact.
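Concretely, the `workflow_run` event payload carries the upstream run's head SHA, so the benchmarks job can pin itself to that commit:

```yaml
- name: Check out the commit the artifact was built from
  uses: actions/checkout@v4
  with:
    # Taken from the upstream (JS artifacts) run, not from the current tip of master.
    ref: ${{ github.event.workflow_run.head_sha }}
```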
We had concurrency set on the JS artifacts and JS benchmarks workflows,
causing them to not run in parallel for the same combination of
(workflow, OS name). You'd expect this to create a FIFO queue of jobs
that run sequentially, but in reality GitHub keeps only a single job to
prioritize and cancels all others. We don't want that for our artifacts
and benchmarks: we want them to run on each push.
For example, a new push could have workflows getting cancelled because
someone restarted a previously failed workflow, resulting in the
following message:
"Canceling since a higher priority waiting request for [..] exists"
By removing the concurrency setting from these workflows, we make use of
all available runners to execute the jobs and potentially run some of
them in parallel. For the benchmarks however, we currently only have one
matching self-hosted runner per job, and as such they are still not run
in parallel.
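For reference, the removed setting was a per-job concurrency group along these lines (the group expression is illustrative):

```yaml
jobs:
  js-benchmarks:
    # Removed: with this group, GitHub keeps at most one pending run per
    # (workflow, OS name) and cancels the rest instead of queueing them.
    concurrency:
      group: ${{ github.workflow }}-${{ matrix.os_name }}
      cancel-in-progress: false
```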
This introduces a matrix for the js-benchmarks workflow and runs both
the Linux x86_64 and macOS arm64 JS repl builds against our benchmarks
repository.
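Sketch of the matrix; the entry names and runner labels are illustrative:

```yaml
jobs:
  js-benchmarks:
    strategy:
      fail-fast: false
      matrix:
        include:
          - os_name: Linux
            arch: x86_64
          - os_name: macOS
            arch: arm64
    runs-on: ['self-hosted', '${{ matrix.os_name }}', '${{ matrix.arch }}']
```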
The workflow-webhook action that was being used didn't work on macOS or
machines without Docker, so let's create the payload ourselves, sign it
and send it over using plain old `curl`.
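Roughly, the replacement is a single `bash` step; the signature header name, secret names, and payload fields below are illustrative, not the exact ones used:

```yaml
- name: Send results to the webhook
  shell: bash
  env:
    WEBHOOK_URL: ${{ secrets.WEBHOOK_URL }}       # placeholder secret names
    WEBHOOK_SECRET: ${{ secrets.WEBHOOK_SECRET }}
  run: |
    payload='{"run_id":"${{ github.run_id }}","commit":"${{ github.event.workflow_run.head_sha }}"}'
    # HMAC-SHA256 over the body, hex-encoded (last field of openssl's output).
    signature=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" | awk '{print $NF}')
    curl -fsSL -X POST \
      -H 'Content-Type: application/json' \
      -H "X-Hub-Signature-256: sha256=$signature" \
      -d "$payload" \
      "$WEBHOOK_URL"
```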
It might be useful to have these artifacts, even for older commits. As
an added bonus, this causes the JS benchmarks to run as well, giving us
more data points.
In practice this does not make a big difference, but technically it
could happen that a second JS Repl artifact was built before the first
JS Benchmarks job is executed. So make sure to filter on commit ID.
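One way to do that filter, assuming the `gh` CLI and the REST API's `head_sha` query parameter (the workflow file name is a placeholder):

```yaml
- name: Find the artifact run for this exact commit
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    gh api "/repos/${{ github.repository }}/actions/workflows/js-artifacts.yml/runs?head_sha=${{ github.event.workflow_run.head_sha }}" \
      --jq '.workflow_runs[0].id'
```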
This workflow starts after a successful js-artifacts workflow, picks up
the JS repl binary and runs our js-benchmarks tool. It does not yet
publish or otherwise store the benchmark results, but it's a start!
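The trigger side looks roughly like this; the upstream workflow name is illustrative:

```yaml
on:
  workflow_run:
    workflows: ['Build JS repl artifacts']   # illustrative upstream workflow name
    types: [completed]
jobs:
  js-benchmarks:
    # Only run when the upstream artifact build actually succeeded.
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: self-hosted
    steps:
      - run: echo "download the JS repl binary and run js-benchmarks here"
```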
This reverts commits bf333eaea2 and
6f69a445bd.
The commit linter needs to run on event `pull_request_target` to have
access to its secret token, which means we cannot have a dependency on
that job from another workflow that is run as a result of the
`pull_request` event.
Additionally, the linters were no longer run for first-time
contributors. This isn't a huge problem but it was nice that a
preliminary check took place before running the full CI on their PRs.
This requires us to always run the CI job and check the individual jobs'
results, since only having `needs:` will not work when `lint_commits` is
potentially skipped.
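In practice that means a gate job with `if: always()` that inspects each lint result; job names here are illustrative:

```yaml
jobs:
  required-checks:
    needs: [lint_commits, lint_code]
    if: always()
    runs-on: ubuntu-latest
    steps:
      - name: Fail on lint failures, tolerate a skipped lint_commits
        run: |
          [[ '${{ needs.lint_commits.result }}' != 'failure' ]]
          [[ '${{ needs.lint_code.result }}' != 'failure' ]]
```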
We can prevent running builds unnecessarily by requiring the linters to
succeed first. If either the code or commit linter fails, it means the
author of the PR needs to rework their branch, and after they push their
changes we need to do a full new CI run anyway.
This was a relic from the SerenityOS CI, where "architecture" meant which
architecture to build Serenity for. For just ladybird, we might want to
build it for multiple architectures per OS.