# Benchmarks
This page will host real, reproducible benchmark numbers comparing Machine runners against GitHub-hosted runners. Every “X faster” claim elsewhere in the docs links here.
## Methodology
We’re publishing a single defensible benchmark that anyone can reproduce on their own account: a cold LLVM build, run across multiple Machine runner configurations and compared against GitHub-hosted baselines.
### Workload
LLVM 18.1.x compilation: the same workload OpenBenchmarking uses as its reference compile-time benchmark. LLVM is:
- Industry standard for compile-time benchmarking
- Heavy C++ that exercises real compiler workloads
- Parallelizable to as many cores as the runner has
- Cross-architecture — same source builds on X64 and ARM64
- Long enough (~30 minutes baseline) that improvements are clearly measurable
- Reproducible — pinned source tarball, deterministic flags
### Configuration

```bash
git clone --depth 1 --branch llvmorg-18.1.8 https://github.com/llvm/llvm-project
cd llvm-project
cmake -G Ninja -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_TARGETS_TO_BUILD=X86 \
  llvm
ninja -C build -j$(nproc)
```

- Cold cache (no ccache/sccache)
- Clean checkout each run
- 3 runs per configuration, median reported
### Runners
| # | Runner | Architecture | vCPU | RAM |
|---|---|---|---|---|
| 1 | ubuntu-latest (GitHub-hosted) | X64 | 4 | 16 GB |
| 2 | [machine, cpu=8] | X64 | 8 | 16 GB |
| 3 | [machine, cpu=16] | X64 | 16 | 32 GB |
| 4 | [machine, cpu=32] | X64 | 32 | 64 GB |
| 5 | [machine, cpu=64] | X64 | 64 | 128 GB |
| 6 | ubuntu-24.04-arm (GitHub-hosted) | ARM64 | 4 | 16 GB |
| 7 | [machine, cpu=8, architecture=arm64] | ARM64 | 8 | 16 GB |
| 8 | [machine, cpu=16, architecture=arm64] | ARM64 | 16 | 32 GB |
| 9 | [machine, cpu=32, architecture=arm64] | ARM64 | 32 | 64 GB |
| 10 | [machine, cpu=64, architecture=arm64] | ARM64 | 64 | 128 GB |
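In a workflow, the labels from the table go in `runs-on`. A minimal matrix sketch, with the job name, matrix key, and `bench.sh` step as illustrative assumptions:

```yaml
# Sketch: one job fanned out across a few of the runner labels above.
# Job name, matrix key, and the bench.sh step are illustrative.
jobs:
  llvm-build:
    strategy:
      matrix:
        runner:
          - [ubuntu-latest]
          - [machine, cpu=8]
          - [machine, cpu=16]
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: ./bench.sh   # hypothetical driver script
```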
### Metrics

- Wall-clock build time (cold cache, median of 3 runs)
- Cost in dollars (`build_time_min × spot_$/min`)
- Builds per dollar for cost-efficiency comparison
- Memory peak via `/usr/bin/time -v`
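The cost metrics reduce to two lines of arithmetic. A minimal sketch; the build time and spot rate below are placeholder numbers, not published Machine prices:

```bash
# Derive the cost metrics from a measured build time.
# Both input values are placeholders for illustration.
build_time_min=42
spot_per_min=0.05   # $/min for the runner (placeholder rate)

cost=$(awk -v t="$build_time_min" -v r="$spot_per_min" 'BEGIN { printf "%.2f", t * r }')
builds_per_dollar=$(awk -v c="$cost" 'BEGIN { printf "%.2f", 1 / c }')

echo "cost: \$${cost}"
echo "builds per dollar: ${builds_per_dollar}"
```

Builds per dollar is just the reciprocal of cost per build, which makes it easy to compare configurations where a faster runner also costs more per minute.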
## Results
Pending. This section will be populated once the benchmark workflow ships.
## Source
The full workflow file, parser script, and raw logs will live at github.com/MachineDotDev/examples/benchmarks/llvm-build (link will be live after the consolidated examples repo ships).
## How to reproduce
Once the workflow is published, you’ll be able to:
- Fork the examples repo
- Trigger the `llvm-build-bench` workflow via `workflow_dispatch`
- Wait ~1 hour for all 10 runner configurations to complete
- Read the results in your fork’s Actions tab
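Once the workflow exists, the dispatch step can also be scripted with the GitHub CLI; the repo path is a placeholder for your fork, and the workflow file name is assumed from the step above:

```shell
# Trigger the benchmark workflow from the command line (requires gh auth).
gh workflow run llvm-build-bench.yml --repo <your-fork>/examples

# Follow a run's progress from the terminal instead of the Actions tab.
gh run watch --repo <your-fork>/examples
```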