
Benchmarks

This page will host real, reproducible benchmark numbers comparing Machine runners against GitHub-hosted runners. Every “X faster” claim elsewhere in the docs links here.

Methodology

We’re publishing a single defensible benchmark that anyone can reproduce on their own account: a cold LLVM build, run across multiple Machine runner configurations and compared against GitHub-hosted baselines.

Workload

The workload is compiling LLVM 18.1.x, the same reference compile-time benchmark OpenBenchmarking uses. LLVM is:

  • Industry standard for compile-time benchmarking
  • Heavy C++ that exercises real compiler workloads
  • Parallelizable to as many cores as the runner has
  • Cross-architecture — same source builds on X64 and ARM64
  • Long enough (~30 minutes baseline) that improvements are clearly measurable
  • Reproducible — pinned source tarball, deterministic flags

Configuration

git clone --depth 1 --branch llvmorg-18.1.8 https://github.com/llvm/llvm-project
cd llvm-project
cmake -G Ninja -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_TARGETS_TO_BUILD=X86 \
  llvm
ninja -C build -j$(nproc)
  • Cold cache (no ccache/sccache)
  • Clean checkout each run
  • 3 runs per configuration, median reported
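
Under these rules, a configuration can be timed with a small driver like the sketch below. The helper names (`time_run`, `bench_median3`) are illustrative, and the real workflow measures with /usr/bin/time -v rather than this date-based timer; a real run would also wipe the checkout between iterations to keep the cache cold.

```shell
#!/bin/sh
# time_run: wall-clock seconds for one invocation of its arguments.
# (Sketch only -- the actual benchmark uses /usr/bin/time -v.)
time_run() {
  start=$(date +%s)
  "$@" >/dev/null 2>&1
  end=$(date +%s)
  echo $((end - start))
}

# bench_median3: run the command three times and report the median,
# matching the "3 runs per configuration, median reported" rule.
bench_median3() {
  t1=$(time_run "$@")
  t2=$(time_run "$@")
  t3=$(time_run "$@")
  printf '%s\n%s\n%s\n' "$t1" "$t2" "$t3" | sort -n | sed -n '2p'
}

# Example: bench_median3 ninja -C build -j"$(nproc)"
```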

Runners

| #  | Runner                                 | Architecture | vCPU | RAM    |
|----|----------------------------------------|--------------|------|--------|
| 1  | ubuntu-latest (GitHub-hosted)          | X64          | 4    | 16 GB  |
| 2  | [machine, cpu=8]                       | X64          | 8    | 16 GB  |
| 3  | [machine, cpu=16]                      | X64          | 16   | 32 GB  |
| 4  | [machine, cpu=32]                      | X64          | 32   | 64 GB  |
| 5  | [machine, cpu=64]                      | X64          | 64   | 128 GB |
| 6  | ubuntu-24.04-arm (GitHub-hosted)       | ARM64        | 4    | 16 GB  |
| 7  | [machine, cpu=8, architecture=arm64]   | ARM64        | 8    | 16 GB  |
| 8  | [machine, cpu=16, architecture=arm64]  | ARM64        | 16   | 32 GB  |
| 9  | [machine, cpu=32, architecture=arm64]  | ARM64        | 32   | 64 GB  |
| 10 | [machine, cpu=64, architecture=arm64]  | ARM64        | 64   | 128 GB |

Metrics

  • Wall-clock build time (cold cache, median of 3 runs)
  • Cost in dollars (build_time_min × spot_$/min)
  • Builds per dollar for cost-efficiency comparison
  • Memory peak via /usr/bin/time -v
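
Given the median wall-clock time, the two cost metrics are one multiplication and one reciprocal. A sketch with placeholder numbers — the build time and spot rate below are illustrative values, not measured results or published Machine prices:

```shell
#!/bin/sh
# Placeholder inputs -- substitute the measured median and the real spot rate.
build_min=22.5       # median build time in minutes (example value)
spot_per_min=0.008   # spot price in $/min (illustrative, not a real rate)

# Cost in dollars: build_time_min x spot_$/min
cost=$(awk -v m="$build_min" -v r="$spot_per_min" 'BEGIN { printf "%.4f", m * r }')

# Builds per dollar: reciprocal of the per-build cost
per_dollar=$(awk -v m="$build_min" -v r="$spot_per_min" 'BEGIN { printf "%.2f", 1 / (m * r) }')

echo "cost per build: \$$cost"
echo "builds per dollar: $per_dollar"
```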

Results

Pending. This section will be populated when the benchmark workflow ships.

Source

The full workflow file, parser script, and raw logs will live at github.com/MachineDotDev/examples/benchmarks/llvm-build (link will be live after the consolidated examples repo ships).

How to reproduce

Once the workflow is published, you’ll be able to:

  1. Fork the examples repo
  2. Trigger the llvm-build-bench workflow via workflow_dispatch
  3. Wait ~1 hour for all 10 runner configurations to complete
  4. Read the results in your fork’s Actions tab
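
Once the repo is public, those steps should map onto gh CLI calls along these lines. This is a sketch: the repo path and workflow name are taken from this page and may change before launch, and the function is defined but deliberately not invoked because it needs an authenticated gh session.

```shell
#!/bin/sh
# Reproduction steps as gh CLI calls. Defined, not invoked: running it
# requires an authenticated gh session and the examples repo to exist.
reproduce_llvm_bench() {
  gh repo fork MachineDotDev/examples --clone   # 1. fork (and clone) the repo
  cd examples || return 1
  gh workflow run llvm-build-bench              # 2. trigger via workflow_dispatch
  gh run watch                                  # 3. follow the ~1 h run
  # 4. results appear in your fork's Actions tab
}
```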