GitHub Actions CPU Runners | High-Performance CPU Specifications

Machine provides high-performance CPU runners for GitHub Actions, perfect for intensive builds, data processing, and compute workloads that don’t require GPU acceleration. These runners offer significant performance improvements over standard GitHub Actions runners at competitive pricing.

CPU Runner Specifications

Machine offers flexible CPU runner configurations for both X64 and ARM64 architectures:

Intel/AMD X64 Runners

| vCPU | RAM | Credits/Min (Spot) | Credits/Min (On-Demand) |
| ---- | ------ | --- | --- |
| 2 | 4 GB | 0.5 | 1 |
| 4 | 8 GB | 1 | 2 |
| 8 | 16 GB | 1.5 | 4 |
| 16 | 32 GB | 2.5 | 7 |
| 32 | 64 GB | 3.5 | 10 |
| 64 | 128 GB | 4.5 | 13 |

ARM64 (Graviton) Runners

| vCPU | RAM | Credits/Min (Spot) | Credits/Min (On-Demand) |
| ---- | ------ | --- | --- |
| 2 | 4 GB | 0.5 | 1 |
| 4 | 8 GB | 1 | 2 |
| 8 | 16 GB | 1.5 | 3.5 |
| 16 | 32 GB | 2.5 | 6 |
| 32 | 64 GB | 3.5 | 9 |
| 64 | 128 GB | 4.5 | 12 |

All CPU runners include:

  • Operating System: Ubuntu 22.04 LTS
  • Root Volume Storage: 100GB SSD
  • Network: High-bandwidth, low-latency
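
Because credits are billed per minute, a job's cost is simply its runtime multiplied by the per-minute rate from the tables above. A quick sketch in shell arithmetic, using the x64 rates for a 16-vCPU runner (the 30-minute runtime is illustrative):

```sh
minutes=30            # Illustrative job duration
spot_rate_x10=25      # 2.5 credits/min (spot), scaled by 10 for integer math
ondemand_rate=7       # 7 credits/min (on-demand)

echo "Spot cost: $((minutes * spot_rate_x10 / 10)) credits"
echo "On-demand cost: $((minutes * ondemand_rate)) credits"
```

For this example, spot comes to 75 credits versus 210 on-demand, which is where the savings for interruptible workloads come from.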

Use Cases for CPU Runners

CPU runners are ideal for:

Build and Compilation Tasks

  • Large codebases requiring significant CPU resources
  • Multi-threaded compilation processes
  • Docker image builds
  • Frontend asset compilation and bundling

Data Processing

  • ETL pipelines
  • Data transformation and analysis
  • Non-GPU machine learning tasks
  • Batch processing jobs

Testing and CI/CD

  • Parallel test execution
  • Integration testing
  • Load testing and performance benchmarking
  • Security scanning and code analysis

General Compute

  • Scientific computing (non-GPU)
  • Simulations requiring high CPU performance
  • Video encoding (CPU-based)
  • Database operations and migrations

Configuring CPU Runners in Workflows

To use Machine’s CPU runners in your GitHub Actions workflow, add the machine label to runs-on along with a cpu= entry specifying the number of vCPUs:

```yaml
jobs:
  build:
    name: High-Performance Build
    runs-on:
      - machine
      - cpu=16 # Required: specify number of vCPUs (2, 4, 8, 16, 32, or 64)
    steps:
      - uses: actions/checkout@v4
      - name: Build project
        run: |
          # Your build commands here
          make -j$(nproc) all
```

Specifying CPU Configuration

```yaml
runs-on:
  - machine
  - cpu=16 # Required: 16 vCPUs with 32 GB RAM
  - architecture=x64 # Optional: x64 (default) or arm64 for Graviton
```

With Spot Instances

Save costs by using spot instances for fault-tolerant workloads:

```yaml
runs-on:
  - machine
  - cpu=8 # Required: specify number of vCPUs
  - tenancy=spot # Use spot instances for cost savings
```

Region Selection

Specify regions for data locality or compliance requirements:

```yaml
runs-on:
  - machine
  - cpu=4 # Required: specify number of vCPUs
  - regions=us-east-1,us-west-2
```

Performance Comparison

Machine CPU runners offer substantial performance improvements compared to standard GitHub Actions runners:

| Metric | Standard GitHub Runner | Machine CPU Runner | Improvement |
| --- | --- | --- | --- |
| CPU Cores | 2-4 | 2-64 (configurable) | Up to 32x |
| RAM | 7-14 GB | 4-128 GB (configurable) | Up to 18x |
| Build Speed* | Baseline | 3-10x faster | 200-900% |
| Architecture | X64 only | X64 and ARM64 | More options |

*Performance improvements vary by workload type

Cost Optimization

Spot vs On-Demand

  • On-Demand: Guaranteed availability, consistent performance
  • Spot: Up to 85% cost savings for interruptible workloads

Best Practices for Cost Efficiency

  1. Use caching aggressively to reduce build times
  2. Parallelize workloads to take advantage of all available vCPUs
  3. Use spot instances for non-critical builds
  4. Implement incremental builds where possible
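
For the caching point above, the standard actions/cache step works the same way on Machine runners as on standard ones. A minimal sketch (the path and key are illustrative; point them at your package manager's cache and lockfile):

```yaml
- name: Cache dependencies
  uses: actions/cache@v4
  with:
    path: ~/.cache # Illustrative: use your build tool's cache directory
    key: ${{ runner.os }}-deps-${{ hashFiles('**/lockfile') }}
    restore-keys: |
      ${{ runner.os }}-deps-
```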

Example Workflows

Parallel Testing

```yaml
name: Parallel Test Suite
on: [push, pull_request]
jobs:
  test:
    runs-on:
      - machine
      - cpu=16 # Use 16 vCPUs for maximum parallelism
    steps:
      - uses: actions/checkout@v4
      - name: Set up environment
        run: |
          # Setup commands
      - name: Run tests in parallel
        run: |
          # Utilize all 16 cores (requires the pytest-xdist plugin)
          pytest -n 16 tests/
```
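
Beyond in-process parallelism on one large runner, a suite can also be sharded across several smaller runners with a build matrix. A hedged sketch, assuming the pytest-split plugin for slicing the suite (the shard counts are illustrative):

```yaml
jobs:
  test:
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    runs-on:
      - machine
      - cpu=8 # Four smaller runners instead of one large one
    steps:
      - uses: actions/checkout@v4
      - name: Run shard ${{ matrix.shard }}
        run: |
          # pytest-split assigns each group a slice of the suite
          pytest --splits 4 --group ${{ matrix.shard }} tests/
```

Whether one 32-vCPU runner or four 8-vCPU runners is cheaper depends on how evenly the suite splits; the per-minute rates in the tables above scale roughly linearly with vCPU count.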

Large-Scale Build

```yaml
name: Production Build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on:
      - machine
      - cpu=32 # Use 32 vCPUs for large builds
      - tenancy=on_demand # Use on-demand for critical builds
    steps:
      - uses: actions/checkout@v4
      - name: Cache dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache
          key: ${{ runner.os }}-deps-${{ hashFiles('**/lockfile') }}
      - name: Build application
        run: |
          # High-performance build using all 32 cores
          make -j$(nproc) release
      - name: Run benchmarks
        run: |
          ./run-benchmarks.sh
```

Data Processing Pipeline

```yaml
name: Data Pipeline
on:
  schedule:
    - cron: '0 2 * * *' # Daily at 2 AM UTC
jobs:
  process:
    runs-on:
      - machine
      - cpu=64 # Use maximum CPU power for data processing
      - regions=us-east-1 # Run close to the data source
    steps:
      - uses: actions/checkout@v4
      - name: Download datasets
        run: |
          aws s3 sync s3://my-bucket/data ./data
      - name: Process data
        run: |
          python process_data.py --threads $(nproc)
      - name: Upload results
        run: |
          aws s3 sync ./output s3://my-bucket/results
```

Monitoring and Optimization

Resource Utilization

Monitor CPU and memory usage to ensure efficient resource utilization:

```yaml
steps:
  - name: Monitor resources
    run: |
      # Sample per-CPU usage every 5 seconds in the background
      mpstat -P ALL 5 > cpu_usage.log &
      # Sample memory usage every 5 seconds in the background
      vmstat 5 > memory_usage.log &
      # Run your workload
      ./run-workload.sh
      # Review the collected samples
      cat cpu_usage.log memory_usage.log
```
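
To turn the raw mpstat log into a single utilization figure, average the busy percentage (100 minus %idle, the last field of each "all" sample). A sketch against a hypothetical two-sample log (a real cpu_usage.log comes from the step above, and real mpstat output includes header lines the awk filter skips):

```sh
# Hypothetical mpstat-style samples: timestamp, CPU id, then usage fields ending in %idle
cat > cpu_usage.log <<'EOF'
12:00:01 all 40.0 0.0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 50.0
12:00:06 all 60.0 0.0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 30.0
EOF
# Average busy = 100 - %idle (last field), over the all-CPU totals rows
awk '$2 == "all" {busy += 100 - $NF; n++} END {printf "avg busy: %.0f%%\n", busy / n}' cpu_usage.log
```

A consistently low average is a signal to drop to a smaller (and cheaper) cpu= tier; a pegged one suggests sizing up.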

Performance Profiling

Use profiling tools to identify bottlenecks:

```sh
# CPU profiling
perf record -g ./my-application
perf report

# Memory profiling
valgrind --tool=massif ./my-application
```

Comparison with GPU Runners

Choose between CPU and GPU runners based on your workload:

| Use Case | CPU Runners | GPU Runners |
| --- | --- | --- |
| General builds | ✅ Optimal | ❌ Overkill |
| ML training | ❌ Slow | ✅ Optimal |
| Data preprocessing | ✅ Good | ⚠️ Depends |
| Video encoding | ✅ Good | ✅ Better with GPU encoding |
| Database operations | ✅ Optimal | ❌ No benefit |
| Container builds | ✅ Optimal | ❌ No benefit |

Next Steps