GitHub Actions CPU Runners | High-Performance CPU Specifications
Machine provides high-performance CPU runners for GitHub Actions, perfect for intensive builds, data processing, and compute workloads that don’t require GPU acceleration. These runners offer significant performance improvements over standard GitHub Actions runners at competitive pricing.
CPU Runner Specifications
Machine offers flexible CPU runner configurations for both X64 and ARM64 architectures:
Intel/AMD X64 Runners
vCPU | RAM | Credits/Min (Spot) | Credits/Min (On-Demand) |
---|---|---|---|
2 | 4 GB | 0.5 | 1 |
4 | 8 GB | 1 | 2 |
8 | 16 GB | 1.5 | 4 |
16 | 32 GB | 2.5 | 7 |
32 | 64 GB | 3.5 | 10 |
64 | 128 GB | 4.5 | 13 |
ARM64 (Graviton) Runners
vCPU | RAM | Credits/Min (Spot) | Credits/Min (On-Demand) |
---|---|---|---|
2 | 4 GB | 0.5 | 1 |
4 | 8 GB | 1 | 2 |
8 | 16 GB | 1.5 | 3.5 |
16 | 32 GB | 2.5 | 6 |
32 | 64 GB | 3.5 | 9 |
64 | 128 GB | 4.5 | 12 |
All CPU runners include:
- Operating System: Ubuntu 22.04 LTS
- Root Volume Storage: 100GB SSD
- Network: High-bandwidth, low-latency
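If you want to confirm these specifications from inside a job, a simple diagnostic step like the following works on the Ubuntu 22.04 image (a minimal sketch; the step name is illustrative):

```yaml
steps:
  - name: Show runner specs
    run: |
      nproc     # vCPU count
      free -h   # available RAM
      df -h /   # root volume size
      uname -m  # architecture (x86_64 or aarch64)
```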
Use Cases for CPU Runners
CPU runners are ideal for:
Build and Compilation Tasks
- Large codebases requiring significant CPU resources
- Multi-threaded compilation processes
- Docker image builds (see the sketch after this list)
- Frontend asset compilation and bundling
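As one example, a Docker image build can be moved onto a larger Machine runner with only a `runs-on` change. A minimal sketch, assuming Docker is available on the image as it is on standard GitHub-hosted runners; the vCPU count, image name, and tag are illustrative:

```yaml
jobs:
  docker-build:
    runs-on:
      - machine
      - cpu=16
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: |
          # BuildKit parallelizes independent build stages across the available cores
          DOCKER_BUILDKIT=1 docker build -t my-app:${{ github.sha }} .
```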
Data Processing
- ETL pipelines
- Data transformation and analysis
- Non-GPU machine learning tasks
- Batch processing jobs
Testing and CI/CD
- Parallel test execution
- Integration testing
- Load testing and performance benchmarking
- Security scanning and code analysis
General Compute
- Scientific computing (non-GPU)
- Simulations requiring high CPU performance
- Video encoding (CPU-based)
- Database operations and migrations
Configuring CPU Runners in Workflows
To use Machine's CPU runners in your GitHub Actions workflow, add the `machine` label to `runs-on` along with a `cpu` label specifying the number of vCPUs:
```yaml
jobs:
  build:
    name: High-Performance Build
    runs-on:
      - machine
      - cpu=16  # Required: specify number of vCPUs (2, 4, 8, 16, 32, or 64)
    steps:
      - uses: actions/checkout@v4
      - name: Build project
        run: |
          # Your build commands here
          make -j$(nproc) all
```
Specifying CPU Configuration
```yaml
runs-on:
  - machine
  - cpu=16            # Required: 16 vCPUs with 32 GB RAM
  - architecture=x64  # Optional: x64 (default) or arm64 for Graviton
```
With Spot Instances
Save costs by using spot instances for fault-tolerant workloads:
```yaml
runs-on:
  - machine
  - cpu=8         # Required: specify number of vCPUs
  - tenancy=spot  # Use spot instances for cost savings
```
Region Selection
Specify regions for data locality or compliance requirements:
```yaml
runs-on:
  - machine
  - cpu=4  # Required: specify number of vCPUs
  - regions=us-east-1,us-west-2
```
Performance Comparison
Machine CPU runners offer substantial performance improvements compared to standard GitHub Actions runners:
Metric | Standard GitHub Runner | Machine CPU Runner | Improvement |
---|---|---|---|
CPU Cores | 2-4 | 2-64 (configurable) | Up to 32x |
RAM | 7-14GB | 4-128GB (configurable) | Up to 18x |
Build Speed* | Baseline | 3-10x faster | 200-900% |
Architecture | X64 only | X64 and ARM64 | More options |
*Performance improvements vary by workload type
Cost Optimization
Spot vs On-Demand
- On-Demand: Guaranteed availability, consistent performance
- Spot: Substantially lower per-minute cost for interruptible workloads (roughly 50-65% cheaper than on-demand, based on the pricing tables above)
Best Practices for Cost Efficiency
- Use caching aggressively to reduce build times (see the sketch after this list)
- Parallelize workloads to take advantage of all available vCPUs
- Use spot instances for non-critical builds
- Implement incremental builds where possible
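As one concrete example of caching, the standard actions/cache action can persist a compiler cache between runs. A minimal sketch using ccache; the cache path, key, and build command are illustrative:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Restore ccache
    uses: actions/cache@v4
    with:
      path: ~/.cache/ccache
      key: ${{ runner.os }}-ccache-${{ github.sha }}
      restore-keys: |
        ${{ runner.os }}-ccache-
  - name: Build with ccache
    run: |
      # Install ccache if the image does not already include it
      sudo apt-get update && sudo apt-get install -y ccache
      # Reuse previous compilation results where inputs are unchanged
      make -j$(nproc) CC="ccache gcc" CXX="ccache g++"
```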
Example Workflows
Parallel Testing
```yaml
name: Parallel Test Suite

on: [push, pull_request]

jobs:
  test:
    runs-on:
      - machine
      - cpu=16  # Use 16 vCPUs for maximum parallelism
    steps:
      - uses: actions/checkout@v4
      - name: Set up environment
        run: |
          # Setup commands
      - name: Run tests in parallel
        run: |
          # Utilize all 16 cores for parallel testing
          pytest -n 16 tests/
```
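Note that `pytest -n` relies on the pytest-xdist plugin, so the setup step should install it; a minimal sketch (the package list is illustrative):

```yaml
      - name: Set up environment
        run: |
          # pytest-xdist provides the -n flag used in the test step
          python -m pip install pytest pytest-xdist
```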
Large-Scale Build
```yaml
name: Production Build

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on:
      - machine
      - cpu=32             # Use 32 vCPUs for large builds
      - tenancy=on_demand  # Use on-demand for critical builds
    steps:
      - uses: actions/checkout@v4
      - name: Cache dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache
          key: ${{ runner.os }}-deps-${{ hashFiles('**/lockfile') }}
      - name: Build application
        run: |
          # High-performance build using all cores
          make -j$(nproc) release
      - name: Run benchmarks
        run: |
          ./run-benchmarks.sh
```
Data Processing Pipeline
```yaml
name: Data Pipeline

on:
  schedule:
    - cron: '0 2 * * *'  # Daily at 2 AM

jobs:
  process:
    runs-on:
      - machine
      - cpu=64             # Use maximum CPU power for data processing
      - regions=us-east-1  # Run close to data source
    steps:
      - uses: actions/checkout@v4
      - name: Download datasets
        run: |
          aws s3 sync s3://my-bucket/data ./data
      - name: Process data
        run: |
          python process_data.py --threads $(nproc)
      - name: Upload results
        run: |
          aws s3 sync ./output s3://my-bucket/results
```
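The `aws s3` commands above assume AWS credentials are already available to the job. One common approach is the aws-actions/configure-aws-credentials action, placed before the download step; a minimal sketch with assumed secret names:

```yaml
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
```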
Monitoring and Optimization
Resource Utilization
Monitor CPU and memory usage to ensure efficient resource utilization:
```yaml
steps:
  - name: Monitor resources
    run: |
      # mpstat is provided by the sysstat package; install it if the image does not already include it
      sudo apt-get update && sudo apt-get install -y sysstat

      # Monitor CPU usage
      mpstat -P ALL 5 > cpu_usage.log &

      # Monitor memory usage
      vmstat 5 > memory_usage.log &

      # Run your workload
      ./run-workload.sh

      # Check peak usage
      cat cpu_usage.log memory_usage.log
```
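To keep these logs after the job finishes, they can be uploaded as workflow artifacts with the standard actions/upload-artifact action; a minimal sketch as an additional step (artifact name is illustrative, log file names as used above):

```yaml
  - name: Upload resource logs
    uses: actions/upload-artifact@v4
    with:
      name: resource-usage-logs
      path: |
        cpu_usage.log
        memory_usage.log
```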
Performance Profiling
Use profiling tools to identify bottlenecks:
```bash
# CPU profiling
perf record -g ./my-application
perf report

# Memory profiling
valgrind --tool=massif ./my-application
```
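Note that perf and valgrind are generally not preinstalled on a stock Ubuntu 22.04 image; assuming the standard Ubuntu packages for the running kernel are available, they can be installed in a prior step:

```bash
sudo apt-get update
# perf on Ubuntu is provided by the linux-tools package matching the running kernel
sudo apt-get install -y linux-tools-common linux-tools-$(uname -r) valgrind
```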
Comparison with GPU Runners
Choose between CPU and GPU runners based on your workload:
Use Case | CPU Runners | GPU Runners |
---|---|---|
General builds | ✅ Optimal | ❌ Overkill |
ML training | ❌ Slow | ✅ Optimal |
Data preprocessing | ✅ Good | ⚠️ Depends |
Video encoding | ✅ Good | ✅ Better with GPU encoding |
Database operations | ✅ Optimal | ❌ No benefit |
Container builds | ✅ Optimal | ❌ No benefit |
Next Steps
- Learn about GPU Runners for ML workloads
- Explore Configuration Options for customization
- Read the Cost Optimization Guide for best practices