Configuration Options

Machine gives you granular control over your high-performance GitHub Actions workflows through a simple yet powerful configuration syntax. This guide covers all available configuration options to help you optimize your workflows.

Basic Configuration

CPU Runner Configuration

For CPU runners, specify machine and cpu in the runs-on field:

jobs:
  build-job:
    runs-on:
      - machine
      - cpu=4

GPU Runner Configuration

For GPU runners, specify machine and a GPU type in the runs-on field:

jobs:
  ml-job:
    runs-on:
      - machine
      - gpu=t4

Complete Configuration Reference

Here’s a complete reference of all configuration options available:

For CPU Runners:

jobs:
  build-job:
    runs-on:
      - machine                       # Required: Activates Machine.dev runners
      - cpu=4                         # Required: Specifies CPU runner type
      - tenancy=spot                  # Optional: spot or on_demand (default: on_demand)
      - regions=us-east-1,eu-west-1   # Optional: Comma-separated AWS regions

For GPU Runners:

jobs:
  ml-job:
    runs-on:
      - machine                       # Required: Activates Machine.dev runners
      - gpu=L40S                      # Required: GPU type to use
      - cpu=16                        # Optional: Number of CPU cores (default varies by GPU)
      - ram=64                        # Optional: RAM in GB (default varies by GPU)
      - architecture=x64              # Optional: x64 or arm64 (default: x64)
      - tenancy=spot                  # Optional: spot or on_demand (default: on_demand)
      - regions=us-east-1,eu-west-1   # Optional: Comma-separated AWS regions

Configuration Matrix

CPU Runners

Machine offers flexible CPU runner configurations:

Intel/AMD X64

| vCPU | RAM    | Credits/Min (Spot) | Credits/Min (On-Demand) |
|------|--------|--------------------|-------------------------|
| 2    | 4 GB   | 0.5                | 1                       |
| 4    | 8 GB   | 1                  | 2                       |
| 8    | 16 GB  | 1.5                | 4                       |
| 16   | 32 GB  | 2.5                | 7                       |
| 32   | 64 GB  | 3.5                | 10                      |
| 64   | 128 GB | 4.5                | 13                      |

ARM64 (Graviton)

| vCPU | RAM    | Credits/Min (Spot) | Credits/Min (On-Demand) |
|------|--------|--------------------|-------------------------|
| 2    | 4 GB   | 0.5                | 1                       |
| 4    | 8 GB   | 1                  | 2                       |
| 8    | 16 GB  | 1.5                | 3.5                     |
| 16   | 32 GB  | 2.5                | 6                       |
| 32   | 64 GB  | 3.5                | 9                       |
| 64   | 128 GB | 4.5                | 12                      |
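
For example, to request the 8 vCPU Graviton tier from the table above, combine the cpu and architecture labels (a sketch; both labels are documented in the CPU Configuration section below):

runs-on:
  - machine
  - cpu=8               # 8 vCPUs with 16 GB RAM (ARM64 pricing tier)
  - architecture=arm64  # Graviton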

GPU Runners

Below is a matrix of available configurations for each GPU type. Each row shows a GPU type with its architecture, GPU memory, default CPU and RAM configuration, and per-minute credit pricing.

| GPU Type    | Architecture | GPU Memory | Default Config      | Credits/Min (Spot) | Credits/Min (On-Demand) |
|-------------|--------------|------------|---------------------|--------------------|-------------------------|
| T4G         | ARM64        | 16 GB      | 4 vCPU + 8 GB RAM   | 1                  | 3                       |
| T4          | X64          | 16 GB      | 4 vCPU + 16 GB RAM  | 2                  | 4                       |
| L4          | X64          | 24 GB      | 4 vCPU + 16 GB RAM  | 2                  | 6                       |
| A10G        | X64          | 24 GB      | 4 vCPU + 16 GB RAM  | 3                  | 7                       |
| L40S        | X64          | 48 GB      | 4 vCPU + 32 GB RAM  | 3                  | 14                      |
| TRAINIUM    | X64          | 32 GB      | 8 vCPU + 32 GB RAM  | -                  | -                       |
| INFERENTIA2 | X64          | 32 GB      | 4 vCPU + 16 GB RAM  | -                  | -                       |
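
As a worked example of reading this table: a 30-minute job on an L4 spot runner (2 credits/min) consumes roughly 30 × 2 = 60 credits, while the same job on-demand (6 credits/min) consumes roughly 180.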

CPU Configuration

CPU runners require specifying the number of vCPUs:

runs-on:
  - machine
  - cpu=16             # Required: 16 vCPUs with 32 GB RAM (options: 2, 4, 8, 16, 32, 64)
  - architecture=x64   # Optional: x64 (default) or arm64 for Graviton

Regional Availability

Machine GPU runners are available in multiple AWS regions. By default, Machine searches globally for the most cost-effective region in which to start each runner; you can override this in the workflow by specifying a comma-separated list of regions to limit the search:

runs-on:
  - machine
  - gpu=a10g
  - regions=us-east-1,us-east-2,eu-west-2

Available regions for the beta include:

  • North America: us-east-1, us-east-2, us-west-2, ca-central-1
  • Europe: eu-south-2
  • Asia Pacific: ap-southeast-2

Benefits of region selection:

  1. Cost optimization: Different regions have different pricing
  2. Data locality: Run close to your data sources
  3. Availability: Some GPU types are more readily available in specific regions
  4. Compliance: Meet data sovereignty requirements (see the example below)
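
For example, a workflow with European data-residency requirements could pin runners to a single region (a sketch; eu-south-2 is taken from the beta region list above):

runs-on:
  - machine
  - gpu=l4
  - regions=eu-south-2   # single-region pin for data sovereignty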

Spot vs. On-Demand Instances

Machine offers both on-demand and spot instances:

  • On-Demand: Guaranteed availability with consistent pricing
  • Spot: Up to 85% cost savings using AWS spot instances

Specify your preference in the workflow. We default to on_demand:

runs-on:
  - machine
  - gpu=L4
  - tenancy=spot   # or tenancy=on_demand

Note: Spot instances may be interrupted, so it’s important to implement error handling in your workflow.
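
One way to handle interruptions is to fall back to an on-demand runner when a spot job fails. The sketch below relies only on standard GitHub Actions needs/if semantics; train.py stands in for your own entry point:

jobs:
  train-spot:
    runs-on:
      - machine
      - gpu=L4
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - run: python train.py
  train-fallback:
    needs: train-spot
    if: ${{ failure() }}   # runs only when the spot attempt fails
    runs-on:
      - machine
      - gpu=L4
      - tenancy=on_demand
    steps:
      - uses: actions/checkout@v3
      - run: python train.py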

When to use spot instances:

  • Non-critical workloads
  • Jobs that can be retried if interrupted
  • Cost-sensitive projects

When to use on-demand instances:

  • Critical production workloads
  • Deadline-sensitive jobs
  • Jobs that would be costly to restart

Architecture Support

Machine runners support both x64 and arm64 architectures:

runs-on:
  - machine
  - gpu=T4G
  - architecture=arm64   # Default is x64

Note: Not all GPU types support both architectures. T4G instances are optimized for ARM64, while other GPUs typically use X64.
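
To exercise both architectures in one workflow, a build matrix over the architecture label should work, since runs-on accepts expressions (a sketch, assuming Machine's labels compose with matrix values):

jobs:
  test:
    strategy:
      matrix:
        arch: [x64, arm64]
    runs-on:
      - machine
      - cpu=4
      - architecture=${{ matrix.arch }}
    steps:
      - uses: actions/checkout@v3
      - run: make test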

Workflow Examples

Combined CPU and GPU Workflow

name: Full Pipeline
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: Build Application
    runs-on:
      - machine
      - cpu=4
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - name: Build project
        run: |
          make -j4 all
      - name: Upload artifacts
        uses: actions/upload-artifact@v3
        with:
          name: build-artifacts
          path: ./build/
  train:
    name: Train Model
    needs: build
    runs-on:
      - machine
      - gpu=L4
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - name: Download build artifacts
        uses: actions/download-artifact@v3
        with:
          name: build-artifacts
      - name: Train model
        run: python train.py

Basic ML Training Job

name: Train ML Model
on:
  push:
    branches: [ main ]
jobs:
  train:
    name: Train Model
    runs-on:
      - machine
      - gpu=L4
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Train model
        run: python train.py

High-Performance Fine-Tuning

name: Fine-tune LLM
on: workflow_dispatch
jobs:
  finetune:
    name: Fine-tune Language Model
    runs-on:
      - machine
      - gpu=L40S
      - cpu=16
      - ram=128
      - tenancy=on_demand
    steps:
      - uses: actions/checkout@v3
      - name: Set up environment
        run: |
          pip install -r requirements.txt
      - name: Fine-tune model
        run: python finetune.py
        env:
          WANDB_API_KEY: ${{ secrets.WANDB_API_KEY }}

Best Practices

  1. Match resources to workload: Choose the appropriate GPU, CPU, and RAM for your specific task
  2. Use spot instances when possible: Save costs on non-critical workloads
  3. Set timeouts: Configure workflow timeouts to avoid runaway costs (see the combined example below)
  4. Monitor usage: Regularly check the Machine.dev dashboard to track spending
  5. Cache dependencies: Use GitHub’s caching to speed up workflow runs
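
A sketch combining several of these practices (spot tenancy, a job-level timeout, and dependency caching with actions/cache; the cache path and key are illustrative):

jobs:
  test:
    runs-on:
      - machine
      - cpu=8
      - tenancy=spot
    timeout-minutes: 30   # cap runaway costs
    steps:
      - uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: pip-${{ hashFiles('requirements.txt') }}
      - run: pip install -r requirements.txt
      - run: pytest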

Next Steps