Quickstart
This guide walks you from zero to a green Machine job in about 5 minutes. By the end you’ll have a workflow that boots a T4G GPU runner, runs nvidia-smi, and shuts down — for about $0.004 per run.
Prerequisites
- A GitHub account
- A repository where you can add a workflow file (any repo will do — even an empty one)
- Permission to install GitHub Apps on your account or organization
Step 1: Create a Machine account
Go to app.machine.dev/signup and sign in with GitHub. New accounts receive $10 of free compute — enough for roughly 41 hours of T4G GPU time at the $0.004/min spot rate.
Step 2: Install the Machine GitHub App
After signing in, you’ll be prompted to install the Machine Provisioner GitHub App. Pick the account or organization you want to use Machine with, then either grant access to all repositories or pick specific ones.
Step 3: Enable self-hosted runners on your org
GitHub blocks self-hosted runners by default for security. You need to enable them once per org.
See Enable self-hosted runners for the click-by-click steps. It takes about 30 seconds.
Step 4: Add a workflow file
Create `.github/workflows/machine-test.yml` in your repository with this content:
```yaml
name: Machine quickstart

on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  hello-gpu:
    runs-on: [machine, gpu=t4g, tenancy=spot]
    steps:
      - uses: actions/checkout@v4

      - name: Show GPU info
        run: nvidia-smi

      - name: Show CPU info
        run: |
          echo "Architecture: $(uname -m)"
          echo "vCPUs: $(nproc)"
          echo "RAM: $(free -h | awk '/^Mem:/{print $2}')"
```

Commit and push this file to your main branch.
Step 5: Watch it run
Open the Actions tab in your repository on GitHub. You should see the “Machine quickstart” workflow running. Within about a minute, the job will:
- Get picked up by a freshly provisioned Machine T4G runner
- Print the GPU details (`nvidia-smi`)
- Print the CPU info (4 vCPUs, 8 GB RAM, ARM64 Graviton)
- Finish and shut down
Total runtime: about 30–60 seconds. Cost: $0.002–$0.004 at the $0.004/min T4G spot rate.
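That figure is plain per-minute arithmetic. A quick sketch, using only the rate and runtimes quoted above:

```python
# Per-run cost at the $0.004/min T4G spot rate quoted in this guide.
spot_rate_per_min = 0.004  # dollars per minute

for runtime_s in (30, 60):  # the typical runtime range for this quickstart job
    cost = spot_rate_per_min * runtime_s / 60
    print(f"{runtime_s}s run: ${cost:.4f}")
# prints:
# 30s run: $0.0020
# 60s run: $0.0040
```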
What just happened
When you pushed the workflow file, GitHub Actions saw the `runs-on: [machine, gpu=t4g, tenancy=spot]` labels. It handed the job off to the Machine Provisioner, which launched a fresh AWS spot instance with an NVIDIA T4G GPU (16 GB VRAM, ARM64 Graviton). The runner agent inside the VM registered itself with GitHub as an ephemeral self-hosted runner, GitHub dispatched the job, and the runner shut down when the job finished.
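Read back as individual labels, the runs-on line decomposes like this (the comments are a gloss of the behavior described in this guide, not official label documentation — see Configuration options for the authoritative list):

```yaml
runs-on:
  - machine        # route the job to a Machine-provisioned runner, not a GitHub-hosted one
  - gpu=t4g        # request an NVIDIA T4G GPU instance (16 GB VRAM, ARM64 Graviton)
  - tenancy=spot   # run on an AWS spot instance; billed only while the runner is up
```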
You only paid for the seconds the runner was actually up.
Where to go next
- Configuration options — every label you can pass to `runs-on`
- GPU Runners — pick the right GPU for your workload
- CPU vs GPU — decision matrix
- Workflow Setup — patterns for real workflows
- Cost Optimization — spot, checkpointing, right-sizing
- Use Cases — real ML workflows you can fork