GPU Runners
Machine provides a variety of GPU runner options to match your specific workload requirements. Each runner type comes with pre-installed NVIDIA Device Drivers 555.58, CUDA 12.1.0, and cuDNN 9.2.1, allowing you to start using GPU acceleration immediately without any configuration.
You are always free to install additional driver, CUDA, or cuDNN versions, or to build your own from source.
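For example, a quick way to confirm the pre-installed toolchain on a runner is to print the driver and CUDA versions in a workflow step. The sketch below assumes a GitHub Actions-style workflow; the `runs-on` label is hypothetical and should be replaced with your actual Machine runner configuration.

```yaml
# Illustrative sketch only: the runs-on label below is a placeholder,
# not an official Machine label. nvidia-smi and nvcc ship with the
# pre-installed driver and CUDA toolkit described above.
jobs:
  check-gpu:
    runs-on: machine   # replace with your actual Machine runner label
    steps:
      - name: Print driver and CUDA versions
        run: |
          nvidia-smi        # reports the driver version and visible GPUs
          nvcc --version    # reports the CUDA toolkit version (12.1.0)
```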
Available GPU Types
Machine supports all GPU instances currently available on AWS, including NVIDIA GPUs and AWS Trainium and Inferentia accelerators.
NVIDIA GPU Runners
GPU Type | GPU Memory | CUDA Cores | Tensor Cores | Use Cases |
---|---|---|---|---|
T4G | 16GB | 2,560 | 320 | Entry-level ML training, inference |
T4 | 16GB | 2,560 | 320 | General-purpose ML, computer vision |
L4 | 24GB | 7,680 | 240 | Balanced training/inference, mid-range ML |
A10G | 24GB | 9,216 | 288 | Advanced training, larger models |
L40S | 48GB | 18,176 | 568 | Large model training, high-performance ML |
AWS Trainium and Inferentia Accelerators
Accelerator Type | Accelerator Memory | Use Cases |
---|---|---|
TRN1 (Trainium) | 32GB | High-performance training |
INF2 (Inferentia2) | 32GB | Next-generation, cost-efficient inference |
Pre-installed Software
Each runner comes with the following software pre-installed:
- NVIDIA Device Drivers 555.58
- CUDA 12.1.0
- cuDNN 9.2.1
- NVIDIA Container Toolkit
- AWS Neuron SDK
Runner Specifications
Besides the GPU, Machine runners offer configurable CPU and RAM options so you can size compute to your workload.
Default Runner Specs
GPU Type | Default vCPUs | Default CPU RAM | Root Volume Storage | NVMe SSD Storage |
---|---|---|---|---|
T4G | 4 | 8GB | 100GB | |
T4 | 4 | 16GB | 100GB | 125GB |
L4 | 4 | 16GB | 100GB | 250GB |
A10G | 4 | 16GB | 100GB | 250GB |
L40S | 4 | 32GB | 100GB | 250GB |
TRN1 | 4 | 8GB | 100GB | |
INF2 | 4 | 16GB | 100GB | |
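A common sizing question is which runner is the smallest that still fits a given model in GPU memory. The sketch below encodes the NVIDIA GPU table from this page in a plain dictionary and picks the cheapest-fitting type by memory; the helper function and its name are illustrative, not part of any Machine API.

```python
# Hypothetical sizing helper: values are taken from the tables above;
# the function itself is an illustration, not a Machine API.
GPU_RUNNERS = {
    # gpu_type: (gpu_memory_gb, default_vcpus, default_ram_gb)
    "T4G":  (16, 4, 8),
    "T4":   (16, 4, 16),
    "L4":   (24, 4, 16),
    "A10G": (24, 4, 16),
    "L40S": (48, 4, 32),
}

def smallest_runner(min_gpu_memory_gb: int) -> str:
    """Return the GPU type with the least memory that still fits the workload."""
    candidates = [
        (mem, name)
        for name, (mem, _vcpus, _ram) in GPU_RUNNERS.items()
        if mem >= min_gpu_memory_gb
    ]
    if not candidates:
        raise ValueError(f"No runner offers {min_gpu_memory_gb}GB of GPU memory")
    # min() picks the smallest memory; ties break alphabetically by name
    return min(candidates)[1]

print(smallest_runner(20))  # -> A10G (a 20GB workload fits on a 24GB GPU)
```

Memory alone is a rough heuristic; for training workloads you may also want to weigh CUDA/Tensor core counts and NVMe storage from the tables above.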
Next Steps
- Learn how to Configure Your Workflows for optimal performance
- Explore Cost Optimization Strategies