The Beam v2 Developer Preview is currently in early access. Sign up here to get started.

Features

  • Scale out workloads to thousands of GPU (or CPU) containers
  • Ultrafast cold-start for custom ML models
  • Automatic scaling, including scale-to-zero
  • Flexible distributed storage for models and function outputs
  • Distribute workloads across multiple cloud providers
  • Easily deploy task queues and functions using simple Python abstractions
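As a rough sketch of what a Python-first task-queue abstraction can look like (this is illustrative only, not Beam's actual API — the `task_queue` decorator and `drain` helper below are hypothetical):

```python
import functools
from collections import deque

# Hypothetical sketch: a decorator that registers a function as a
# queued task instead of running it inline. A real platform would
# ship the call to a remote container; here we just enqueue locally.
_queue = deque()

def task_queue(func):
    """Mark a function so calls are enqueued rather than run inline."""
    @functools.wraps(func)
    def submit(*args, **kwargs):
        _queue.append((func, args, kwargs))
        return len(_queue) - 1  # local task id
    return submit

def drain():
    """Run all queued tasks (a real scheduler would do this remotely)."""
    results = []
    while _queue:
        func, args, kwargs = _queue.popleft()
        results.append(func(*args, **kwargs))
    return results

@task_queue
def square(x):
    return x * x

square(3)
square(4)
print(drain())  # executes the two queued calls
```

The decorator keeps the function's signature and call style intact, which is the point of this kind of abstraction: existing Python code becomes a deployable task by adding one line.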


Create an account on Beam and download the Beam CLI to get started:

curl -sSfL | sh

How It Works

Beam is designed for launching remote serverless containers quickly. There are a few things that make this possible:

  • A custom, lazy-loading image format (CLIP) backed by S3/FUSE
  • A fast, Redis-based container scheduling engine
  • Content-addressed storage for caching images and files
  • A custom runc container runtime
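Content-addressed storage can be sketched in a few lines: the storage key is derived from a hash of the content itself, so identical images or files are stored exactly once and cache hits are trivial to detect. The `CASCache` class below is a hypothetical in-memory illustration, not Beam's implementation:

```python
import hashlib

class CASCache:
    """Minimal in-memory content-addressed store (illustrative only)."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # The key is the SHA-256 digest of the content, so the same
        # bytes always map to the same key (automatic deduplication).
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

cache = CASCache()
k1 = cache.put(b"image-layer-contents")
k2 = cache.put(b"image-layer-contents")  # same bytes -> same key
print(k1 == k2)  # True: the duplicate layer is stored only once
```

Because keys are pure functions of content, a scheduler can check whether an image layer is already cached on a node by comparing digests, without transferring any data first.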