Beam supports importing images from custom public and private registries.

Public Docker Registries

You can import existing images from remote Docker registries, like Docker Hub, Google Artifact Registry, AWS ECR, GitHub Container Registry, NVIDIA NGC, and more.

Just supply a base_image argument to Image.

from beam import endpoint, Image

image = (
    Image(
        base_image="docker.io/nvidia/cuda:12.3.1-runtime-ubuntu20.04",
        python_version="python3.9",
    )
    .add_commands(["apt-get update -y", "apt-get install neovim -y"])
    .add_python_packages(["torch"])
)


@endpoint(image=image)
def handler():
    import torch

    return {"torch_version": torch.__version__}

Beam only supports Debian-based images. In addition, make sure your image is built for the x86_64 (amd64) architecture.
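
If you build your own base image on an ARM machine (for example, an Apple Silicon Mac), you can target x86_64 explicitly with Docker buildx, or inspect an existing image's architecture before pointing Beam at it. A rough sketch (image names are placeholders):

# Build for x86_64 even on an ARM host
docker buildx build --platform linux/amd64 -t my-registry/my-image:latest .

# Check the architecture of an existing image
docker manifest inspect docker.io/nvidia/cuda:12.3.1-runtime-ubuntu20.04 | grep architecture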

Private Docker Registries

Beam supports importing images from the following private registries: AWS ECR, Google Artifact Registry, Docker Hub, and NVIDIA Container Registry.

Private registries require credentials. You can pass credentials to Beam in two ways: as a dictionary of values, or as a list of environment variable names exported from your shell so Beam can automatically look up the values.

Passing Credentials as a Dictionary

You can provide the registry credentials directly as a dictionary, like this:

from beam import Image


image = Image(
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    base_image_creds={
        "AWS_ACCESS_KEY_ID": "xxxx",
        "AWS_SECRET_ACCESS_KEY": "xxxx",
        "AWS_REGION": "xxxx",
    },
)
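
If you prefer not to hardcode secrets in your source code, you could populate the dictionary from your own environment instead. A small sketch using os.environ (the variable names are just examples):

import os

from beam import Image

image = Image(
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    base_image_creds={
        # Read the values from the local environment rather than hardcoding them
        "AWS_ACCESS_KEY_ID": os.environ["AWS_ACCESS_KEY_ID"],
        "AWS_SECRET_ACCESS_KEY": os.environ["AWS_SECRET_ACCESS_KEY"],
        "AWS_REGION": os.environ["AWS_REGION"],
    },
)

The list form described in the next section performs this lookup for you, so the dictionary form is mostly useful when the values don't already live in correctly named environment variables.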

Passing Credentials from Your Environment

Alternatively, you can export your credentials in your shell and pass the environment variable names to base_image_creds as a list:

from beam import Image


image = Image(
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    base_image_creds=[
        "AWS_ACCESS_KEY_ID",
        "AWS_SECRET_ACCESS_KEY",
        "AWS_SESSION_TOKEN",
        "AWS_REGION",
    ],
)

AWS ECR

To use a private image from Amazon ECR, export your AWS credentials as environment variables, then configure the Image object with those variable names.

You can authenticate with either static AWS credentials or temporary credentials from AWS STS. If you use temporary STS credentials, your AWS_SESSION_TOKEN variable must also be set.
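
For example, with static credentials you might export something like this before deploying (placeholder values):

export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export AWS_REGION=us-east-1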

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    base_image_creds=["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION"],
)

@endpoint(image=image)
def handler():
    pass
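
If you authenticate with temporary STS credentials instead, include AWS_SESSION_TOKEN in the list as well. A variant of the example above:

image = Image(
    python_version="python3.12",
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    base_image_creds=[
        "AWS_ACCESS_KEY_ID",
        "AWS_SECRET_ACCESS_KEY",
        "AWS_SESSION_TOKEN",
        "AWS_REGION",
    ],
)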

GCP Artifact Registry

To use a private image from Google Artifact Registry, export your access token.

export GCP_ACCESS_TOKEN=$(gcloud auth print-access-token --project=my-project)

Then configure the Image object to use the environment variable.

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="us-east4-docker.pkg.dev/my-project/my-repo/my-image:0.1.0",
    base_image_creds=["GCP_ACCESS_TOKEN"],
)

@endpoint(image=image)
def handler():
    pass

NVIDIA GPU Cloud (NGC)

To use a private image from NVIDIA GPU Cloud, export your API key.

export NGC_API_KEY=abc123

Then configure the Image object to use the environment variable.

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="nvcr.io/nvidia/tensorrt:24.10-py3",
    base_image_creds=["NGC_API_KEY"],
)

@endpoint(image=image)
def handler():
    pass

Docker Hub

To use a private image from Docker Hub, export your Docker Hub credentials.

export DOCKERHUB_USERNAME=user123
export DOCKERHUB_PASSWORD=pass123

Then configure the Image object with those environment variables.

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="docker.io/my-org/my-image:0.1.0",
    base_image_creds=["DOCKERHUB_USERNAME", "DOCKERHUB_PASSWORD"],
)

@endpoint(image=image)
def handler():
    pass