Applications on Beam run inside containers. A container is a lightweight, isolated environment that packages the software your application needs to run.

Containers are created from container images, which define the filesystem and software a container starts with.

Because you are building a custom application, it is likely that your application depends on some custom software to run.

You can customize the container image used to run your Beam application with the Image parameter.

System Info

Beam provides a variety of hardware with varying versions of software and drivers. If your application is sensitive to certain versions, consider the following:

Container OS:

  • Ubuntu 22.04

GPU Drivers

| GPU | Driver Version |
| --- | --- |
| A10G | 535.161.07 (CUDA 12.3) or 550.127.05 (CUDA 12.4) |
| A100-40 | 535.129.03 (CUDA 12.2) |
| RTX4090 | 550.127.05 (CUDA 12.4) |
| T4 | 550.120 (CUDA 12.4) or 550.127.05 (CUDA 12.4) |

Bring Your Own Dockerfile

You can build images based on your own Dockerfile.

The from_dockerfile() method accepts a path to a valid Dockerfile:

from beam import Image, endpoint

image = Image().from_dockerfile("./Dockerfile").add_python_packages(["numpy"])


@endpoint(image=image, name="test_dockerfile")
def handler():
    return {}

Conda Environments

Beam supports using Anaconda environments via micromamba. To get started, you can chain the micromamba method to your Image definition and then specify packages and channels via the add_micromamba_packages method.

from beam import Image


image = (
    Image(python_version="python3.11")
    .micromamba()
    .add_micromamba_packages(packages=["pandas", "numpy"], channels=["conda-forge"])
    .add_python_packages(packages=["huggingface-hub[cli]"])
    .add_commands(commands=["micromamba run -n beta9 huggingface-cli download gpt2 config.json"])
)

You can still use pip to install additional packages in the conda environment and you can run shell commands too.

If you need to run a shell command inside the conda environment, you should prepend the command with micromamba run -n beta9 as shown above.
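As a quick illustration, the prefix can be built with a small helper function (a sketch only; the environment name beta9 comes from the example above, and in_conda_env is not part of the Beam SDK):

```python
def in_conda_env(command: str, env_name: str = "beta9") -> str:
    # Wrap a shell command so it executes inside the micromamba environment.
    return f"micromamba run -n {env_name} {command}"

# e.g. in_conda_env("huggingface-cli download gpt2 config.json")
```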

Public Docker Registries

You can import existing images from remote Docker registries, like Docker Hub, Google Artifact Registry, ECR, GitHub Container Registry, NVIDIA and more.

Just supply a base_image argument to Image.

from beam import endpoint, Image

image = (
    Image(
        base_image="docker.io/nvidia/cuda:12.3.1-runtime-ubuntu20.04",
        python_version="python3.9",
    )
    .add_commands(["apt-get update -y", "apt-get install neovim -y"])
    .add_python_packages(["torch"])
)


@endpoint(image=image)
def handler():
    import torch

    return {"torch_version": torch.__version__}

Beam only supports Debian-based images. In addition, make sure your image is built for the x86_64 architecture.

Private Docker Registries

Beam supports importing images from the following private registries: AWS ECR, Google Artifact Registry, Docker Hub, and NVCR.

Private registries require credentials. You can pass them to Beam in two ways: as a dictionary of values, or as a list of environment variable names exported in your shell so Beam can look up the values automatically.

Passing Credentials as a Dictionary

You can provide the values for the registry as a dictionary directly, like this:

from beam import Image


image = Image(
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    base_image_creds={
        "AWS_ACCESS_KEY_ID": "xxxx",
        "AWS_SECRET_ACCESS_KEY": "xxxx",
        "AWS_REGION": "xxxx",
    },
)

Passing Credentials from your Environment

Alternatively, you can export your credentials in your shell and pass the environment variable names to base_image_creds as a list:

from beam import Image


image = Image(
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    base_image_creds=[
        "AWS_ACCESS_KEY_ID",
        "AWS_SECRET_ACCESS_KEY",
        "AWS_SESSION_TOKEN",
        "AWS_REGION",
    ],
)
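The list form resolves to the same values as the dictionary form. Conceptually, it behaves like the lookup below (a sketch of the idea only, not Beam's actual implementation):

```python
import os

# Pretend these were exported in your shell.
os.environ["AWS_ACCESS_KEY_ID"] = "xxxx"
os.environ["AWS_SECRET_ACCESS_KEY"] = "xxxx"

def resolve_creds(names):
    # Look up each named environment variable and build the credentials dict.
    return {name: os.environ[name] for name in names}

creds = resolve_creds(["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"])
```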

AWS ECR

To use a private image from Amazon ECR, export your AWS environment variables. Then configure the Image object with those environment variables.

You can authenticate with either your static AWS credentials or an AWS STS token. If you use the AWS STS token, your AWS_SESSION_TOKEN key must also be set.

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="111111111111.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    base_image_creds=["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION"],
)

@endpoint(image=image)
def handler():
    pass

GCP Artifact Registry

To use a private image from Google Artifact Registry, export your access token.

export GCP_ACCESS_TOKEN=$(gcloud auth print-access-token --project=my-project)

Then configure the Image object to use the environment variable.

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="us-east4-docker.pkg.dev/my-project/my-repo/my-image:0.1.0",
    base_image_creds=["GCP_ACCESS_TOKEN"],
)

@endpoint(image=image)
def handler():
    pass

NVIDIA GPU Cloud (NGC)

To use a private image from NVIDIA GPU Cloud, export your API key.

export NGC_API_KEY=abc123

Then configure the Image object to use the environment variable.

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="nvcr.io/nvidia/tensorrt:24.10-py3",
    base_image_creds=["NGC_API_KEY"],
)

@endpoint(image=image)
def handler():
    pass

Docker Hub

To use a private image from Docker Hub, export your Docker Hub credentials.

export DOCKERHUB_USERNAME=user123
export DOCKERHUB_PASSWORD=pass123

Then configure the Image object with those environment variables.

from beam import Image, endpoint

image = Image(
    python_version="python3.12",
    base_image="docker.io/my-org/my-image:0.1.0",
    base_image_creds=["DOCKERHUB_USERNAME", "DOCKERHUB_PASSWORD"],
)

@endpoint(image=image)
def handler():
    pass

Adding Shell Commands

You can also run any shell commands you want in the environment before it starts up. Just pass them to the add_commands method in your app definition.

Below, we’ll customize our image with requests and some shell commands:

from beam import endpoint, Image


image = (
    Image(python_version="python3.9")
    .add_commands(["apt-get update", "pip install beautifulsoup4"])
    .add_python_packages(["requests"])
)

@endpoint(cpu=1, memory="16Gi", gpu="T4", image=image)
def handler():
    return {}

Adding Python Packages

You can add Python packages to the runtime with the add_python_packages method:

from beam import Image


Image(python_version="python3.9").add_python_packages(["requests"])

Beam will default to Python 3.8 if no python_version is provided.

Alternatively, you can pass in a path to a requirements.txt file:

from beam import Image


Image(python_version="python3.9", python_packages="requirements.txt")
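Either way, the same set of packages is installed: a requirements.txt file is just a newline-separated list of pip requirements. A quick sketch of the equivalence (parse_requirements is a hypothetical helper for illustration, not part of the Beam SDK):

```python
def parse_requirements(text: str) -> list[str]:
    # Drop blank lines and comments; each remaining line is one pip requirement.
    return [line.strip() for line in text.splitlines()
            if line.strip() and not line.strip().startswith("#")]

pkgs = parse_requirements("requests==2.31.0\n\n# dev tools\nnumpy\n")
```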

Passing Secrets

You can pass secrets to the image build using the with_secrets method:

from beam import Image, endpoint

image = (
    Image()
    .with_secrets(["AWS_SECRET_KEY"])
    .add_commands(
        [
            "echo $AWS_SECRET_KEY",
        ]
    )
)


@endpoint(image=image, name="secret-example")
def handler():
    return {}

Using Environment Variables

If your environment requires certain environment variables, you can set them using the env_vars parameter:

from beam import function, Image


@function(image=Image(env_vars=["CUDA_HOME=/usr/local/cuda-12.3"]))
def handler():
    import os

    print(os.getenv("CUDA_HOME"))

You can also use the following syntax, if you prefer:

from beam import Image

image = (
    Image(python_version="python3.9")
    .with_envs("HF_HUB_ENABLE_HF_TRANSFER=1")
    .add_python_packages(["huggingface_hub[hf-transfer]"])
)
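Both forms take KEY=VALUE strings. Splitting each string on its first = yields the variable name and value, roughly like this (a sketch of the format only, not Beam's actual parser):

```python
def parse_env_vars(pairs):
    # Split each "KEY=VALUE" string on the first "=" only,
    # so values (like paths) may themselves contain "=".
    return dict(pair.split("=", 1) for pair in pairs)

env = parse_env_vars(["CUDA_HOME=/usr/local/cuda-12.3", "HF_HUB_ENABLE_HF_TRANSFER=1"])
```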