With Beam, you can deploy web servers that use the ASGI protocol. This means that you can deploy applications built with popular frameworks like FastAPI and Django.

Multiple Endpoints Per App

In the example below, we are deploying a FastAPI web server that uses the Hugging Face Transformers library to perform sentiment analysis and text generation.

We also included a warmup endpoint so that we can preemptively get our container ready for incoming requests.

FastAPI requires that request bodies be declared as Pydantic models, which handle validation and serialization for you. You can read more in the FastAPI documentation.
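As a quick sketch of what that validation gives you (assuming `pydantic` is installed; the `Input` model mirrors the one declared in the app below):

```python
from pydantic import BaseModel, ValidationError


class Input(BaseModel):
    text: str


# A valid payload is parsed into a typed object
payload = Input(**{"text": "I love Beam!"})
print(payload.text)

# A payload missing the required field raises ValidationError,
# which FastAPI translates into a 422 response automatically
try:
    Input()
except ValidationError:
    print("validation failed")
```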

app.py
from beam import asgi, Image
from pydantic import BaseModel


# Request payload for API, declared with Pydantic
class Input(BaseModel):
    text: str


def init_models():
    from transformers import pipeline

    # Initialize two simple models
    sentiment_analyzer = pipeline("sentiment-analysis")
    text_generator = pipeline("text-generation", model="gpt2")

    return sentiment_analyzer, text_generator


@asgi(
    name="sentiment-analysis",
    image=Image(python_packages=["transformers", "torch", "fastapi", "pydantic"]),
    on_start=init_models,
)
def web_server(context):
    from fastapi import FastAPI

    app = FastAPI()

    sentiment_analyzer, text_generator = context.on_start_value

    @app.post("/sentiment")
    async def analyze_sentiment(input: Input):
        # Unpack request input and send to ML model
        result = sentiment_analyzer(input.text)
        return result

    @app.post("/generate")
    async def generate_text(prompt: str = "", max_length: int = 1000):
        # These arrive as query parameters, since they are not wrapped in a Pydantic model
        result = text_generator(prompt, max_length=max_length)
        return result

    @app.post("/warmup")
    async def warmup():
        return {"status": "warm"}

    return app

Sending Requests

If we wanted to perform sentiment analysis, we would send a POST request to https://app.beam.cloud/asgi/id/[ID]/sentiment with a JSON payload of the following format:

{
  "text": "I love Beam!"
}
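The same request can be assembled in Python with the standard library. This is a sketch: `[ID]` and `[YOUR_AUTH_TOKEN]` are placeholders you would copy from your own deployment, and the request is built but not actually sent.

```python
import json
import urllib.request

# Placeholders: substitute your deployment ID and auth token
url = "https://app.beam.cloud/asgi/id/[ID]/sentiment"
token = "[YOUR_AUTH_TOKEN]"

req = urllib.request.Request(
    url,
    data=json.dumps({"text": "I love Beam!"}).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": token,
    },
    method="POST",
)

# urllib.request.urlopen(req) would send the request;
# it is not called here so the sketch stays self-contained
```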

Launch a Preview Environment

Just like an endpoint, you can prototype your web server using beam serve. This command will monitor changes in your local file system, live-reload the remote environment as you work, and forward remote container logs to your local shell.

beam serve app.py:web_server

Deploying the Web Server

When you are ready to deploy your web server, run the following command:

beam deploy app.py:web_server

You’ll see some logs in the console that show the progress of your deployment.

=> Building image
=> Syncing files
...
=> Invocation details
curl -X POST 'https://app.beam.cloud/asgi/id/1e4edc1b-d7ab-4d6e-a4b5-c4afa12f16df' \
-H 'Connection: keep-alive' \
-H 'Content-Type: application/json' \
-H 'Authorization: [YOUR_AUTH_TOKEN]' \
-d '{}'

Response Types

Beam supports various response types, including any FastAPI response type, such as JSONResponse, StreamingResponse, and FileResponse. You can find the full list in the FastAPI documentation.