Faster Whisper
This guide will walk you through deploying and invoking a transcription API using the Faster Whisper model on Beam. The API can be invoked with either a URL to an .mp3
file or a base64-encoded audio file.
View the Code
See the code for this example on GitHub.
Initial Setup
In your Python file, add the following code to define your endpoint and handle the transcription:
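A minimal sketch of the handler is shown below. The file layout, handler name, payload field names (`url` and `audio_file`), and model size are illustrative assumptions — see the linked GitHub example for the exact definition. On Beam, the handler is additionally decorated with `@endpoint(...)`, which sets the CPU, memory, and GPU resources and an image with `faster-whisper` installed.

```python
import base64
import tempfile
import urllib.request


def load_audio(inputs: dict) -> str:
    """Write the incoming audio (a URL or a base64 payload) to a temp file and return its path."""
    data = (
        urllib.request.urlopen(inputs["url"]).read()
        if "url" in inputs
        else base64.b64decode(inputs["audio_file"])
    )
    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
        f.write(data)
        return f.name


# On Beam, decorate this handler, e.g. (decorator parameters are assumptions):
#
#   from beam import endpoint, Image
#
#   @endpoint(
#       name="faster-whisper",
#       gpu="T4",
#       memory="8Gi",
#       image=Image().add_python_packages(["faster-whisper"]),
#   )
def transcribe(**inputs):
    # faster-whisper is installed in the container image, so import it lazily.
    from faster_whisper import WhisperModel

    audio_path = load_audio(inputs)
    model = WhisperModel("base", device="cuda", compute_type="float16")
    segments, _info = model.transcribe(audio_path)
    return {"text": "".join(segment.text for segment in segments)}
```

The handler accepts either input shape, so a single endpoint serves both URL-based and base64-based requests.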
Serving the API
In your shell, serve the API by running:
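Assuming the endpoint above lives in `app.py` with a handler named `transcribe` (both names are assumptions from the sketch above), the command looks like:

```shell
beam serve app.py:transcribe
```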
This command will:
- Spin up a container.
- Run it with the specified CPU, memory, and GPU resources.
- Sync your local files to the remote container.
- Print a cURL request to invoke the API.
- Stream logs to your shell.
Invoking the API
Once the API is running, you can invoke it with a URL to an .mp3 file using the following cURL command:
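The `serve` command prints the exact cURL request for your deployment; a typical request has this shape (the URL format and the `url` field name are assumptions):

```shell
curl -X POST 'https://[YOUR-ENDPOINT-ID].app.beam.cloud' \
  -H 'Authorization: Bearer [YOUR-AUTH-TOKEN]' \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://example.com/audio.mp3"}'
```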
Replace [YOUR-ENDPOINT-ID] with your actual endpoint ID and [YOUR-AUTH-TOKEN] with your authentication token.
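To send a base64-encoded audio file instead of a URL, encode the file inline and pass it in the request body (the `audio_file` field name is an assumption; `base64 -w 0` is the GNU coreutils flag for unwrapped output — on macOS, use `base64 -i` instead):

```shell
curl -X POST 'https://[YOUR-ENDPOINT-ID].app.beam.cloud' \
  -H 'Authorization: Bearer [YOUR-AUTH-TOKEN]' \
  -H 'Content-Type: application/json' \
  -d "{\"audio_file\": \"$(base64 -w 0 input.mp3)\"}"
```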
Summary
You’ve successfully set up a highly performant serverless API for transcribing audio files using the Faster Whisper model on Beam. The API can handle both URLs to audio files and base64-encoded audio files. With the provided setup, you can easily serve, invoke, and develop your transcription API.