Alpaca LoRA Training and Inference

This example demonstrates how to fine-tune and deploy Alpaca-LoRA on Beam.

To run this example, you’ll need a free account on Beam. If you sign up here, you’ll get 10 hours of free credit to get started.

Training

We’re going to implement the code from the Alpaca-LoRA repo in a script we can run on Beam.

I’m using the Instruction Tuning with GPT-4 dataset, which is hosted on Hugging Face.
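
For a quick look at what the data contains, we can load it locally and print a row. The column names in the comment below are assumptions based on the dataset card, so verify them against the dataset you actually use:

from datasets import load_dataset

dataset = load_dataset("vicgalle/alpaca-gpt4")

# Expected columns (per the dataset card): instruction, input, output, text
print(dataset["train"].column_names)

# Each row pairs an instruction with a GPT-4-written response
print(dataset["train"][0]["instruction"])
print(dataset["train"][0]["output"])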

The first thing we’ll do is set up the compute environment to run Llama. The training script runs on an A10G GPU with 24Gi of GPU memory:

This example only demonstrates the high-level workflow, so specific functions like train are hidden. You can find the entire source code on GitHub.

from math import ceil

from beam import App, Runtime, Image, Volume
from datasets import load_dataset

# The runtime definition
app = App(
    "fine-tune-llama",
    runtime=Runtime(
        cpu=4,
        memory="32Gi",
        gpu="A10G",
        image=Image(
            python_version="python3.10",
            python_packages="requirements.txt",
        ),
    ),
    volumes=[
        # checkpoints is used to save fine-tuned models
        Volume(name="checkpoints", path="./checkpoints"),
        # pretrained-models is used to cache model weights
        Volume(name="pretrained-models", path="./pretrained-models"),
    ],
)


# Training
@app.run()
def train_model():
    # Trained models will be saved to this path
    beam_volume_path = "./checkpoints"

    # We use the vicgalle/alpaca-gpt4 dataset hosted on Huggingface:
    # https://huggingface.co/datasets/vicgalle/alpaca-gpt4
    dataset = load_dataset("vicgalle/alpaca-gpt4")

    # Adjust the training loop based on the size of the dataset
    samples = len(dataset["train"])
    val_set_size = ceil(0.1 * samples)

    # base_model (the pretrained Llama weights) and train itself are
    # defined in the full source linked above
    train(
        base_model=base_model,
        val_set_size=val_set_size,
        data=dataset,
        output_dir=beam_volume_path,
    )
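
The train function is one of the pieces omitted here, but conceptually it follows the usual LoRA recipe: load the base Llama weights, attach LoRA adapters with peft, and fine-tune with a transformers Trainer. The sketch below is only an illustration of that recipe, not the repo’s actual implementation; the 8-bit loading, hyperparameters, and use of the dataset’s pre-formatted text column are all assumptions:

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)


def train(base_model, val_set_size, data, output_dir):
    # Load the base Llama weights in 8-bit so the model fits on a 24Gi GPU
    model = AutoModelForCausalLM.from_pretrained(
        base_model, load_in_8bit=True, device_map="auto"
    )
    model = prepare_model_for_kbit_training(model)

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token

    # Attach LoRA adapters: only these low-rank matrices are trained
    model = get_peft_model(
        model,
        LoraConfig(
            r=8,
            lora_alpha=16,
            lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"],
            task_type="CAUSAL_LM",
        ),
    )

    # Assumes the dataset exposes a pre-formatted `text` prompt column
    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = data["train"].map(
        tokenize, batched=True, remove_columns=data["train"].column_names
    )
    split = tokenized.train_test_split(test_size=val_set_size)

    trainer = Trainer(
        model=model,
        train_dataset=split["train"],
        eval_dataset=split["test"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        args=TrainingArguments(
            output_dir=output_dir,
            per_device_train_batch_size=4,
            gradient_accumulation_steps=8,
            num_train_epochs=3,
            learning_rate=3e-4,
            fp16=True,
            logging_steps=10,
            save_strategy="steps",
            save_steps=200,
        ),
    )
    trainer.train()

    # Save only the LoRA adapter weights into the Beam checkpoints volume
    model.save_pretrained(output_dir)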

To run this on Beam, we use the beam run command:

beam run app.py:train_model

When we run this command, the training function will run on Beam’s cloud, and we’ll see the progress of the training process streamed to our terminal.

Deploying an Inference API

When the model is trained, we can deploy an API to run inference on our fine-tuned model.

Let’s create a new function for inference. If you look closely, you’ll notice that we’re using a different decorator this time: rest_api instead of run.

This will allow us to deploy the function as a REST API.

# Inference
@app.rest_api()
def run_inference(**inputs):
    # Inputs passed to the API
    input = inputs["input"]

    # Grab the latest checkpoint
    checkpoint = get_newest_checkpoint()

    # Initialize models
    models = load_models(checkpoint=checkpoint)

    model = models["model"]
    tokenizer = models["tokenizer"]
    prompter = models["prompter"]

    # Generate text
    response = call_model(
        input=input, model=model, tokenizer=tokenizer, prompter=prompter
    )
    return response
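
get_newest_checkpoint, load_models, and call_model are also defined in the full source. Purely as an illustration, the first one could be as simple as picking the most recently modified entry in the checkpoints volume (the directory layout here is an assumption):

import os


def get_newest_checkpoint(checkpoint_dir="./checkpoints"):
    # List everything saved to the checkpoints volume and
    # return the entry with the most recent modification time
    entries = [
        os.path.join(checkpoint_dir, name) for name in os.listdir(checkpoint_dir)
    ]
    if not entries:
        raise FileNotFoundError(f"No checkpoints found in {checkpoint_dir}")
    return max(entries, key=os.path.getmtime)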

We can deploy this as a REST API by running this command:

beam deploy app.py:run_inference

If we navigate to the URL printed in the shell, we’ll be able to copy the full cURL request to call the REST API.

We’ll modify the request slightly with a payload for the model:

-d '{"input": "what are the five steps to become a published author?"}'
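Putting it together, the full request will look roughly like this; the endpoint URL and Authorization header are specific to your deployment, so the values below are placeholders for what you copy from the dashboard:

curl -X POST '<YOUR_DEPLOYMENT_URL>' \
  -H 'Authorization: <YOUR_AUTH_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{"input": "what are the five steps to become a published author?"}'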

And here’s the response from the fine-tuned model:

1. Write Your Manuscript: The first and most crucial step is to write your book. This involves coming up with a compelling idea, outlining your plot or structure, and then dedicating time to actually write it. Set a writing schedule and stick to it to make consistent progress.
2. Revise and Edit: After completing your manuscript, take the time to revise and edit it. This may involve multiple drafts where you refine your writing, check for grammar and spelling errors, and ensure the plot or content flows smoothly. Consider seeking feedback from beta readers or joining a writing group for constructive criticism.
3. Query Agents or Publishers: Once you have a polished manuscript, research literary agents or publishers who specialize in your genre or niche. Write a compelling query letter that succinctly summarizes your book, your qualifications, and why it's marketable. Be prepared for rejection and consider submitting to multiple agents or publishers.
4. Secure a Literary Agent or Publisher: If an agent or publisher expresses interest in your work, they may request a full manuscript or further revisions. If they offer representation or a publishing deal, carefully review the terms and negotiate if necessary. Having a literary agent can be particularly helpful as they can advocate for your work and navigate the publishing industry on your behalf.
5. Publication and Promotion: Once you secure a publishing deal, your book will go through the publishing process, including editing, design, and distribution. Work closely with your publisher to ensure your vision is maintained. Concurrently, start building your author platform by creating a website, engaging on social media, and planning a marketing strategy to promote your book upon release.