How to get started with Amazon Bedrock
A step-by-step guide on how to start using Amazon Bedrock models, either through the console or via APIs. Also, how to fine-tune and build custom models.
Apr 15, 2024 • 7 Minute Read
It seems that generative AI models have run wild lately—everybody has one! Think of OpenAI with its GPT and DALL-E models, Google's Bard with PaLM, Meta's Llama, Stability AI with Stable Diffusion, among others. Text, chat, images, audio—yep, there’s a model for that!
But, what if you want a single tool to harness the power of multiple models, with enterprise-ready security, privacy and monitoring baked in? Or, more common these days, what if you want to use your own custom models, trained on your own data? Well, with Amazon entering the generative AI race, all of that just got a lot easier.
In this article, I’ll guide you through how to get started with Amazon Bedrock, from accessing the models, playing around in the playgrounds, using APIs, to creating your own custom model. You can also check out this five-minute Amazon Bedrock video walkthrough by Principal AWS Training Architect, Faye Ellis.
Table of contents
How much does Amazon Bedrock cost?
Pricing for Amazon Bedrock varies widely depending on the model, but it all starts with tokens. A token is the basic unit a model uses to process user input and generate results, and it typically represents a few characters. For estimation purposes, Amazon recommends using 6 characters = 1 token.
There are also three broad categories of pricing:
On-Demand: Only pay for what you use, with no up-front commitment required. This is what I used to create examples for this article, and my costs were approximately $0.09.
Provisioned Throughput: Requires a 1- to 6-month commitment, and is ideal for large workloads that need guaranteed throughput.
Model Customization (Fine-tuning): Charged for training based on the number of tokens processed by the model, plus model storage charged per month per model. WARNING: Don’t play with custom models unless you really know what you’re doing, as these costs could be significant.
For more information, check out pricing for the different models.
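To make that rule of thumb concrete, here's a small back-of-the-envelope estimator. The per-1,000-token price used below is a made-up placeholder; substitute your model's actual on-demand rate from the pricing page.

```python
def estimate_tokens(text: str) -> int:
    """Approximate the token count using Amazon's ~6 characters per token rule."""
    return max(1, len(text) // 6)

def estimate_cost(text: str, price_per_1k_tokens: float) -> float:
    """Approximate on-demand cost for processing a given piece of text."""
    return estimate_tokens(text) / 1000 * price_per_1k_tokens

prompt = "I need an idea for an app to build on Amazon Bedrock."
print(estimate_tokens(prompt), "tokens (approx.)")
print(f"${estimate_cost(prompt, 0.00075):.6f}")  # 0.00075 is a placeholder price
```

Remember that you're typically billed for both input and output tokens, so estimate both sides of the exchange.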
How do I access Amazon Bedrock models?
Amazon Bedrock lets you tap into a variety of foundation models from big names like AI21 Labs, Anthropic, Cohere, Stability AI, and Amazon itself (with the Titan models). But you won’t have access to those models by default, so when you first get started, you’ll need to request access.
- Log in to the AWS Console and select one of the three regions where Bedrock is generally available: US East (N. Virginia), US West (Oregon), and Asia Pacific (Tokyo).
- Navigate to Amazon Bedrock.
- From the Bedrock home page, click Get started.
- To get a better idea of the models available, and what each does, on the left-hand navigation, click Base models.
- Peruse the model descriptions and decide which ones are most interesting for your use case.
- Now to request access, go to the left-hand navigation and click Model access. Then on the right, click Manage model access.
- Select the base models you’d like access to, and then click Save changes.
You won’t be charged for just having access to the model; charges only accrue when you use the model. Also, not all models will be available, and some (like Claude) will require you to submit a use case before access is granted.
After you’ve requested access, it can take up to 72 hours for it to be granted, but in my experience, it happens much faster.
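Once access is granted, you can also verify it from code. The boto3 SDK exposes a "bedrock" control-plane client with a list_foundation_models operation (separate from the "bedrock-runtime" client used for inference later in this article). The helper below just filters that response; it's shown with sample data so it runs without AWS credentials.

```python
def models_by_provider(summaries, provider):
    """Filter the modelSummaries list returned by list_foundation_models."""
    return [m["modelId"] for m in summaries if m["providerName"] == provider]

# With AWS credentials configured, you'd fetch the real list like this:
#   import boto3
#   bedrock = boto3.client(service_name="bedrock", region_name="us-west-2")
#   summaries = bedrock.list_foundation_models()["modelSummaries"]
summaries = [
    {"modelId": "amazon.titan-text-express-v1", "providerName": "Amazon"},
    {"modelId": "meta.llama2-13b-chat-v1", "providerName": "Meta"},
]
print(models_by_provider(summaries, "Amazon"))
```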
How can I use the Amazon Bedrock models through the AWS Console?
Amazon Bedrock offers three playgrounds through the AWS Console so you can get familiar with the models: Chat, Text and Image. You can access them from the left-hand navigation.
Using the Chat Playground
The Chat playground supports multi-turn conversations, where you need to iterate and build on previous context. You can also provide instructions or a persona, such as, “Act as an expert in software development.”
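The persona idea carries over to the API as well. For Llama 2 chat models, for example, a system instruction is wrapped in special tags inside the prompt itself. Here's a rough sketch of that format; each model family expects a different prompt structure, so check the documentation for the model you're using.

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Wrap a system persona and a user message in Llama 2's chat prompt format."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "Act as an expert in software development.",
    "How should I structure a Python project?",
)
print(prompt)
```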
Using the Text Playground
The Text playground is slightly simpler than the Chat playground, supporting prompts like, “Write a poem about working with AWS.” You cannot provide instructions or a persona in the Text playground.
Using the Image Playground
From charcoal sketches to hyper-realistic photos, you can get creative in the Image playground to generate just the thing you need. You can also download the image after it’s generated.
How to use the Amazon Bedrock APIs in your application
So, the playgrounds are cool and all, but the real power of this technology comes with being able to incorporate it into your own applications. And that’s where the APIs come in. You can access the APIs using the AWS CLI, an AWS SDK or a SageMaker Notebook. See the documentation for more.
Here, we’ll use Python for a simple prompt example. As noted in the code comments:
We first create the client object to interact with Bedrock.
Then we set up “input,” which is what we’ll pass to the model. TIP: You can use the playgrounds to enter your prompt and parameters, then click “View API request” to get the JSON code.
Finally, we make the call to the model and get back the response.
# To run this code, you first need to install the AWS SDK for Python called boto3
# From the terminal, type "pip install boto3"
import boto3
import json

# Create the client object for interacting with Amazon Bedrock
# Be sure to select a region where Amazon Bedrock is available
bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-west-2'
)

# The input we'll send to the model
# TIP: You can get this info in the playgrounds by clicking "View API request" and then updating the code below
input = {
    "modelId": "meta.llama2-13b-chat-v1",
    "contentType": "application/json",
    "accept": "*/*",
    "body": json.dumps({
        "prompt": "I need an idea for an app to build on Amazon Bedrock.",
        "max_gen_len": 512,
        "temperature": 0.5,
        "top_p": 0.9
    })
}

# Make the call to the model and read back the response
response = bedrock.invoke_model(body=input["body"],
                                modelId=input["modelId"],
                                accept=input["accept"],
                                contentType=input["contentType"])
response_body = json.loads(response['body'].read())

# Print the response from the model
print(response_body)
Save the code as bedrock.py, then run it from the terminal with python bedrock.py. You should get back a JSON response containing the model’s generated text (yours will likely be a little different each time).
In this example, we’re just printing the response to the console, but in the real world, you’d likely display it back to the user of your application.
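To pull out just the generated text, you can read the model-specific output field from the response body. For Llama 2, the text comes back in a "generation" field; other model families name their output fields differently, so check each model's documentation. A small sketch:

```python
import json

def extract_generation(raw_body: bytes) -> str:
    """Pull the generated text from a Llama 2 response body.
    Llama 2 on Bedrock returns JSON with a "generation" field; other
    models structure their responses differently."""
    return json.loads(raw_body)["generation"].strip()

# In the earlier example, you'd call:
#   text = extract_generation(response["body"].read())
sample = b'{"generation": " How about a recipe-suggestion app? ", "stop_reason": "stop"}'
print(extract_generation(sample))
```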
How to fine-tune and build a custom model using Amazon Bedrock
In the early days of generative AI, we were all blown away by just how good the models were. But as companies start to adopt the technology, most are realizing that the base models aren’t 100% what they need: they lack domain-specific or company-specific knowledge. And that’s where fine-tuning and custom models come in.
With fine-tuning, you take one of the base models (like Llama or Titan) with its general knowledge, then you augment it with your own data.
In Amazon Bedrock, you can get to this functionality by clicking on Custom models on the left-hand navigation, then clicking Customize model→Create Fine-tune job.
WARNING: Custom models can be very expensive, so I would not recommend going through with this unless you really know what you’re doing and have the budget to support it. 🙂 I’ll get you started, but you can find full instructions in the documentation.
On the next page, you’ll need to enter the following details:
Model details: Choose the base model that you want to fine-tune, and then give your new fine-tuned model a name.
Job configuration: Enter a unique name for the training job.
Input data: Point to the training data (and optionally validation data) that you want to use to train the model. This data should live in an S3 bucket.
Hyperparameters: Adjust hyperparameters for the model (optional), or accept the defaults.
Output data: Enter the S3 location where the training job output data should live.
Service access: Amazon Bedrock requires permissions to write to S3, so you’ll need to specify a service role with those permissions.
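As a sketch of what the input data looks like: Bedrock fine-tuning expects JSON Lines, with one prompt/completion pair per line. The pairs and file name below are invented for illustration; you'd upload the resulting file to your S3 bucket before starting the job.

```python
import json

# Made-up example pairs -- in practice this would be your own domain data
examples = [
    {"prompt": "What is our refund window?", "completion": "30 days from delivery."},
    {"prompt": "Which plan includes SSO?", "completion": "The Enterprise plan."},
]

# Write one JSON object per line (the JSON Lines format)
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```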
Once you click Fine-tune model, the training job will begin. This could take a while, depending on the base model and how much data you’re passing in. You’ll want to monitor the job, analyze the results, and then eventually use the fine-tuned model for inference.
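The same job can be started from code with the create_model_customization_job operation on the boto3 "bedrock" client. Every name, ARN, and bucket path below is a placeholder; the helper only assembles the request arguments, so it runs without live credentials.

```python
def build_fine_tune_job(job_name, model_name, role_arn, base_model,
                        train_s3, output_s3, epochs="2"):
    """Assemble keyword arguments for bedrock.create_model_customization_job.
    All values passed in here are placeholders for illustration."""
    return {
        "jobName": job_name,
        "customModelName": model_name,
        "roleArn": role_arn,
        "baseModelIdentifier": base_model,
        "trainingDataConfig": {"s3Uri": train_s3},
        "outputDataConfig": {"s3Uri": output_s3},
        "hyperParameters": {"epochCount": epochs},
    }

kwargs = build_fine_tune_job(
    job_name="my-fine-tune-job",
    model_name="my-custom-model",
    role_arn="arn:aws:iam::123456789012:role/BedrockFineTuneRole",
    base_model="amazon.titan-text-express-v1",
    train_s3="s3://my-bucket/train.jsonl",
    output_s3="s3://my-bucket/output/",
)

# With credentials and the service role in place, you'd start the job like this:
#   import boto3
#   bedrock = boto3.client(service_name="bedrock", region_name="us-west-2")
#   job = bedrock.create_model_customization_job(**kwargs)
#   print(job["jobArn"])
```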
Wrapping up
By now, hopefully you’ve got a good idea of how to get started with Amazon Bedrock—whether using the UI playgrounds, the APIs, or even fine-tuning your own custom model. This is a fast-growing space, so keep an eye on the Amazon blog for the latest Bedrock news and releases.
You might also enjoy these resources if you want to dig deeper on generative AI: