How to Use AWS Bedrock: Build AI Chatbots with Console and Python

Learn about AWS Bedrock and how to access powerful foundation models like Claude. Explore the AWS Console and Python SDK and build a customer support chatbot.

Imagine building a generative AI chatbot from scratch. You’d need to choose the right model, set up infrastructure, manage scaling, monitor usage, and ensure security. Sounds complex, right?

That’s precisely why AWS Bedrock exists. It lets you access powerful foundation models like Claude, Titan, and Mistral—all through a simple API without managing any infrastructure.

Let’s start by understanding what AWS Bedrock is and what makes it so powerful.

What is AWS Bedrock?

AWS Bedrock is a fully managed service by Amazon Web Services that allows you to build and scale generative AI applications using foundation models without the need to manage any underlying infrastructure. Instead of spending time setting up and maintaining servers or training your own large language models (LLMs), you can access industry-leading models through simple API calls.

It’s designed for developers and businesses that want to create innovative AI applications like chatbots, summarization tools, or data extraction systems without worrying about deployment, scaling, or security.

It supports a growing list of top foundation model providers, including Amazon, Anthropic, Cohere, Meta, and Mistral AI.

These models can be used for a wide range of use cases, such as:

  • Summarization
  • Question answering
  • Chatbots
  • Text generation
  • Information extraction

But what makes AWS Bedrock a powerful choice for developers? Let’s explore its features.

Related Course

Generative AI on AWS: Building GenAI Models with Amazon Bedrock

Unlock Amazon Bedrock’s potential for GenAI. Learn core concepts, model design, project evaluation, and innovative GenAI business application development.

Try it for free

What are the features of AWS Bedrock?

Bedrock helps you focus on the application rather than the infrastructure. Whether you’re developing a production-grade app or just experimenting, its features are designed to meet you where you are.

The following are some of the most notable characteristics that make AWS Bedrock a preferred option for GenAI developers:

  • No infrastructure management required: You don’t need to provision, scale, or maintain servers or GPUs. Bedrock handles all the backend operations.

  • No model fine-tuning needed: Use foundation models out of the box or improve them with knowledge bases without the need for extra training.

  • Pay-as-you-go pricing: Pay only for what you use, which makes it scalable for enterprise needs and cost-effective for startups.

  • Unified API access to multiple models: A single API format lets you experiment with different models (such as Titan or Claude) without rewriting your code.

  • AWS ecosystem integration: Integrate Bedrock with services such as IAM for permissions, S3 for data storage, and Lambda for compute logic.

  • Guardrails and moderation: Optional features like content filters and safety controls help you build responsible AI applications.

  • High availability and scalability: Being part of AWS, Bedrock offers enterprise-grade reliability and global scaling options.
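To make the “unified API access” point concrete, here is a minimal sketch of the idea. The helper function below is hypothetical (it is not part of boto3); it builds keyword arguments in the shape expected by Bedrock’s model-agnostic Converse API, so that switching models only means swapping the model ID:

```python
def build_converse_kwargs(model_id, user_text):
    """Return keyword arguments for bedrock_runtime.converse().

    Hypothetical helper for illustration: the message payload shape stays
    the same regardless of which foundation model you target.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]}
        ],
        "inferenceConfig": {"maxTokens": 500, "temperature": 0.3},
    }

# The same prompt sent to two different models -- only modelId changes:
claude_kwargs = build_converse_kwargs(
    "anthropic.claude-3-haiku-20240307-v1:0", "Summarize our refund policy."
)
titan_kwargs = build_converse_kwargs(
    "amazon.titan-text-express-v1", "Summarize our refund policy."
)
```

You would then pass either dictionary to `client.converse(**kwargs)` on a `bedrock-runtime` client; the examples later in this article use the lower-level `invoke_model` call instead, which takes a provider-specific request body.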

We’ll start by exploring AWS Bedrock through the Console.

How to use AWS Bedrock with the AWS Console

The AWS Console offers a beginner-friendly way to interact with Bedrock’s foundation models, with no SDKs or command line interface (CLI) setup required. In this section, we’ll walk through setting it up, understanding providers and models, and testing prompts in the browser.

Creating an AWS account

To get started with AWS Bedrock, you’ll need an AWS account.

  • Go to the home page of AWS
  • Click Create an AWS Account

AWS home page with Create an AWS account button

  • Follow the given steps and enter your email, password, and billing info
  • Once done, log in to the AWS Management Console

Amazon Console home page

Navigate to AWS Bedrock

After logging in:

  • Type “AWS Bedrock” in the search bar

searching for AWS Bedrock in the search bar on console

  • Click on Amazon Bedrock in the search results
  • You’ll land on the Bedrock home dashboard, where you can access models and test prompts

AWS bedrock homepage

What are the providers in AWS Bedrock?

In Bedrock, providers are companies that have created foundation models. AWS has partnered with top AI research labs and companies, allowing developers to use their models directly without training or hosting them.

Each provider offers different models optimized for tasks like chat, summarization, or code generation.

Providers in AWS Bedrock

Note: The available providers and models vary depending on the AWS Region selected.

Here’s a breakdown of some of the providers and their model families, along with their usage insights:

1. Amazon: Titan Models

  • Built by AWS and designed for high performance and tight integration with AWS services.
  • Includes Titan Text and Embeddings.
  • Great for search, text generation, and data extraction.

2. Anthropic: Claude Models

  • Known for being safe, honest, and helpful.
  • Great for conversation, reasoning, and complex dialogue.
  • Ideal for customer support, chatbots, and assistants.

3. Cohere: Command and Embed Models

  • Focused on retrieval-augmented generation (RAG) and enterprise NLP.
  • Includes Command R+ and Embed models.
  • Great for search, classification, and document intelligence.

4. Meta: Llama Models

  • These are open-weight models that offer competitive performance.
  • Includes Llama 2 and Llama 3 models.
  • Useful for fine-tuned applications, research, and cost-effective deployment.

5. Mistral: Mistral 7B, Mixtral

  • Focused on speed and compactness.
  • Mixtral is a mixture-of-experts model.
  • Good for low-latency chat, reasoning, and code generation.

Testing prompts in the Bedrock Playground

Once you’ve selected a foundation model, you can begin experimenting with it using the Bedrock Playground, a built-in web interface that lets you send prompts, configure settings, and observe outputs in real-time. This is an ideal way to understand how different models behave before integrating them into your application.

Go to Playground from the nav bar on the left and select “Chat” mode. In Chat mode, you interact with conversational models by entering messages and viewing responses in a chat-like flow. Then choose a model; we’re using Claude 3 Haiku.

Bedrock playground with Claude 3 Haiku model

Note: If Claude 3 Haiku or other models are not visible in your Playground, you may need to request model access via your AWS account.

Requesting model access

You can test Claude’s response style with this customer support prompt:

A customer says their order is delayed. Write a helpful and polite response by apologizing and offering an update.

Understanding prompt parameters

Once you select your model and enter a prompt, you’ll see a few key configuration options that shape how the model replies:

Prompt parameters including temperature, Top P, Top K, maximum length, and stop sequences

1. Temperature: Controls randomness and creativity of the response.

  • Lower (0.1–0.3): More predictable, professional tone
  • Higher (0.7–1.0): More creative or varied output

    Recommended: 0.3–0.5 for customer support

2. Maximum length: Defines how long the output can be.

  • 300–500 tokens is ideal for full replies

    If you need shorter or snappier answers, reduce this value

3. Top-k / Top-p (Optional): Advanced settings that influence word choice diversity. The default works well in most cases.

4. Stop sequences: You can define when the model should stop generating. This is useful if you want cleaner output (e.g., after one reply only).
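The same parameters are available when you call the model programmatically. As a sketch, the hypothetical helper below maps the Playground settings above onto the request-body fields of Anthropic’s Messages API format on Bedrock (the same format used later in this article); treat the default values as illustrative, not recommendations:

```python
import json

def build_request_body(prompt, temperature=0.3, max_tokens=500,
                       top_p=0.9, stop_sequences=None):
    """Build a Claude request body string from Playground-style settings."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,                 # maximum length
        "temperature": temperature,               # randomness / creativity
        "top_p": top_p,                           # word-choice diversity
        "stop_sequences": stop_sequences or [],   # where to stop generating
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    })

# A customer-support-friendly configuration: low temperature, mid-size reply
body = build_request_body(
    "A customer says their order is delayed. Write a helpful, polite reply.",
    temperature=0.3,
)
```

This string is what you would pass as the `body` argument to `invoke_model`, as shown in the SDK section below.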

Here is the output generated for the given prompt by applying these parameters:

Output generated by the Claude 3 Haiku model

If you use the same prompt with other models, you will notice:

  • Claude’s responses are more natural and empathetic.
  • Titan’s responses are concise but slightly formal.
  • Mistral tends to give briefer, less conversational replies.

Now that you’ve explored Bedrock in the Console, let’s see how to integrate it into your Python projects using the boto3 SDK.

Using AWS Bedrock via Python SDK (boto3)

While the AWS Console is great for initial exploration, real-world applications demand automation, and that’s where the Python SDK (boto3) comes in. Let’s set it up and send our first request.

Install the boto3 SDK

Start by installing boto3 with the following command in the terminal:

pip install boto3

Configure AWS credentials

To configure your AWS credentials, run the following command:

aws configure

Note: If you get an error that says 'aws' is not recognized as an internal or external command, operable program or batch file, use the following command to install AWS CLI:

pip install awscli

Use Claude 3 Haiku in VS Code

Open VS Code and start by importing the necessary libraries:

import boto3
import json

Here:

  • boto3 is the AWS SDK that lets your Python code talk to AWS services.
  • json helps format and parse JSON data, which is required to interact with Bedrock models.

Next, create a Bedrock client and set up a sample prompt:

bedrock_object = boto3.client('bedrock-runtime', region_name='ap-south-1')
prompt = "What is the use of AWS Bedrock?"

Here:

  • boto3.client() sets up a client to communicate with the bedrock-runtime service.
  • We’re using the Mumbai region (ap-south-1), where Bedrock is available.
  • prompt is the user query that we want to send to the model.

Once the client is set up, let’s prepare the model parameters:

kwargs = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "contentType": "application/json",
    "accept": "application/json",
    "body": json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1000,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": prompt
                    }
                ]
            }
        ]
    })
}

In the model parameters:

  • modelId: The exact model ID of the Claude 3 Haiku version hosted on Bedrock.

  • contentType / accept: Both are set to "application/json" for proper request/response formats.

  • body: A JSON string that follows Anthropic’s Messages API format:

    • anthropic_version: Required identifier for the message format.
    • max_tokens: Limits how long the model’s response can be.
    • messages: A list of user-assistant exchanges, starting with your prompt.

Note: You can find sample API request formats and model IDs in the AWS Bedrock Console under Model catalog > Name of the model.

Lastly, invoke the model and unpack the response:

response = bedrock_object.invoke_model(**kwargs)
body = json.loads(response['body'].read())
print(body)

Here:

  • invoke_model(**kwargs): Sends the prompt and parameters to the Claude 3 Haiku model and waits for a response.

  • response['body'].read(): Reads the raw response stream.

  • json.loads(...): Parses the result into a Python dictionary.

  • print(body): Displays the output from the Claude model.

A sample output of this code is:

{'id': 'msg_bdrk_01MaKMX8r2F5NqChWNeBqUwe', 'type': 'message', 'role': 'assistant', 'model': 'claude-3-haiku-20240307', 'content': [{'type': 'text', 'text': 'AWS Bedrock is a managed service from AWS that provides a secure and scalable environment for running large language models (LLMs) and other AI/ML workloads. Some key uses and features of AWS Bedrock include:\n\n1. Model hosting and deployment: Bedrock allows you to deploy and run your own custom-trained LLMs or use pre-trained models provided by AWS in a fully managed environment.\n\n2. Inference scaling: Bedrock automatically scales the infrastructure to handle fluctuations in inference workloads, allowing you to handle spikes in demand.\n\n3. Model fine-tuning: Bedrock supports fine-tuning of pre-trained models on your own data to adapt them to your specific use cases.\n\n4. Data governance: Bedrock provides security and governance controls to ensure data privacy and compliance for your AI/ML workloads.\n\n5. Integration with other AWS services: Bedrock can integrate with other AWS services like Amazon S3, Amazon SageMaker, and Amazon Athena to enable end-to-end AI/ML workflows.\n\n6. Cost optimization: Bedrock offers flexible pricing options and can help optimize costs by automatically scaling resources based on demand.\n\nOverall, AWS Bedrock is designed to make it easier for organizations to develop, deploy, and manage large language models and other AI/ML applications at scale, without having to worry about the underlying infrastructure complexity.'}], 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 17, 'output_tokens': 306}}
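The full dictionary is verbose; in practice you usually want only the generated text. Given a parsed body in the shape shown above, a small helper (ours, not part of boto3) can pull it out:

```python
def extract_text(body):
    """Concatenate the text parts of a parsed Claude response body."""
    return "".join(
        part["text"] for part in body.get("content", [])
        if part.get("type") == "text"
    )

# Example with a trimmed-down body in the same shape as the output above:
sample = {
    "content": [{"type": "text", "text": "AWS Bedrock is a managed service..."}],
    "stop_reason": "end_turn",
}
print(extract_text(sample))
```

Wrapping this access in a helper also guards against `content` containing multiple parts, which the raw `body['content'][0]['text']` indexing would silently ignore.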

With this foundation in place, you’re ready to move forward and start building a customer support chatbot using Claude and AWS Bedrock.

Build a customer support chatbot using AWS Bedrock Claude

Let’s build a smart customer support chatbot with Claude using AWS Bedrock’s Python SDK. From setting up a chat interface to generating safe, helpful responses, let’s dive in!

Step 1: Set up a command line chat interface

We’ll use a basic CLI so users can type in queries and receive AI-generated replies. Start by importing the necessary libraries:

import boto3
import json

Step 2: Configure the Claude model with Bedrock

Next, create a Bedrock client in the ap-south-1 (Mumbai) Region and prompt the user for input:

# Set up the client
bedrock = boto3.client("bedrock-runtime", region_name="ap-south-1")
# Prompt the user
user_input = input("Ask your support question: ").strip()

Using .strip() removes leading and trailing whitespace from the input.

Step 3: Validate user input (empty or too long)

Before invoking Claude, it’s good practice to validate the input:

# Basic input validation
if not user_input:
    print("Please enter a valid question.")
    exit()

if len(user_input) > 1000:
    print("Your message is too long. Please shorten it and try again.")
    exit()

Empty inputs and overly long prompts can break the response quality or exceed token limits.

Step 4: Format the prompt for Claude (messages API)

Claude 3 models require a messages array with a role-based structure. This format is explained in AWS’s Claude SDK documentation:

# Create request payload for Claude
kwargs = {
"modelId": "anthropic.claude-3-haiku-20240307-v1:0",
"contentType": "application/json",
"accept": "application/json",
"body": json.dumps({
"anthropic_version": "bedrock-2023-05-31",
"max_tokens": 1000,
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": user_input
}
]
}
]
})
}

Step 5: Invoke the model and print the response

Send the formatted request and display the output:

# Send the request and print the response
response = bedrock.invoke_model(**kwargs)
result = json.loads(response['body'].read())
# Output Claude's reply
print("\nClaude says:", result['content'][0]['text'])

You can wrap this in a loop or a function for a full-fledged chat system.
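As a sketch of that idea, the steps above can be factored into a reusable function plus a simple loop. This is one possible structure, not the only one; the loop calls AWS, so it assumes the credentials and Region configured earlier, and the user types “quit” to exit:

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def ask_claude(client, user_text, max_tokens=1000):
    """Send one user message to Claude via Bedrock and return the reply text."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user",
                      "content": [{"type": "text", "text": user_text}]}],
    })
    response = client.invoke_model(modelId=MODEL_ID,
                                   contentType="application/json",
                                   accept="application/json",
                                   body=body)
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]

def chat_loop():
    # boto3 is imported here so the helper above stays usable without AWS
    import boto3
    client = boto3.client("bedrock-runtime", region_name="ap-south-1")
    while True:
        user_input = input("You (or 'quit'): ").strip()
        if user_input.lower() == "quit":
            break
        if not user_input or len(user_input) > 1000:
            print("Please enter a question under 1000 characters.")
            continue
        print("\nClaude says:", ask_claude(client, user_input))

if __name__ == "__main__":
    chat_loop()
```

Keeping the request logic in `ask_claude` also makes it easy to test with a fake client or to add error handling (for example, catching throttling exceptions) in one place.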

Let’s give a sample input to this chatbot and check its response:

Can you tell me the specifications of the latest Galaxy S series phone?

A sample output generated by the chatbot is:

Claude says: Here are the key specifications for the latest flagship Samsung Galaxy S series phone, the Galaxy S23 Ultra:
Display:
- 6.8-inch Dynamic AMOLED 2X display
- 3088 x 1440 pixel resolution (Quad HD+)
- 120Hz adaptive refresh rate
- Gorilla Glass Victus+ protection
Processor:
- Qualcomm Snapdragon 8 Gen 2 chipset
RAM & Storage:
- 8GB/12GB RAM
- 256GB/512GB/1TB storage options
Camera:
- Quad camera setup:
- Primary: 200MP main camera
- Ultra-wide: 12MP
- Telephoto: 10MP with 3x optical zoom
- Telephoto: 10MP with 10x optical zoom
- 40MP front-facing camera
Battery:
- 5,000mAh battery
- 45W wired fast charging
- Wireless charging support
Other Features:
- IP68 water and dust resistance
- S Pen stylus support
- 5G connectivity
- Android 13 with One UI 5.1 software
The Galaxy S23 Ultra is the top-end model in Samsung's latest Galaxy S23 series lineup. It features a premium design, powerful specs, and advanced camera capabilities.

You’re now set to create smarter, safer chatbots with Claude and AWS Bedrock!

Conclusion

In this article, we explored how AWS Bedrock enables developers to build powerful generative AI applications by providing easy access to leading foundation models like Claude without managing infrastructure. We saw how to use the AWS Console and Python SDK to create scalable, secure, and intelligent chatbots efficiently and effectively.

If you’re eager to advance your skills in AI-powered applications, check out Codecademy’s Build AI Chatbots with Python course.

Frequently asked questions

1. What models are supported by AWS Bedrock?

Bedrock supports foundation models such as Claude (Anthropic), Amazon Titan, Meta Llama, Mistral, and Cohere Command, among others.

2. What is AWS Bedrock used for?

AWS Bedrock enables developers to build scalable generative AI applications using foundation models without managing infrastructure.

3. Does AWS Bedrock support fine-tuning?

Bedrock supports fine-tuning and continued pre-training for select models through its model customization feature; for other models, you work with prompt engineering and knowledge bases instead.

4. What’s the difference between AWS Bedrock and SageMaker?

AWS Bedrock provides easy access to foundation models via a serverless API for generative AI without infrastructure management. SageMaker offers a full ML platform for building, training, and deploying custom models with more control over data and compute.

5. What is the difference between AWS Bedrock and Amazon Q?

  • AWS Bedrock is a serverless service providing access to multiple foundation models for creating generative AI applications without managing infrastructure. It supports text, image generation, chatbots, and more.

  • Amazon Q is a fully managed AI chat assistant built on Bedrock models, designed for enterprise productivity, integrated tightly with tools like Amazon CodeWhisperer and QuickSight to assist with workplace tasks and BI insights.

6. Can I use AWS Bedrock for free?

AWS Bedrock does not have a free tier—usage is billed based on input and output tokens processed.
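Because billing is per token, you can estimate costs from the `usage` field that each response returns (the earlier sample reported 17 input and 306 output tokens). The prices below are placeholders for illustration only; check the AWS Bedrock pricing page for real, model-specific numbers:

```python
# Hypothetical per-1K-token prices -- placeholders, NOT real AWS prices.
PRICE_PER_1K_INPUT = 0.00025
PRICE_PER_1K_OUTPUT = 0.00125

def estimate_cost(input_tokens, output_tokens):
    """Estimate the cost (in USD) of one invocation from its token usage."""
    return ((input_tokens / 1000) * PRICE_PER_1K_INPUT
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT)

# Token counts taken from the sample response's 'usage' field earlier:
cost = estimate_cost(17, 306)
```

Multiplying such per-call estimates by your expected request volume gives a rough monthly budget before you commit to a model.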

Codecademy Team

The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.