Building Multi-Agent Applications with CrewAI
Have you ever wished you could have a team of AI specialists working on your projects? Imagine having one AI expert for research, another for writing, and a third for editing - all working together seamlessly. Traditional AI tools give you one general-purpose assistant, but what if you need specialized expertise for complex tasks?
The world of artificial intelligence is rapidly evolving, and one of the most exciting developments is the ability to create teams of AI agents that can work together to accomplish complex tasks. CrewAI makes this possible by providing a robust framework for building multi-agent applications where different AI agents can collaborate, each bringing their own specialized skills to solve problems that would be difficult for a single agent to handle alone.
What is CrewAI?
CrewAI is a free, open-source framework for building teams of AI agents. Unlike working with a single large language model (LLM), CrewAI allows you to create multiple specialized agents that can collaborate on complex tasks, each with their own role, goals, and expertise.
Think of CrewAI as a team of helpers, each good at one thing: one finds information, another writes, and a third checks the writing. Together, they get the job done.
The framework is built on the concept that complex problems are often best solved through collaboration. By breaking down large tasks into smaller, specialized roles, CrewAI helps you create more efficient and effective AI solutions.
Key components of CrewAI
CrewAI consists of several essential components that work together:
Agents: Individual AI entities with specific roles, goals, and capabilities
Tasks: Defined objectives that agents need to accomplish
Crews: Teams of agents working together toward a common goal
Tools: External capabilities that agents can use to perform their tasks
Processes: Workflows that define how agents collaborate and execute tasks
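To make these components concrete, here is a toy model of how they fit together. This is NOT the real CrewAI API — just a simplified Python sketch showing how agents, tasks, and a crew relate under a sequential process:

```python
# Toy model of CrewAI's building blocks -- illustrative only, not CrewAI's API.
from dataclasses import dataclass, field


@dataclass
class Agent:
    role: str
    goal: str
    tools: list = field(default_factory=list)  # external capabilities


@dataclass
class Task:
    description: str
    agent: Agent  # each task is assigned to one agent


@dataclass
class Crew:
    agents: list
    tasks: list

    def kickoff(self):
        # A "sequential process": tasks run in order,
        # each handled by its assigned agent.
        log = []
        for task in self.tasks:
            log.append(f"{task.agent.role} -> {task.description}")
        return log


researcher = Agent(role="Researcher", goal="Gather information")
writer = Agent(role="Writer", goal="Draft content")

crew = Crew(
    agents=[researcher, writer],
    tasks=[
        Task("research the topic", researcher),
        Task("write the post", writer),
    ],
)
print(crew.kickoff())
```

The real framework adds LLM calls, memory, and tool execution on top of this basic shape, but the relationships between the pieces are the same.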
Benefits of using CrewAI
CrewAI offers numerous advantages over traditional single-agent approaches, making it a powerful choice for complex AI applications.
Specialization and expertise
Instead of one general-purpose AI that does everything passably, you get several specialists. Each agent brings a high level of expertise to its designated task. A research agent can focus on gathering information, while a writing agent concentrates on creating content.
Improved problem-solving capabilities
Complex problems often require multiple perspectives and skill sets. CrewAI enables different agents to tackle different aspects of a problem simultaneously, leading to more comprehensive solutions.
Scalability and flexibility
As your needs grow, you can easily add new agents with different specializations to your crew. Need translation capabilities? Add a translation agent. This modular approach makes your AI system highly adaptable.
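For example, adding that translation agent is often just another entry in the crew's configuration. The following hypothetical translator entry follows the same YAML shape used in the `agents.yaml` file later in this tutorial (the role, goal, and backstory text here are purely illustrative):

```yaml
# Hypothetical additional agent -- could be added to config/agents.yaml
translator:
  role: >
    Multilingual Translator
  goal: >
    Translate finished content accurately while preserving tone and meaning
  backstory: >
    You're a professional translator who renders technical content
    faithfully into the target language without losing nuance.
```

Because agents are defined declaratively like this, extending a crew rarely requires restructuring the rest of the system.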
Quality control and validation
Multiple agents can review and validate each other’s work, similar to peer review processes. This collaborative approach helps catch errors and improve the overall quality of outputs.
Cost optimization
CrewAI works great with local models through Ollama (a free program that runs AI on your computer). This means no expensive monthly fees or usage limits. You can run smart AI models on your own computer without ongoing costs.
Privacy and control
By using local models with Ollama, your data never leaves your machine. This keeps your information completely private and under your control.
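To see what "local" means concretely: Ollama exposes an HTTP API on your own machine, by default at http://localhost:11434. Below is a minimal sketch of the request a client would send to Ollama's generate endpoint. We only build and inspect the JSON payload here, without actually calling the server, so nothing is sent anywhere:

```python
import json

# Ollama's local endpoint (default port); requests never leave localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "mistral",   # the model pulled via `ollama pull mistral`
    "prompt": "Summarize CrewAI in one sentence.",
    "stream": False,      # ask for a single JSON response, not a stream
}

body = json.dumps(payload).encode("utf-8")
print(OLLAMA_URL)
print(json.loads(body)["model"])

# To actually send it (requires Ollama running locally):
# import urllib.request
# req = urllib.request.Request(
#     OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Every byte of that exchange stays on your machine, which is the whole privacy argument in miniature.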
Building a multi-agent app with CrewAI
Let’s walk through creating your first multi-agent application using CrewAI. We’ll build a content creation crew that includes a researcher, writer, and editor working together to create blog posts.
Prerequisites
Before we start, ensure you have:
Python 3.10 or higher (but less than 3.13)
Sufficient disk space for AI model downloads
Basic familiarity with Python programming
Step 1: Install Ollama
First, we need to install Ollama. This program will run our AI models on our own computer.
On Windows:
Download the Windows installer from ollama.ai and run it.
On macOS:
Download the macOS app from ollama.ai and install it like any other Mac application, or use Homebrew (a tool for installing programs):
brew install ollama
On Linux:
curl -fsSL https://ollama.ai/install.sh | sh
Step 2: Download AI models
Once Ollama is installed, the next step is to download the models we’ll use:
# Download Mistral (fast, efficient, and reliable)
ollama pull mistral
Verify the model is available:
ollama list
We should see mistral:latest in the list.
Step 3: Install CrewAI
CrewAI uses uv (a fast Python package manager) as its dependency management tool. This makes the setup easier. For detailed installation instructions, we can also refer to the official CrewAI installation guide.
Install uv (Skip if already installed)
On Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
On macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
If we don’t have curl (a download tool), use wget:
wget -qO- https://astral.sh/uv/install.sh | sh
Install CrewAI
Run the following command to install the CrewAI command line interface (CLI):
uv tool install crewai
If you encounter a PATH warning, update your shell (tell your computer where to find the program):
uv tool update-shell
Verify the installation (check that it worked):
uv tool list
We should see something like:
crewai v0.102.0
- crewai
Step 4: Create your project
Create a new CrewAI project by running:
crewai create crew content_creation_crew
Important: When we run this command, CrewAI will ask us to pick a provider (which AI service to use). We’ll see a list like this:
Creating folder content_creation_crew...
Select a provider to set up:
1. openai
2. anthropic
3. gemini
4. nvidia_nim
5. groq
6. huggingface
7. ollama
8. watson
9. bedrock
10. azure
11. cerebras
12. sambanova
13. other
q. Quit
Enter the number of your choice or 'q' to quit
Select option 7 (ollama) since we’re using Ollama for this tutorial.
This command creates a project folder with all the files we need, already set up for us:
content_creation_crew/
├── .gitignore
├── knowledge/
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── content_creation_crew/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
Step 5: Configure environment variables
Since we’re running models locally through Ollama, no API keys are required, so the default .env file can stay as-is. Navigate to our project directory:
cd content_creation_crew
Step 6: Define your agents
Now we’ll create three specialized agents for our content creation pipeline. Each agent has a specific role, goal, and backstory that defines their behavior.
Edit the config/agents.yaml file to define our team of agents:
researcher:
  role: >
    Senior Research Analyst
  goal: >
    Uncover cutting-edge developments and trends in the specified topic
  backstory: >
    You're a seasoned research analyst with a knack for uncovering the latest
    developments in technology and business. Known for your ability to find the most
    relevant information and present it clearly. You work methodically and provide
    comprehensive insights based on your knowledge.

writer:
  role: >
    Tech Content Strategist
  goal: >
    Craft compelling content on tech topics that resonates with the target audience
  backstory: >
    You're a renowned content strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives that captivate and educate
    your audience. Your writing is clear, engaging, and accessible.

editor:
  role: >
    Chief Content Editor
  goal: >
    Ensure the highest quality of content through thorough editing and review
  backstory: >
    With a keen eye for detail and a passion for excellence, you review content to ensure
    it meets the highest standards of quality, clarity, and engagement. You focus on
    improving flow, correcting errors, and enhancing readability.
What this configuration does:
Researcher: Gathers and analyzes information about the given topic
Writer: Creates engaging, well-structured content from research
Editor: Reviews and polishes content for publication quality
Step 7: Define tasks
Tasks specify the actual work each agent will perform. Each task is assigned to a specific agent and can depend on the output of previous tasks, creating a workflow pipeline.
Edit the config/tasks.yaml file to specify what each agent should do:
research_task:
  description: >
    Conduct a comprehensive analysis of the latest advancements in {topic}.
    Identify key trends, breakthrough technologies, and notable industry developments.
    Use your existing knowledge to provide detailed insights and analysis.
    Your final answer MUST be a detailed research report with key findings and insights.
  expected_output: >
    A comprehensive 3-paragraph research report with introduction, key findings, and future implications
  agent: researcher

writing_task:
  description: >
    Using the research report from the researcher, write a compelling blog post about {topic}.
    Make the content engaging and accessible to a general tech audience.
    Your post should be informative yet easy to understand, with a clear structure
    including introduction, main content sections, and conclusion.
  expected_output: >
    A well-structured blog post with engaging title, introduction, main content, and conclusion
  agent: writer
  context:
    - research_task

editing_task:
  description: >
    Review the blog post for clarity, engagement, and accuracy. Edit for grammar,
    style, and flow. Ensure the content is polished and ready for publication.
    Focus on improving readability and ensuring the message is clear and compelling.
  expected_output: >
    A polished, publication-ready blog post with any necessary corrections and improvements
  agent: editor
  context:
    - writing_task
What this configuration does:
research_task: The first task where the researcher analyzes the given topic and creates a detailed report
writing_task: Takes the research output as context and creates a blog post from it
editing_task: Uses the blog post as context and refines it for publication
Sequential workflow: Each task builds on the previous one, with context defining dependencies
Dynamic input: The {topic} placeholder gets replaced with user input at runtime
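The placeholder mechanism behaves like Python string formatting: conceptually, the inputs passed at kickoff are substituted into each task description. A minimal illustration in plain Python (not CrewAI itself):

```python
# A task description as written in tasks.yaml, with a {topic} placeholder.
description = (
    "Conduct a comprehensive analysis of the latest advancements in {topic}."
)

# At runtime, the inputs supplied by the user fill the placeholder.
inputs = {"topic": "artificial intelligence in healthcare"}
rendered = description.format(**inputs)
print(rendered)
```

This is why the same crew can produce content on any subject: only the inputs change, never the YAML.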
Step 8: Set up the crew
The crew file acts as the manager: it connects our agents to their tasks and defines how they work together.
The crew.py file controls our agents and tasks. Here’s how it should look:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai import LLM


@CrewBase
class ContentCreationCrewCrew():
    """ContentCreationCrew crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    def __init__(self) -> None:
        # Initialize the LLM with Ollama using Mistral
        self.llm = LLM(
            model="ollama/mistral",
            base_url="http://localhost:11434"
        )

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            llm=self.llm,
            verbose=True
        )

    @agent
    def writer(self) -> Agent:
        return Agent(
            config=self.agents_config['writer'],
            llm=self.llm,
            verbose=True
        )

    @agent
    def editor(self) -> Agent:
        return Agent(
            config=self.agents_config['editor'],
            llm=self.llm,
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
            agent=self.researcher()
        )

    @task
    def writing_task(self) -> Task:
        return Task(
            config=self.tasks_config['writing_task'],
            agent=self.writer(),
            context=[self.research_task()]
        )

    @task
    def editing_task(self) -> Task:
        return Task(
            config=self.tasks_config['editing_task'],
            agent=self.editor(),
            context=[self.writing_task()]
        )

    @crew
    def crew(self) -> Crew:
        """Creates the ContentCreationCrew crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )
What this code does:
LLM setup: Connects to Ollama (the program running AI on your computer) with the Mistral model
Agent creation: Builds three agents that share the same underlying model but take on different roles defined in the YAML configuration
Task setup: Creates tasks and connects them to their agents
Task flow: Links tasks together so the output from one becomes input for the next
Crew building: Combines all agents and tasks into one team that works step by step
Detailed logging: Shows you what each agent is doing, so you can watch the process
Step 9: Configure the main entry point
This is the script that users will run to interact with our multi-agent system. It handles user input, manages the crew execution, and displays results.
Update main.py to handle input and run our crew:
#!/usr/bin/env python
import sys

from content_creation_crew.crew import ContentCreationCrewCrew


def run():
    """Run the crew with a specific topic."""
    print("Welcome to the Content Creation Crew!")
    print("This crew will help you create comprehensive blog posts on any topic.")
    print()

    topic = input("Enter the topic you want to create content about: ")
    if not topic.strip():
        print("Please provide a valid topic.")
        return

    print(f"\nCreating content about: {topic}")
    print("This may take a few minutes as the agents collaborate...")
    print("-" * 50)

    inputs = {'topic': topic}
    try:
        result = ContentCreationCrewCrew().crew().kickoff(inputs=inputs)
        print("\n" + "=" * 50)
        print("FINAL RESULT:")
        print("=" * 50)
        print(result)
    except Exception as e:
        print(f"An error occurred: {e}")
        print("Make sure Ollama is running and the mistral model is available.")
        print("Try running: ollama list")
        print("If mistral is not listed, run: ollama pull mistral")


def train():
    """Train the crew for a given number of iterations."""
    topic = input("Enter the topic for training: ")
    inputs = {'topic': topic}
    try:
        ContentCreationCrewCrew().crew().train(n_iterations=int(sys.argv[1]), inputs=inputs)
    except Exception as e:
        raise Exception(f"An error occurred while training the crew: {e}")


if __name__ == "__main__":
    run()
What this code does:
User Interface: Provides a friendly welcome message and prompts for input
Input Validation: Checks that the user provides a valid topic
Crew Execution: Creates the crew instance and runs it with the provided topic
Progress Feedback: Shows the user what’s happening during execution
Result Display: Formats and displays the final output from all three agents
Error Handling: Provides helpful troubleshooting messages if something goes wrong
Training Function: Includes an optional training mode for improving agent performance over time
Step 10: Install dependencies and run
Install the necessary dependencies:
crewai install
Make sure Ollama is running:
# Check if Ollama is already running
ollama list

# If not running, start the Ollama service
ollama serve
Note: If we get an “address already in use” error, it means Ollama is already running, which is good!
Run our crew:
crewai run
When prompted, enter a topic like “artificial intelligence in healthcare” and watch our agents collaborate to create a comprehensive blog post!
Understanding the workflow
When we run the crew, here’s the sequential process:
Researcher analyzes the topic using its knowledge base
Writer takes the research and crafts an engaging blog post
Editor reviews and polishes the content for publication
Each agent builds upon the previous agent’s work, creating a collaborative workflow.
AI agents vs LLMs
Understanding the difference between AI agents and large language models (LLMs) is important for knowing why CrewAI is useful.
LLMs are computer programs that learn from lots of text. Think of them like very smart autocomplete systems. They’re great at understanding and creating text, but they work one conversation at a time and need careful instructions for specific tasks. Examples include Mistral, Llama 2, and GPT models.
AI agents use LLMs as their brain but add extra abilities like memory, tool usage, and making decisions on their own. CrewAI lets multiple agents work together on special jobs, creating systems that handle complex, multi-step problems better than single models.
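The distinction can be sketched in a few lines of plain Python. Here a bare "LLM" is mocked as a stateless function, while the "agent" wraps it with memory and a tool. All names here are illustrative toy code, not CrewAI's API:

```python
def mock_llm(prompt: str) -> str:
    # Stands in for a real LLM: stateless, text in, text out.
    return f"answer({prompt})"


def word_count_tool(text: str) -> int:
    # An external capability the agent can call.
    return len(text.split())


class ToyAgent:
    def __init__(self, llm, tools):
        self.llm = llm
        self.tools = tools
        self.memory = []  # persists across calls, unlike a bare LLM

    def run(self, prompt: str) -> str:
        self.memory.append(prompt)            # remember the conversation
        result = self.llm(prompt)             # use the LLM as the "brain"
        count = self.tools["count"](result)   # augment with a tool call
        return f"{result} [{count} words]"


agent = ToyAgent(mock_llm, {"count": word_count_tool})
agent.run("first question")
print(agent.run("second question"))
print(len(agent.memory))  # 2 -- the agent remembered both turns
```

The bare function forgets everything between calls; the agent accumulates context and can reach for tools, which is exactly the gap CrewAI's agents fill at full scale.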
Here’s a comparison table to understand the difference between LLMs and AI agents:
| Aspect | LLMs | AI agents (CrewAI) |
| --- | --- | --- |
| Functionality | Text generation and analysis | Autonomous task execution with tools |
| Memory | Stateless (no memory between interactions) | Persistent context and role awareness |
| Collaboration | Single model operation | Multi-agent teamwork |
| Tool usage | Limited to text input/output | Can use external APIs, databases, tools |
| Problem solving | One-shot responses | Multi-step planning and execution |
| Workflow | Individual task completion | End-to-end process automation |
| Privacy | Depends on provider | Full local control with Ollama |
| Cost | Per-token pricing (cloud models) | Free with local models |
When to use each approach
Use LLMs directly when: You need basic text generation, have straightforward single-step tasks, want minimal setup complexity, or are prototyping ideas.
Use AI agents (CrewAI) when: Tasks require multiple specialized skills, you need persistent context, external tool integration is necessary, quality and reliability are paramount, or you want to automate complex multi-step workflows.
Conclusion
CrewAI opens exciting possibilities for AI development by enabling collaborative multi-agent systems. Instead of relying on single AI models, we can now build specialized teams that work together, just like human collaboration patterns.
By integrating with Ollama, CrewAI provides privacy, control, and cost-effectiveness while running entirely on our own hardware.
Want to build even more sophisticated AI applications? Explore our Creating AI Applications using Retrieval-Augmented Generation (RAG) to build the foundational skills that will help you create advanced AI solutions and take your development skills to the next level.
Frequently asked questions
1. What does CrewAI do?
CrewAI is a framework for building teams of AI agents that collaborate on complex tasks. It allows you to create specialized agents with different roles and coordinate their efforts to solve problems that would be difficult for a single AI to handle alone.
2. Is CrewAI better than LangChain?
CrewAI and LangChain serve different purposes. LangChain is a broader framework for building LLM applications, while CrewAI specifically focuses on multi-agent collaboration. CrewAI excels when you need multiple specialized agents working together, while LangChain is better for general LLM application development.
3. Is CrewAI better than AutoGen?
Both enable multi-agent AI systems but have different strengths. AutoGen focuses on conversational multi-agent systems with flexible conversation patterns, while CrewAI emphasizes structured workflows and role-based collaboration. CrewAI is more beginner-friendly with its YAML configuration system.
4. What is the difference between CrewAI and MetaGPT?
MetaGPT focuses specifically on software development workflows with pre-built development team roles. CrewAI is more general-purpose, allowing you to create multi-agent systems for any domain. Choose MetaGPT for software development projects and CrewAI for broader multi-agent applications.
The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.