CrewAI Tutorial: Multiple Agents Working Together in Python

By NeuralNine


CrewAI: A Detailed Summary

Key Concepts:

  • CrewAI: A Python framework for orchestrating multiple agents to solve complex tasks through separation of concerns.
  • Agents: Individual AI entities with specific roles (e.g., Researcher, Summarizer, Risk Checker) and tasks.
  • Tasks: Defined objectives assigned to specific agents.
  • Tools: Functions agents can utilize to interact with external resources (e.g., Yahoo Finance API).
  • Separation of Concerns: Dividing a complex problem into smaller, manageable tasks assigned to specialized agents.
  • LLM Provider: The Large Language Model service used to power the agents (e.g., OpenAI, Ollama, Anthropic).
  • uv: A Rust-based Python package manager used to install CrewAI and its dependencies.

1. Introduction to CrewAI & Core Principles

The video introduces CrewAI, a Python framework for building applications powered by multiple interacting AI agents. Unlike approaches where a single agent calls various tools, CrewAI emphasizes separation of concerns: a task is broken down into distinct roles, each role is assigned to an agent, and the agents collaborate. The presenter highlights the potential for more complex workflows and specialized agent capabilities. A practical example is presented: gathering a stock briefing (AAPL, NVDA) using agents for research, summarization, and risk assessment.

2. Installation & Project Setup

The installation process begins with installing uv, a Rust-based Python package manager. CrewAI is installed as a uv tool using the command uv tool install crewai. A new CrewAI project is created using crewai create crew <project_name>. The project structure includes:

  • .env file: Stores the LLM provider API key and model selection.
  • src directory: Contains the core logic, including tools, agents, and the crew definition.
  • knowledge directory: Holds contextual information, similar to a system prompt, defining the overall environment.
  • config directory: Contains configuration files for the agents and tasks.
  • crew.py: Defines the agents and their associated tasks.
  • main.py: Orchestrates the crew and executes the workflow.

3. LLM Provider Configuration: OpenAI & Ollama

The video demonstrates configuring CrewAI with two LLM providers: OpenAI and Ollama.

  • OpenAI: Requires obtaining an API key from platform.openai.com (Settings -> API Keys) and pasting it into the .env file. The desired GPT model (e.g., gpt-4-0125-preview) is also specified in the .env file.
  • Ollama: Involves modifying the .env file to specify Ollama as the provider, setting the API base URL to http://localhost:11434, and specifying the local model (e.g., ollama/qwen). Additional dependencies (uvicorn, fastapi, apscheduler, email-validator, fastapi-sso) must be installed with uv.
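As a sketch of the two configurations described above (the key and exact model values are placeholders, not taken from the video), the .env file might look like:

```
# OpenAI configuration
MODEL=gpt-4-0125-preview
OPENAI_API_KEY=sk-...

# Ollama configuration (alternative)
MODEL=ollama/qwen
API_BASE=http://localhost:11434
```

Only one provider's variables should be active at a time; the MODEL value tells CrewAI which backend to route requests through.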

4. Agent & Task Definition (YAML Configuration)

Agents and tasks are defined using YAML files within the config directory. Each agent definition includes:

  • Role: A descriptive title of the agent's function (e.g., "Stock Research Assistant").
  • Goal: The agent's primary objective (e.g., "Gather as much information as possible on [ticker symbol]").
  • Backstory: Contextual information to ground the agent's identity.

Each task definition includes:

  • Description: A detailed explanation of the task.
  • Expected Output: The format and content of the desired result.
  • Agent: The agent assigned to perform the task.
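As an illustrative sketch (the names and wording here are hypothetical, not copied from the video), a matching agents.yaml / tasks.yaml pair using the fields above could look like this, with {ticker} filled in from the kickoff inputs:

```
# config/agents.yaml
collector:
  role: Stock Research Assistant
  goal: Gather as much information as possible on {ticker}
  backstory: You are a diligent financial research assistant.

# config/tasks.yaml
collect_task:
  description: Fetch the current price, daily change, and recent headlines for {ticker}.
  expected_output: A plain-text report with price data and a list of headlines.
  agent: collector
```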

The presenter simplifies the initial project structure by removing unnecessary comments and streamlining the YAML files for clarity.

5. Tool Implementation: Yahoo Finance API

A custom tool is implemented to retrieve stock data from the Yahoo Finance API using the yfinance library. The get_stock_data function takes a ticker symbol as input and returns a formatted string containing:

  • Current stock price.
  • Percentage change from the previous close.
  • Recent news headlines (up to 8).

The function handles potential errors (e.g., missing price data) and formats the output for readability.
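A minimal sketch of such a tool is shown below. The exact function from the video is not reproduced here, so the names are illustrative; it assumes yfinance's Ticker.fast_info and Ticker.news attributes, whose shapes have varied across library versions.

```python
def format_stock_report(ticker: str, price, prev_close, headlines) -> str:
    """Format price, percent change, and headlines into a readable briefing."""
    if price is None or prev_close in (None, 0):
        return f"{ticker}: price data unavailable"
    change_pct = (price - prev_close) / prev_close * 100
    lines = [f"{ticker}: {price:.2f} ({change_pct:+.2f}% vs previous close)"]
    lines += [f"- {h}" for h in headlines[:8]]  # at most 8 headlines
    return "\n".join(lines)


def get_stock_data(ticker: str) -> str:
    """Fetch live data from Yahoo Finance and format it (requires yfinance)."""
    import yfinance as yf  # imported here so the formatter stays dependency-free

    t = yf.Ticker(ticker)
    info = t.fast_info
    # Headline location has varied across yfinance versions; check both shapes.
    headlines = []
    for item in t.news or []:
        title = item.get("title") or item.get("content", {}).get("title")
        if title:
            headlines.append(title)
    return format_stock_report(
        ticker,
        getattr(info, "last_price", None),
        getattr(info, "previous_close", None),
        headlines,
    )
```

In a CrewAI project, a function like this would typically be exposed to agents via the @tool decorator from crewai.tools; splitting fetching from formatting keeps the formatting logic testable without network access.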

6. Workflow Execution & Agent Interaction

The main.py file orchestrates the crew and initiates the workflow. The example focuses on gathering a stock briefing for a given ticker symbol (e.g., AAPL). The agents interact in the following sequence:

  1. Collector: Uses the get_stock_data tool to retrieve stock information.
  2. Summarizer: Condenses the news headlines into five key bullet points.
  3. Risk Checker: Identifies potential risks based on the news and price data.
  4. Brief Writer: Creates a concise daily brief summarizing the findings.
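Ignoring the LLM calls, the sequential handoff between these four agents can be sketched as plain Python functions, where each step consumes the previous step's output. This is a toy illustration of the data flow, not CrewAI code; all names and placeholder logic are hypothetical.

```python
def collector(ticker: str) -> dict:
    # Stand-in for the get_stock_data tool call.
    return {"ticker": ticker, "price": "230.50 (+1.2%)",
            "headlines": ["Headline A", "Headline B", "Headline C",
                          "Headline D", "Headline E"]}

def summarizer(data: dict) -> list[str]:
    # Condense headlines into exactly five bullet points.
    return [f"* {h}" for h in data["headlines"]][:5]

def risk_checker(data: dict, bullets: list[str]) -> list[str]:
    # Identify three to five potential risks (placeholder logic).
    return [f"Risk: {b.lstrip('* ')}" for b in bullets[:3]]

def brief_writer(data: dict, bullets: list[str], risks: list[str]) -> str:
    # Assemble a short daily brief from all upstream outputs.
    lines = [f"Daily brief for {data['ticker']} at {data['price']}"]
    lines += bullets + risks
    return "\n".join(lines[:8])  # keep it to a 6-8 line brief

data = collector("AAPL")
bullets = summarizer(data)
risks = risk_checker(data, bullets)
brief = brief_writer(data, bullets, risks)
```

In CrewAI itself, this ordering is what a sequential process gives you: each task's output is passed as context to the next task's agent.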

The presenter demonstrates the execution process using both OpenAI and Ollama, showcasing the flexibility of the framework.

7. Limitations & Alternatives

The presenter acknowledges that CrewAI might not be suitable for all production scenarios and suggests that LangGraph might be a more appropriate choice for complex workflows. They also note that the performance of the local Ollama model depends on available VRAM and model capabilities.

Notable Quote:

“I probably wouldn't use it in production. I don't know when I would use it. I think probably in most cases I would use Langgraph if I need complex workflows.” – The Presenter


Data & Statistics:

  • The example uses the stock ticker symbols AAPL (Apple) and NVDA (Nvidia).
  • The Ollama model used in the demonstration is ollama/qwen, with 0.6 billion parameters.
  • The summarizer agent is tasked with creating exactly five bullet points.
  • The risk checker agent is tasked with identifying three to five potential risks.
  • The brief writer agent is tasked with creating a 6-8 line daily brief.

Synthesis/Conclusion:

CrewAI offers a compelling approach to building AI applications by leveraging the power of multiple specialized agents. Its emphasis on separation of concerns and its flexibility in supporting various LLM providers make it a valuable tool for experimentation and prototyping. While it may not be ideal for all production environments, it provides a solid foundation for exploring the potential of multi-agent systems. The framework's ease of setup and clear structure, combined with the provided example, make it accessible to developers with varying levels of experience.
