Building AI Agents that actually work (Full Course)
By Greg Isenberg
Key Concepts
- AI Agents: Autonomous systems that move from "question-to-answer" (chat) to "goal-to-result" (execution).
- Agent Loop: The core operational cycle of an agent: Observe, Think, and Act.
- Agent Harness: The platform or application (e.g., Claude Code, Codex, OpenClaw, Manus) that facilitates the agent loop and connects tools.
- Context Engineering: The practice of loading an agent with specific business information so prompts can remain simple.
- MCP (Model Context Protocol): A standardized protocol that allows LLMs to communicate with external tools (Gmail, Notion, Stripe, etc.) without needing custom code for each integration.
- Skills: Reusable SOPs (Standard Operating Procedures) for AI, packaged as markdown files to automate repetitive tasks.
- AIOS (AI Operating System): A centralized, local-file-based workspace where agents manage various departments of a business.
1. The Agent Loop: From Chat to Execution
Unlike traditional chat models that require constant human intervention (ping-pong interaction), an AI agent is given a goal and autonomously iterates until completion.
- The Process: The agent observes the environment (files/context), thinks about the next logical step, acts (e.g., writes code, searches the web), and repeats this loop until the parameters defined in the prompt are met.
- Transparency: Platforms like Claude Code are highlighted for their ability to display this "thinking" process, allowing users to see how the agent researches, plans, and executes tasks.
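The observe-think-act cycle described above can be sketched in a few lines of Python. Everything here is a hypothetical stub standing in for a real harness: `observe`, `llm_think`, and `run_tool` are invented helpers (a real system would call an LLM and dispatch real tools); only the control flow is the point.

```python
def observe(context):
    # Observe: a real agent would read files, inbox, prior tool output, etc.
    return f"step {len(context['observations'])}: goal is {context['goal']!r}"

def llm_think(context):
    # Think: stubbed decision logic standing in for a model call.
    # Here it finishes as soon as one tool result is in context.
    if any("tool result" in o for o in context["observations"]):
        return {"type": "done", "result": "goal met"}
    return {"type": "tool", "name": "web_search", "input": context["goal"]}

def run_tool(action):
    # Act: stubbed tool execution.
    return f"tool result from {action['name']}"

def agent_loop(goal, max_steps=10):
    context = {"goal": goal, "observations": []}
    for _ in range(max_steps):
        context["observations"].append(observe(context))   # Observe
        action = llm_think(context)                        # Think
        if action["type"] == "done":
            return action["result"]                        # goal reached
        context["observations"].append(run_tool(action))   # Act, then repeat
    return "stopped: step budget exhausted"

print(agent_loop("summarize today's inbox"))  # → goal met
```

The `max_steps` budget is the safety valve: the loop runs autonomously, but only until the goal is met or the budget is exhausted.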
2. Onboarding and Context Management
Treating an agent like a new employee is essential for performance.
- `agents.md` (or `claude.md`): A system prompt file stored locally in the project folder. It contains the agent's role, business context, and working preferences.
- Memory Management: Unlike cloud-based chat models that have hidden, uncontrollable memory, agents require explicit memory files (`memory.md`).
- Self-Improving Loop: By instructing the agent to update `memory.md` whenever a correction is made (e.g., "Never sign off with 'Cheers'"), the agent compounds its knowledge over time, reducing errors in future sessions.
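Concretely, the two files might look like this. The file names come from the course; the business details ("Acme Studio", the specific rules) are invented for illustration.

```markdown
<!-- agents.md — role and business context -->
You are an executive assistant for Acme Studio, a two-person design agency.
Write concise, plain-language emails. Read memory.md before drafting anything.

<!-- memory.md — corrections accumulated over time -->
- Never sign off with "Cheers"; use "Best" instead.
- Every proposal must include a payment link before it goes out.
```

Because both files live locally in the project folder, the user controls exactly what the agent remembers, unlike the hidden memory of cloud chat interfaces.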
3. Connecting Tools via MCP
The Model Context Protocol (MCP) acts as a universal translator between the LLM and external software.
- Real-World Application: By connecting tools like Gmail, Google Calendar, Notion, Granola, and Stripe via MCP, users can perform complex workflows—such as summarizing an inbox, drafting a proposal, creating a payment link, and updating a project board—without ever leaving the agent interface.
- Security: Security is managed by scoping permissions. Users can grant read-only access to sensitive platforms to minimize risk while maintaining functionality.
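As a rough illustration, MCP servers are typically registered in a JSON config following the `mcpServers` convention used by Claude Desktop and Claude Code. The package names below are placeholders, not real server packages, and the exact fields may differ by harness:

```json
{
  "mcpServers": {
    "gmail": {
      "command": "npx",
      "args": ["-y", "example-gmail-mcp-server"],
      "env": { "GMAIL_OAUTH_TOKEN": "…" }
    },
    "notion": {
      "command": "npx",
      "args": ["-y", "example-notion-mcp-server"]
    }
  }
}
```

Once registered, the LLM can call each server's tools directly, which is what makes the inbox-to-proposal-to-payment-link workflow possible without custom glue code per integration.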
4. Building and Chaining Skills
Skills are the "SOPs for AI." They allow users to package a successful process into a reusable format.
- Methodology:
- Manual Execution: Perform a task once with the agent.
- Skill Creation: Use a "Skill Creator" skill to package the previous session into a markdown file.
- Invocation: Call the skill by name in future sessions to replicate the exact process.
- Advanced Workflow: Skills can be chained together (e.g., a "Morning Briefing" skill that triggers a "Meeting Prep" skill, which in turn triggers a "Research" skill).
- Scheduling: Many harnesses now support cron-job-like scheduling, allowing agents to run these skills autonomously at specific times (e.g., 9:00 AM daily).
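A packaged skill is just a markdown SOP the agent can follow by name. The sketch below uses the `SKILL.md` layout with YAML frontmatter popularized by Claude's skills feature; treat the exact field names as an assumption, and the steps as invented examples:

```markdown
---
name: morning-briefing
description: Summarize today's inbox and calendar, then prep for meetings.
---
1. Read unread email via the Gmail MCP tool; summarize the top five threads.
2. List today's calendar events.
3. For each external meeting, invoke the "meeting-prep" skill.
4. Save the briefing to briefings/<today's date>.md and present it.
```

Step 3 is the chaining mechanism: one skill invokes another by name, which is how a "Morning Briefing" can trigger "Meeting Prep", which in turn triggers "Research".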
5. Strategic Implementation
- Workspace Structure: Organize folders by department (e.g., "Executive Assistant," "Content Team," "Marketing").
- Global vs. Project Level:
- Global: Skills or context files used across all departments (e.g., a "Truncate" skill for shortening text).
- Project Level: Specific tasks that should not clutter other agents (e.g., a "Referral" skill for a specific contact).
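One way such a workspace might be laid out on disk (the folder names echo the departments and skills mentioned above; the structure itself is illustrative, not prescribed by any harness):

```
aios/
├── agents.md                 # global role + business context
├── memory.md                 # global corrections and preferences
├── skills/
│   └── truncate/SKILL.md     # global: usable by every department
├── executive-assistant/
│   ├── agents.md             # department-specific context
│   └── skills/morning-briefing/SKILL.md
├── content-team/
└── marketing/
    └── skills/referral/SKILL.md   # project-level: stays out of other agents' way
```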
- Recommendation for Beginners: Start with simpler harnesses like Cowork or Claude Code to master the fundamentals of context and skills before moving to more autonomous, complex platforms like OpenClaw.
Synthesis
The transition from "chatting" to "agentic workflows" represents a 10x–20x increase in productivity. By moving away from cloud-based chat interfaces toward local, file-based AI operating systems, users can build a compounding library of skills and context. The ultimate goal is to automate manual processes so that the agent becomes a self-improving, autonomous employee capable of managing entire business departments.
"Prompt engineering used to be the big thing... now it's all about context engineering. It's about how well can you load up your agent with all the information about your business so that your prompts can be stupidly simple." — Remy Gasill