FULL Guide to Becoming a Principled Agentic Engineer (Build Anything with AI)
By Cole Medin
Key Concepts
- AI Coding Assistant: A tool (e.g., Claude Code) used to automate coding, planning, and administrative tasks.
- PIV Loop: A core methodology consisting of Planning, Implementation, and Validation.
- System Evolution: The practice of refining AI rules, commands, and skills based on past errors to improve future performance.
- AI Layer: A collection of custom rules, commands, and skills that standardize workflows for AI agents.
- MCP (Model Context Protocol): A standard for connecting AI assistants to external data sources like Jira or Confluence.
- Sub-agents: Specialized, secondary AI processes used to handle research or complex tasks to prevent context window overload.
- Brownfield Development: The process of building new features or fixing bugs within an existing codebase.
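To make the MCP concept concrete, here is a minimal sketch of a project-scoped MCP configuration in Claude Code's `.mcp.json` format. The file name and `mcpServers` layout follow Claude Code's convention, but the server name, package, and environment variable below are hypothetical placeholders, not a specific real Jira server.

```shell
#!/bin/sh
# Sketch: a project-scoped MCP configuration (.mcp.json) that would let an
# AI assistant like Claude Code talk to a Jira MCP server. The "mcpServers"
# layout follows Claude Code's convention; the server name, package, and
# env var are placeholders for whichever Jira MCP server you actually use.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "your-jira-mcp-server"],
      "env": { "JIRA_API_TOKEN": "${JIRA_API_TOKEN}" }
    }
  }
}
EOF
echo "wrote .mcp.json"
```

With a server registered this way, the assistant can call its tools (e.g., creating tickets) directly instead of the developer copy-pasting between tools.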
1. The Three-Phase Framework
The speaker emphasizes that AI coding should not be "vibe coding" (random, unguided experimentation). Instead, it requires a structured, repeatable system:
- Phase 1: Ideation: Moving from unstructured brainstorming to a structured Product Requirement Document (PRD).
- Phase 2: The PIV Loop: The iterative process of handling individual Jira tickets or GitHub issues.
- Phase 3: System Evolution: A feedback loop where the AI’s performance is analyzed to update the "AI Layer" (rules and commands), preventing recurring errors.
2. Step-by-Step Methodology
A. Ideation & Planning
- Brain Dump: Use speech-to-text to provide the AI with a high-level overview of desired features or bugs.
- Clarification: Use the `ask user question` tool to force the AI to identify and resolve assumptions.
- PRD Generation: Execute a `create PRD` command to turn the conversation into a formal document (Executive Summary, Mission, Target Users, Scope).
- Story Creation: Use a `create stories` command to parse the PRD into actionable Jira tickets, utilizing the Jira MCP server to automate ticket creation.
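One way to realize the "create PRD" step is as a custom slash-command file. The `.claude/commands/` location and markdown-with-frontmatter format follow Claude Code's custom-command convention; the prompt body below is a hypothetical example, not the speaker's actual command.

```shell
#!/bin/sh
# Sketch: installing a "create PRD" custom command for Claude Code.
# The .claude/commands/ path is Claude Code's convention for project-level
# slash commands; the description and prompt text are illustrative only.
mkdir -p .claude/commands
cat > .claude/commands/create-prd.md <<'EOF'
---
description: Turn the current brainstorming conversation into a formal PRD
---
Review the conversation so far and write a PRD to docs/prd.md with these
sections: Executive Summary, Mission, Target Users, Scope.
Before writing, list every assumption you are making and ask the user to
confirm or correct each one.
EOF
echo "installed: .claude/commands/create-prd.md"
```

Keeping the command as a plain markdown file is what later lets the team version it alongside the code.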
B. The PIV Loop (Task Execution)
- Prime: Run a `prime` command to load codebase context, git logs, and specific Jira issue details into the AI's memory.
- Explore: Use sub-agents to research the codebase, preventing the main agent from becoming overwhelmed by token limits.
- Plan: Create a `plan.md` file that outlines specific files to change, task order, and validation strategies.
- Implement: Start a fresh session and run an `implement` command using the `plan.md` as the source of truth.
- Validate: The AI performs self-validation (linting, unit tests, type checking) before passing control back to the human for final review.
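The self-validation step can be sketched as a small gate script the AI runs before handing control back. The specific tools named here (`ruff`, `mypy`, `pytest`) are assumptions for a Python project; substitute your stack's own linter, type checker, and test runner.

```shell
#!/bin/sh
# Sketch of the self-validation gate run at the end of the PIV loop.
# The tools below (ruff, mypy, pytest) are placeholder choices for a
# Python project, not a prescribed toolchain.
status=0
for check in "ruff check ." "mypy ." "pytest -q"; do
  tool=${check%% *}
  if command -v "$tool" >/dev/null 2>&1; then
    echo "running: $check"
    $check || status=1          # record the failure but run remaining checks
  else
    echo "skipping: $tool is not installed"
  fi
done
if [ "$status" -eq 0 ]; then
  echo "validation gate: all available checks passed"
else
  echo "validation gate: failures found; fix before human review"
fi
```

Running every check rather than stopping at the first failure gives the AI (and the human reviewer) a complete picture in one pass.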
3. System Evolution & Optimization
The speaker argues that when an AI makes a mistake, it is an opportunity to improve the system rather than just a one-off fix.
- Retroactive Analysis: After a bug, ask the AI to review its own rules and commands to identify why the error occurred.
- Continuous Improvement: Update global rules (e.g., style conventions) or add new steps to the `validate` workflow to ensure compliance.
- Version Control: Treat the AI Layer (commands/skills) like code; check them into source control so the entire team benefits from the improvements.
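Checking the AI Layer into source control can be sketched as below. The `.claude/` layout and the `CLAUDE.md` rules file follow Claude Code's conventions; the rule text, command body, and `ai-layer-demo` repository name are hypothetical examples.

```shell
#!/bin/sh
# Sketch: versioning the AI layer (rules + commands) like ordinary code so
# the whole team shares improvements. Paths follow Claude Code's conventions
# (CLAUDE.md for global rules, .claude/commands/ for custom commands);
# the repo name and file contents are illustrative only.
mkdir -p ai-layer-demo/.claude/commands
echo "Always run the validate workflow before opening a PR." \
  > ai-layer-demo/CLAUDE.md
echo "Run lint, type checks, and unit tests; report any failures." \
  > ai-layer-demo/.claude/commands/validate.md

git -C ai-layer-demo init -q
git -C ai-layer-demo add CLAUDE.md .claude
git -C ai-layer-demo -c user.name=demo -c user.email=demo@example.com \
  commit -q -m "Add shared AI rules and validate command"
git -C ai-layer-demo log --oneline
```

Once these files live in the repository, a fix made after one engineer's bad AI session propagates to every teammate on their next pull.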
4. Key Arguments & Perspectives
- Human-in-the-Loop: Despite the power of AI, the engineer must remain in the "driver's seat" by performing planning and final validation.
- Avoid Over-Engineering: Many open-source AI frameworks are too bloated. The speaker advocates for a "simple on purpose" foundation that can be molded to existing team conventions.
- Administrative Automation: The goal is to offload "backstage work" (creating tickets, updating statuses, writing PR descriptions) to the AI, allowing developers to focus on high-leverage tasks.
5. Notable Quotes
- "Our job as an engineer is to no longer write the code, but to do the higher leverage tasks like the planning and validating."
- "Just because you can fit a million tokens into a large language model does not mean that you should because they get overwhelmed just like people do."
- "The days are gone now of going to Stack Overflow in order to get your questions answered."
6. Synthesis/Conclusion
The core takeaway is that reliable AI coding is achieved by treating the AI as a partner in a structured, repeatable process. By separating planning from implementation, using sub-agents to manage context, and treating every bug as a chance to evolve the system's rules, developers can significantly increase their output. The system is designed to be tool-agnostic, meaning the methodology works regardless of whether one uses Jira, GitHub, or Linear, provided there is a clear separation between work management and code generation.