Designing AI To Scale Human Thought — Jun Yu Tan, Tusk

By AI Engineer

Key Concepts

  • AI Augmentation vs. Automation
  • Blind Spot Detection
  • Cognitive Partnership
  • Proactive Guidance
  • Trust in AI Systems (Progressive, Contextual, Bidirectional)
  • Skill Growth and Visualization
  • Novelty Criticality Framework

AI Augmentation vs. Automation

The core argument is that rather than using AI to automate complex tasks end to end, which often leads to suboptimal results, we should use AI to augment human capabilities and help people produce higher-quality work.

  • Automation Approach: AI performs the entire task, removing the human from the loop (e.g., AI writes and sends an email).
  • Augmentation Approach: AI assists the human in performing the task, providing suggestions, identifying potential issues, and enhancing their decision-making (e.g., AI brainstorms email points, checks tone, and the human reviews and sends).
  • The speaker's analogy: automation is like an offshore contractor, while augmentation is like a new team member we grow alongside.

Three Core Interaction Patterns for Augmentation-Based UX

1. Blind Spot Detection

  • Focuses on identifying patterns or considerations that humans might miss in their work.
  • The key question is: "What didn't you consider?" rather than "What did you do wrong?"
  • Examples include temporal blind spots (poor decisions when tired) and social blind spots (technical feedback perceived as personal criticism).
  • Challenge: Managing the signal-to-noise ratio and delivering suggestions without causing defensiveness.
  • Tusk Implementation (AI Testing Platform):
    • Identifies edge cases and bugs in pull requests.
    • Creates and executes unit tests to validate issues.
    • Surfaces verified issues that will cause problems.
    • Outlines assumptions (business or engineering) and potential fixes.
    • Users provide feedback (thumbs up/down) to train the AI.
  • Systematic Pessimism: the AI systematically looks for potential problems and second-order effects in code changes (a toy version of this verify-then-surface loop is sketched after this list).
  • Results: Helped companies like DeepLearning.AI and TeamPay catch verified bugs in 43% of pull requests and add almost 1000 new tests in 2 months.
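
Tusk's actual pipeline isn't shown in the talk, but the verify-then-surface loop described above can be made concrete with a minimal sketch. Everything here is hypothetical: `Finding`, `surface_verified_issues`, and `run_generated_test` are stand-in names for the PR-analysis and test-execution steps.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Finding:
    """One candidate blind spot surfaced for a pull request."""
    description: str       # e.g. "empty cart still reaches checkout"
    assumption: str        # the business/engineering assumption it questions
    test_passed: Optional[bool] = None  # None until the generated test runs

def surface_verified_issues(
    candidates: List[Finding],
    run_generated_test: Callable[[Finding], bool],
) -> List[Finding]:
    """Execute a generated unit test for each candidate and keep only
    the findings whose test actually fails, i.e. verified problems."""
    verified = []
    for finding in candidates:
        finding.test_passed = run_generated_test(finding)
        if finding.test_passed is False:  # test failed: the issue is real
            verified.append(finding)
    return verified
```

Surfacing only findings whose generated test actually fails is what keeps the signal-to-noise ratio manageable in this pattern.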

2. Cognitive Partnership

  • Moving beyond stateless answering machines to AI systems that adapt to the user's mental model.
  • Building a "theory of mind" about users: understanding how they think, learn, and prefer to work.
  • Examples:
    • A code editor that learns refactoring patterns and suggests similar improvements.
    • A research assistant that understands how a user synthesizes information.
  • Challenge: Personalization without being creepy; users need to feel understood, not surveilled (one way to model this is sketched below).
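
As a rough illustration of such a user model, here is a minimal sketch; `UserModel` and its acceptance-rate heuristic are assumptions, not anything the talk specifies.

```python
from collections import defaultdict
from typing import List, Tuple

class UserModel:
    """A minimal 'theory of mind': track which suggestion categories a
    user accepts or rejects, and rank future suggestions accordingly."""

    def __init__(self) -> None:
        self.accepted = defaultdict(int)  # category -> accepted count
        self.shown = defaultdict(int)     # category -> shown count

    def record(self, category: str, accepted: bool) -> None:
        self.shown[category] += 1
        if accepted:
            self.accepted[category] += 1

    def affinity(self, category: str) -> float:
        # Laplace-smoothed acceptance rate; 0.5 prior for unseen categories.
        return (self.accepted[category] + 1) / (self.shown[category] + 2)

    def rank(self, suggestions: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
        # suggestions are (category, text) pairs
        return sorted(suggestions, key=lambda s: self.affinity(s[0]), reverse=True)
```

Ranking by observed acceptance rate is one simple way for suggestions to start reflecting how a particular user prefers to work, without explicit surveys.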

3. Proactive Guidance

  • Knowing when to suggest something, not just what to suggest.
  • Great proactive guidance feels like serendipity, not interruption.
  • Examples:
    • A calendar app that suggests meeting times based on energy patterns.
    • A writing tool that suggests taking a break when the user is stuck.
  • Challenge: Finding the "Goldilocks zone" – not too reactive, not too overwhelming (a possible timing heuristic is sketched below).
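
One way to approximate that Goldilocks zone is a timing gate that only nudges when the user seems stuck and enough time has passed since the last nudge. This is a toy heuristic, not the speaker's design; the thresholds and the `GuidanceGate` name are invented for illustration.

```python
import time

class GuidanceGate:
    """Decide *when* to surface a suggestion: only when the user appears
    stuck (no activity for a while) and not too soon after the last nudge."""

    def __init__(self, stuck_after_s: float = 120.0, cooldown_s: float = 600.0):
        self.stuck_after_s = stuck_after_s
        self.cooldown_s = cooldown_s
        self.last_activity = time.monotonic()
        self.last_nudge = time.monotonic() - cooldown_s  # allow an initial nudge

    def on_user_activity(self) -> None:
        self.last_activity = time.monotonic()

    def should_nudge(self) -> bool:
        now = time.monotonic()
        stuck = (now - self.last_activity) >= self.stuck_after_s
        rested = (now - self.last_nudge) >= self.cooldown_s
        if stuck and rested:
            self.last_nudge = now
            return True
        return False
```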

Principles for Designing Augmentative AI

1. Trust

  • Trust is crucial in augmentation interfaces because of the collaborative relationship between human and AI.
  • Trust needs to be:
    • Progressive: Build trust with low-stakes suggestions before moving to high-impact decisions.
    • Contextual: AI should be trusted differently across domains and situations.
    • Bidirectional: Both parties adapt; the AI learns user preferences, and the user learns the AI's capabilities (a toy trust ledger is sketched below).
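
A minimal sketch of progressive, contextual trust might keep a per-domain score that rises with accepted suggestions and gates how autonomously the AI may act. The `TrustLedger` class, tiers, and rates below are illustrative assumptions, not anything the talk prescribes.

```python
from typing import Dict

class TrustLedger:
    """Progressive, contextual trust: per-domain scores that grow with
    accepted suggestions and gate how impactful an action may be."""

    LEVELS = [  # (minimum score, allowed action tier)
        (0.0, "low-stakes suggestion"),
        (0.6, "draft change for review"),
        (0.85, "apply change automatically"),
    ]

    def __init__(self) -> None:
        self.scores: Dict[str, float] = {}  # domain -> trust in [0, 1]

    def update(self, domain: str, accepted: bool, rate: float = 0.1) -> None:
        # Exponential moving average toward 1 (accepted) or 0 (rejected).
        prior = self.scores.get(domain, 0.0)
        target = 1.0 if accepted else 0.0
        self.scores[domain] = prior + rate * (target - prior)

    def allowed_action(self, domain: str) -> str:
        score = self.scores.get(domain, 0.0)
        tier = self.LEVELS[0][1]
        for minimum, action in self.LEVELS:
            if score >= minimum:
                tier = action
        return tier
```

The domain keying makes trust contextual, the slow-moving update makes it progressive, and feeding the user's accept/reject decisions back in is the bidirectional half of the relationship.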

2. Skill Growth and Visualization

  • Augmentative AI should facilitate skill growth, not just automate tasks.
  • This involves:
    • Skill visualization: Showing users their growing expertise over time.
    • Graduated complexity: Unlocking or adapting features as competence increases (see the sketch after this list).
    • Shifting explanation patterns based on user understanding.
  • Goal: Genuine skill enhancement, not just the illusion of it.
  • Product Metrics: Track user growth and capability building, not just engagement and usage.
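
Graduated complexity could be as simple as mapping a competence estimate to a feature and explanation tier. The tiers and the `features_for` helper below are invented for illustration, under the assumption that competence is scored 0–1 from something like accepted-suggestion history.

```python
from typing import List

def features_for(competence: float) -> List[str]:
    """Graduated complexity: shift the feature set and explanation style
    as measured competence (0-1) increases."""
    tiers = [
        (0.0, ["guided suggestions", "full explanations"]),
        (0.4, ["inline hints", "shorter explanations"]),
        (0.8, ["expert shortcuts", "explanations on demand"]),
    ]
    unlocked: List[str] = []
    for threshold, features in tiers:
        if competence >= threshold:
            unlocked = features  # replace: explanation patterns shift with skill
    return unlocked
```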

Novelty Criticality Framework

A framework for managing noise and prioritizing suggestions in blind spot detection.

  • High Novelty, High Criticality: Critical discoveries (e.g., race conditions, data exposure) - interrupt the user.
  • Low Novelty, High Criticality: Essential reminders (e.g., common bugs, security checks) - clear warnings, some interruption allowed.
  • High Novelty, Low Criticality: Learning moments (e.g., alternative approaches, new language features) - gentle suggestions.
  • Low Novelty, Low Criticality: Matters of polish (e.g., formatting, minor optimizations) - batch and make optional.
  • Prioritization: People can only absorb a few (roughly three) meaningful suggestions per review session, so triage is essential (a sketch follows below).
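
The framework maps naturally onto a small triage function. The sketch below is one possible encoding, with invented names (`Delivery`, `triage`, `top_suggestions`) and a simple 0–1 scoring assumption for novelty and criticality.

```python
from enum import Enum
from typing import List, Tuple

class Delivery(Enum):
    INTERRUPT = "interrupt the user now"
    WARN = "clear warning, some interruption allowed"
    HINT = "gentle, dismissible suggestion"
    BATCH = "batch into an optional digest"

def triage(novelty: float, criticality: float, threshold: float = 0.5) -> Delivery:
    """Map a suggestion's (novelty, criticality) scores in [0, 1] onto
    the four quadrants of the framework."""
    high_n, high_c = novelty >= threshold, criticality >= threshold
    if high_n and high_c:
        return Delivery.INTERRUPT  # critical discoveries (race conditions)
    if high_c:
        return Delivery.WARN       # essential reminders (security checks)
    if high_n:
        return Delivery.HINT       # learning moments (alternative approaches)
    return Delivery.BATCH          # polish (formatting, minor optimizations)

def top_suggestions(
    scored: List[Tuple[str, float, float]], limit: int = 3
) -> List[Tuple[str, float, float]]:
    """Cap what is surfaced per review session; people only absorb a few.
    `scored` holds (text, novelty, criticality) triples."""
    order = [Delivery.INTERRUPT, Delivery.WARN, Delivery.HINT, Delivery.BATCH]
    ranked = sorted(scored, key=lambda s: order.index(triage(s[1], s[2])))
    return ranked[:limit]
```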

Synthesis/Conclusion

The future of AI lies in augmentation, not just automation. By focusing on building AI systems that enhance human capabilities, foster trust, and promote skill growth, we can unlock new levels of creativity, thoughtfulness, and innovation. The most profound technologies don't just replace humans; they unlock what makes us uniquely human. The next decade won't be about AI doing our work, but AI helping us think in ways that we couldn't before.
