AI in Healthcare Series: You Can’t Delegate AI with Andy Slavitt

By Stanford Online

Stanford Healthcare AI Podcast with Andy Slavitt: Summary

Key Concepts:

  • LLMs (Large Language Models): AI models capable of understanding and generating human-like text.
  • Gemini 3 Nano: Google’s latest LLM, showing significant advancements in image generation.
  • Claude & ChatGPT: Leading closed-source LLMs, maintaining developer preference despite Gemini 3’s advancements.
  • Memory in LLMs: The ability of models to retain and utilize information from previous interactions.
  • AI Adoption in Healthcare: Increasing use of AI tools by clinicians and patients, despite organizational challenges.
  • Access Model: A new CMS initiative incentivizing the use of AI to improve healthcare access and outcomes.
  • AI Regulation: Ongoing debate and challenges surrounding the appropriate level of regulation for AI in healthcare.
  • Enterprise Adoption: Rapid growth of AI implementation within healthcare organizations.
  • Ad-Supported AI Models: Potential revenue model for AI platforms and associated ethical/regulatory concerns.

I. Current State of AI & LLM Leaderboards

The podcast begins with a discussion of the current landscape of AI, specifically Large Language Models (LLMs). Matt notes that LLM development continues at a rapid pace, with Google’s Gemini 3 Nano demonstrating a significant lead in image generation. Despite Gemini 3’s advancements in text and coding, however, developers still largely prefer closed-source models like Claude and ChatGPT, particularly on the enterprise side. A key takeaway is that users are beginning to gravitate toward specific models they are comfortable with, forming “relationships” based on their individual needs and use cases. The hosts also discuss “memory” in LLMs — a model’s ability to incorporate prior conversations — and its uneven implementation, with benefits and drawbacks depending on the application. The overall consensus is that the acceleration of AI development shows no signs of slowing down.

II. AI Usage & Adoption – A Personal & Organizational Divide

Andy Slavitt shares that his team at Town Hall Ventures primarily uses Gemini because of its integration with their existing Google infrastructure, including Gmail and Google Drive, despite individual preferences for ChatGPT. The team also decided to hire a full-time AI research fellow to stay at the cutting edge of the technology and deepen their internal expertise. Slavitt emphasizes the importance of continuous learning and adaptation, noting that AI is evolving too quickly to be delegated. He describes a journey from initially finding it difficult to invest in healthcare AI to realizing the firm needed to make its own investments to leverage AI effectively — and, ultimately, to practice what they preach.

III. Data Revealing Disconnect in Healthcare AI Adoption

The discussion shifts to data illustrating a significant disconnect between clinician usage and organizational support for AI. A survey by Graham Walker’s Off Call reveals that 67% of physicians use AI daily, yet 81% express dissatisfaction with their organization’s approach. Furthermore, 71% report having no influence over the tools they use, and 48% rate organizational communication about AI as poor. This data is interpreted as a “red alert” for healthcare executives, indicating a need to empower clinicians and address the gap between personal and professional AI usage. The sentiment is that clinicians are “pulling” AI into their workflows because it improves their jobs, rather than being “pushed” by organizational mandates.

IV. The Need for Cultural Integration & Avoiding Over-Regulation

Slavitt stresses that successful AI implementation requires cultural integration, noting that good technology cannot be forced upon users. He contrasts this with past failures in healthcare technology adoption, such as electronic medical records, where change management was prioritized over user experience. AI is different, he argues: users actively seek it out to improve their work and lives. He cautions against over-regulation, stating that it is too early to regulate a rapidly evolving field effectively and that the market should be allowed to develop organically. He acknowledges the challenges of data privacy and security but suggests addressing those concerns directly rather than stifling innovation, emphasizing that the current moment calls for enabling and empowering users rather than controlling them.

V. CMS’s Access Model & Policy Signals

Slavitt highlights CMS’s new “Access Model” as a significant policy signal. This initiative incentivizes technologists and companies to distribute AI tools that demonstrably improve healthcare access and outcomes, rewarding positive impact with financial support. He also notes the FDA’s creation of a streamlined approval process for AI-based healthcare solutions. Together, these represent a bold move by the administration to actively promote AI adoption in healthcare. He contrasts this with potential regulatory roadblocks, emphasizing the importance of clear signals from policymakers.

VI. The Consumerization of AI in Healthcare & Unmet Needs

The conversation turns to the growing role of patients in driving AI adoption. Data shows increasing patient comfort with AI-powered healthcare tools, with a significant rise in the share of patients who believe AI can improve their care. This is seen as a positive development, potentially addressing information asymmetry and empowering patients to take greater control of their health. Slavitt emphasizes the potential for AI to transform healthcare for individuals in underserved areas with limited access to specialists, and notes the importance of telling stories about the positive impact of AI, beyond the edge cases highlighted in the media.

VII. The Ad-Supported AI Model & Future Challenges

The podcast concludes with a discussion of the potential for ad-supported AI models, exemplified by OpenAI’s approach. The concern is that this model could compromise the unbiased nature of AI-driven healthcare advice. Slavitt acknowledges the challenges of monetizing AI in healthcare given existing regulations like the Stark Law and anti-kickback statutes, and suggests that new regulatory frameworks may be needed to address these issues while preserving the benefits of AI. He expresses optimism about the future of AI in healthcare, emphasizing the importance of continuous learning, adaptation, and a focus on solving real-world problems.

Notable Quotes:

  • “AI can’t be delegated. It is moving too fast to say, ‘Oh, we’ll make some AI bet and I don’t have to think about this as a leader of the organization.’” – Andy Slavitt
  • “This is a stallion. And you need to figure out how to ride it, not how to tame it.” – Andy Slavitt
  • “Good technology can’t be pushed on you or it’s not good technology.” – Andy Slavitt
  • “The closer you get to it, the more you see the substance and potential.” – Andy Slavitt

Technical Terms:

  • LLM (Large Language Model): A type of AI model trained on massive datasets of text to understand and generate human-like language.
  • Stark Law & Anti-Kickback Statute: US federal laws that prohibit financial relationships that could influence healthcare referrals.
  • Moat (Business Term): A sustainable competitive advantage that protects a company from competitors.
  • Enterprise Adoption: The implementation of technology within an organization.
  • Ambient Clinical Intelligence (ACI): AI-powered tools that passively listen to and analyze clinical conversations to provide real-time support.

