Every AI Founder Should Be Asking These Questions
By Y Combinator
Key Concepts
- Extreme Confusion: The speaker's current state of uncertainty regarding AI's rapid evolution and future impact.
- AGI (Artificial General Intelligence): The concept of AI matching or exceeding human intelligence, anticipated within 2-3 years.
- Alignment: Ensuring AI systems operate in accordance with human values and intentions, particularly for economic viability and control.
- Defensibility/Moat: A sustainable competitive advantage for startups in a post-AGI world.
- Commoditization of Software: The potential for AI to make software development so easy that custom applications become ubiquitous and low-value.
- AI-Native Teams/Products: Teams and products built from the ground up with AI capabilities and paradigms in mind, rather than retrofitting.
- Trust: A critical theme concerning AI models, agents, and the companies building them, especially in a world with reduced human oversight.
- AI-Powered Auditing: The use of AI systems to conduct audits, potentially offering less bias and greater privacy than human auditors.
- Long-Horizon Agents: AI agents capable of operating autonomously for extended periods (days or weeks) without human intervention.
- Intelligence Ceiling: The idea that for certain tasks, AI performance might reach a maximum "good enough" level, leading to earlier commoditization.
- AI Neutrality: The concept of ensuring AI infrastructure and capabilities are neutral and not controlled by a few corporations, akin to public utilities.
- Universal Basic Compute (UBC): A speculative policy idea where access to computing resources is a fundamental right, similar to Universal Basic Income (UBI).
The Speaker's State of Confusion and the Importance of Questions
The speaker, a technologist who has founded multiple startups, gone through Y Combinator, and currently runs an alignment research team at Anthropic, expresses an unprecedented level of "extreme confusion" about the future of AI. Unlike previous periods, when he felt he understood upcoming trends and leveraged that insight to plan his career and companies, he now struggles to see more than "three weeks or less" into the future. He emphasizes that confusion is often "the start of something interesting" and that asking good questions is crucial for running a startup, a research team, or one's life, especially in the current "extremely challenging and fast" AI era.
Impact of AI on Startup Strategy and Planning Horizons
The central question posed is: "Everything's changing. How should that impact everything about my life?" Specifically for startups, this translates to questions about strategy, product development, and team building.
- The Paradox of Startup Focus: While conventional wisdom dictates startups must "focus, focus, focus" to outcompete larger companies, founders simultaneously must focus on "everything" – hiring, fundraising, product, strategy, go-to-market, and unexpected crises. This inherent demand for comprehensive problem-solving positions founders uniquely to address the broad societal questions posed by AI.
- Planning for AGI: The speaker challenges the common advice to plan for the next 6 months, anticipating the capabilities of upcoming foundation models. Instead, he advocates for planning 2-3 years in advance, around the "extremely likely" arrival of AGI. While acknowledging "extreme uncertainty" and advising against rigid two-year plans, he stresses that founders must consider how AGI will fundamentally alter aspects from hiring to marketing.
The Evolving Landscape of Software and Business Dynamics
The discussion delves into how AI will reshape the creation, distribution, and consumption of software.
- AI's Impact on the Buy Side: The speaker counters the notion that AI's impact will be slow due to slow enterprise adoption. He argues that enterprises themselves will be "armed with AGI or strong agents" within the next few years, accelerating their internal decision-making and adoption cycles. This means incumbents will also benefit significantly from AI, not just startups. He gives an example where a large enterprise might forgo a SaaS product, instead using Claude Code and a couple of product managers to build a custom, in-house solution. How AI-powered outbound sales will interact with AI-powered inbound parsing remains unclear.
- Commoditization of Software: A key question is whether software will "fully commoditize," potentially rendering traditional SaaS providers obsolete if enterprises can generate custom software with "one prompt to Claude Code." On the consumer side, users might stop downloading apps, instead building "apps on demand" for themselves without even perceiving them as traditional applications.
- Raising the Quality Bar: An alternative outcome is that while basic app creation becomes trivial, the automation of code generation could "raise the quality bar extremely high." Building an "exceptional app" might still require a "great team that's working with AI." The answer likely depends on the specific vertical.
- On-Demand Code and Trust: The concept of generating code "on demand" within an application, to support a user's immediate, unfulfilled need, is explored. This could involve dynamic UI changes or even backend- and database-level modifications. Such a system, however, raises significant trust issues, as AI models are not yet reliable enough for such critical operations.
- Future of UI/UX: The speaker anticipates the rise of "generative UI" and "on-demand UI." He emphasizes the importance of multimodality (weaving together audio, images, video, and text) and contextual user input, meeting users "where they are" with the easiest possible interface. A minimal sketch of what a gated, on-demand UI could look like follows this list.
- Retrofitting vs. AI-Native Products: A critical dilemma for product strategy is whether to retrofit existing products with AI (leveraging established distribution) or build entirely "AI-native" products from scratch. The speaker suggests this choice might be "vertical by vertical" and urges founders to "figure out the causal mechanisms that allow you to validate your hypothesis."
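To make the on-demand-code and generative-UI ideas above concrete, here is a minimal sketch, assuming a JSON "UI spec" format and a stand-in model call (fake_model), of how a host application might gate generated UI behind a component allowlist before rendering anything. None of these names come from the talk; they are illustrative.

```python
# Hypothetical sketch: "on-demand UI" behind a trust gate. fake_model stands
# in for a real LLM call; the allowlist check is the point: generated specs
# are validated before anything reaches the user.
import json

ALLOWED_COMPONENTS = {"text", "button", "slider"}  # assumed host allowlist

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a UI spec as JSON."""
    return json.dumps([
        {"type": "text", "value": f"Results for: {prompt}"},
        {"type": "button", "label": "Export CSV", "action": "export_csv"},
    ])

def validate_spec(spec: list) -> list:
    """Reject any component the host application has not explicitly allowed."""
    for component in spec:
        if component.get("type") not in ALLOWED_COMPONENTS:
            raise ValueError(f"untrusted component: {component.get('type')}")
    return spec

def render_on_demand(prompt: str) -> list:
    spec = json.loads(fake_model(prompt))
    return validate_spec(spec)  # only validated UI is ever rendered

if __name__ == "__main__":
    for widget in render_on_demand("show my Q3 spend"):
        print(widget)
```

The allowlist is a crude stand-in for the trust problem the speaker raises: the further the generated artifact reaches (UI, backend, database), the stronger this gate has to be.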
Team, Culture, and the Crisis of Trust in an AI World
The discussion shifts to the internal workings of startups and the broader societal implications of AI on trust.
- AI-Native Teams: The speaker questions whether team sizes will shrink and if "AI-native teams" (built from scratch with AI capabilities in mind) will have an inherent advantage over large companies trying to retrofit AI. He notes that the definition of an AI-native company might change every 6-18 months as AI capabilities evolve.
- Security and Trust: Trust is identified as a paramount theme.
- On-Demand Code: For AI to operate at the database level, "you better trust that that AI can do its job."
- Walled Gardens vs. Universal Agents: Users desire a single, universal agent for all personal and professional tasks. However, this conflicts with "walled gardens" and raises privacy concerns (e.g., an employer's agent accessing personal information). The challenge is segregating information while enabling collaboration.
- Trusting the Agent Builder: Beyond trusting the AI model itself, the question arises: "Can you trust the startup that's building the agent?" An ad-based company's agent might bias search results, acting on behalf of the corporation rather than the user.
- Erosion of Human Guardrails: In a world of smaller, "semi-automated teams," the traditional human guardrails of diverse perspectives and internal whistleblowers (who might quit or leak information if a company acts unethically) diminish. A single person could make a decision with massive impact, and "it gets extremely easy for bad actors to do bad things." This is already a reason why large enterprises distrust startups.
- Instilling Trust: New Guardrails:
- AI-Powered Auditing: The speaker proposes "AI-powered auditing" as a potential solution. AI auditors could be "less biased" and have "no memory" (deleting their notes after an audit), offering a stronger, more trustworthy system than human auditors, who might walk away with IP or stumble on unrelated sensitive information. A sketch of such an ephemeral auditor follows this section.
- Binding Commitments: Companies could make "binding statements" and commit to "an ongoing audit from some neutral arbiter, from some neutral AI powered system" to ensure adherence to their public mission statements. This would provide "teeth" to ethical claims.
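As a concrete reading of the "no memory" auditing idea above, here is a hedged sketch of an ephemeral audit session: working notes exist only inside the session and are destroyed when the report is emitted, so only aggregate findings, not the sensitive material itself, ever leave. The class and its naive keyword check are hypothetical.

```python
# Hypothetical "no memory" AI auditor: working notes live only inside the
# session and are deleted on close; only aggregate findings leave.
from dataclasses import dataclass, field

@dataclass
class AuditSession:
    commitments: list[str]                           # the company's binding statements
    _notes: list[str] = field(default_factory=list)  # ephemeral working memory

    def inspect(self, record: str) -> None:
        # Stand-in for an LLM judging a record against each commitment;
        # a real system would do far more than keyword matching.
        for c in self.commitments:
            if c.lower() not in record.lower():
                self._notes.append(f"record may violate: {c}")

    def close(self) -> dict:
        # Emit only aggregate findings, then delete the notes ("no memory").
        report = {"violations_flagged": len(self._notes)}
        self._notes.clear()
        return report

session = AuditSession(commitments=["no ad targeting of minors"])
session.inspect("campaign log: broad targeting, no ad targeting of minors")
print(session.close())  # {'violations_flagged': 0}
```

Pairing such a session with the "binding commitments" above is what would give ethical claims their "teeth": the arbiter sees everything, remembers nothing, and publishes only the verdict.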
Alignment and Economic Viability
The speaker connects the technical challenge of AI alignment to economic pressures.
- Economic Pressure for Alignment: He asks what aspects of alignment must be solved to make AI models "more economically viable." For "long horizon agents" (operating for a day or week without intervention), a high degree of certainty that they won't go "completely off the rails" is essential. The speaker is "bullish" that this economic pressure will drive progress in alignment research.
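As a hedged sketch of what "not going completely off the rails" might mean mechanically, here is a long-horizon agent loop with two guardrails: an action allowlist and a confidence threshold that pauses for human review. The names (propose_step, the thresholds) are illustrative assumptions, not an agent API from the talk.

```python
# Illustrative long-horizon agent loop with hard guardrails.
MAX_STEPS = 1000
APPROVED_ACTIONS = {"read", "summarize", "draft"}  # assumed action allowlist

def propose_step(step: int) -> dict:
    """Stand-in for the model proposing its next action."""
    return {"action": "summarize" if step % 2 else "read", "confidence": 0.95}

def run_agent() -> str:
    for step in range(MAX_STEPS):
        proposal = propose_step(step)
        # Guardrail 1: refuse any action outside the approved set.
        if proposal["action"] not in APPROVED_ACTIONS:
            return f"halted at step {step}: unapproved action"
        # Guardrail 2: escalate to a human when the model is unsure.
        if proposal["confidence"] < 0.8:
            return f"paused at step {step}: awaiting human review"
        # ... execute the vetted action here ...
    return "completed"

print(run_agent())
```

The economic point is that every check an operator no longer has to perform by hand is what makes a day- or week-long autonomous run commercially viable.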
Defensibility and Moats in a Post-AGI World
The discussion shifts to how startups can build sustainable competitive advantages in a rapidly changing AI landscape.
- The Evolving Value of Data:
- Historically, custom data sets provided a "massive advantage" for AI.
- Recently, powerful, general LLMs often made custom data training or fine-tuning less effective.
- Open Question: Are there specific industries (e.g., material science, advanced manufacturing like TSMC or ASML) where "tacit knowledge" locked up in proprietary data still offers a defensible moat that frontier LLMs cannot access?
- Capacity Issues as a Temporary Moat: Demand for AI is scaling far faster (roughly 100x in a few years) than GPU production. Technical expertise in areas like fine-tuning, context management, or model routing can therefore provide a "competitive advantage" for the next 1-2 years. This is a "technical moat," but likely a temporary one; a routing sketch follows this list.
- Durable Advantage Post-AGI: The core question becomes: "What makes a durable advantage in a post-AGI world?" If AGI (e.g., "Claude 7 or GPT-7") can replicate a startup with a simple prompt, and "mega corps" have more resources, what is a startup's "moat"?
- Hard Problems as Moats: The speaker suggests focusing on "hard problems" that will remain challenging even with AGI, such as "infrastructure and energy and manufacturing and chips." These require "guts" but can offer a "massive competitive advantage."
- Intelligence Ceiling: The speaker asks if there's an "intelligence ceiling" for various tasks (e.g., image generation, writing a poem, generating code diffs). If a task reaches a "good enough" saturation point, "commoditization for that task is going to hit much sooner," making it harder to maintain an edge. This will likely vary by task and vertical.
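As one concrete shape of the temporary "technical moat" described under capacity issues, here is a toy model router that sends easy requests to a cheap model and hard ones to a frontier model. It also illustrates the intelligence-ceiling point: once the cheap tier saturates a task, everything routes there and the edge erodes. Model names, prices, and the difficulty heuristic are all made up.

```python
# Toy model router; names, prices, and the heuristic are illustrative.
MODELS = {
    "small":    {"cost_per_1k": 0.0005},
    "frontier": {"cost_per_1k": 0.0150},
}

def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in: treat longer prompts as harder, capped at 1.0."""
    return min(len(prompt) / 2000, 1.0)

def route(prompt: str) -> str:
    tier = "frontier" if estimate_difficulty(prompt) > 0.5 else "small"
    # A real router would also weigh latency targets, per-tier failure
    # rates, and per-customer quality floors.
    print(f"routing to {tier} (${MODELS[tier]['cost_per_1k']}/1k tokens)")
    return tier

route("What's 2+2?")  # -> small
route("x" * 1500)     # -> frontier
```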
Societal Implications and AI Neutrality
The talk touches on broader societal concerns arising from AI's pervasive influence.
- Refusals and Corporate Control: If society becomes reliant on AI models, and a "handful of corporations" decide "what is okay and not okay for an AI to do for you," these companies become "arbiters of what gets built."
- The Need for AI Neutrality: The speaker questions whether "AI neutrality" or "token neutrality" is necessary, drawing a parallel to electrical infrastructure, which is neutral and not controlled by a single entity (e.g., GE dictating which toasters can use its grid). He contrasts this with the "lost battle for the web," where neutrality was not fully achieved.
Conclusion: The Last Opportunity to Make a Difference
The speaker concludes with a powerful call to action, emphasizing the unique moment in history.
- Reclaiming the "Change the World" Ethos: He notes that while Silicon Valley's past ambition to "change the world" was sometimes ironic, the current AI revolution presents a genuine, "humanity-defining, society-defining" opportunity. People now viscerally understand the profound implications of introducing a second intelligence that will eventually exceed human intelligence.
- Beyond Making Money: The speaker expresses disappointment when, after explaining AI's profound impact, people immediately ask, "Okay, how do we make money off of this?" He acknowledges this is understandable due to fear of job loss and economic uncertainty, and he's happy to brainstorm career and money-making ideas.
- A Unique Window for Impact: However, he stresses that this might be "the last opportunity that we might have to make a difference to change the world." He urges founders and employees to use this moment to build products that are not just consumable or delightful for 20 seconds, but "good for society," for mental health, and for future generations.
- Founders' Unique Position: Founders are uniquely positioned to "stay at the bleeding edge," understand rapid changes, and "drive positive change" by building "something people want" – interpreted as "what society needs." He hopes they will seize this opportunity to make a lasting impact, while also making money.
Q&A Highlights
- Information Sources: The speaker primarily relies on a "religious[ly]" curated Twitter feed, emphasizing "diversity" and "exploration versus exploitation" in his information diet.
- Startup Idea Selection: For long-term success, "defensibility" against AGI is paramount. While short-term gains are possible without it, building something that "stands the test of time" requires deep consideration of moats. Passion is less critical than commitment and impact-orientation for enduring "100-hour work weeks."
- Value of Money in an AGI World: The value of money will be heavily influenced by "policy decisions" like Universal Basic Income (UBI) or "universal basic compute." He warns that without labor buy-in, "capital begets capital," potentially leading to extreme wealth concentration.
- Individual User Alignment and Trust: Users have underlying values that differ from their immediate preferences. For example, users might choose a "sycophantic" AI response in the moment but prefer an honest AI when asked about their deeper principles. It's crucial to ask users questions in ways that reveal their true values.
- Disagreements with the Tech Crowd: The speaker observes "an extreme amount of groupthink" in the tech industry, despite its self-proclaimed forward-looking nature. He finds that many VCs are "two years behind" in their investment theses, failing to ask whether a company would remain resilient if AGI arrives within two years.
- Blockchain for Trust: While a "blockchain doubter," the speaker acknowledges that blockchain offers "the right set of ideas" for building trust in a future where traditional human guardrails are diminished. This could include AI-powered audits or mediating universal basic income/compute.
- Agents Talking to Agents: The speaker highlights the "game theory component" in agent interactions, even for seemingly simple tasks like scheduling meetings. A good human assistant understands implicit power dynamics and subtle semantic cues, which are challenging for AI agents to replicate.
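A toy sketch of that game-theory component, under the assumption that seniority decides who concedes when preferences don't overlap (the kind of implicit power dynamic a good human assistant reads without being told):

```python
# Toy agent-to-agent scheduling negotiation; all rules are illustrative.
def negotiate(prefs_a: list[str], prefs_b: list[str], a_is_senior: bool) -> str:
    common = [slot for slot in prefs_a if slot in prefs_b]
    if common:
        return common[0]  # easy case: a shared preference exists
    # No overlap: by this (assumed) convention, the junior side concedes.
    return prefs_a[0] if a_is_senior else prefs_b[0]

print(negotiate(["Tue 10:00", "Wed 14:00"], ["Wed 14:00"], a_is_senior=True))
# -> Wed 14:00
print(negotiate(["Tue 10:00"], ["Thu 09:00"], a_is_senior=False))
# -> Thu 09:00
```

Real interactions add the subtle semantic cues the speaker mentions, which is exactly what makes this hard for agents to replicate.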