Steve Jurvetson on the state of AI... Is it futile to regulate AI??
By This Week in Startups
Key Concepts
- AI Regulation Analogy: Comparing AI regulation to regulating teenagers.
- "Alignment" in AI: The concept of ensuring AI's goals and actions match human intentions or values.
- "Safety" in AI: Preventing AI from causing harm or engaging in dangerous behavior.
- Pre-training Safety Guarantees: The idea of proving AI safety before training begins.
- Post-hoc Regulation: Implementing checks and balances after AI has been trained or has produced outputs.
- Authoritarian Regimes and AI Alignment: The potential for authoritarian governments to align AI with their own ideologies.
AI Regulation as Teenager Regulation
The speaker proposes an analogy for AI regulation: substitute "teenager" for "AI" when discussing regulation. This highlights the difficulty, and potential futility, of certain regulatory approaches. For instance, guaranteeing that a teenager is "aligned" or "does no crime" is not realistic. The most effective approach, similar to parenting, involves monitoring inputs (what questions are asked) and outputs (preventing dangerous results from being shared). This is akin to having a police force that can act after an event occurs, rather than one that prevents the possibility of an event entirely.
Premature and Ineffective Regulatory Proposals
The speaker criticizes proposals like California's initial bill, which would have required proving safety before AI training commences. He deems this "ridiculous," highlighting the impracticality of such upfront guarantees. The analogy of a "speed trap" or "DUI trap" illustrates that while regulations can exist for predictable behaviors (like driving), they are typically reactive rather than purely preventative. The core argument is that attempting to guarantee safety and alignment before development begins is premature and fundamentally flawed.
The Futility of "Alignment" and "Safety" as Currently Defined
The speaker argues that achieving "safety" and "alignment" in the way they are currently described is a "fool's errand."
- Alignment: The concept of aligning AI with specific interests (e.g., "woke," "western liberal ideals," "conservative ideas") is presented as an unachievable goal. Even if it were achievable, the speaker contends that authoritarian regimes would be equally able to align AI with their own cultures and norms, leading to a "dystopian world" where "really bad AIs will be all over the place." The speaker emphasizes that most of the world's population lives under authoritarian regimes, making the prospect of such regimes controlling aligned AI particularly concerning.
Conclusion
The central argument is that current approaches to AI regulation, particularly those focused on pre-training safety and absolute alignment, are misguided and impractical. The speaker suggests that a more realistic approach involves post-hoc monitoring and intervention, similar to how societal issues like crime or drunk driving are managed. Furthermore, the potential for authoritarian regimes to weaponize "aligned" AI poses a significant global risk, making the pursuit of universal, ideologically driven alignment a dangerous endeavor.