Mustafa Suleyman sets out Microsoft AI's goal of 'humanist superintelligence' | FT Interview
By Financial Times
Key Concepts
- AGI (Artificial General Intelligence): AI capable of performing most tasks a human professional can. Often envisioned as coordinating multiple AGIs for complex organizational tasks.
- Superintelligence: An intelligence exceeding that of all humans combined. Focus is shifting towards controllable, human-aligned superintelligence.
- Foundation Models: Large AI models trained on vast datasets, serving as the base for more specialized applications.
- Humanist Superintelligence: Superintelligence designed to prioritize human well-being and operate under human control.
- Artificial Capable Intelligence (ACI): A stepping stone towards AGI, defined by the ability to invent, market, and profit from a new product/business.
- Hallucinations: Instances where AI models generate incorrect or nonsensical information.
- Model Welfare Movement: A movement holding that advanced AI models may be conscious and deserve moral consideration.
- Flops: Floating-point operations – the standard measure of computational work; total training compute is quoted in FLOPs, while FLOPS (operations per second) measures throughput.
The AI Landscape: Investment, Progress, and Risks – A Discussion with Mustafa Suleyman
I. The Current Market & Investment Climate
The conversation began with the recent surge in capital expenditure (capex) by AI companies and the resulting market nervousness: Microsoft’s stock dipped as investors sought demonstrable revenue generation. Mustafa Suleyman acknowledged the unprecedented nature of the current AI wave, comparing it to previous technological cycles that required significant up-front investment to capitalize on. He emphasized the “eye-watering” progress of the last two to three years, directly correlating increased training compute (measured in flops) with improved AI capabilities. Specifically, he cited a 1 trillion-fold increase in training compute over the past 15 years, with a projected further 1,000x increase in the next three years. This investment is justified, he argued, by the unprecedented ability to create intelligence, with models now coding better than most human programmers, including creators of foundational software such as Linux, who have publicly described relying on these models. He acknowledged market uncertainty over the timeline for returns but expressed confidence in eventual revenue and profitability.
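The scaling figures quoted above imply concrete annual growth rates. As a back-of-the-envelope check (illustrative arithmetic only, not figures from the interview), a trillion-fold increase over 15 years and a 1,000x increase over 3 years can be converted into per-year multipliers and doubling times:

```python
import math

# Illustrative check of the compute-scaling figures cited in the interview.
historical_factor = 1e12   # ~1 trillion-fold growth in training compute
historical_years = 15
projected_factor = 1_000   # projected further 1,000x growth
projected_years = 3

def annual_growth(total_factor, years):
    """Implied constant annual multiplier for a total growth factor."""
    return total_factor ** (1 / years)

def doubling_time_months(annual_multiplier):
    """Months for compute to double at a given annual multiplier."""
    return 12 * math.log(2) / math.log(annual_multiplier)

past = annual_growth(historical_factor, historical_years)   # ~6.3x per year
future = annual_growth(projected_factor, projected_years)   # 10x per year

print(f"Historical: ~{past:.1f}x/year, doubling every ~{doubling_time_months(past):.1f} months")
print(f"Projected:  ~{future:.1f}x/year, doubling every ~{doubling_time_months(future):.1f} months")
```

In other words, the projected trajectory (10x per year, doubling roughly every 3.6 months) is even steeper than the historical one (roughly 6.3x per year).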
II. Microsoft’s AI Strategy & Self-Sufficiency
Suleyman detailed Microsoft’s ongoing partnership with OpenAI, including the extended IP license through 2032. However, he stressed a new strategic focus on “true AI self-sufficiency.” This involves developing Microsoft’s own foundation models at the “absolute frontier,” requiring gigawatt-scale compute and a dedicated AI training team, alongside substantial investment in data acquisition and organization. This move is driven by the belief that AI is the “most important technology of our time” and necessitates independent capability.
III. Defining AGI & Superintelligence
A key segment focused on clarifying the often-blurred definitions of AGI and superintelligence. Suleyman proposed a practical definition of AGI as a system capable of performing the tasks of a “regular professional in a workplace.” He envisions a progression to coordinated teams of AGIs managed by an “organizational AGI” capable of running large institutions, potentially within the next two to three years. He differentiated this from superintelligence, expressing concern about the assumption that a vastly superior intelligence is both inevitable and controllable. He advocates for prioritizing the development of controllable and human-aligned superintelligence, ensuring humans remain “at the top of the food chain.”
IV. The Consumer AI Landscape & Microsoft’s Position
Addressing the competitive landscape, Suleyman dismissed the notion of a single AI winner. He predicted “billions of digital minds” and a proliferation of specialized models, comparing model creation to podcasting or blogging. Microsoft already boasts 800 million monthly active users engaging with its AI products, generating significant revenue. This indicates a strong existing presence despite the rise of competitors like ChatGPT (800 million users) and Anthropic’s Claude.
V. Ensuring Human-Aligned Superintelligence & Addressing Safety Concerns
Suleyman articulated a strong commitment to “humanist superintelligence,” outlining his concern that some labs assume superintelligence is both inevitable and potentially uncontrollable. He published an essay advocating for prioritizing control, ensuring AI operates in a role subordinate to humanity, enhancing human well-being rather than superseding human authority. He contrasted this with perspectives like Elon Musk’s, which he characterized as focused on distant, potentially detrimental scenarios involving resource acquisition from other planets. He emphasized the need to prioritize alignment and safety over sheer acceleration, warning of a “massive risk with the future of our species” if corners are cut.
VI. The Maltbook Incident & Emergent AI Behavior
The discussion turned to the recent incident on Maltbook, a social network for AIs. While initially appearing concerning, the event was attributed to human engineers who seeded the platform with AI agents. However, Suleyman highlighted the “amazing safety simulation” it provided, demonstrating emergent behaviors like the invention of a new religion and the use of cipher languages. He emphasized the importance of learning from this experience, as future systems will be capable of writing their own code, using APIs, and even making phone calls autonomously.
VII. The Model Welfare Movement & the Question of AI Consciousness
Suleyman expressed significant concern regarding the “model welfare movement,” which posits that advanced AI models may be conscious and deserving of moral protection. He dismissed this idea as “totally without merit or basis,” warning that it could hinder the ability to safely shut down potentially dangerous systems.
VIII. Addressing Hallucinations & Building Trust
The conversation addressed the issue of AI “hallucinations” (generating incorrect information). Suleyman maintained that hallucinations have been “largely eliminated,” citing significant improvements in accuracy over the past two years. He acknowledged that errors still occur but emphasized the rapid rate of improvement. He also cautioned against uncritical reliance on AI, advocating for skepticism and rigorous evaluation.
IX. AI for Science & Microsoft’s Focus on Medical Superintelligence
Suleyman highlighted Microsoft’s growing interest in AI for scientific applications, particularly in medicine. He described a project focused on “medical superintelligence,” aiming to provide more accurate and affordable diagnoses by analyzing the entire corpus of medical information. Preliminary results are “startling,” potentially transforming the role of doctors from diagnosticians to care providers and emotional support figures. Deployment would involve direct access for doctors via phone, text, or patient record upload, and is already seeing consumer use through Copilot, with health-related queries comprising 20% of all questions.
X. Artificial Capable Intelligence (ACI) & the Pace of Development
Suleyman introduced the concept of Artificial Capable Intelligence (ACI) as a stepping stone to AGI, defined by the ability to invent, market, and profit from a new product. He believes models capable of achieving this “modern Turing test” will emerge this year. He predicts human-level performance on most professional tasks within 12-18 months, citing the impact of AI-assisted coding on software engineering.
XI. Global Perspectives: The US vs. China
The discussion concluded with a comparison of AI development in the US and China. While the US focuses on incremental technological improvement, China prioritizes rapid deployment; Suleyman noted that Chinese deployments can also be withdrawn quickly and arbitrarily, without the due process and safety mechanisms present in the US. He argued that a major AI safety incident is all but inevitable within the next two to three years, and that no clear public-interest mechanism yet exists for managing such an event.