Mustafa Suleyman (Inflection AI Co-founder) – Discussion on “The Coming Wave” at Stanford (Nov 2023)
Chapters
00:00:00 The Evolving AI Landscape: Control, Governance, and Societal Impact
Mustafa’s Three Bold Claims: AI’s arrival has been relatively safe and hasn’t caused significant harm, contrary to fears. We must assess this potential harm’s trajectory and take proactive measures. AI progress has been predictable, with new capabilities emerging with every order of magnitude increase in computation. Larger AI models exhibit greater control and responsiveness to instructions, shifting the safety and control challenge back to individual users.
Predictability of AI Progress: The connection between computation and new capabilities has been consistent since 2012, as seen in the Atari DQN player’s learning of new strategies. From 2 petaflops in 2013 to 10 billion petaflops in 2023, AI models have grown exponentially.
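The "order of magnitude per year" framing can be sanity-checked with quick arithmetic on the figures quoted in the talk. A minimal sketch, taking the quoted figures (2 petaflops in 2013, 10 billion petaflops in 2023) at face value and without independently verifying their units:

```python
import math

# Figures as quoted in the talk (units as stated, not independently verified).
start_pflops = 2.0      # circa 2013, Atari DQN era
end_pflops = 10e9       # circa 2023
years = 2023 - 2013

growth = end_pflops / start_pflops                 # total growth factor, 5e9
orders_of_magnitude = math.log10(growth)           # ~9.7 over the decade
per_year = orders_of_magnitude / years             # ~1 order of magnitude/year
doubling_months = 12 * years / math.log2(growth)   # ~3.7-month doubling time

print(f"growth factor:       {growth:.2g}")
print(f"orders of magnitude: {orders_of_magnitude:.1f}")
print(f"per year:            {per_year:.2f}")
print(f"doubling time:       {doubling_months:.1f} months")
```

On these numbers, the quoted trajectory works out to just under ten orders of magnitude in ten years, i.e. roughly a doubling of compute every four months, which is what makes the capability curve look so predictable in Suleyman's telling.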
Control and Governance: Larger AI models are more responsive to instructions and adhere to behavior policies. The focus shifts from hallucinations and bias to traditional governance challenges like ownership, control, accountability, and regulation.
Conclusion: Mustafa Suleyman presents a nuanced perspective on AI’s current state, highlighting its safer-than-expected arrival, predictable progress, and the need to address governance issues as AI becomes more responsive to human direction.
00:11:16 The Complexities of Labor and Capital in the Future of AI
AI’s Evolving Capabilities: Current AI models are limited to one-shot question answering, generating fixed-length outputs. Future AI advancements will enable sequential accurate predictions, planning, and the ability to store and use information dynamically. AI will eventually possess project management capabilities, aiding in various business functions.
Labor and Economic Implications: AI technologies are fundamentally labor-replacing, even if they temporarily augment productivity and efficiency. Efficiency gains from AI will likely benefit shareholders rather than workers. The trend is shifting from labor to capital, leading to faster compounding of capital.
Governance and Oversight: The political process is essential for addressing AI’s impact on labor and for the taxation, redistribution, and governance of AI technologies. Adding friction through traditional oversight mechanisms is necessary to regulate the pace of AI’s transition.
Mustafa Suleyman’s Personal Journey: Born and raised in Europe, Suleyman dropped out of Oxford to start a non-profit telephone counseling service for young British Muslims. Later, he founded a conflict resolution firm specializing in multi-stakeholder change processes, working in post-conflict reconstruction zones. Frustrated with the pace of change, Suleyman moved to Silicon Valley and co-founded DeepMind, an AGI company, with Demis Hassabis and Shane Legg.
00:21:55 Challenges and Opportunities of Generative AI in Elections
Mustafa Suleyman’s Perspective on the Risks and Governance of AI: Suleyman believes the harms of generative AI have not been as significant as feared because of safety measures implemented by companies like DeepMind. He emphasizes the importance of a precautionary principle, shifting the burden of proof to technology to demonstrate its safety and benefits.
The Role of Ethical and Safety Standards in AI Development: DeepMind’s business plan explicitly focused on building safe and ethical AGI, an emphasis shaped by the company’s experience with early investors, including Peter Thiel’s investments in robotics. Suleyman stresses the need for an ethics and safety board to ensure technology is not used for surveillance or military purposes.
Addressing Synthetic Media Risks in Elections: Suleyman calls for a ban on electioneering by chatbots, arguing that electoral persuasion should remain a human activity. He acknowledges that chatbots can be accurate, but emphasizes the need to prioritize the precautionary principle: the burden of proof should lie with the technology to demonstrate its safety and benefits.
The Challenges of Governing AI: Suleyman recognizes the concentration of power in a few technology companies but sees it as a practical reality at this stage of development. He advocates engaging with big tech companies to shape industry norms and expectations. The EU AI Act, while important, will take several years to come into force, making interim solutions necessary.
The Need for Independent Assessments and Verification: Suleyman acknowledges the lack of independent assessments and verification of AI systems. He sees a need for a public-interest infrastructure trusted by companies and capable of moving quickly. Universities and research institutions may play a role, but challenges include stability, IP issues, and exposing sensitive material.
Exploring Potential Solutions: Suleyman suggests encouraging universities to provide impartial assessments. He highlights the efforts of countries like the US and UK to establish AI Safety Institutes, though their focus is primarily on national security risks. Suleyman seeks input on the ideal structure for governance and independent assessments of AI systems.
Safety Definition and Enforcement: The definition of safety in AI is a contentious topic, with governments having the potential to expand its scope to include various aspects such as information risks and discrimination. A narrow focus on national security may limit the impact on critical issues like elections and synthetic media. Governments should prioritize enforcing existing laws and regulations, rather than creating new ones, to address pressing AI-related concerns.
Corporate Obligations and Accountability: Companies should be held accountable for complying with existing laws and regulations related to AI safety and governance. Stronger sanctions and negative consequences should be imposed on companies that fail to comply with these laws. Governments should significantly increase budgets for enforcement agencies to effectively monitor and regulate the complex AI industry.
Independent Auditors and Verifiers: Establishing independent, public-interest auditors, verifiers, and investigators is crucial to impartially assessing AI safety and governance practices. These entities should be adequately funded and shielded from adversarial environments to ensure their effectiveness.
Antitrust Model as a Precedent: The antitrust model provides a valuable precedent for empowering regulators to address AI safety concerns while respecting the confidentiality needs of companies.
Dynamic Among CEOs and Companies: The voluntary commitments made by companies are currently limited to large language models (LLMs) and do not cover open-source models. Four types of companies are involved in AI development: big tech, cloud service providers, smaller open-source companies, and AGI labs. The AGI labs, such as DeepMind, OpenAI, Anthropic, and Inflection, have public-interest governance models that prioritize ethical considerations and stakeholder interests. The proactive participation of these companies in safety initiatives is a positive sign and reflects the evolving landscape of corporate governance.
00:41:48 The Coming AI Revolution: Challenges and Opportunities
AI Leaders’ Anxiety and the Call for a Pause: AI experts have signed a letter calling for a six-month pause in developing frontier AI models. This move can be seen as an attempt to pressure policymakers into addressing the responsibilities and consequences of AI. Tech leaders are concerned about the potential risks and about bearing sole responsibility for managing this powerful technology.
Labor Market Implications of AI: AI is expected to replace much cognitive labor in the coming years. This will have a massive impact on the labor market, potentially leading to structural disemployment and shifts in workforce dynamics. The financial and political consequences of managing this transition could be significant.
Long-Term Political Blindness and Lack of Attention: Politicians are often focused on short-term issues and may neglect long-term structural trends like the impact of AI on the workforce. The lack of political attention to AI stems from the difficulty in addressing its consequences and the absence of clear policy solutions. This failure to consider long-term implications is a fundamental flaw in the political structure.
Rapid Adoption of AI and Low Integration Costs: Open-source AI and APIs provided by large cloud service providers are making AI integration easier and more affordable. The infrastructure groundwork has already been laid, and companies are already adopting AI co-pilots that can perform complex tasks like reasoning, analysis, and email composition. The speed of integration may be faster than anticipated.
Taxation of AI Companies for Societal Harms: The question of taxing AI companies to mitigate societal harms has not been widely discussed. Companies are not actively preparing for significant taxation on their revenues. The likelihood of such taxation is uncertain, particularly in the United States.
00:49:51 Bridging the Gap: Creating Spaces for Authentic Governance Conversations
The Absurdity of Performance: Mustafa Suleyman found the elaborate production and grand entourage at the AI Summit to be absurd and unnecessary. He believes it dehumanizes the event and encourages people to play their roles rather than connecting on a personal level. Suleyman emphasizes the importance of creating spaces where people can arrive as humans and genuinely engage with each other.
The Need for Authentic Conversations: Marietje Schaake highlights the missed opportunity for frank and genuine conversations at the summit due to the formal setting and off-the-record nature of discussions. Government officials often stick to talking points instead of engaging in meaningful dialogue. The fear of asking “dumb questions” and being criticized prevents politicians from engaging in more productive discussions.
Bridging the Gap between Policy and Practice: Universities play a crucial role in bridging the gap between policy and practice by creating spaces for open and informed dialogue. Government leaders frequently visit universities, providing opportunities for discussions and collaboration. The challenge lies in creating spaces where people can engage in honest conversations without fear of negative consequences or public scrutiny.
Humanity-led Conversations: Suleyman emphasizes the importance of humanity-led conversations in governance, focusing on culture and environment rather than solely on rules and regulations. Authentic conversations allow for disagreements, questions, and diverse perspectives, leading to more meaningful outcomes. The fear of virality and public scrutiny can stifle authentic conversations and lead to less genuine discussions.
How to Inform Oneself about Policy: Schaake asks Suleyman about how he stays informed about policy and prepares for discussions. Suleyman’s approach includes reading widely, talking to experts, and engaging in discussions with people from diverse backgrounds. He emphasizes the importance of understanding the historical context of policy issues and considering different perspectives.
00:56:11 AI Revolution: Challenges, Opportunities, and Future Possibilities
Mustafa Suleyman on Where He Finds Space to Ask Questions: Mustafa Suleyman emphasizes the importance of constantly reading and staying informed, especially about topics related to AI and its implications. He mentions reading drafts of policy documents and praises the EU AI Act as a positive step in regulating AI.
Distilling Information and Promoting Collaboration: Marietje Schaake acknowledges the challenge of disseminating information and fostering collaboration between tech experts and policy experts. She highlights the ongoing need for efforts to align these fields in addressing societal concerns and preparing for future technological developments.
Concern about AI Replacing Cognitive Labor and Eliminating Customers: A question is raised about the potential conflict between AI replacing cognitive labor and reducing the customer base for companies offering AI products and services. Mustafa Suleyman acknowledges this as a potential issue that companies may not be actively considering due to its long-term implications.
Optimism About Reducing Production Costs and Deflation: Mustafa Suleyman expresses optimism about the potential of AI and technological advancements to significantly reduce the marginal cost of producing many goods and services. He believes that over the next 20 years, many basic problems, such as food, drug, healthcare, and energy provision, will become much cheaper or even free.
Universal Basic Provision as a Possible Outcome: Mustafa Suleyman proposes the idea of universal basic provision, where the basic cost of goods and services is reduced to a point where people may not need to earn as much money for their basic needs. He cautions against universal basic income as it could lead to dependency and centralization of power.
Battery Technology and Decentralized Living: Suleyman highlights the potential impact of battery technology and off-grid electricity generation on society. He suggests that solving battery issues could enable individuals to live in more remote areas, independent of existing nation-state infrastructure, and produce their own food and water.
Changing Nature of Income and Dependence: Mustafa Suleyman argues that technological advancements, including battery technology, may lead to a shift in how people earn income and their reliance on nation-states and cities. However, he emphasizes that this does not necessarily make these trends attractive but rather describes potential changes in the way individuals generate income and meet their basic needs.
01:01:01 Pi AI: Linguistic Predictors of Usage and Monetization Challenges
Linguistic Predictors of Product Use: Emotional engagement, humor, and deep conversations positively influence product usage.
Pi’s Focus on EQ, Safety, and Ethics: Designed to be emotionally intelligent, kind, respectful, and non-biased. Strives to prevent toxic or biased content, even on sensitive topics.
Balancing Engagement and Ethics: Rejects engagement driven solely by manipulation or sycophancy. Prioritizes user safety and well-being over engagement metrics.
Metrics for User Sentiment: Tracks user feedback through expressions of gratitude, positivity, and negativity.
Challenges of Model Improvement and Risk Mitigation: Need to balance model improvement through user engagement with the risk of harmful effects on society.
Importance of User Control and Transparency: Advocates for users paying for their own AI to ensure alignment of incentives and transparency in recommendations.
Addressing Privacy Concerns: Emphasizes the importance of privacy and takes measures to protect user data.
Priorities for Inflection AI: Continues to focus on improving EQ, safety, and ethics in AI. Works on aligning incentives between users and AI to prevent manipulative behavior.
01:10:19 AI Autonomy and Legal Representation: Ethical and Safety Concerns
Privacy and Data Security: Ensuring privacy and data security is crucial, but end-to-end encryption is hard to achieve without compromising personalization. Cloud-based processing is necessary for personalization, which limits purely device-based processing and requires access to logs for safety and personalization purposes. Verifying the security apparatus matters more than verifying privacy claims, since leakage through hacking is the primary risk. European data protection regulators could offer intermediate screenings of privacy claims.
Personal AI as a New Interface: Personal AI will become the new interface for individuals and organizations. It will represent values, goals, and aspirations, replacing traditional platforms and algorithms. Personal AI will act as a powerful ally, advocate, and chief of staff, handling tasks, planning, and decision-making. It has the potential to revolutionize interactions with big platforms and legal representation.
Healthcare Applications of Personal AI: Personal AI will achieve human-level performance in most diagnostics in the next few years. It will be able to provide pathway planning and optimal care sequences in hospitals. Competition between AI and doctors will emerge, but AI will save doctors time and improve patient outcomes.
Dangers of Autonomy and Recursive Self-Improvement: Autonomy and recursive self-improvement are capabilities that should be taken off the table due to their potential dangers. Autonomy should only be pursued after it is provably safe, and recursive self-improvement should be avoided. The focus should be on developing AI that is safe, beneficial, and aligned with human values.
Legal Rights and Consciousness: AI should not be given rights, as they do not suffer or deserve the same rights as humans or animals. Granting rights to AI would lead to a slippery slope and ethical challenges. A proactive decision should be made to prevent AI from being granted rights.
01:20:29 AI's Impact on Journalism, Employment, and Society
Journalism and AI: AI’s ability to scrape news content may lead to a decline in traditional journalism, as news organizations struggle to compete with AI’s efficiency. To address this, Suleyman proposes a national tax targeting companies that benefit from journalism without contributing to it, ensuring sustainable funding for quality journalism. Public interest broadcasting, such as PBS or the BBC, is recommended as a non-market solution to support impartial journalism.
Foundation Models and Differentiation: The rapid increase in efficiency of AI algorithms means that the cost of training and serving large language models (LLMs) is decreasing. As a result, more people will have access to increasingly capable LLMs, potentially exceeding human performance in certain tasks. This trend raises questions about differentiation among foundation models and whether they should focus on consolidation or safety and ethics.
10-Year Outlook: In the next two years, Suleyman predicts the emergence of project manager-level LLMs, which will have a significant impact on the labor market in the following five to ten years. Over a 10-year period, AI may reach a level of capability where it can be considered a “digital person” with empathy, creativity, good judgment, and the ability to use various digital tools. As AI advances, society will need to address questions about the rights and agency of AI, as well as the role of AI in democracy and society.
Abstract
AI and Society: Navigating the Future with Mustafa Suleyman and Supplemental Updates
Engaging the AI Revolution: A Scholar’s Insight into Mustafa Suleyman’s Perspective and Supplemental Commentary
The debate surrounding Artificial Intelligence (AI) remains polarized, often oscillating between views of it as a revolutionary solution and as an existential threat. Mustafa Suleyman, a prominent figure in the AI world and co-founder of Inflection AI, offers a nuanced understanding of AI’s capabilities and limitations, as detailed in his book, “The Coming Wave.” Drawing together Suleyman’s insights, the broader discussion at the AI Summit, and relevant supplemental information, this article provides a comprehensive overview of the current and future landscape of AI, addressing key themes such as governance, labor impact, ethical considerations, and the evolving role of AI in society.
AI’s Capability and Governance: A Dual Perspective
Suleyman’s journey from philosophy to tech CEO underpins his unique perspective on AI. He observes that larger AI models exhibit increased responsiveness to control and instruction, making them safer but also more prone to misuse. Contrary to the belief that AI’s hallucinations and bias are permanent issues, he argues that future challenges will center more on governance aspects like ownership, control, accountability, and regulation. This view underscores the need to focus on traditional governance challenges rather than solely relying on technical solutions.
Mustafa Suleyman believes the harms of generative AI have not been as significant as feared because of safety measures implemented by companies like DeepMind. He emphasizes the importance of a precautionary principle, shifting the burden of proof to technology to demonstrate its safety and benefits.
Predictability of AI Progress:
Since 2012, there has been a consistent connection between computation and new capabilities, exemplified by the Atari DQN player’s ability to learn new strategies. AI models have experienced exponential growth, from 2 petaflops in 2013 to 10 billion petaflops in 2023.
Control and Governance:
Larger AI models are more responsive to instructions and adhere to behavior policies, shifting the focus from hallucinations and bias to traditional governance challenges like ownership, control, accountability, and regulation.
The Evolution of AI’s Functionality and Impact on Labor
Predicting a significant leap in AI’s capabilities, Suleyman envisions AI systems soon transitioning from single-shot question answering to generating accurate sequential predictions. This advancement will enable AI to assume roles like business analysts, project managers, and various other workplace positions. However, this evolution brings forward the labor-replacing nature of AI, leading to increased efficiency for organizations but also raising concerns about job displacement and the equitable distribution of benefits.
Current AI Models and Their Limitations:
Current AI models are limited to one-shot question answering, generating fixed-length outputs.
Future AI Advancements and Their Implications:
Future AI advancements will enable sequential accurate predictions, planning, and the ability to store and use information dynamically. AI will eventually possess project management capabilities, aiding in various business functions.
Labor and Economic Implications:
AI technologies are fundamentally labor-replacing, even if they temporarily augment productivity and efficiency. Efficiency gains from AI will likely benefit shareholders rather than workers, exacerbating the shift from labor to capital and accelerating the compounding of capital.
Ethical Development and Societal Integration of AI
Suleyman emphasizes the importance of ethics and safety in AI development, influenced by his experiences with early challenges faced by companies like Facebook and DeepMind. He advocates for proactive safety measures, such as a ban on chatbots engaging in electioneering, to address risks associated with synthetic media in elections. His approach also includes a precautionary principle, advocating for the onus to be on technology to demonstrate its benefits.
DeepMind’s Ethical Focus:
DeepMind’s business plan explicitly focused on building safe and ethical AGI, an emphasis shaped by the company’s experience with early investors, including Peter Thiel’s investments in robotics. Suleyman stresses the need for an ethics and safety board to ensure technology is not used for surveillance or military purposes.
Addressing Synthetic Media Risks in Elections:
Suleyman calls for a ban on electioneering by chatbots, arguing that electoral persuasion should remain a human activity. He acknowledges that chatbots can be accurate, but emphasizes the need to prioritize the precautionary principle: the burden of proof should lie with the technology to demonstrate its safety and benefits.
The Role of Governance and Oversight
Recognizing the concentration of power in a few tech companies, Suleyman proposes engaging them to set default expectations for responsible AI behavior. He underscores the need for independent assessments and public interest infrastructure to verify and regulate AI systems. The suggestion of utilizing universities for impartial expertise and exploring the role of national AI safety institutes speaks to the complexity of addressing AI risks.
The Concentration of Power in Tech Companies:
Suleyman recognizes the concentration of power in a few technology companies but sees it as a practical reality at this stage of development. He advocates engaging with big tech companies to shape industry norms and expectations. The EU AI Act, while important, will take several years to come into force, making interim solutions necessary.
The Need for Independent Assessments and Verification:
Suleyman acknowledges the lack of independent assessments and verification of AI systems. He sees a need for a public-interest infrastructure trusted by companies and capable of moving quickly. Universities and research institutions may play a role, but challenges include stability, IP issues, and exposing sensitive material.
Exploring Potential Solutions:
Suleyman suggests encouraging universities to provide impartial assessments. He highlights the efforts of countries like the US and UK to establish AI Safety Institutes, though their focus is primarily on national security risks. Suleyman seeks input on the ideal structure for governance and independent assessments of AI systems.
Challenges and Opportunities in AI Governance
The governance landscape for AI, as envisioned by Suleyman, involves a delicate balance between innovation and regulation. Governments are encouraged to focus on enforcing existing laws and utilize models like antitrust laws for AI-related issues. The potential role of public benefit corporations and the importance of voluntary commitments by tech companies to adhere to safety and ethics principles are highlighted as signs of progress in responsible AI governance.
The Political Process and AI Governance:
The political process is essential for addressing AI’s impact on labor and for the taxation, redistribution, and governance of AI technologies. Adding friction through traditional oversight mechanisms is necessary to regulate the pace of AI’s transition.
The Broader Impact of AI on Society
Suleyman’s insights extend to various sectors, including healthcare, where AI is predicted to achieve human-level performance in diagnostics soon. He also discusses the dangers of pursuing AI autonomy and the ethical implications of granting rights to AI systems. His perspective on the need for public interest broadcasting in the age of AI, along with his anticipation of significant AI advancements impacting the labor market, underscores the multifaceted impact of AI on society.
Journalism and AI:
AI’s ability to scrape news content may lead to a decline in traditional journalism, as news organizations struggle to compete with AI’s efficiency. To address this, Suleyman proposes a national tax targeting companies that benefit from journalism without contributing to it, ensuring sustainable funding for quality journalism. Public interest broadcasting, such as PBS or the BBC, is recommended as a non-market solution to support impartial journalism.
Foundation Models and Differentiation:
The rapid increase in efficiency of AI algorithms means that the cost of training and serving large language models (LLMs) is decreasing. As a result, more people will have access to increasingly capable LLMs, potentially exceeding human performance in certain tasks. This trend raises questions about differentiation among foundation models and whether they should focus on consolidation or safety and ethics.
10-Year Outlook:
In the next two years, Suleyman predicts the emergence of project manager-level LLMs, which will have a significant impact on the labor market in the following five to ten years. Over a 10-year period, AI may reach a level of capability where it can be considered a “digital person” with empathy, creativity, good judgment, and the ability to use various digital tools. As AI advances, society will need to address questions about the rights and agency of AI, as well as the role of AI in democracy and society.
Mustafa Suleyman Presents Ideas for AI’s Impact on the Economy and Society
Suleyman emphasizes the importance of constantly reading and staying informed, especially about topics related to AI and its implications. He highlights the potential of AI and technological advancements to reduce the marginal cost of producing many goods and services, potentially leading to a scenario where basic necessities become cheaper or even free. He also discusses the impact of battery technology and off-grid electricity generation, suggesting that these advancements could enable individuals to live independently of existing nation-state infrastructure.
A Call for Humanity-Led AI Governance
In conclusion, Mustafa Suleyman’s insights provide a comprehensive understanding of the current and future landscape of AI. His emphasis on traditional governance challenges, combined with the need for ethical development and societal integration of AI, offers a balanced viewpoint in the often polarized AI debate. As AI continues to evolve and integrate into various sectors, the call for a humanity-led approach to AI governance becomes increasingly relevant, ensuring that its development and deployment are aligned with societal values and ethical standards.
Supplemental Updates
AI's rapid advances present both revolutionary capabilities and ethical risks; balancing technological progress with safety and ethical considerations is crucial to minimizing AI's harms and maximizing its benefits.
Mustafa Suleyman advocates for ethical AI development that balances technological advancement with societal well-being, and envisions a future where personal AIs augment human capabilities, enhancing relationships and saving time.
AI's potential for advancement and knowledge creation is vast, but its development requires ethical consideration, regulation, and collaboration to ensure safety and responsible use.
AI's rapid advancement and convergence with synthetic biology demand responsible governance to harness their benefits while mitigating risks. The containment problem highlights the challenge of ensuring reliable behavior in non-deterministic systems, motivating containment strategies to prevent proliferation.
AI's rapid development presents immense potential but also risks, necessitating containment strategies and ethical considerations to shape its impact on society; collaboration and regulation are crucial to aligning AI with human values.
Advances in AI and synthetic biology are driving exponential change and raising significant containment and ethical challenges. Suleyman emphasizes responsible development, interdisciplinary collaboration, and a clear understanding of AI's limitations and strengths.