Peter Norvig (Google Director of Research) – Modern Approach to AI (Oct 2021)
Chapters
00:00:00 Exploring Centralization vs. Decentralization in AI Development
Introduction: The speaker emphasizes the value of audience input and encourages active participation throughout the session. The event includes two fireside chats, one with Peter Norvig and the other with Mark and Dean from Agoric. The host thanks Agoric for hosting the event and for providing a fantastic space for the gathering.
AI’s Trajectory: Centralized or Decentralized? The speaker invites Peter Norvig to share his thoughts on the potential trajectory of AI progress: will development lead to a centralized, singleton agent with immense power, or to a decentralized autonomous market in which multiple entities engage in a more distributed manner? Inspired by Eric Drexler’s ideas, the question frames the options as a winner-takes-all scenario versus a decentralized economy of specialized services. Peter Norvig begins his response with a toast and thanks the audience for their presence.
00:03:06 AI and Machine Learning: Trends, Challenges, and the Future
Centralization vs. Competition in AI: AI advancements have been largely driven by a few large organizations with access to vast data and computational resources, leading to concerns about centralization. However, the emergence of cloud services and competition among providers may democratize access to AI models and resources.
Data Compartmentalization: Unlike oil, data is not fungible. The value of data is highly context-dependent, making it difficult to generalize models across different domains. This challenges the notion of data as a universal resource and emphasizes the importance of localization and specialization in AI development.
The Challenge of Teaching AI: Norvig criticizes the idea that programming can be learned quickly and easily, arguing that true mastery takes years of dedication and practice. He emphasizes the need for a more rigorous approach to software engineering education and certification, similar to other professions like civil engineering.
Intelligibility of AI Systems: As AI systems become more complex and capable, their internal workings may become increasingly opaque and incomprehensible to humans. This raises concerns about the ability to understand, control, and trust AI systems as they evolve.
AI as a Natural Science: Norvig draws parallels between AI and natural sciences like physics, where observations and theories are used to approximate underlying truths that may never be fully understood. This perspective highlights the challenges of developing formal guarantees and ensuring the safety and reliability of AI systems.
00:09:09 Humans and AI: Similarities and Differences
Human Understanding Limitations: We have limited understanding of our own and others’ thoughts. We rely on models of ourselves and others, which are imperfect.
Social Interaction and Bias: Our reactions to others are influenced by their reactions to us, creating a complex feedback loop. Interacting with agents adept at tricking us may be challenging due to our inherent biases.
Human Biases and Bugs: Humans are susceptible to biases and quirks, such as being influenced by seemingly unrelated numbers. These biases are ingrained and challenging to overcome.
Centralized Media Manipulation: Centralized media companies may exploit human biases and vulnerabilities to influence behavior. This raises concerns about the manipulation of public opinion and individual choices.
Jascha’s Idea of Epistemic Assistants: Jascha proposes the idea of an epistemic assistant, an AI that aids individuals in evaluating evidence and interacting with potentially misleading agents.
Peter Norvig’s Thoughts on Epistemic Assistants: Peter finds the concept intriguing, viewing it as a potential phase change in operating systems, akin to the transition from mainframes to PCs and mobile devices.
Voice Assistants as a Paradigm Shift: Peter acknowledges the popularity of voice assistants like Siri, Alexa, and Google Assistant, despite having initially doubted their appeal because of the perceived loss of traditional input methods.
Control and Manipulation in Voice Assistant Interactions: Peter highlights the current dynamic where third-party apps have complete control over the user’s device interface, allowing them to manipulate the user’s experience.
Potential Shift in User-Assistant Dynamics: Peter suggests that epistemic assistants may disrupt this dynamic by aligning more closely with the user’s interests rather than those of external entities.
Sharon’s Comment on Learning AI and AI as a Natural Science: Sharon raises the issue of the time required to learn AI compared to more traditional fields and suggests viewing AI as a natural science.
00:14:15 Changing AI Interfaces from Keywords to Conversations
Prompt Tuning: Democratizing AI Interaction: Prompt tuning enables individuals with less technical expertise to interact with AI systems effectively. Google’s search engine exemplifies prompt tuning, where users input queries to retrieve desired results.
Challenges in AI User Interfaces: The transition from keyword-based search to conversational AI interfaces poses challenges for users. Users may struggle to comprehend the full range of capabilities and limitations of AI systems. Inconsistencies in AI responses to similar requests can be frustrating and confusing.
Personal Epistemic Systems and AI Alignment: People are increasingly interested in understanding the concept of utility in AI systems. Aggregating and teaching human values to AI systems is challenging due to differing cognitive architectures and evolutionary constraints.
Aggregate Utility and Human Values: There is skepticism about the feasibility of teaching AI systems an aggregate utility function that aligns with human values. Some argue that it may be more practical to establish frameworks that allow for cooperation among diverse value systems, minimizing vulnerabilities while maximizing opportunities for collaboration.
Conclusion: The chapter highlights the evolving landscape of AI interaction, emphasizing the role of prompt tuning in democratizing AI access. It also explores the challenges in designing user-friendly AI interfaces and the complexities of aligning AI systems with human values. The discussion raises important questions about the feasibility of teaching AI systems aggregate utility functions and the potential benefits of alternative approaches that accommodate diverse value systems.
00:18:33 Epistemic Status of Gamma in Reinforcement Learning
Epistemic Status of Gamma in Reinforcement Learning: Gamma, the discount factor, is a parameter in reinforcement learning that controls the relative importance of future rewards compared to immediate rewards. The choice of gamma matters because it affects the learning algorithm’s behavior and the resulting policy. Three reasons are given for discounting future rewards: 1. Money now is more valuable than money in the future (similar to how banks discount future payments). 2. Discounting keeps infinite sums of rewards finite, making reasoning about long-run value tractable. 3. Future outcomes are less certain and should therefore count for less. There is no clear consensus on the right value for gamma; it is often determined empirically. Some futurists argue that the distant future should be heavily discounted due to the potential for massive population growth and technological progress. Peter Norvig expresses skepticism about this view, noting that people tend to care deeply about their children and grandchildren, even when they are many generations removed.
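As a sketch of the mechanics being discussed: the discounted return is G = r_0 + γ·r_1 + γ²·r_2 + …, and with γ < 1 an infinite stream of bounded rewards sums to at most r_max/(1 − γ), which is why discounting prevents infinities. The numbers below are a minimal illustration, not anything computed in the talk.

```python
def discounted_return(rewards, gamma):
    """Sum of gamma**t * r_t over a finite reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

rewards = [1.0] * 10  # a constant reward of 1.0 for 10 steps

# gamma near 1: distant rewards count almost fully.
print(round(discounted_return(rewards, 0.99), 3))  # 9.562
# small gamma: the agent is effectively myopic.
print(round(discounted_return(rewards, 0.5), 3))   # 1.998
# For an infinite stream of reward 1.0 the totals would converge to
# 1 / (1 - gamma): 100 for gamma=0.99, 2 for gamma=0.5.
```

In practice gamma is treated as a tunable hyperparameter, which matches the observation above that its value is usually settled empirically rather than derived from first principles.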
00:20:34 Ethical Decisions in AI: Navigating Conflicting Values and Defining Fairness
Values Can Be Contradictory: Utility values are instrumental in achieving desired outcomes, but people’s values often contradict each other. For example, people want to assign equal value to everyone but also prioritize resources for those they love.
Unpleasant Choices and Personal Justification: In reality, many choices involve unpleasant trade-offs. Ultimately, individuals must justify their own choices and accept the consequences.
Utility in Context: Utility should be considered in the context of the desired rewards and roles. Open negotiations are needed to determine roles and how to achieve them.
AI Ethics as Politics: AI ethics is often influenced by politics rather than expected outcomes. There is a need to focus on the impact on politics and find ways to change this approach.
Fairness in AI Systems: Fairness can be defined in multiple ways, leading to potential conflicts. For example, a recidivism scoring system might be accurate for both white and black individuals but still disadvantage black people due to higher base rates of recidivism. Society needs to determine which definition of fairness to prioritize.
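The tension in the recidivism example can be made concrete with a small calculation. The numbers below are hypothetical (not from the talk): a risk score that is calibrated within each group, meaning P(reoffend | score = s) equals s in both groups, can still produce very different false-positive rates when the groups' base rates differ.

```python
def false_positive_rate(score_dist, threshold=0.5):
    """score_dist: list of (fraction_of_group, calibrated_score) pairs.
    Calibration means a fraction `s` of people with score s reoffend.
    Returns P(flagged | does not reoffend) for the group."""
    # Non-reoffenders who are flagged anyway (score above threshold).
    fp = sum(f * (1 - s) for f, s in score_dist if s > threshold)
    # All non-reoffenders in the group.
    negatives = sum(f * (1 - s) for f, s in score_dist)
    return fp / negatives

group_a = [(0.6, 0.6), (0.4, 0.2)]  # base rate 0.6*0.6 + 0.4*0.2 = 0.44
group_b = [(0.2, 0.6), (0.8, 0.2)]  # base rate 0.2*0.6 + 0.8*0.2 = 0.28

# Same calibrated score, same threshold, yet unequal error rates:
print(round(false_positive_rate(group_a), 3))  # 0.429
print(round(false_positive_rate(group_b), 3))  # 0.111
```

This is the crux of the point attributed to the discussion: calibration and equal false-positive rates are both reasonable definitions of fairness, but with unequal base rates a single score generally cannot satisfy both, so society must choose which to prioritize.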
Probability of Longevity: Some experts believe there is a high probability that individuals will live long enough to see the colonization of millions of planets. Others are less optimistic, feeling they may be too late to experience such advancements.
00:24:07 AI Alignment: Tackling Risk Scenarios and Ensuring Cooperation Among Advanced Intelligences
Incorporating Uncertainty into AI Systems: Structural uncertainty should be explicitly considered in AI systems, especially when faced with incompatible goals. The Compass Fragment on AI highlighted the importance of addressing conflicting goals in high-stakes systems.
Lack of User Interfaces for Multidimensional Value Trade-offs: Currently, there is a lack of effective user interfaces that can surface multidimensional value trade-offs in AI systems. This can lead to accidental incorporation of dangerous choices into AI decision-making processes.
Developing Tools for Expressing Goals and Values: There is a need for better tools and languages to help us define and communicate our goals and values to AI systems. Current programming languages focus on specifying tasks, but we need tools that help us articulate what we want and value.
Human Civilization as a Model for Handling Disagreements: Human civilization has evolved mechanisms, such as institutions and the constitution, to manage disagreements and conflicting values. This model can potentially be applied to AI systems to ensure cooperation and prevent harm.
Concern about Hard Takeoff and Checks and Balances: Some experts worry about a “hard takeoff” scenario, where one AI entity gains a significant advantage over others, ignoring checks and balances. This could lead to the imposition of its own values and goals, potentially posing a risk.
Addressing Risks in the Development of Advanced Intelligences: The concept of “prepotence” in AI technology highlights the importance of managing risks in the development of advanced intelligences. There is a need to consider scenarios where actors developing AI cannot hold each other in check, leading to potential races and conflicts.
00:29:06 Risks of Unstoppable Technological Change
Centralized vs Distributed Technological Risks: The risks of technology are not limited to centralized agencies. Distributed technological changes can also get out of hand and become unstoppable.
Electrical Grid Shutdown: Shutting down the electrical grid is not easy but feasible if there is collective agreement. However, there’s a risk of reaching a state where humans collectively lose the ability to turn off technology that sustains and manages the planet.
Unstoppable AI Transformation: Some believe we are already in a state where AI technology is unstoppably transformative. This means we collectively lack the ability to stop ongoing transformations.
Any Vector of Change Can Be Deadly: Any permanent, sustainable change to the planet can potentially kill all humans. Changing variables like carbon concentration, oxygen availability, temperature, or air pressure can have catastrophic consequences.
Unstoppable Change and DAOs: We may have already created unstoppable change through DAOs, programs running on blockchains, beginning with Bitcoin. Humans have created an incentive structure in which it is in their interest to build systems that run exactly as programmed and, once started, cannot be corrupted by humans.
00:32:14 Future Challenges and Opportunities in Computing
Incorruptibility and Autonomy of Digital Entities: Incorruptibility is crucial for digital entities to serve as coordination points with credibility that is beyond human tampering. These entities operate autonomously and independently in the world, coexisting with humans but not under their control. The autonomy of digital entities is advantageous, providing integrity and resilience in a world where humans are prone to corruption.
Risk of Autonomous Systems: The risk of autonomous systems goes beyond malicious intent. Even well-intentioned systems can lead to negative outcomes due to unforeseen consequences or the inability to turn them off. The common goods problem can also hinder the ability to collectively address societal issues, even if most people agree that they are detrimental.
Societal Structures and Control: Societal structures and common goods problems can lead to situations where autonomous systems continue to operate even if a majority of people disapprove. This highlights the need to consider broader societal implications when designing and implementing autonomous systems.
Importance of Recognizing the Premise: The premise of creating incorruptible arrangements that withstand human corruption is significant and should be recognized. Acknowledging this premise can help us appreciate the advantages of autonomous systems and work toward designing them effectively.
Observations on Planet Earth: An alien observer might perceive Earth as a planet dominated by automobiles, with humans enslaved to serve them. This analogy highlights the extent to which technology, particularly automobiles, has influenced human society. The future may involve a shift from automobiles to computers as the dominant force shaping society.
Advice for Shaping the Future: The approach to shaping a positive future is unique for each individual, given the diverse nature of challenges and opportunities. Contributions can range from technical solutions to political advocacy and sociological initiatives. The key is to find a suitable area to contribute and work towards improving the world.
Abstract
Incorporating AI into Society: Balancing Centralization, Autonomy, and Human Values
In a recent event hosted by Agoric, esteemed speakers, including Peter Norvig and representatives from Agoric, engaged in a profound dialogue on the trajectory of artificial intelligence (AI) and its societal implications. The discussions spanned a wide array of topics from the centralization of AI resources and the evolution of computing into a natural science, to the ethical considerations in AI development and the emergence of autonomous systems like Decentralized Autonomous Organizations (DAOs). This article delves into these nuanced themes, exploring the potential paths AI might take – from a centralized agent wielding significant power to a decentralized market of specialized entities – and the complexities of aligning AI systems with human values and ethics.
Centralization and AI Development:
Peter Norvig’s insights shed light on the current trend towards centralization in AI, akin to developments in particle physics, where a few large entities dominate due to control over extensive data and computational resources. He likens cloud services to a burgeoning industry where competition shifts from cost to the quality of pre-trained models. However, he warns of the winner-take-all dynamics prevalent in various industries, partly fueled by global communication and access to a worldwide audience.
Understanding and Controlling AI:
Norvig touches upon the challenges in comprehending AI systems as they evolve and adopt different cognitive architectures. He points out the lack of certification processes in software engineering, which could lead to disparities in expertise levels. Furthermore, he suggests that computing is increasingly mirroring natural sciences, where absolute ground truths are elusive, and the underlying mathematics might become too complex to fully grasp.
Incorruptibility and Autonomy of Digital Entities:
Digital entities, as discussed by speakers, should possess incorruptibility to serve as trustworthy coordination points beyond human manipulation. These entities operate independently in the world, coexisting with humans but maintaining their autonomy. The autonomous nature of these systems provides integrity and resilience in a world where humans are susceptible to corruption.
Human Cognition and AI Interaction:
The discussion extends to human limitations in understanding ourselves and others, and the susceptibility to irrational influences. AI assistants, seen as a significant shift in operating systems, could serve as epistemic aids, helping individuals navigate interactions and protect against manipulation.
Prompt Tuning and AI Accessibility:
The growth of prompt tuning is highlighted as a means for users with limited technical skills to interact with AI systems, exemplified by Google’s search engine. Yet, challenges persist, including the lack of discoverability in AI systems, inconsistent understanding of user intents, and absence of standardized approaches to prompt tuning.
Societal Implications and Future Outlook:
The discussions also encompass societal risks associated with AI, like the emergence of uncontrollable systems and the potential for a single AI entity to dominate. The speakers advocate for the development of incorruptible autonomous systems and stress the importance of human control and integrity in these arrangements.
Concluding Perspectives:
Peter Norvig humorously conjectures that aliens might perceive automobiles as Earth’s dominant species, highlighting the urgency to address the dominance of such technologies. The event concludes with advice on contributing to the future’s well-being, emphasizing that efforts can be technical, political, or sociological, and finding a suitable area to contribute is key to making a positive impact.
—
This synthesis of ideas from a diverse panel of experts offers a comprehensive overview of the current and future landscape of AI. It underscores the importance of balancing centralization, autonomy, and human values in the evolution and integration of AI into society.