Peter Norvig (Google Director of Research) – AI (Oct 2021)
Chapters
00:00:00 The Long and Unfinished History of AI Textbooks
AI’s Changes: AI’s focus shifted from logic to probability. Hand-coded knowledge transitioned to machine learning. Expert systems that tried to duplicate human thinking gave way to normative systems that optimize for the best answer.
Norvig’s Motivation for Writing the Textbook: Dissatisfaction with available textbooks in the early 1990s. AI was rapidly changing, necessitating a new textbook. Collaboration with Stuart Russell to write the book.
Key Ideas in the Textbook: Ways of representing the world and reasoning. Reasoning with uncertainty. Machine learning. Interacting with the environment.
Differences in the Fourth Edition: Increased emphasis on deep learning. Focus on maximizing expected utility. Importance of deciding what to optimize. Ethics, fairness, privacy, diversity, equity, inclusion, and lethal autonomous weapons are now central topics.
Norvig’s View on AI and Software Engineering: Both aim to do the right thing and make working programs. Software engineering’s main enemy is complexity. AI’s main enemy is uncertainty.
Norvig’s Satisfaction with the Changes: He is pleased with the increased focus on ethics and fairness in the fourth edition. He believes that these issues are more substantial and relevant to AI than philosophical arguments like Searle’s Chinese room.
00:07:38 Evolving AI Education: Adapting to a Broader Audience and Project-Based Learning
Audience Changes: Previously, AI courses were electives taken by interested students. Now, AI courses are required for all computer science majors and many STEM majors.
Changes in Teaching Approach: Authors wanted to maintain mathematical rigor but make the material more accessible to a broader range of students. Focus shifted from teaching specific algorithms to exploring algorithms and applying them to real-world data.
Project-Based Learning: Instead of implementing algorithms from scratch, students now explore and apply existing algorithms to real-world data. This approach emphasizes understanding the strengths and weaknesses of different algorithms and how they perform in different situations.
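The comparison exercise described above can be sketched in a few lines. This is a minimal, dependency-free illustration with an invented toy dataset: two existing algorithms (a 1-nearest-neighbour classifier and a majority-class baseline) are evaluated on the same data so their different strengths become visible, which is the point of the project-based approach.

```python
# Compare two off-the-shelf-style classifiers on the same tiny dataset
# using leave-one-out evaluation. The dataset is invented for illustration.
from collections import Counter
import math

# (feature1, feature2) -> class label; two loose clusters.
data = [((1.0, 1.2), "a"), ((0.8, 1.0), "a"), ((1.1, 0.9), "a"),
        ((3.0, 3.2), "b"), ((2.9, 3.1), "b"), ((3.2, 2.8), "b")]

def nearest_neighbour(train, point):
    """Predict the label of the closest training point."""
    return min(train, key=lambda ex: math.dist(ex[0], point))[1]

def majority(train, point):
    """Ignore the point; always predict the most common training label."""
    return Counter(label for _, label in train).most_common(1)[0][0]

def loo_accuracy(classifier):
    """Leave-one-out accuracy: train on all but one point, test on it."""
    hits = 0
    for i, (point, label) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += classifier(train, point) == label
    return hits / len(data)

for clf in (nearest_neighbour, majority):
    print(f"{clf.__name__}: {loo_accuracy(clf):.2f}")
```

On clustered data the nearest-neighbour rule excels while the baseline fails badly; swapping in a noisier dataset reverses the gap, which is exactly the kind of strengths-and-weaknesses comparison students are asked to make.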
00:09:41 Exploring Artificial Intelligence and Its Practical Applications
Speaker 01’s Introduction: Speaker 01 introduces the concept of AI’s ability to remove dog barking from audio streams. Jazir interjects with a question about defining artificial intelligence differently, focusing on emerging complexity rather than intentionality. Peter Norvig responds that this definition may be useful for understanding AI from an external perspective but not for practical problem-solving.
John’s Perspective on Diversity and Inclusion: John resonates with Peter’s point about moving from algorithms to what to optimize, particularly in the context of diversity and inclusion at MIT. He highlights various dimensions of diversity, including cognitive, intellectual, and other extended phenotype dimensions.
Peter’s Three-Part Approach to Diversity: Peter acknowledges John’s point and presents a three-part approach to diversity: pipeline, hiring process, and retention. He emphasizes the importance of addressing all three parts and mentions Google’s ability to focus on the entire pipeline due to its size.
Importance of Different Backgrounds: Peter shares an anecdote about a reading group in machine learning where a physicist’s different notation for matrices simplified a complex theorem. He emphasizes the value of having people from diverse backgrounds to solve problems more effectively.
John’s View on College Admissions: John criticizes the current college admissions process for considering candidates one at a time rather than the entire pool. He suggests using AI to analyze the entire pool and make more holistic decisions.
00:15:52 AI Experts Discuss Ethical Considerations and Challenges in the Field
Diversity in AI: Optimizing for diversity involves creating groups and comparing applicants within those groups, rather than attempting to give individual applicants a score from one to a hundred.
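The group-wise comparison described above can be sketched as follows. All names, group labels, and scores here are hypothetical; the point is only the mechanism of comparing applicants within their group rather than assigning everyone a single global score.

```python
# Sketch of within-group selection: partition applicants into groups and
# pick the strongest candidates from each group separately, instead of
# ranking the whole pool on one 1-100 scale. Data is invented.
from collections import defaultdict

applicants = [
    ("Avery", "group1", 0.82), ("Blake", "group1", 0.75),
    ("Casey", "group2", 0.68), ("Drew", "group2", 0.71),
    ("Eli",   "group3", 0.60),
]

def select_within_groups(pool, per_group=1):
    """Pick the top `per_group` applicants from each group separately."""
    by_group = defaultdict(list)
    for name, group, score in pool:
        by_group[group].append((score, name))
    chosen = []
    for group, members in sorted(by_group.items()):
        members.sort(reverse=True)
        chosen.extend(name for _, name in members[:per_group])
    return chosen

print(select_within_groups(applicants))
```

Note that Eli is selected despite having the lowest global score: a single pooled ranking would have dropped that group entirely, which is the contrast the discussion draws.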
Filter Bubbles: Filter bubbles are a concern, but Google’s analysis indicates that most people do not experience significant effects from them. Online research can provide a wider variety of views than traditional news media. Social media platforms like YouTube, Facebook, and Instagram have a greater potential for creating filter bubbles, which is a significant issue. Companies face criticism regardless of their content moderation decisions.
Google’s Approach to Questionable Content: Google can rank questionable content lower in search results, allowing users to find it if they actively seek it but not promoting it prominently. Social media platforms face tougher choices due to the nature of their user interactions.
Peter Norvig’s Views on AI Safety: Norvig considers AI safety a crucial field requiring caution and careful consideration of potential risks, but his perspective differs from Stuart Russell’s focus on the dangers of increasingly capable AI systems.
00:20:42 Potential Risks and Research for AI Safety
Unintended Consequences: Peter Norvig expresses more concern about unintentional effects rather than robots taking over the world. He emphasizes the need to address AI safety now, learning from historical examples like the internal combustion engine, where unintended side effects were not adequately considered.
Surveillance and Totalitarian Governments: Norvig raises concerns about surveillance and totalitarian governments utilizing AI to impose strictness on citizens more cheaply and effectively than traditional methods.
Lethal Autonomous Weapons: Norvig is more worried about the lethal aspect of autonomous weapons than their autonomous nature. He points out that to a peasant in Pakistan being targeted by a missile, what matters is the missile itself, not who ordered it or how it was operated.
Research and Mitigation: Allison asks if there are specific research areas in AI that can help combat the risks mentioned, particularly surveillance and totalitarian control. Norvig does not provide a direct response to this question in the provided transcript.
Technical vs. Social Solutions for Deep Fakes: Deep fakes pose significant challenges, requiring both technical and social solutions. Ongoing battles involve research on creating and detecting deep fakes. Regulation, including laws, self-regulation, and third-party certification, will likely play a role.
Definition of AI: A Social Construct: The definition of AI remains a debated topic. Peter Norvig emphasizes that fields are defined by social constructs and communities of people, rather than strict definitions. The true nature of AI is determined by the activities and interactions within these communities.
Decentralization in AI: Decentralization involves distributing technology to individuals and communities for local control. Federated learning is a key approach to achieving decentralization in AI. Sharing model parameters without sharing data enables individuals to improve their models while maintaining privacy. Decentralization can help address concerns about data privacy and competition with large companies.
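The federated-learning idea above, sharing model parameters without sharing data, can be illustrated with a toy sketch. Here the "model" is deliberately trivial (a single parameter, the mean of each client's data) so that the averaging mechanics are visible; this is an assumption-laden simplification, not a real federated-learning implementation.

```python
# Toy federated averaging: each client trains on its own private data,
# and only model parameters (never the raw data) are shared and combined.
def local_update(private_data):
    """Train locally: here, just compute the mean of the client's data."""
    return sum(private_data) / len(private_data)

def federated_average(client_params, client_sizes):
    """Combine parameters, weighting each client by its dataset size."""
    total = sum(client_sizes)
    return sum(p * n for p, n in zip(client_params, client_sizes)) / total

# Each client's raw data stays "on-device"; only `params` travel.
clients = [[1.0, 2.0, 3.0], [10.0, 20.0], [5.0]]
params = [local_update(d) for d in clients]
sizes = [len(d) for d in clients]
print(federated_average(params, sizes))
```

For this trivial model the size-weighted average of local means equals the mean over all the data, so the server recovers the global statistic without ever seeing an individual data point, which is the privacy property the discussion highlights.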
Future of Cognition: Incomprehensible Mind Architectures: The future may involve artificial intelligence with incomprehensible mind architectures. To prepare for this, we should focus on setting up a world that remains good even when most cognition is non-human and potentially incomprehensible to us.
00:32:11 Cognitive Systems: From Incomprehensibility to Trustworthiness
Cooperative Framework and Systemic Boundaries: Envisioning a neutral framework for cooperation among mind architectures, ensuring incomprehensibility doesn’t hinder collaboration.
Unpredictability and Human Coexistence: Drawing parallels to current incomprehensible entities like corporations and governments, highlighting our limited understanding of them yet still managing to coexist. Anticipating a similar scenario with superhuman cognition, where we may not fully comprehend their actions but can still make sense of them and coexist effectively.
Importance of Slowing Down: Emphasizing the need to slow down certain processes to minimize incomprehensibility, using the example of flash crashes in the stock market and proposing regulations to prevent high-speed trading.
Historical Inspiration and Current Challenges: Acknowledging the success of the US Constitution in providing a cooperative framework despite its limitations. Recognizing the deeper incomprehensibility barrier posed by future superhuman cognitions and the need to understand their differences compared to current human organizations.
Computer Science as a Natural Science: Comparing computer science to a natural science, where observation and hypothesis formation are crucial due to the complexity of modern systems. Emphasizing the difficulty in proving the correctness of complex programs and the need to adopt a naturalist approach to understanding their behavior.
Balancing Change and Stability: Acknowledging the need to allow some change while preventing too much change too fast. Discussing the challenge of finding the right balance and implementing guidelines to prevent impulsive actions that may have long-term consequences.
Architectures for Future Cognition: Proposing the concept of trustworthiness as a goal for building future cognitive systems. Criticizing the term “explainable AI” as insufficient, arguing for systems that are not merely explainable but genuinely trustworthy.
00:43:30 Trustworthiness and Human-AI Parallels in Agent Alignment
Trustworthiness and Explanations: We often rely on our intuition to reach conclusions and then justify them with explanations that may not accurately reflect our reasoning. Trustworthiness involves having conversations with AI systems, posing scenarios, and seeking explanations for their actions and decisions. Explanations provided by AI systems may not be entirely truthful as they simplify complex processes. The goal is to provide explanations that are truthful, even if not complete, and to have mechanisms for verifying their trustworthiness.
Human and AI Agent Alignment: AI approaches often emphasize rational action, taking the best possible action in a given situation, which applies to both humans and AI agents. Parallels can be drawn between human and AI agent alignment to learn from human actions and apply them to AI systems. For example, in contracting, humans often rely on social context to fill in gaps in agreements. The challenge is to determine how to apply this concept to artificial agents.
00:46:41 Philosophical and Practical Utility Perspectives in AI
AI’s Optimization Goals: AI researchers focus on optimizing decisions rather than reasoning processes. Different perspectives exist, with some emphasizing valid reasoning and others focusing on outcomes.
Societal Paradigms for Evaluating Actions: Society often evaluates actions based on outcomes, but sometimes considers intent as well. Examples include murder vs. attempted murder and drunk driving penalties.
Utility Maximization and Its Criticisms: AI commonly aims to maximize expected utility, but this approach has faced criticism. Kantian approaches attempt to address these criticisms by considering moral principles.
Externalities and the Broader View: Criticisms of utility maximization often ignore externalities and the broader societal impact of actions. Taking externalities into account can lead to different conclusions regarding the best course of action.
Rule Utilitarianism and Evolutionary Considerations: Rule utilitarianism seeks to establish rules that maximize overall utility. The relationship between rule utilitarianism and evolutionary processes is complex and raises questions about the origins of rules.
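The decision rule at the center of this discussion, maximizing expected utility, can be written down in a few lines. The actions, probabilities, and utilities below are invented for illustration; the sketch shows only the mechanics of weighting each outcome's utility by its probability and choosing the best action.

```python
# Minimal expected-utility maximization over a hypothetical action set.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "safe":  [(1.0, 5.0)],                 # certain modest payoff
    "risky": [(0.5, 12.0), (0.5, -4.0)],   # coin flip: big win or loss
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))
```

The criticisms summarized above enter exactly here: the utilities assigned to outcomes typically omit externalities, and once those broader costs are folded into the numbers, the action this rule selects can change.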
AI’s Impact on Labor: Ravi’s question focuses on AI’s economic impact, specifically whether it will complement or substitute human labor.
AI’s Role as a Complement to Humans: Peter Norvig views AI as a tool that complements human capabilities rather than replacing them. AI systems should assist humans in achieving their goals effectively, with humans remaining in charge.
Recent Advances in Automating Code Writing: Rosie inquired about Peter’s thoughts on recent developments in automating aspects of code writing, such as GitHub Copilot and OpenAI Codex API.
Limited Knowledge Incorporation in Current Systems: Norvig expresses surprise at the success of these systems despite their limited knowledge incorporation. These systems primarily analyze strings of tokens without considering the grammar of programming languages.
Potential for Incorporating Linguistic Knowledge: Norvig suggests that incorporating linguistic knowledge, such as the grammar of programming languages, could enhance the performance of these systems. Latent use of such knowledge may be present but could be further explored.
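The contrast Norvig draws, token strings versus the grammar of a programming language, can be made concrete with Python's own `ast` module, which parses source code according to the language grammar and exposes structure that a flat token stream does not. This is an illustration of the distinction, not a claim about how Copilot or Codex work internally.

```python
# Token-level view of code versus a grammar-aware view.
import ast

source = "def add(a, b):\n    return a + b\n"

# Token-string view: just a flat sequence of symbols.
print(source.split())

# Grammar-aware view: a tree of function defs, arguments, expressions.
tree = ast.parse(source)
func = tree.body[0]
print(type(func).__name__, func.name, [a.arg for a in func.args.args])
```

The parsed view knows that `add` is a function with parameters `a` and `b`; a token-sequence model has to infer such facts statistically, which is the latent knowledge Norvig suggests could be made explicit.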
00:54:33 AI Innovations: Challenges and Opportunities for the Future
Rapid Changes in Programming: Peter Norvig acknowledges the rapid transformation in the field of programming, emphasizing that daily tasks for programmers will change significantly.
The Shift from Understanding to Getting Answers: Younger programmers are able to implement new techniques quickly without fully understanding them, prioritizing efficiency and moving on to the next task. Norvig acknowledges that sometimes this approach is effective, leading to a shift in the way he works.
The Importance of Understanding for Control: Norvig emphasizes the value of understanding how techniques work for maintaining control over the process.
Interdisciplinary Collaboration: Norvig highlights the need for effective exchange of ideas and results across disciplines to shape a better version of AI. He suggests exploring automated tools to identify relevant literature and facilitate cross-disciplinary collaboration.
Legal Frameworks for Technology: Norvig proposes a common law approach to addressing legal issues related to technology. He suggests using prototypes or examples to define acceptable and unacceptable practices rather than relying solely on formal laws.
Abstract
Navigating the Evolving Landscape of Artificial Intelligence: Insights from AI Pioneers
In the dynamic and rapidly advancing field of artificial intelligence (AI), the collaboration of Peter Norvig and Stuart Russell in authoring the seminal AI textbook, “Artificial Intelligence: A Modern Approach,” stands as a cornerstone. The book, through its four editions, reflects the shift in AI education from algorithm analysis to applications, highlighting the growing significance of machine learning, deep learning, and ethical considerations. This article delves into the transformative journey of AI, examining its definition, societal impacts, educational paradigms, and the challenges of diversity, inclusion, and trustworthiness. It also explores the controversial aspects of AI in labor and code writing, underscoring the balance between utility maximization and moral principles.
Transformative Journey of AI Education
Peter Norvig and Stuart Russell’s textbook has been a significant influence in AI education, evolving across four editions to keep up with current trends like deep learning. It has changed its focus from logic to probability, transitioning from hand-coded knowledge to machine learning, and from expert systems mimicking human thought to normative systems optimizing for the best answer. Norvig, dissatisfied with the AI textbooks available in the early 1990s, collaborated with Russell to create a more relevant book. The textbook covers ways of representing the world, reasoning with uncertainty, machine learning, and interacting with the environment. The latest edition places a greater emphasis on deep learning, maximizing expected utility, and includes critical topics like ethics, fairness, privacy, diversity, equity, inclusion, and the impact of lethal autonomous weapons.
Changing Audience and Adaptation of AI Education
The audience for AI courses has shifted significantly, evolving from an elective for interested students to a requirement for all computer science and many STEM majors. This change reflects a shift from theoretical algorithms to real-world applications. The authors have maintained mathematical rigor while making the material more accessible to a broader range of students, focusing less on teaching specific algorithms and more on applying them to real-world data. Project-based learning has become integral to AI education, where students explore and apply existing algorithms to real-world data, emphasizing understanding their strengths and weaknesses in different situations.
AI’s Practical Applications and Safety Concerns
Google’s AI tool for noise removal serves as an example of AI’s practical applications. However, concerns like unintended consequences, such as surveillance risks and ethical dilemmas surrounding lethal autonomous weapons, remain critical. Norvig envisions a neutral framework for cooperation among various mind architectures, ensuring incomprehensibility doesn’t hinder collaboration. He compares our coexistence with incomprehensible entities like corporations and governments to a potential future coexistence with superhuman cognition. Norvig also emphasizes the need to slow down certain processes, such as high-speed trading, to minimize incomprehensibility, and advocates optimizing for diversity by comparing applicants within groups rather than scoring them individually.
AI as a Complement, Not a Substitute, and Recent Advances in Automating Code Writing
Peter Norvig views AI as a tool that complements human capabilities, assisting humans in achieving their goals effectively, with humans remaining in control. In response to Rosie’s inquiry about recent developments in automating aspects of code writing, such as GitHub Copilot and OpenAI Codex API, Norvig expresses surprise at their success despite limited knowledge incorporation, suggesting that incorporating linguistic knowledge could enhance their performance. He acknowledges the rapid transformation in programming, noting the shift from understanding to getting answers and the quick adoption of new techniques by younger programmers. He highlights the value of understanding for maintaining control and suggests exploring automated tools for cross-disciplinary collaboration. Norvig also proposes a common law approach to address legal issues related to technology.
Future Challenges and Opportunities in AI
AI faces challenges in defining intelligence, balancing change with stability, and developing trustworthy systems. The debate on utility maximization versus Kantian moral principles underscores the ethical complexity in AI decision-making. The paradox of organ harvesting illustrates the need for considering broader consequences in utility-driven approaches. Norvig discusses the need to allow some change while preventing too much change too fast, finding the right balance, and implementing guidelines to prevent impulsive actions that may have long-term consequences. He proposes trustworthiness as a goal for future cognitive systems and criticizes the term “explainable AI” as insufficient, arguing for systems that are not merely explainable but genuinely trustworthy.
AI and Labor: Complement or Substitute?
The debate on whether AI will complement or substitute human labor continues. Advances like GitHub Copilot and OpenAI Codex API suggest a complementary role. Norvig discusses ethical and copyright concerns in this area, emphasizing interdisciplinary collaboration and a balance between understanding and efficiency. AI researchers focus on optimizing decisions, with different perspectives emphasizing valid reasoning or outcomes. Society often evaluates actions based on outcomes, but sometimes intent matters, as illustrated by murder versus attempted murder and drunk driving penalties. AI commonly aims to maximize expected utility, but this approach has faced criticism. Kantian approaches respond by appealing to moral principles, while rule utilitarianism seeks rules that maximize overall utility. Norvig believes both AI and software engineering aim to do the right thing and make working programs, with their main challenges being complexity and uncertainty, respectively. He is pleased with the increased focus on ethics and fairness in the latest edition of the textbook.
Conclusion
In conclusion, the field of AI is marked by its continuous evolution, necessitating a balance between technological advancement and ethical considerations. The insights from pioneers like Peter Norvig and Stuart Russell highlight the importance of adapting to changing paradigms while maintaining a focus on diversity, inclusion, and trustworthiness. As AI continues to influence various aspects of society, it is imperative to navigate its landscape with a thoughtful and informed approach, considering both its potential benefits and challenges.