Peter Norvig (Google Director of Research) – Modern Approach to AI (Oct 2021)


Chapters

00:00:00 Exploring Centralization vs. Decentralization in AI Development
00:03:06 AI and Machine Learning: Trends, Challenges, and the Future
00:09:09 Humans and AI: Similarities and Differences
00:11:21 The Future of AI Agents as Assistants
00:14:15 Changing AI Interfaces from Keywords to Conversations
00:18:33 Epistemic Status of Gamma in Reinforcement Learning
00:20:34 Ethical Decisions in AI: Navigating Conflicting Values and Defining Fairness
00:24:07 AI Alignment: Tackling Risk Scenarios and Ensuring Cooperation Among Advanced Intelligences
00:29:06 Risks of Unstoppable Technological Change
00:32:14 Future Challenges and Opportunities in Computing

Abstract

Incorporating AI into Society: Balancing Centralization, Autonomy, and Human Values

In a recent event hosted by Agoric, Peter Norvig and members of the Agoric team engaged in a wide-ranging dialogue on the trajectory of artificial intelligence (AI) and its societal implications. The discussion spanned topics from the centralization of AI resources and the evolution of computing into a natural science, to the ethical considerations in AI development and the emergence of autonomous systems like Decentralized Autonomous Organizations (DAOs). This article delves into these themes, exploring the potential paths AI might take, from a centralized agent wielding significant power to a decentralized market of specialized entities, and the complexities of aligning AI systems with human values and ethics.

Centralization and AI Development:

Peter Norvig’s insights shed light on the current trend toward centralization in AI, akin to developments in particle physics, where a few large entities dominate because they control extensive data and computational resources. He notes that competition among cloud providers is shifting from the cost of computation to the quality of pre-trained models. However, he warns of the winner-take-all dynamics prevalent across industries, fueled in part by global communication and access to a worldwide audience.

AI’s Trajectory: Centralized or Decentralized?

Peter Norvig shares his thoughts on the potential trajectory of AI progress: will development lead to a centralized, singleton agent with immense power, or to a decentralized autonomous market in which multiple entities engage in a more distributed manner? Drawing on Eric Drexler’s ideas, the discussion weighs a winner-take-all scenario against a decentralized economy of specialized services.

Understanding and Controlling AI:

Norvig touches upon the challenges in comprehending AI systems as they evolve and adopt different cognitive architectures. He points out the lack of certification processes in software engineering, which could lead to disparities in expertise levels. Furthermore, he suggests that computing is increasingly mirroring natural sciences, where absolute ground truths are elusive, and the underlying mathematics might become too complex to fully grasp.

Incorruptibility and Autonomy of Digital Entities:

Digital entities, as discussed by speakers, should possess incorruptibility to serve as trustworthy coordination points beyond human manipulation. These entities operate independently in the world, coexisting with humans but maintaining their autonomy. The autonomous nature of these systems provides integrity and resilience in a world where humans are susceptible to corruption.

Human Cognition and AI Interaction:

The discussion extends to human limitations in understanding ourselves and others, and the susceptibility to irrational influences. AI assistants, seen as a significant shift in operating systems, could serve as epistemic aids, helping individuals navigate interactions and protect against manipulation.

Epistemic Assistants and the Evolution of Operating Systems

Jascha proposes the concept of an epistemic assistant, an AI that aids individuals in evaluating evidence and interacting with potentially misleading agents. Peter Norvig finds the concept intriguing, viewing it as a potential phase change in operating systems, akin to the transition from mainframes to PCs and mobile devices. He highlights the popularity of voice assistants like Siri, Alexa, and Google, despite having initially doubted their appeal given the loss of traditional input methods. He also points out that, under the current dynamic, third-party apps have complete control over the user’s device interface, allowing them to manipulate the user’s experience; epistemic assistants could disrupt this dynamic by aligning more closely with the user’s interests than with those of external entities. Sharon raises the issue of the time required to learn AI compared to more traditional fields and suggests viewing AI as a natural science.

Prompt Tuning and AI Accessibility:

The growth of prompt tuning is highlighted as a means for users with limited technical skills to interact with AI systems, exemplified by Google’s search engine. Yet challenges persist: poor discoverability of system capabilities, inconsistent interpretation of user intent, and the absence of standardized approaches to prompt tuning.

Shifting Search Paradigms, Prompt Tuning, and Value Alignment Challenges in AI

Prompt tuning enables individuals with less technical expertise to interact with AI systems effectively; Google’s search engine, where users refine queries to retrieve the results they want, is a familiar example. The transition from keyword-based search to conversational AI interfaces poses its own challenges: users may struggle to grasp the full range of a system’s capabilities and limitations, and inconsistent responses to similar requests can be frustrating and confusing. The conversation then turns to the concept of utility in AI systems. Aggregating and teaching human values to AI is difficult because humans and machines have differing cognitive architectures and evolutionary constraints, and there is skepticism about the feasibility of teaching AI systems an aggregate utility function that aligns with human values. Some argue it may be more practical to establish frameworks that allow diverse value systems to cooperate, minimizing vulnerabilities while maximizing opportunities for collaboration.

Utility Values and Ethical AI

Values Can Be Contradictory:

– Utility values are instrumental in achieving desired outcomes, but people’s values often contradict each other.

– For example, people want to assign equal value to everyone but also prioritize resources for those they love.
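The tension above can be made concrete with a minimal sketch (the names, weights, and numbers here are invented for illustration): two utility functions a person might sincerely endorse can rank the very same pair of allocations in opposite orders.

```python
# Toy illustration of contradictory values: "everyone counts equally"
# versus "those I love count more". All values are hypothetical.
people = ["stranger_1", "stranger_2", "my_child"]

def equal_utility(allocation):
    # Egalitarian view: total welfare, everyone weighted the same.
    return sum(allocation.values())

def partial_utility(allocation, weight=3.0):
    # Partial view: extra weight on a loved one's welfare.
    return sum(v * (weight if p == "my_child" else 1.0)
               for p, v in allocation.items())

a = {"stranger_1": 5, "stranger_2": 5, "my_child": 1}  # even-handed split
b = {"stranger_1": 1, "stranger_2": 1, "my_child": 6}  # favors my child

print(equal_utility(a), equal_utility(b))      # 11 vs 8: a ranks higher
print(partial_utility(a), partial_utility(b))  # 13.0 vs 20.0: b ranks higher
```

No single utility function satisfies both rankings at once, which is the bullet’s point: the trade-off must be faced, not defined away.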

Unpleasant Choices and Personal Justification:

– In reality, many choices involve unpleasant trade-offs.

– Ultimately, individuals must justify their own choices and accept the consequences.

Utility in Context:

– Utility should be considered in the context of the desired rewards and roles.

– Open negotiations are needed to determine roles and how to achieve them.

AI Ethics as Politics:

– AI ethics is often influenced by politics rather than expected outcomes.

– There is a need to focus on the impact on politics and find ways to change this approach.

Fairness in AI Systems:

– Fairness can be defined in multiple ways, leading to potential conflicts.

– For example, a recidivism scoring system can be equally well calibrated for white and black individuals, yet, because base rates of recidivism differ between the groups, still produce a higher false positive rate for black individuals, flagging more people who would not in fact reoffend.

– Society needs to determine which definition of fairness to prioritize.
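The conflict between fairness definitions can be shown with a toy dataset (all counts below are invented for illustration): a risk score that is perfectly calibrated in both groups, so a “high risk” label means the same reoffense probability for everyone, still flags a larger share of the non-reoffending members of the group with the higher base rate.

```python
from fractions import Fraction

# Hypothetical counts, chosen so the score is calibrated in both groups:
# within each score bin, the reoffense rate is identical across groups.
# (group, score) -> (reoffenders, non_reoffenders)
counts = {
    ("A", "high"): (45, 30),   # 45/75 = 0.6 reoffend; base rate 50%
    ("A", "low"):  (5, 20),    # 5/25  = 0.2 reoffend
    ("B", "high"): (15, 10),   # 15/25 = 0.6 reoffend; base rate 30%
    ("B", "low"):  (15, 60),   # 15/75 = 0.2 reoffend
}

def calibration(group, score):
    """P(reoffend | score) within a group."""
    pos, neg = counts[(group, score)]
    return Fraction(pos, pos + neg)

def false_positive_rate(group):
    """Fraction of a group's non-reoffenders who are flagged high risk."""
    flagged = counts[(group, "high")][1]
    total_neg = sum(counts[(group, s)][1] for s in ("high", "low"))
    return Fraction(flagged, total_neg)

# Calibrated: "high risk" means the same 60% in both groups...
assert calibration("A", "high") == calibration("B", "high") == Fraction(3, 5)
assert calibration("A", "low") == calibration("B", "low") == Fraction(1, 5)

# ...yet the higher-base-rate group suffers a much higher false
# positive rate: more of its non-reoffending members get flagged.
print(false_positive_rate("A"))  # 3/5
print(false_positive_rate("B"))  # 1/7
```

With unequal base rates, calibration and equal false positive rates cannot both hold, so society must choose which definition of fairness to prioritize.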

Probability of Longevity:

– Some experts believe there is a high probability that individuals will live long enough to see the colonization of millions of planets.

– Others are less optimistic, feeling they may be too late to experience such advancements.

Societal Implications and Future Outlook:

The discussions also encompass societal risks associated with AI, like the emergence of uncontrollable systems and the potential for a single AI entity to dominate. The speakers advocate for the development of incorruptible autonomous systems and stress the importance of human control and integrity in these arrangements.

Risk of Autonomous Systems:

Beyond malicious intent, well-intentioned autonomous systems can produce negative outcomes through unforeseen consequences, or because they cannot be turned off. Collective-action and common-goods problems can prevent society from addressing such harms, even when a majority of people recognize them.

Observations on Planet Earth:

An alien observer might perceive Earth as a planet dominated by automobiles, with humans enslaved to serve them. This analogy highlights the extent of technology’s influence, particularly automobiles, on human society. The future may involve a shift from automobiles to computers as the dominant force shaping society.

Advice for Shaping the Future:

The approach to shaping a positive future is unique for each individual, given the diverse nature of challenges and opportunities. Contributions can range from technical solutions to political advocacy and sociological initiatives. The key is to find a suitable area to contribute and work towards improving the world.

Concluding Perspectives:

Peter Norvig’s humorous alien-observer conjecture underscores the urgency of addressing the dominance of such technologies. The event concludes with advice on contributing to the future’s well-being: efforts can be technical, political, or sociological, and the key is finding a suitable area in which to make a positive impact.



This synthesis of ideas from a diverse panel of experts offers a comprehensive overview of the current and future landscape of AI. It underscores the importance of balancing centralization, autonomy, and human values in the evolution and integration of AI into society.


Notes by: ZeusZettabyte