Peter Norvig (Google Director of Research) – What is Human-Centered AI? | Institute for Experiential AI (Nov 2023)
Abstract
“AI’s Evolution: Navigating the Complex Intersection of Technology, Society, and Ethics”
In the rapidly evolving landscape of artificial intelligence (AI), distinct challenges and paradigm shifts have become apparent. From its emergence in 1956 as a field distinct from traditional disciplines like statistics and economics, AI has continuously redefined problem-solving methodologies with its unique ability to handle vast amounts of complex data. This article delves into the journey of AI, highlighting key aspects such as the distinction from software engineering, the epiphany about traditional software’s limitations, and the emergence of human-centered AI. It also examines the levels of automation, Ben Shneiderman’s two-dimensional approach to human-machine interaction, and the evolution of AI textbooks. The article further explores the challenges in defining objectives, the role of AI in justice and implementation, the impact of diverse teams, and the necessity for a responsible AI agenda.
Main Ideas and Expansion:
AI’s Emergence and Unique Challenges:
Artificial Intelligence (AI), since its inception in 1956, has carved out a unique niche, distinguishing itself from fields like statistics by emphasizing the maximization of expected utility and rational decision-making. It encountered resistance from established fields, especially when it extended its problem-solving methods beyond their traditional boundaries. AI set out to address a wide range of problems, from large-scale data inference and symbolic reasoning to game playing, showcasing its distinct problem-solving capabilities.
Differentiation and Distinction from Software Engineering:
AI stands out in its ability to address complex, data-intensive problems, marking its differentiation from other fields. Where software engineering is chiefly concerned with managing complexity and achieving precision, AI confronts uncertainty and ambiguity, offering a distinct approach to problem-solving.
Peter Norvig’s Realization and Human-Centered AI:
The realization by Peter Norvig about the limitations of traditional software in complex scenarios marked a pivotal moment in AI development. It underscored the necessity for AI-driven approaches and led to the shift towards human-centered AI. This new focus brought together multidisciplinary teams early in the development process, aiming to augment human capabilities with an emphasis on transparency and fairness.
Expanding Scope and Benefits of Human-Centered Approach:
The human-centered approach in AI acknowledges various constituencies, ensuring that solutions are relevant in a broader societal context. This is evident in applications like self-driving cars and judicial decision support systems. This approach leads to more inclusive solutions, fostering acceptance and reducing resentment among stakeholders.
Levels of Automation and Schneiderman’s Framework:
The Society of Automotive Engineers’ framework outlines six levels of driving automation, from Level 0 (no automation) to Level 5 (full automation). Building on this, Ben Shneiderman proposed a two-dimensional framework for human-machine interaction that treats the level of automation and the level of human control as independent axes rather than opposite ends of a single spectrum, alongside other dimensions such as choice, transparency, certainty, and speed.
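Shneiderman’s central point, that automation and human control can vary independently rather than trading off along one axis, can be sketched as a small data model. The class and quadrant names below are illustrative, not taken from the talk; only the SAE level names and Shneiderman’s “reliable, safe & trustworthy” goal region come from the source frameworks.

```python
from dataclasses import dataclass

# SAE J3016 defines six driving-automation levels, 0 through 5.
SAE_LEVELS = {
    0: "No automation",
    1: "Driver assistance",
    2: "Partial automation",
    3: "Conditional automation",
    4: "High automation",
    5: "Full automation",
}

@dataclass
class DesignPoint:
    """A point in Shneiderman's two-dimensional space:
    automation and human control vary independently."""
    automation: float     # 0.0 (none) .. 1.0 (full)
    human_control: float  # 0.0 (none) .. 1.0 (full)

    def quadrant(self) -> str:
        high_automation = self.automation >= 0.5
        high_control = self.human_control >= 0.5
        if high_automation and high_control:
            return "reliable, safe & trustworthy"  # Shneiderman's goal region
        if high_automation:
            return "excessive automation"
        if high_control:
            return "excessive human control"
        return "low automation, low control"

# A modern thermostat is both highly automated and highly controllable.
print(DesignPoint(automation=0.9, human_control=0.9).quadrant())
```

The point of the two-dimensional view is that the upper-right quadrant, high automation combined with high human control, is reachable; a one-dimensional “levels” scale hides that design region entirely.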
Evolution of AI Textbooks and Challenges in Defining Objectives:
AI textbooks have evolved significantly: the 1995 edition focused on algorithms, the 2003 edition on big data, and the 2020 edition on the definition of objectives. Defining objectives in AI poses its own challenges, including the lack of tools for articulating objectives, the difficulty of anticipating unintended consequences, and the need to integrate societal values and ethics. Systems like the COMPAS recidivism-prediction tool highlight this complexity, underscoring the intricate nature of fairness in AI and the need for multidisciplinary teams, continuous monitoring, and a balance between accuracy and unintended consequences. Because AI relies on potentially biased data, societal impact and individual harm must be weighed alongside accuracy metrics. The role of AI in justice and policy implementation is crucial, and diverse teams are important for identifying potential biases and keeping systems aligned with societal goals.
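The tension the COMPAS debate exposed, that a classifier can look fair under one metric while failing another, can be shown with a toy calculation. The counts below are invented for illustration and have nothing to do with the actual COMPAS data: the two groups get identical precision, yet their false-positive rates differ, so which fairness criterion you measure changes the verdict.

```python
# Toy illustration of competing fairness metrics (hypothetical counts,
# not COMPAS data). Each tuple is (tp, fp, tn, fn) for one group.
group_a = (60, 20, 80, 40)
group_b = (30, 10, 140, 20)

def precision(tp, fp, tn, fn):
    """Of those flagged high-risk, what fraction actually reoffended?"""
    return tp / (tp + fp)

def false_positive_rate(tp, fp, tn, fn):
    """Of those who did NOT reoffend, what fraction were flagged anyway?"""
    return fp / (fp + tn)

for name, g in [("A", group_a), ("B", group_b)]:
    print(name,
          "precision:", round(precision(*g), 2),
          "FPR:", round(false_positive_rate(*g), 2))
```

Both groups show precision 0.75, but group A’s false-positive rate is three times group B’s. This is the kind of trade-off that makes “fair” underdetermined without a multidisciplinary conversation about which errors matter to whom.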
Adversarial Attacks and Advertising:
The susceptibility of AI systems to adversarial attacks raises concerns about their reliability and trustworthiness. Additionally, AI plays a role in counteracting advertising strategies that exploit human cognitive biases.
Measuring Metrics and Goodhart’s Law:
Discussions around AI metrics involve understanding the trade-offs between different stakeholders’ needs and the importance of measuring the right things. Goodhart’s Law ("when a measure becomes a target, it ceases to be a good measure") cautions against optimizing any single metric in isolation, advocating instead for broader measures of success.
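Goodhart’s Law can be illustrated in miniature with a deliberately contrived ranking example (all numbers and weights below are invented): if a recommender optimizes clicks, and clicks partly reward clickbait, the proxy-optimal item need not be the genuinely most relevant one.

```python
# Hypothetical items: (relevance, clickbait_appeal). Invented numbers.
items = [(0.9, 0.1), (0.6, 0.5), (0.3, 0.95)]

def proxy_clicks(item):
    relevance, clickbait = item
    # The click metric partly rewards clickbait -- a flawed proxy.
    return 0.4 * relevance + 0.8 * clickbait

best_by_proxy = max(items, key=proxy_clicks)
best_by_relevance = max(items, key=lambda item: item[0])

print(best_by_proxy)      # the clickbait-heavy item wins on the proxy
print(best_by_relevance)  # a different item is actually most relevant
```

Once the proxy becomes the optimization target, the system reliably selects for whatever the proxy overweights, which is exactly the failure mode Goodhart’s Law predicts.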
Building Data and Overcoming AI Fear:
Constructing and utilizing data from multiple sources offers more comprehensive insights. Reducing fear and increasing familiarity with AI can enhance human capabilities and encourage exploration.
AI in Creative Industries:
AI’s influence extends to the creative industries, impacting writers and artists. It offers potential for collaboration and has influenced the power dynamics in the music industry.
Conclusion and Responsible AI Agenda:
The article concludes with Peter Norvig acknowledging the challenges in establishing a responsible AI agenda. He suggests revisiting earlier points for insights into responsible AI practices.
Music Industry and Power Dynamics:
The shift from physical media to digital downloads significantly disrupted the music industry, affecting artists’ compensation. Unionizing efforts have been effective in assisting artists in gaining more power and securing better deals.
Responsible AI:
Formulating a responsible AI agenda is a complex task, requiring a thoughtful approach to societal implications. Peter Norvig recommended referring to his slides for more information on this topic.
Power Dynamics and AI:
Evaluating AI advancements within the context of power dynamics is essential to ensure they do not exacerbate existing imbalances.
Notes by: QuantumQuest