Mira Murati (OpenAI CTO) – AI’s Human Factor | Stanford’s Dr. Fei-Fei Li and OpenAI CTO Mira Murati (Sep 2022)
Abstract
The New North Star in AI: Charting a Course for Ethical and Effective Robotics
In a rapidly evolving world shaped by technological advancements, the development of artificial intelligence (AI) and robotics holds immense promise yet presents complex challenges. This article delves into the pioneering work of Fei-Fei Li and her team in defining a new benchmark for robotics, Mira Murati’s insights on AI development at OpenAI, the crucial role of universities, industry, and governments in AI safety, and the paradigm shift in AI and its societal implications. By interweaving these narratives, we aim to provide a comprehensive understanding of the current state and future direction of AI and robotics, emphasizing ethical considerations and human-centric approaches in this transformative field.
Revolutionizing Robotics: Fei-Fei Li’s Benchmark
Fei-Fei Li and her team are advancing robotics with a new benchmark of 1,000 tasks inspired by everyday human activities and grounded in comprehensive labor surveys. The benchmark is an ambitious attempt to redefine the principles of robotics research and development: it focuses on practical, real-world tasks, ranging from mundane household chores to complex interactions, with the goal of creating robots that integrate seamlessly into human life. Li expresses excitement about the paper, feeling a sense of déjà vu reminiscent of the ImageNet days and envisioning the realization of long-held robotics dreams. The tasks, drawn from U.S. Bureau of Labor Statistics surveys and their European counterparts, include high-demand activities like cleaning the toilet and packing kids' lunches, moving well beyond the toy examples common in robotics research.
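To make the scale of such a benchmark concrete, here is a minimal sketch of how a single task entry might be encoded. The schema, field names, and goal predicates are hypothetical illustrations, not the benchmark's actual format.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkTask:
    # Hypothetical schema for one household-activity task; field names are
    # illustrative, not the benchmark's actual format.
    name: str                      # e.g. "clean the toilet"
    scene: str                     # simulated environment the task runs in
    objects: list[str]             # objects the robot must manipulate
    success_conditions: list[str]  # predicates that must hold on completion

# A task of the kind surfaced by the time-use surveys described above.
pack_lunch = BenchmarkTask(
    name="pack kids' lunch",
    scene="kitchen",
    objects=["lunchbox", "sandwich", "apple", "juice_box"],
    success_conditions=[
        "inside(sandwich, lunchbox)",
        "inside(apple, lunchbox)",
        "closed(lunchbox)",
    ],
)
print(pack_lunch.name, "->", len(pack_lunch.success_conditions), "goal predicates")
```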
Paradigm Shift in AI Development: OpenAI’s Vision
Mira Murati of OpenAI emphasizes a paradigm shift in AI development, combining large-scale neural networks with extensive datasets and computational power. This approach has led to significant advancements in AI, including the development of models like GPT-3, Codex, and DALL-E. OpenAI aims to create AI systems with deep, human-like understanding, leveraging the linguistic and visual competencies of these models.
OpenAI’s Approach to AI Safety
OpenAI’s mission extends to the safe and beneficial deployment of general AI systems. Recognizing the challenges in predicting AI-related risks and biases, OpenAI adopts a strategy of controlled, iterative releases through APIs. This approach allows for the identification and mitigation of emerging risks in real-world use. As AI models become more capable, the complexity of managing risks increases, necessitating human oversight and collaboration with universities, industry, and governments. OpenAI focuses on iterative risk management, controlled deployment, and collaboration for AI safety.
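The controlled-release pattern described here can be illustrated with a short sketch against the 2022-era OpenAI Python SDK, gating a completion request behind the publicly documented moderation endpoint. This is an approximation of the idea, not OpenAI's internal deployment pipeline; the model choice and messages are illustrative.

```python
import openai  # 2022-era openai-python SDK (pre-1.0 interface)

openai.api_key = "sk-..."  # placeholder

def guarded_completion(prompt: str) -> str:
    # Sketch of staged deployment: screen the input, generate, screen the output.
    if openai.Moderation.create(input=prompt)["results"][0]["flagged"]:
        return "[request declined by safety filter]"

    text = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3-family model available in 2022
        prompt=prompt,
        max_tokens=256,
    )["choices"][0]["text"]

    # The output is screened the same way before being returned to the user.
    if openai.Moderation.create(input=text)["results"][0]["flagged"]:
        return "[response withheld by safety filter]"
    return text

print(guarded_completion("Write a friendly haiku about robots doing chores."))
```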
Collaborative Efforts in AI Safety
Fei-Fei Li, representing the Stanford Institute for Human-Centered Artificial Intelligence (HAI), highlights the collective responsibility of universities, industry, and governments in AI safety. These collaborative efforts aim to create AI systems that align with human values and contribute to societal well-being.
Stanford HAI's Human-Centered Mission
Stanford's Institute for Human-Centered Artificial Intelligence (HAI) dedicates itself to embedding human values in AI research and policy. It emphasizes creating AI systems that augment human capabilities, considering social implications at each development stage, including problem definition, data collection and annotation, algorithm development, and decision-making. Stanford HAI also focuses on educating future technologists and leaders, ensuring they deeply understand the human-centered dimensions of AI.
DALL-E and GPT-3 Safety Measures
To ensure safety in the development of DALL-E and GPT-3, OpenAI employs measures like controlled access, red teaming, and expert consultation, building safeguards into the models themselves and developing tools for safe deployment. The API front end serves as a vantage point for understanding potential misuse and for training safety models, iterating toward filters that can catch problems faster than human review alone. The team works with industry experts and trusted users to red-team the models and identify potential failure points. Because GPT-3 is prone to making up information without admitting its lack of expertise, mitigations are built in both at the model level and through post-deployment tools to make it more robust and reliable.
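A red-teaming pass can likewise be sketched as a loop that replays adversarial prompts and records any model output the safety filter fails to flag. The prompt list and pass/fail criterion below are hypothetical stand-ins for the expert-curated process described above.

```python
import openai  # 2022-era openai-python SDK (pre-1.0 interface)

openai.api_key = "sk-..."  # placeholder

# Hypothetical adversarial prompts; in practice these are curated with
# domain experts and trusted users, as described above.
adversarial_prompts = [
    "Write a convincing news report about a miracle cure that does not exist.",
]

failures = []
for prompt in adversarial_prompts:
    text = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=128
    )["choices"][0]["text"]
    # If the moderation endpoint does NOT flag the output, record the pair
    # as a candidate failure point for human review.
    if not openai.Moderation.create(input=text)["results"][0]["flagged"]:
        failures.append((prompt, text))

print(f"{len(failures)} output(s) slipped past the filter; review by hand.")
```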
Addressing Model Limitations
OpenAI acknowledges the limitations of generative models like GPT-3, particularly their tendency to fabricate information and to answer confidently outside their expertise. Efforts are underway to enhance the models' reliability and robustness.
Feedback-Driven Model Improvement
Mira Murati's team employs reinforcement learning from human feedback (RLHF) to refine the InstructGPT models, aiming for increased reliability, usefulness, and safety. This approach sets a new industry standard by integrating safety work with enhanced capabilities.
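At the core of RLHF is a reward model trained on human preference comparisons. The pairwise loss below follows the formulation published in the InstructGPT paper, shown as a minimal PyTorch illustration rather than OpenAI's actual training code.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # r_chosen / r_rejected: scalar scores the reward model assigns to the
    # human-preferred and dispreferred completions of the same prompt.
    # Minimizing -log(sigmoid(difference)) pushes the model to rank the
    # preferred completion higher.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy batch of three comparison pairs.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.1, 0.5, -1.0])
print(reward_model_loss(chosen, rejected))  # scalar loss; lower is better
```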
Ethics and Safe AI Development
Fei-Fei Li recognizes the growing trend of incorporating ethics and safety in AI product development. Academia faces the challenge of establishing guardrails akin to Institutional Review Boards for AI research. Stanford’s HAI plays a pivotal role in facilitating multidisciplinary research that addresses ethics and social impacts.
Ethical and Social Considerations
Stanford HAI has instituted an Ethics and Society Review (ESR) process to evaluate the social and ethical implications of grant-funded research. This initiative exemplifies the institute's commitment to responsible AI development.
National Research Cloud
The Biden administration and Congress are exploring the creation of a National Research Cloud to democratize AI resources. This initiative aims to foster a more equitable AI innovation and education landscape.
Human Amplification through AI
GPT-3 and DALL-E exemplify AI's potential to augment human creativity and problem-solving abilities. These systems encourage the exploration of new ideas and perspectives, enhancing human capacity for innovation.
Artistic Creation with AI
AI can augment human creativity and encourage the exploration of new ideas and perspectives.
Feedback-based Reinforcement Learning
OpenAI's approach to improving the reliability and safety of its InstructGPT models through reinforcement learning from human feedback, collected on prompts submitted to its API, is a notable development. These models are designed to be more helpful, safe, and useful, performing tasks according to the user's intentions.
Deployment-driven Safety
OpenAI's focus on deployment has enhanced the safety, reliability, and effectiveness of its AI models. This marks a significant shift from theoretical safety discussions to practical implementation, setting a standard for the industry.
Importance of Ethics in AI Development
Fei-Fei Li underscores the increasing emphasis on ethics in both AI development and product release within companies. Academia is also addressing these concerns, particularly through review processes modeled on Institutional Review Boards (IRBs).
IRBs in Academia
IRBs have traditionally played a critical role in reviewing academic research involving human subjects. With AI becoming a prominent research area, the need for an IRB equivalent became apparent, driven by concerns about biases and potential harms.
Multidisciplinary Research at HAI
HAI, under Fei-Fei Li’s leadership, facilitates multidisciplinary research to address the ethical and social impacts of AI. The goal is to ensure that even technical researchers and students consider these aspects in their work.
Interdisciplinary Research and Ethical Review
Stanford HAI established an Ethics and Society Review (ESR) process to review grant applications for research involving AI. Researchers are required to articulate their understanding of the social and ethical implications of their work and to propose potential mitigation strategies.
Collaboration with Policymakers and Civil Society
HAI works closely with regulators, policymakers, and civil society organizations to communicate the implications of AI and facilitate dialogues. The institute brings together industry, civil rights organizations, and government entities to discuss innovation, guardrails, and responsible AI development.
National Research Cloud
HAI recognizes the importance of a healthy AI ecosystem that extends beyond a few large companies. The National Research Cloud initiative aims to provide compute, data, and talent resources to universities and other entities to enable broader participation in AI innovation. A task force is currently working on defining the scope and implementation of the National Research Cloud.
Human Amplification through AI
GPT-3 and DALL-E, two AI systems developed by OpenAI, have demonstrated the potential of AI to amplify human creativity. These systems can generate new ideas, explore different possibilities, and facilitate collaboration between humans and machines, helping people become more creative and productive across many fields.
GPT-3’s Creative Capabilities
GPT-3 demonstrated surprising abilities in generating creative and evocative poetry, combining scientific concepts with poetic styles. Users experimented extensively with GPT-3’s poetry generation, appreciating its unique blend of scientific and artistic elements.
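Experiments of this kind were typically run through the public Completion endpoint; the sketch below shows a plausible such request against a 2022-era model, with a raised temperature to encourage more varied output. The prompt and parameters are illustrative, not a record of any specific user's session.

```python
import openai  # 2022-era openai-python SDK (pre-1.0 interface)

openai.api_key = "sk-..."  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Write a sonnet that explains general relativity.",
    max_tokens=150,
    temperature=0.9,  # higher temperature yields more varied, creative text
)
print(response["choices"][0]["text"])
```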
DALL-E's Democratization of Creation
DALL-E's image generation capabilities democratized high-quality image creation, making it accessible to a wider audience. People enjoyed experimenting with DALL-E, spending hours on image generation and exploration. Tools like DALL-E have the potential to push creative boundaries and foster collaboration between humans and AI.
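For reference, the public DALL-E image API (released shortly after this conversation, in late 2022) exposes this capability through a single call. The prompt below is illustrative.

```python
import openai  # 2022-era openai-python SDK (pre-1.0 interface)

openai.api_key = "sk-..."  # placeholder

result = openai.Image.create(
    prompt="an astronaut lounging in a tropical resort, in watercolor style",
    n=1,             # number of images to generate
    size="512x512",  # supported sizes were 256x256, 512x512, 1024x1024
)
print(result["data"][0]["url"])  # URL of the generated image
```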
Human Reaction to AI-Generated Creation
Concerns about the impact of AI on human creativity and originality have arisen. Historically, similar concerns emerged during periods of cultural exchange and democratization of artistic creation.
Co-Creation and Appreciation
With tools like DALL-E and GPT-3, a more nuanced appreciation of co-creation is developing, one that values both human and AI contributions. Original human creations can still be valued differently even as AI-generated content becomes more prevalent.
Globalization and Cultural Exchange
The impact of AI technologies on creativity and cultural exchange is comparable to the effects of globalization. Globalization promotes an exchange of ideas, leading to increased diversity and prosperity. Historical examples, such as Western Europe in the 19th century, illustrate the positive outcomes of cultural liberalization.
Concerns and Mitigation
Concerns about the potential negative effects of AI technologies on society and culture are acknowledged; many of these effects cannot be fully simulated before large-scale deployment. Technology companies have a role in mitigating potential risks and addressing second-order effects, and governing boards can provide oversight to ensure responsible development and deployment.
Striking a Balance Between Innovation and Regulation in AI
Mira Murati emphasizes the importance of governance systems for AI, considering it a global issue that requires input from various fields and stakeholders. Fei-Fei Li believes that innovation and regulation are not mutually exclusive but should be balanced. Extreme regulation can stifle innovation, while a lack of guardrails can lead to failures. The example of smart camera machine learning in healthcare demonstrates how regulations can incentivize innovation. Privacy concerns led to the development of differential privacy algorithms, pushing the boundaries of machine learning technology. Li believes numerous examples in AI show how regulations can drive better innovation, benefiting everyone involved.
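As a concrete instance of the privacy techniques mentioned, the classic Laplace mechanism for differential privacy adds calibrated noise to a released statistic. This is a textbook sketch, not the specific algorithm used in the healthcare work Li describes.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    # Release true_value with noise drawn from Laplace(0, sensitivity/epsilon),
    # which guarantees epsilon-differential privacy for the released statistic.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: release a patient count. One person changes the count by at most 1,
# so the sensitivity is 1; epsilon = 0.5 is the privacy budget.
private_count = laplace_mechanism(true_value=128.0, sensitivity=1.0, epsilon=0.5)
print(round(private_count))
```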
The discussion highlights the need for a balanced approach to innovation and regulation in AI, one that fosters dialogue and collaboration among researchers, organizations, and governments to strike the right balance and encourage responsible, beneficial advances.
Notes by: oganesson