Rodney Brooks (Robust.ai Co-founder) – Steps Toward Super Intelligence and the Search for a New Path (Sep 2019)
Chapters
00:00:00 Development of Robotics, AI, and Superintelligence
Rodney Brooks’ Background: Rodney Brooks, a renowned scholar, has a remarkable history of challenging conventional wisdom and making significant contributions to various fields, including computer vision, robotics, and artificial intelligence. Brooks began his career at Stanford in the early days of computer vision and robotics, where he developed the acclaimed ACRONYM system. He then moved to Carnegie Mellon and MIT, where he introduced the innovative subsumption architecture, which significantly influenced robotics for a decade.
Rodney Brooks’ Achievements: Brooks founded the International Journal of Robotics Research (IJRR) and was involved in the development of Lucid Lisp. He co-founded iRobot, which produced the world’s best-selling robot, the Roomba, based on his research and ideas. Brooks appeared in Errol Morris’s film “Fast, Cheap, and Out of Control,” where he shared his unique perspective on research and the importance of challenging assumptions. He played a pivotal role in promoting humanoid robotics, leading to the development of robots like ASIMO and Atlas.
Rodney Brooks’ Contributions to Robotics: Brooks’s radical idea of collaborative and human-compatible robots led to the establishment of Rethink Robotics, which focused on creating safe and user-friendly robots. He consistently challenged prevailing notions and pushed the boundaries of robotics, leaving a lasting impact on the field.
Rodney Brooks’ Current Focus: Brooks is currently challenging the concept of artificial intelligence and the idea of superintelligence, seeking to explore new frontiers in these domains. He is known for his willingness to question established beliefs and his relentless pursuit of innovative approaches to complex problems.
Brief History of AI: Alan Turing's 1936 paper "On Computable Numbers" and his 1950 paper "Computing Machinery and Intelligence" were influential in the development of AI. In 1948, Turing wrote a paper called "Intelligent Machinery," which discussed the possibility of building intelligent machines using the human brain as a guiding principle. Turing's lab head, Sir Charles Darwin, grandson of the famous Charles Darwin, dismissed the paper as too flaky and it was not published until 1970.
Marvin Minsky’s Steps Toward Artificial Intelligence: In 1961, Marvin Minsky wrote a paper titled “Steps Toward Artificial Intelligence.” The paper broke AI down into five areas: search, pattern recognition, learning, planning, and induction. Three of these five areas were related to search, highlighting the importance of search in AI at that time.
Steps Towards Superintelligence: The speaker presents their version of Minsky’s paper, titled “Steps Toward Superintelligence.” This section of the talk focuses on five topics related to achieving superintelligence.
00:09:12 The History and Evolution of Artificial Intelligence
AI Historical Overviews: McCulloch and Pitts (1943) started the discussion on whether brains could be simulated by machines. Turing (1950) proposed the Turing test as a measure of machine intelligence.
Turing’s Predictions: Turing predicted that by 2000, machines with 10^9 bits of storage would pass the Turing test 30% of the time. He estimated that this would require 60 workers working steadily for 50 years, which is a relatively small amount of effort.
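For scale, here is the arithmetic behind the figures Turing quotes (the conversion to megabytes is our gloss, not Turing's):

$$10^9 \text{ bits} = \frac{10^9}{8 \times 10^6}\ \text{MB} = 125\ \text{MB}, \qquad 60\ \text{workers} \times 50\ \text{years} = 3000\ \text{person-years}.$$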
Learning as a Solution: Turing suggested using learning to overcome the challenges of AI.
Four Approaches to AI Since 1956: Symbolic (John McCarthy): using first-order logic to represent and reason about the world. Neural Networks (Marvin Minsky): inspired by the brain, neural networks consist of interconnected nodes that can learn from data. Reinforcement Learning (Donald Michie, Chris Watkins): learning through interaction with the environment, receiving rewards for positive actions and penalties for negative ones. Conventional Robotics (Larry Roberts): building physical robots that can interact with the world and perform tasks.
Robotics Milestones: 1963: Larry Roberts processed an image of a block, demonstrating the feasibility of computer vision in robotics. 1981: Brooks's PhD research involved processing three images, showcasing advancements in computer vision. 1960s: Demonstrations of robots picking up and moving blocks showcased the capabilities of early robotic systems.
Background: Grey Walter, an American-born researcher working in Bristol, England, conducted groundbreaking work on robots in the late 1940s. Walter's robots, called "tortoises," used vacuum tubes as controllers and exhibited complex behaviors arising from non-linear dynamics. Walter published his findings in Scientific American in 1950, showcasing the learning capabilities of these tortoises.
Subsumption Architecture and Behavior-Based Approach: In the mid-1980s, Brooks developed a digital descendant of Walter's tortoises, known as the subsumption architecture or, more broadly, the behavior-based approach. The approach connects many simple behaviors, represented as boxes with arrows, to produce complex overall behavior. Brooks published the subsumption architecture in 1986 (IEEE Journal of Robotics and Automation), and it became influential in robotics, later being used in the Mars rovers and the Roomba vacuum cleaner.
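As a rough illustration of the idea (not Brooks's actual implementation), the following sketch shows layered behaviors in which a higher-priority layer can subsume the output of a lower one; the sensors and behaviors are invented for the example:

```python
# Minimal sketch of the subsumption idea: independent behaviors are layered,
# and a higher-priority layer can suppress (subsume) the output of a lower
# one. Sensor fields and behaviors are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Sensors:
    bumper_hit: bool
    obstacle_ahead: bool

def wander(s: Sensors):
    return "drive-forward"                               # layer 0: default behavior

def avoid(s: Sensors):
    return "turn-left" if s.obstacle_ahead else None     # layer 1

def escape(s: Sensors):
    return "back-up" if s.bumper_hit else None           # layer 2

LAYERS = [escape, avoid, wander]                         # highest priority first

def control_step(s: Sensors) -> str:
    # Each layer gets a chance to subsume everything below it.
    for behavior in LAYERS:
        command = behavior(s)
        if command is not None:
            return command
    return "stop"

print(control_step(Sensors(bumper_hit=False, obstacle_ahead=True)))  # turn-left
```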
Behavior Trees and AI in Video Games: Damian Isla further refined the behavior-based approach in the mid-2000s, leading to the development of behavior trees. Behavior trees are widely used in video game engines like Unity and Unreal to control AI characters. Behavior trees are credited with driving the largest number of long-lived deployed robots and the AI characters in most video games.
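The core composition pattern behind behavior trees can be sketched in a few lines; this is a generic selector/sequence illustration, not the Unity or Unreal APIs:

```python
# Minimal behavior-tree sketch: a Sequence succeeds only if all children
# succeed; a Selector succeeds as soon as one child does. Node names are
# illustrative and are not taken from any game engine.
SUCCESS, FAILURE = "success", "failure"

class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self, blackboard):
        return SUCCESS if self.fn(blackboard) else FAILURE

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == SUCCESS:
                return SUCCESS
        return FAILURE

# A toy guard AI: attack if an enemy is visible, otherwise patrol.
tree = Selector(
    Sequence(Action("enemy_visible", lambda bb: bb["enemy_visible"]),
             Action("attack",        lambda bb: True)),
    Action("patrol", lambda bb: True),
)
print(tree.tick({"enemy_visible": False}))  # falls through to patrol
```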
Comparison of AI Approaches: Brooks presented a cartoon illustrating four main AI approaches: symbolic, robotics, behavior-based, and neural. The symbolic and robotics approaches are deliberative, while the behavior-based and neural approaches have both reactive and deliberative elements. Brooks compared the four approaches on five dimensions: composition, grounding, spatial reasoning, sentience, and ambiguity. He concluded that symbolic approaches excel at composition, while neural approaches provide symbols that are grounded in the world but not in action.
AI and Human Cognition: Brooks compared the performance of the four AI approaches to that of human children and found that AI is still far from achieving human-level cognition. Current AI systems cannot yet handle tasks that require dealing with ambiguity, spatial reasoning, or sentience; even squirrels outperform current AI approaches at tasks such as food caching.
Challenges in Predicting the Future of AI: Brooks attributed the difficulty of predicting AI's future to several factors, which he calls the "seven deadly sins" of predicting AI. These include overestimating the capabilities of AI, underestimating the complexity of the world, and neglecting the importance of human-AI interaction.
00:21:47 AI Performance vs. Competence: Avoiding Misconceptions and Adversarial Examples
Performance Versus Competence: Humans can generalize from observing performance to understanding competence in a certain area. AI systems lack this generalization ability. Observing an AI’s performance does not reveal its true competence. This leads to incorrect assumptions about an AI’s capabilities.
Adversarial Examples: Adversarial examples are inputs that are intentionally designed to fool AI systems. These examples can cause AI systems to make incorrect predictions. Adversarial examples can be generated using techniques such as hill climbing and evolutionary algorithms. Natural adversarial examples occur in the real world and can also fool AI systems.
Examples of Adversarial Examples: An AI system that labels images may mistake a guitar for a penguin if presented with an adversarial image. An AI system may classify a snowplow as a school bus due to its orange color and association with snow and roads.
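The hill-climbing idea can be illustrated on a toy classifier; the linear model below is synthetic and merely stands in for the image classifiers discussed in the talk:

```python
# Toy illustration of hill-climbing an adversarial perturbation: start from
# a correctly classified input and greedily accept small random nudges that
# raise the score of a wrong class until the prediction flips.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))            # 3 classes, 16 "pixel" features

def predict(x):
    return int(np.argmax(W @ x))

x = rng.normal(size=16)
true_label = predict(x)
target = (true_label + 1) % 3           # any class other than the true one

adv = x.copy()
for _ in range(2000):
    candidate = adv + rng.normal(scale=0.01, size=16)       # small random nudge
    # Keep the nudge only if it raises the target-class score (hill climbing).
    if (W @ candidate)[target] > (W @ adv)[target]:
        adv = candidate
    if predict(adv) == target:
        break

print("original:", true_label, "adversarial:", predict(adv),
      "L2 perturbation:", round(float(np.linalg.norm(adv - x)), 3))
```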
00:24:35 Pitfalls in Measuring Artificial Intelligence Progress
Terminology and Overgeneralization: Misconceptions arise from the broad use of terms in AI such as "learning," which lumps together many distinct kinds of learning. Learning to play tennis, to ride a bicycle, to read Latin, to play an instrument, or to write code each involves very different skills and processes. Overgeneralizing from one narrow sense of "learning" leads to incorrect assumptions about the capabilities of AI.
The Turing Test as a Metric: The Turing test assesses whether a machine can engage in a conversation that is indistinguishable from a human. Alan Turing introduced the concept as a rhetorical device, not as a definitive test. The Turing test has been challenged due to the ability of chatbots to “hack” the test.
Limitations of Current Metrics: Current metrics, such as the Turing test, do not adequately measure human-like intelligence. A true test of artificial general intelligence (AGI) should encompass a broad range of tasks that humans can perform. AGI should not be limited to a specific task but should exhibit a wide range of abilities.
00:29:24 Future AGI Standards for Elder Care Work and Service Logistics
AGI Test Standards: The current tests, such as the Turing test, are inadequate for measuring AGI. We need new tests that can assess an AGI’s ability to perform complex tasks and interact with humans in meaningful ways.
Elder Care Worker Example: An AGI elder care worker should be able to: understand the human relationships and expectations in the household; provide physical help to the person, including whole-body manipulation; track how the person's language and abilities degrade over time; and provide for the person's needs while balancing helpfulness against being intrusive or annoying.
Service Logistics Planner Example: An AGI service logistics planner should be able to: perform logistics tasks without prior task-specific knowledge; do geometric reasoning, perhaps to relevant standards; do quantitative physical simulation; understand human needs and fears; and understand human psychology when setting up services for people.
Current Limitations: Current AI systems cannot perform the tasks required of an AGI elder care worker or service logistics planner. Special purpose AI systems could be built to perform some of these tasks, but a general purpose AGI that can perform any of these tasks does not yet exist.
Hard Problems in AI: Real perception in complex scenarios is challenging for AI systems. Services logistics require understanding local customs, workflow, space planning, and public transportation integration. Elder care involves providing physical help, meaningful interaction, and coordination with family and other service providers.
Real Perception: AI systems can make mistakes in object recognition due to adversarial attacks or misleading visual cues. Humans can perceive and compensate for different lighting conditions, but AI systems may struggle with this without explicit training. Top-down and bottom-up processes in human perception help infer properties like color constancy, which may not be easily learned from labeled datasets.
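As a concrete and deliberately simple reference point for the color-constancy problem mentioned here (not something the talk proposes), the gray-world assumption is a classical baseline:

```python
# Gray-world white balance: a classical baseline for color constancy. It
# assumes the average color of a scene should be neutral gray and rescales
# each channel accordingly. A toy stand-in for the top-down/bottom-up
# inference described in the talk, not the speaker's own method.
import numpy as np

def gray_world_balance(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float array in [0, 1], captured under an unknown illuminant."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means      # push each channel mean toward gray
    return np.clip(image * gain, 0.0, 1.0)

# A flat gray scene rendered under a strongly yellow illuminant:
scene = np.full((4, 4, 3), 0.5) * np.array([1.0, 0.9, 0.4])
corrected = gray_world_balance(scene)
print(corrected.reshape(-1, 3)[0])   # roughly equal R, G, B again
```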
Services Logistics: Army colonels can handle complex tasks like planning and executing military operations. Services logistics involve understanding local customs, workflow, space planning, and public transportation integration. AI systems currently lack the ability to handle such complex tasks effectively.
Elder Care: Elder care requires providing physical help, meaningful interaction, and coordination with family and other service providers. Billions of people worldwide provide this care without formal training, yet AI systems are not currently capable of providing comprehensive elder care services.
00:36:59 Understanding Common Sense and Object Recognition
Robot Manipulation: In 1978, Brooks was working with a robot arm at Stanford known as the blue arm, which had a parallel-jaw gripper. Forty years later, in 2018, his company's robot grippers had essentially the same design and features, indicating a lack of progress in manipulation hardware.
Reading Books: AI systems are often tasked with reading books and videos to acquire knowledge. However, the ability of current AI systems to comprehend language and extract information from text is limited.
Challenges in Language Comprehension: Many AI systems struggle to understand the subtleties and context of language, such as pronouns and implied meanings. Common sense and background knowledge are essential for comprehending language, which AI systems currently lack.
Common Sense Research: DARPA has initiated a $2 billion research program in AI, with an initial investment of $200 million dedicated to common sense research. The program focuses on understanding different domains, objects, agents, and places, similar to the developmental milestones of children.
Practical Goal: While many challenges remain, a feasible goal for AI researchers is to develop systems that can achieve the object recognition capabilities of a two-year-old child.
00:41:27 A Historical Overview of Computation: From Logarithms to Turing Machines
Progress and Remaining Challenges in AI: AI has made significant strides in tasks like object recognition, language understanding, and manual dexterity, but Brooks frames the realistic near-term targets in terms of children's abilities: the object recognition of a two-year-old, the speech and language of a four-year-old, the manual dexterity of a six-year-old, and the social understanding of an eight-year-old. Beyond these, challenges remain in replicating more advanced human capabilities, such as common sense reasoning, complex sentence comprehension, and the ability to articulate one's own beliefs and desires.
General Intelligence and Superintelligence: The pursuit of general intelligence and superintelligence presents a significant challenge. It is unclear whether humans have the intellectual capacity to create artificial intelligence that surpasses their own. The limits of human intelligence may prevent us from achieving general intelligence or superintelligence.
The Role of Computation in AI Progress: The availability of massive computational power has contributed to the rapid progress in AI. The increase in computation has allowed AI researchers to train models on vast datasets, leading to improved performance. However, it is uncertain whether computation alone can drive further progress in AI, or if more fundamental breakthroughs are needed.
The Historical Development of Computation: The history of computation can be traced back to efforts to compute astronomical events. John Napier's logarithmic tables, Johannes Kepler's use of logarithms, and Charles Babbage's Analytical Engine were instrumental in the development of computation. Ada Lovelace's annotated translation of an account of Babbage's Analytical Engine provided valuable insights into the capabilities of computational machines.
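The computational trick behind Napier's tables is worth stating explicitly (our gloss):

$$\log(ab) = \log a + \log b \quad\Longrightarrow\quad ab = \operatorname{antilog}\bigl(\log a + \log b\bigr),$$

so every multiplication becomes two table lookups and one addition.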
Turing’s Definition of Computation: Alan Turing’s 1936 paper “On Computable Numbers” provided a formal definition of computation. Turing defined computation as a mechanical process that can be carried out by a person using paper and pencil. Turing’s definition was motivated by the need to determine whether mathematics is complete, consistent, and decidable.
Hilbert's Problems and Turing's Contribution: David Hilbert asked whether mathematics is complete, consistent, and decidable. Turing's work on computability addressed the third of these questions, the Entscheidungsproblem. His definition of computation allowed him to show that mathematics is not decidable, undermining Hilbert's hope that it would prove to be all three.
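A minimal simulator makes Turing's notion of mechanical rule-following concrete; the example machine (unary increment) is our own illustration, not one from the talk:

```python
# Minimal Turing-machine simulator: a finite rule table, a tape, and a head
# that reads, writes, and moves one cell at a time, which is the mechanical
# paper-and-pencil process Turing formalized.
def run(tape, rules, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Unary increment: scan right over the 1s, write one more 1, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run("111", rules))   # -> 1111
```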
00:52:37 Exploring Computation's Role in Origin of Life and Human Intelligence
Computation and its Historical Evolution: Turing's work defined computation through mechanistic rules, though his approach has been criticized as too narrow. Minsky explored computation as building machines from a finite number of simple parts, highlighting the beauty of its theoretical foundations. Knuth, and later Hopcroft and Ullman, defined computation in terms of human actions and procedures, leading to diverse interpretations. The definition of computation remains socially constructed, varying across contexts.
Computation in Space Exploration: SpaceX’s success in rocketry and satellite communication exemplifies the need for a broad approach to problem-solving beyond mere computation.
Computation’s Role in Understanding Human Origins and AI: Computation has become a dominant paradigm in addressing fundamental questions about human origins and artificial intelligence. However, computation’s limitations become evident when considering the complexity of these questions.
Cognitive Limitations and Embodied Cognition: Barbara Tversky’s work highlights the limitations of human cognition, emphasizing the role of action, spatial thinking, and metaphors in shaping our understanding. Our brains’ limited nature contrasts with the attempt to understand intelligence without limitations. Turing’s model of computation, with its emphasis on places and containers, may not be sufficient for capturing the nuances of human intelligence.
Computation and the Brain: Computation as a model for understanding the brain faces challenges due to the smeared nature of memory and the complexity of neural networks.
Polyclad Flatworm Experiment: A brain transplant experiment with polyclad flatworms demonstrated their ability to adapt and learn even with their brains placed in different orientations, suggesting that development, rather than computation, may be a more suitable framework for understanding such phenomena.
Focus on New Thinking Paradigms: Brooks emphasizes the need for a new, non-computational approach to understanding concepts like intelligence and the development of life. Computational methods may have limitations in these areas, requiring a fresh perspective.
Embracing Risk for Groundbreaking Discoveries: Brooks encourages researchers to take risks and pursue unconventional ideas, even if it means sacrificing tenure. The potential rewards, such as fame and recognition, outweigh the risks for those dedicated to solving these complex problems.
Openness to Questions and Diverse Perspectives: Bob invites the audience to ask questions and engage in a dialogue about the topic. He encourages participation and acknowledges the value of diverse opinions.
Examples of Promising Research Directions: Katerina Fragkiadaki's work on improving the quality of data sets for machine learning is highlighted. Her approach involves simulating real-world scenarios to generate more natural and diverse data. Brooks suggests that some simulation work may be beneficial in this area.
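A toy sketch of the general idea of simulating variation to diversify training data follows; the parameter ranges are invented and this is not the actual pipeline discussed in the question:

```python
# Toy sketch of simulating real-world variation to diversify training data:
# jitter brightness, contrast, and sensor noise on an image array. The
# parameter ranges are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def randomize(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float array in [0, 1]."""
    brightness = rng.uniform(0.7, 1.3)
    contrast = rng.uniform(0.8, 1.2)
    noise = rng.normal(scale=0.02, size=image.shape)
    out = (image - 0.5) * contrast + 0.5              # contrast about mid-gray
    return np.clip(out * brightness + noise, 0.0, 1.0)

base = np.full((8, 8, 3), 0.5)                        # stand-in source image
augmented = [randomize(base) for _ in range(4)]       # four simulated variants
print([round(float(a.mean()), 3) for a in augmented])
```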
Embodied Thinking and Abstract Thought: A question is raised about the relationship between our embodied cognition and our thinking capabilities. Brooks believes that expanding our physical capabilities, such as space travel, may not necessarily expand our thinking abilities. He advocates a shift towards more abstract and mathematical thinking to overcome the limitations of physical metaphors.
01:06:23 Discerning Reality from Science Fiction: Exploring the Challenges of Achieving Artificial General Intelligence
AI Complexity and Symbolic Systems: Universal algebra requires a different way of thinking about the world, a more abstract approach. It’s not about dealing with physical systems but symbolic ones, demanding a unique mindset.
Intelligent and Superintelligent Systems: Superintelligence is described as surpassing human intelligence to the extent that a human cannot deceive or outsmart it. Human intelligence serves as a reference point for comparison, but AI complexity goes beyond a single scale. Tasks in the real world can be used as metrics to evaluate AI performance.
Computation vs. Development: Computers are useful for simulating aspects of systems, like rocket engines, but they operate at a different complexity level. Understanding computational systems may not lead to understanding biological systems that self-construct their environments. There’s a need for something new beyond current computational models.
Stuart Kauffman's Book on Biological Systems: Stuart Kauffman's book emphasizes how biological systems construct their own environments. The first two chapters provide real insight into this idea, while the later chapters have less depth. Brooks notes that the way immune systems self-construct and learn inspired Nobel Prize winners in immunology to pursue AI research.
Challenges in Slicing Up the Brain: Perfectly slicing up the brain and simulating its processes is a distant goal. We still lack a fundamental understanding of the brain’s functions. Recent discoveries, like the importance of small molecules and glial cells, highlight our limited knowledge. Scanning and cutting the brain without understanding its workings is not a viable approach.
Media Coverage of AI: The media's coverage of AI has oscillated between hype and skepticism over the years. Outrageous claims about AI's capabilities have led to funding crashes in the past. The Lighthill Report in 1973 and the neural-network hype of the 1980s serve as examples of such cycles.
DARPA's Robot Ban: Brooks was advised against using the term "robot" at DARPA due to its negative connotation. Instead, he was encouraged to use the phrase "programmable machines."
Negative Press of Robots: Robots have received bad press because they often fail to deliver on their promised capabilities. This results in a cycle of hype and disappointment.
Brooks's Cautionary Approach: Brooks has been explicit about what AI and robots cannot do in the near future. He does this to avoid criticism from the press for making unrealistic claims.
Elon Musk's AI Advocacy: Brooks mentioned that he often receives criticism from Elon Musk's fans, who believe he underestimates the power of AI.
Superintelligence and Archived Conversations: Bob humorously acknowledged the possibility of a superintelligent entity in 2400 understanding and commenting on their archived conversation.
Audience Interaction: Bob informed the audience that Rodney Brooks had a limited amount of time for interaction before he had to leave. He invited those interested in speaking with Brooks to approach him after the presentation.
Abstract
The Future of AI: Balancing Ambition and Realism
Harnessing the Past and Present to Navigate the Future of Artificial Intelligence
In a compelling exposition blending history, current trends, and future potential, Rodney Brooks, a renowned robotics expert, unravels the complex tapestry of artificial intelligence (AI). His insights, rooted in a rebellious spirit, challenge conventional wisdom and highlight the dichotomy between AI's theoretical aspirations and practical achievements. From Turing's foundational work to the advent of behavior trees in video games, and addressing the "Seven Deadly Sins" of predicting AI, Brooks's narrative is a profound reminder of AI's journey, its current limitations, and the pragmatic path forward.
The Legacy of Turing and AI’s Evolution
Alan Turing's groundbreaking work in the mid-20th century, particularly "On Computable Numbers" (1936) and the Turing Test (1950), set the stage for AI's development. Turing envisioned machines mirroring human intelligence, guided by the principles of the human brain. His successors, including Marvin Minsky, expanded AI into distinct domains such as search, pattern recognition, and learning. The evolution continued with diverse approaches: Symbolic AI, Neural Networks, Reinforcement Learning, and Conventional Robotics, each contributing uniquely to AI's progression.
The Practical and Theoretical Aspects of AI
Brooks critiques the hyperbolic perception of AI, emphasizing the "Generalization Gap" and the overestimation of AI capabilities. He notes the importance of distinguishing between AI performance and competence, underscoring the susceptibility of AI to adversarial examples and its struggles with reasoning and context. Meanwhile, DARPA's $2 billion initiative aims to instill common sense in AI, targeting human developmental milestones as benchmarks.
Robotics and AI: A Reality Check
Despite significant advances, AI and robotics face substantial challenges in areas like manipulation, real perception, and service logistics planning. Brooks points out that modern grippers still resemble their 1970s counterparts, and AI systems grapple with complexities that a two-year-old can easily navigate. This disparity underscores the need for a more nuanced understanding of AI's capabilities and limitations.
Rodney Brooks, a renowned roboticist, has continuously challenged conventional wisdom and explored new frontiers in AI and robotics. He co-founded iRobot, which produced the world’s best-selling robot, the Roomba, based on his research and ideas. Brooks also played a pivotal role in promoting humanoid robotics, leading to the development of robots like ASIMO and Atlas.
Rethinking Computation and Intelligence
Brooks expands the discourse to the philosophical and conceptual underpinnings of AI. He reflects on Turing's original motivations, the social constructions surrounding computation, and the limitations of our current computational models. He also points to Katerina Fragkiadaki's work, which addresses AI's data-set limitations, and discusses the potential of abstract thinking and universal algebra in advancing AI.
The Media, Perception, and Future Trajectories
The media's sensational portrayal of AI contrasts starkly with its actual achievements, leading to funding inconsistencies and skewed public perceptions. Brooks calls for a balanced approach, focusing on achievable tasks and addressing real-world problems pragmatically. He cautions against overhyped claims and advocates for a deeper understanding of AI's true potential and boundaries.
AI, A Journey of Balanced Aspirations
As AI continues to evolve, Brooks's insights serve as a beacon for navigating its complex landscape. His emphasis on understanding AI's historical roots, current limitations, and future possibilities highlights the importance of a balanced, informed approach. By reconciling ambition with realism, the AI community can forge a path that honors Turing's legacy while remaining grounded in practical, achievable advancements.