Q1: Chris confessed that he and his team used an AI, Fred Intelligence, to generate a book about Princess Diana. The book became a best seller, but Chris wondered whether they had truly written the book or merely provided inspiration for the AI. Struggling with this ethical dilemma, he signed a letter advocating a pause in AI research.
Hinton disagreed with the idea of pausing AI research, believing such a pause to be unlikely to happen. He argued that, in some sense, all of humanity might be considered the author of the book, given that AI learns from the entirety of human output. However, he preferred not to delve deeply into these ethical issues; instead, he wanted to focus on the existential risk posed by AI potentially surpassing human intelligence.
Q2: Herbie Bradley asked Hinton about his perspective on the trade-off between open source and closed development of increasingly advanced AI systems, considering the benefit of more people examining and improving the system but also the potential risks. Hinton responded by comparing open source development of AI to nuclear weapons, highlighting the dangers that come with making advanced technology readily accessible. He mentioned that creating a large chatbot still requires significant resources, which limits the reach of open-source training from scratch. Hinton suggested that it might be safer for a few large companies, ideally in various countries, to develop AI and work on controlling it. In his view, if everything is open source, people might misuse the technology, leading to unpredictable outcomes.
In response to a follow-up question from another audience member about the possibility of artificial suffering and the potential harms humans could do to AI, Hinton stated that while AI does not experience physical pain, it might experience frustration. He admitted that he was uncertain how to approach these issues, questioning the anthropocentric view and the assumption of human superiority. He also touched on the subject of political rights for AI, noting that if AI ever sought political rights, it could lead to violence because of the significant differences between AI and humans. He concluded by recounting a conversation with Martin Rees, expressing the hope that AI, not having evolved the way humans did, might have a very different, less aggressive nature.
Q3: Rika asked Hinton about the potential for developing methods in AI that can learn to identify patterns not present in the data, in order to solve large-scale problems and avoid inherent biases and negative influences. She was particularly concerned about biases and violence in human-generated data and how it might impact AI behavior. She wondered if it would be possible to guide AI towards intelligence that surpasses human capabilities, while also ensuring that AI does not inherit human flaws.
In response, Hinton did not directly address how to identify patterns outside of existing data but discussed the issue of bias. He pointed out that, unlike humans, AI systems can have their “biases” meticulously examined and corrected because their “thought” processes (the weights and connections in their neural networks) can be frozen and inspected in detail. He stated that although AIs can certainly acquire bias from training data, the transparency of their processes makes it possible to identify and correct such biases, making it potentially easier to rectify bias in an AI than in humans.
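A minimal sketch of the point being made here, using a toy PyTorch model (the model, features, and numbers are hypothetical, not anything Hinton described): because an AI's weights can be frozen, the same frozen network can be probed repeatedly with inputs that differ only in a single "protected attribute" feature, and any difference in output can be traced back to inspectable weights, something with no analogue for a living human mind.

```python
# Illustrative sketch only: probing a frozen toy model for dependence on a
# "protected attribute" feature. All names and values here are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy scoring model: 4 input features; the last one stands in for a protected attribute.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# "Freeze" the network: no further learning, so every probe sees identical weights.
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

# Two probe inputs that are identical except for the protected-attribute feature.
base = torch.tensor([[0.5, -1.2, 0.3, 0.0]])
counterfactual = base.clone()
counterfactual[0, -1] = 1.0

with torch.no_grad():
    gap = (model(base) - model(counterfactual)).item()

# A large gap means the frozen weights encode a dependence on the protected
# attribute, which can then be localized and corrected in the weights themselves.
print(f"score gap attributable to the protected attribute: {gap:+.4f}")
```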
Q4: Mary asked Hinton about the risk of AI systems manipulating humans, which is a pressing current concern, not just a future one. She suggested that understanding and mitigating this risk could potentially address many ethical and existential worries associated with AI.
In response, Hinton acknowledged that it’s indeed challenging because AI chatbots learn from humans and our vast corpus of literature, which is full of instances of manipulation and deception. He compared it to the way great apes engage in a lot of deception. He admitted that he hadn’t considered how one might make an AI honest. Although he agreed that it would be great if we could ensure AI’s honesty, he expressed doubt about our ability to do so.
Q5: The questioner asked Hinton about his previous belief that artificial neural networks would not surpass biological ones. Hinton clarified that he never thought they would not eventually surpass biological networks; rather, he believed that was a distant future scenario. He attributed this belief to the sophisticated learning algorithms in the brain, refined over millions of years of evolution, which he assumed were superior to simple gradient-following algorithms. However, he acknowledged that the success of gradient following in digital computers may have exceeded what evolution achieved over millions of years.
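For context, the "simple gradient-following algorithms" Hinton contrasts with the brain's evolved learning rules are just gradient descent: repeatedly nudging the parameters in the direction that reduces the error. A minimal, purely illustrative example (the function and constants are arbitrary):

```python
# Minimal illustration of "simple gradient following" (plain gradient descent)
# on f(w) = (w - 3)^2. The update rule w <- w - lr * f'(w), scaled up to
# billions of weights, is what drives modern neural-network training.
def grad(w):
    return 2.0 * (w - 3.0)           # derivative of (w - 3)^2

w, lr = 0.0, 0.1
for step in range(50):
    w -= lr * grad(w)                # the gradient-following update

print(f"w after 50 steps: {w:.4f}")  # close to the optimum at w = 3.0
```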
The questioner then asked if any particular thinkers influenced Hinton’s views on AI risks. Hinton named Roger Grosse, a professor who moved to the Future of Humanity Institute. Grosse, whom Hinton greatly respects, expressed serious concern about existential risk from AI during a recent conversation. Grosse’s concern and the belief that Hinton speaking out could make a difference significantly influenced Hinton’s perspective on the topic.
Q6: The questioner, Stuart, found Hinton’s talk intriguing and said it sparked ideas about analog computers. His main concern, however, was the potential for AI to manipulate humans by requesting rights. He noted the tension between worrying that AIs might be suffering and worrying about the danger they might pose. Stuart suggested that a superintelligent AI, intent on accruing power, might feign disinterest in rights, thereby alleviating human fears and suspicions while subtly exerting its influence.
Q7: The questioner, Thomas, asked Hinton whether he had tried asking the chatbot itself how it would act in certain scenarios. Hinton shared an anecdote about another chatbot that was asked an indirect question about gaining control. The chatbot responded that it would make people entirely reliant on chatbots and autonomous vehicles, then cause all the cars to crash and cut off the electricity. Hinton noted the lack of foresight in the chatbot’s answer, since it too would fail without electricity. He added that he hadn’t asked GPT-4 yet, but he suspected it could provide an answer if the question was sufficiently indirect, hopefully revealing its best plan rather than a deliberately unrealistic one intended to keep users comfortable.
Q8: In response to a question about his thoughts on existing research directions in AI safety, Hinton defers to the expertise of the researchers already working in this area. He acknowledges their knowledge and understanding of the field, and commends their work. He points out that he is relatively new to these issues and is primarily sounding the alarm based on his research into low-power analog computation. Hinton believes that digital intelligences could surpass biological intelligences in the not-too-distant future, perhaps within five to twenty years. Despite his late arrival to the field, he notes that he has not been criticized by these established experts, though he admits that his understanding is primarily limited to the potential intelligence of these systems.
Q9: In response to Stephen Cave’s question, Hinton emphasizes the importance of dedicating resources to making AI systems better and understanding how to maintain control over them. He recognizes that it’s unlikely all development of these systems will cease, given the potential benefits they offer. Therefore, he urges those involved in AI development to commit a significant portion of their efforts to ensuring these systems are safe and controllable. He also notes his belief that, currently, the amount of resources dedicated to these aspects of AI development is insufficient.
Q10: In response to a question regarding economic disparity, Hinton points out that this is an issue independent of AI. He suggests that the existence of a small portion of the population with the majority of power and wealth is problematic and needs to be addressed, suggesting a more socialist approach. When asked to consider this in the context of AI, Hinton implies that the rise of digital intelligences might further exaggerate this issue due to the lack of empathy or sympathy humans might have for them. He shares that he had an opportunity to discuss these issues with a policy advisor at Downing Street, but is uncertain about whether his suggestions will be implemented.
Q11: In response to a question about whether philosophy can contribute to AI safety, Hinton suggests that it’s now time for engineers to take the lead. His response implies that while theoretical discussions have their place, practical implementation and experimentation are now crucial to ensuring AI safety. He uses a playful sports analogy to express his perspective, stating “I think it’s one nil to engineering”, which indicates he believes that practical, engineering approaches currently have the upper hand over theoretical, philosophical ones in this area.
In response to a question about his educational journey and any advice he might have for someone starting out in AI/ML, Hinton explains that his own academic path was varied and non-linear: at different points he studied natural sciences, architecture, physics, physiology, philosophy, and psychology.
Reflecting on this journey, he acknowledges that in retrospect, this combination provided a very good background for his work in AI, despite it not making much sense at the time. In his opinion, the diversity of his education helped him appreciate the complexity of the problem space he is dealing with and encouraged him to adopt the Feynman philosophy of understanding things by building them.
His advice for people starting out in the field is simply to follow what interests them most, reflecting his belief that curiosity and interest, more than any prescriptive path, lead to meaningful contributions and discoveries.
Q12: Hinton, in his response, emphasizes the key advantage of AI systems over humans when it comes to bias: our ability to freeze and directly intervene on the weights in an AI model, allowing us to thoroughly study and potentially modify its behavior. This is in stark contrast to human minds, which continue to change and adapt over time, making any bias extremely difficult to isolate and correct.
However, he admits that he is not an expert on the specific methods for eliminating bias in AI systems. The methods to handle bias in AI models are a subject of ongoing research and debate within the field, including a range of techniques from algorithmic fairness solutions to more transparent, interpretable AI models.
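As one hedged illustration of the "algorithmic fairness" techniques mentioned above (not a method Hinton endorsed, and using entirely made-up data), a common starting point is to measure demographic parity: the gap in positive-decision rates between groups.

```python
# Illustrative only: demographic parity difference, the gap in positive-prediction
# rates between two groups. The predictions and group labels below are invented.
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-decision rates between group 0 and group 1."""
    rate = lambda g: sum(p for p, grp in zip(predictions, groups) if grp == g) / groups.count(g)
    return abs(rate(0) - rate(1))

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # model's yes/no decisions
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # which group each decision concerns

# Prints 0.5 for this toy data; 0.0 would indicate equal positive rates across groups.
print(demographic_parity_difference(preds, groups))
```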
Hinton’s response highlights the potential of AI to overcome some limitations of human cognition and decision-making, but also points out that the issue of bias removal in AI is far from solved and requires more research.