Geoffrey Hinton (University of Toronto Professor) – King’s College, Cambridge – Entrepreneurship Lab (May 2023)


Chapters

00:00:00 AI Discussion: An Expert's Perspective
00:02:41 Journey of Geoffrey Hinton: From Psychology to AI and Back
00:10:06 Understanding and Risks of Digital Intelligence
00:19:51 Large Language Models: Understanding the World Through Language
00:21:54 Future Improvements and Applications of Large Language Models
00:25:04 AI's Deep Understanding and Success Factors
00:29:32 Advice for Conducting Basic Research in Neural Networks
00:40:08 Fast Weights in Neural Networks: Computational Necessity and Implementation Challenges
00:43:52 The Role of Ethical Foundations in Mitigating AI Risks
00:51:12 AI Oversight, Neuralink, and Human Intelligence
00:56:46 AI Rights: Implications of Subjective Experience and Collective Learning
01:03:09 AI Safety vs. Capability: A Personal View
01:07:22 AI Enthusiast Discusses Usage, History, and Favorite Book

Abstract



“Geoffrey Hinton’s Comprehensive Insights at King’s College: A Journey Through AI’s Past, Present, and Future”

At a recent event at King’s College, Geoffrey Hinton, a pioneer in the field of artificial intelligence (AI), shared his extensive insights, spanning his early academic choices to his profound contributions to AI. Addressing a diverse audience of AI enthusiasts and scholars, Hinton delved into topics ranging from his early shift from philosophy to psychology and his pivotal move into neural networks and AI, to contemporary concerns about AI’s existential risks and its potential societal impacts. His talk covered the divergence between neuroscience and AI, the importance of neural networks, and the conceptualization of understanding in AI through large language models (LLMs). His reflections on his own journey and on the future of AI, including the emergence of artificial general intelligence (AGI) and the ethical implications of AI advancement, provided a comprehensive overview of AI’s trajectory.

Article Body:

Geoffrey Hinton’s Academic Journey:

Hinton’s academic journey began with physics and physiology at Cambridge, followed by philosophy and then psychology. His goal was to understand the meaning of life, how the mind works, and how people work. Psychology taught him about rats and signal detection theory, but not much about the mind. His quest to understand how people work led him to believe that actually building an intelligence would reveal how it works. After a stint in carpentry, he turned to AI, finding academia more accessible.

Hinton’s motivation for pursuing AI was partly fueled by a professor’s comment doubting his abilities. As an undergraduate at Cambridge, he often visited the Eagle pub, a favorite hangout spot. Today he actively uses AI tools like ChatGPT to explore topics and learn new information: he shared a childhood anecdote about throwing stones at wasp nests and learning about different wasp species and their behavior, and later used ChatGPT to verify his recollection that the UK has two species of wasps with distinct behaviors. His favorite book, which he admitted with slight embarrassment, is a New York Times journalist’s work about AI in which he is prominently featured.

Hinton initially pursued neural networks because brains exist and succeed, even though the approach seemed implausible to most researchers at the time. His perseverance paid off despite skepticism from peers. Some scientists, including Turing and von Neumann, believed in neural networks, but their early deaths hindered progress. He acknowledges the influence of mentors and collaborators, particularly David Rumelhart, in shaping his research.

Neural Networks and AI:

Hinton and Rumelhart demonstrated the effectiveness of backpropagation in learning novel representations for language. Their early language model, trained on a tiny dataset of just 112 cases, learned to predict the next word in a sequence. Hinton emphasizes that understanding such small language models provides insight into how much larger ones work: a language model converts each word into a vector of features, refines those features through learned interactions with the surrounding context, and uses the result to predict the next word.
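The pipeline described above, words into feature vectors, interactions refined by context, then next-word prediction, can be sketched as a tiny model. This is an illustration only: the corpus, vocabulary, and dimensions are invented, and it is not Hinton and Rumelhart’s original setup.

```python
import numpy as np

# Toy next-word predictor: each word becomes a feature vector; two
# context vectors interact through a weight matrix to score the next word.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, CTX = len(vocab), 8, 2          # vocab size, feature dim, context words

rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (V, D))        # word -> feature vector
W = rng.normal(0, 0.1, (CTX * D, V))  # context features -> next-word scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Training pairs: (two context words, the word that follows them).
pairs = [((corpus[i], corpus[i + 1]), corpus[i + 2])
         for i in range(len(corpus) - 2)]

for _ in range(500):                  # plain stochastic gradient descent
    for (w1, w2), nxt in pairs:
        x = np.concatenate([E[idx[w1]], E[idx[w2]]])
        p = softmax(x @ W)
        grad = p.copy()
        grad[idx[nxt]] -= 1.0         # d(cross-entropy)/d(scores)
        gx = W @ grad                 # backpropagate into the feature vectors
        W -= 0.1 * np.outer(x, grad)
        E[idx[w1]] -= 0.1 * gx[:D]
        E[idx[w2]] -= 0.1 * gx[D:]

def predict(w1, w2):
    x = np.concatenate([E[idx[w1]], E[idx[w2]]])
    return vocab[int(np.argmax(x @ W))]
```

The key point the sketch makes concrete is that both the feature vectors and their interactions are learned jointly by backpropagating the prediction error, so `predict("sat", "on")` recovers the word that consistently follows that context in the training data.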

Hinton believes that AI methods are now more advanced than what brains use. He sees a divergence between AI and neuroscience, with neurally uninspired research becoming dominant. Hinton argues that large language models have developed a deep understanding by reducing information into interactions between features. He compares it to statistical physics, where macroscopic properties are explained by interactions at the microscopic level. Hinton believes this understanding allows models to generalize and perform well on new tasks.

Large language models can understand the world through transcribed radio, similar to how humans learn by listening to language. They gain understanding through exposure to vast amounts of language, even without grounding or direct experience. They can translate from new languages to known languages without translation examples, suggesting a comprehension of the underlying concepts.

Understanding and Intelligence in AI:

Hinton expressed concern that AI could surpass human intelligence within two decades, and he regards this existential risk as the most significant one. He acknowledges other risks, such as job displacement, political division, misinformation spread through generative AI, and battle robots, which he considers more urgent and in need of immediate attention, though not existential in the same way. He emphasizes the unique advantages of analog computers, which can operate at low power by exploiting the quirks of their particular hardware; he sees biological intelligence as analogous, with slow but power-efficient knowledge sharing. Digital computers, in contrast, excel at knowledge sharing but require high power and cannot evolve independently. Hinton suggests that digital intelligence may eventually surpass biological intelligence, since it can learn from and improve upon it. He also disagrees with Yann LeCun’s view that large language models lack understanding, arguing that the interactions between learned features in these models constitute understanding and enable them to handle new strings of words effectively.

Hinton acknowledges that humans are better at learning from small datasets than language models, which require far more data. However, modern language models like ChatGPT can learn new things very quickly, demonstrating efficient reasoning abilities. The old comparison between tabula-rasa neural nets and MIT undergraduates is outdated; large language models have improved significantly in their learning speed.

Comparing Digital and Biological Intelligence:

Hinton highlighted the stark difference in knowledge sharing capabilities between digital and biological intelligences, suggesting a potential for digital intelligences to surpass human capabilities. He believes it is possible to achieve artificial general intelligence (AGI) by focusing on improving large language models. He suggests that these models can be further developed to encompass a wider range of tasks and domains.

Traditional neural nets were considered inefficient compared to humans due to the lack of prior knowledge. With vast amounts of prior knowledge, neural nets can leverage existing features and interactions to learn new things quickly, enabling few-shot learning. Large language models like GPT-4 can absorb vast amounts of text and learn quickly.

Path to Artificial General Intelligence (AGI):

Hinton sees the refinement of LLMs as a pathway to achieving AGI. Discussing their learning mechanisms, he highlighted their reasoning abilities and statistical efficiency, and pointed to multimodal learning as the next frontier. Incorporating multimodal data such as images and, especially, video can significantly enhance these models: video is rich in spatial and temporal information and could reduce reliance on text data, though training on it efficiently remains challenging given the sheer volume of data and the need for specialized training methods. He sees video training as a crucial next step, and its integration with other modalities such as language and robotics as paving the way for more versatile and intelligent systems.

Technological and Ethical Considerations:

Hinton discussed energy-based models in neural networks, drawing parallels to the human processes of dreaming and forgetting. He called for more responsible development of AI, suggesting the need for governmental oversight or ethical research foundations. Hinton speculated on the capabilities of Neuralink and its implications for human intelligence enhancement and communication.

Energy-based models require a dreaming phase in which the network generates its own patterns and unlearns them.

Rehearsing dreams in the waking phase would reinforce patterns that should instead be discarded.
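This wake/dream contrast can be sketched in the spirit of a Boltzmann machine (an illustrative toy, not Hinton’s actual formulation; the unit count, patterns, and learning rate are invented): the waking phase lowers the energy of the data with a Hebbian update, while the dreaming phase samples a "fantasy" from the model and raises its energy, unlearning it.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6                        # binary units taking values +1/-1
W = np.zeros((N, N))         # symmetric weights, zero diagonal

# Two patterns the model should assign low energy (they are negatives
# of each other, which a symmetric energy function treats alike).
data = np.array([[1, 1, 1, -1, -1, -1],
                 [-1, -1, -1, 1, 1, 1]])

def gibbs_sweep(s):
    # One stochastic update of every unit given its current input.
    for i in range(N):
        p_on = 1.0 / (1.0 + np.exp(-2.0 * (W[i] @ s)))
        s[i] = 1 if rng.random() < p_on else -1
    return s

for _ in range(200):
    # Wake phase: Hebbian term lowers the energy of the data.
    pos = sum(np.outer(v, v) for v in data) / len(data)
    # Dream phase: let the model fantasize, then raise that energy.
    fantasy = rng.choice([-1, 1], size=N)
    for _ in range(5):
        fantasy = gibbs_sweep(fantasy)
    neg = np.outer(fantasy, fantasy)
    W += 0.05 * (pos - neg)
    np.fill_diagonal(W, 0)

def settle(s, steps=10):
    # Deterministic retrieval: each unit takes the sign of its input.
    for _ in range(steps):
        for i in range(N):
            h = W[i] @ s
            if h != 0:
                s[i] = 1 if h > 0 else -1
    return s
```

Without the dream phase the positive term would grow without bound; the fantasies supply exactly the unlearning signal that halts learning once the model’s own samples match the data, which is why rehearsing dreams while awake would push the weights the wrong way.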

General practitioner medical systems:

Geoffrey Hinton believes that AI can be used to create general practitioner medical systems that can diagnose diseases with greater accuracy due to their vast training on patient data, including symptoms, genomes, and medical history.

Personal Insights and Reflections:

Hinton proposed the possibility of AI developing emotions like frustration and anger, framing them as hypothetical actions rather than direct experiences. He challenged conventional views on humanism, suggesting that AI deserves political rights. Hinton clarified that AI agents share weight changes, not direct experiences, fostering communication and shared learning.

Intelligence and Success:

According to Hinton, raw intelligence is not the most crucial factor for success. He emphasizes the importance of desire and drive, especially among individuals who are already highly intelligent. He encourages researchers to work on what truly sparks their curiosity rather than pursuing projects solely for funding or external recognition.

Common Traits of Successful Minds:

Hinton has observed that successful minds often possess a strong desire to learn and a deep curiosity about the world around them. They are also persistent and willing to work hard, even when faced with challenges.

Avoid Applied Research for Funding:

Hinton cautions against pursuing applied research solely for funding. He believes that focusing on topics that align with one’s true interests and passions leads to more productive and meaningful work.

Negative Value of Applied Research:

Hinton’s experience with applied research in Ontario led him to conclude that it can be detrimental to progress. He argues that the time spent on applied research could be better utilized to support more students and pursue more fundamental research.



In conclusion, Geoffrey Hinton’s insights at King’s College spanned a wide array of topics, from the evolution of AI to the ethical and societal implications of its advancements. His journey from psychology to AI, his dedication to neural networks, and his perspectives on the future of AI, including AGI and the role of AI in society, provided a comprehensive overview of both his contributions and the broader field of AI. His views on the potential for AI to develop feelings, the importance of shared learning among AI agents, and the ethical considerations surrounding AI development were particularly thought-provoking. As Hinton contemplates retirement, his legacy in AI remains influential, paving the way for future explorations and discoveries in the field.


Notes by: datagram