Geoffrey Hinton (Google Scientific Advisor) – Heroes of Deep Learning (Aug 2017)
Abstract
Geoffrey Hinton and the Evolution of Deep Learning: Bridging Brain Insights and AI Breakthroughs
This article delves into the remarkable journey of Geoffrey Hinton, a pioneer of artificial intelligence whose work has been instrumental in bringing insights from brain research into deep learning. Hinton’s intellectual odyssey, from his early fascination with how the brain works to his groundbreaking contributions to neural networks and deep learning, has shaped modern AI. His theories and inventions, including backpropagation, early word embeddings, restricted Boltzmann machines, capsules, and the concept of thoughts as vectors, have revolutionized our understanding of AI. We explore his academic journey, his key contributions, and the paradigm shift in AI thinking, offering insights into the future of this transformative field.
Early Inspirations and Academic Journey
Geoffrey Hinton’s interest in AI was sparked by a high-school conversation about the brain possibly storing memories holographically, distributed across many neurons. That question led him through physiology, physics, philosophy, and psychology in an effort to understand how the brain works, and, after a brief detour as a carpenter, into AI at the University of Edinburgh.
The Pursuit of Neural Networks
Despite clashing with his supervisor, Christopher Longuet-Higgins, who had come to favor symbolic AI, Hinton held fast to neural networks and completed his PhD in AI. A Sloan Fellowship then took him to California, where a far more receptive research environment catalyzed his work on neural networks.
Groundbreaking Contributions
Hinton’s most notable contribution is the backpropagation algorithm, developed with David Rumelhart and Ronald Williams and published in Nature in 1986. That paper showed that backpropagation could learn distributed representations of words, an early form of word embeddings that unified the view of concepts as feature vectors with the view of concepts as relational structures, and it laid a cornerstone for modern natural language processing. Hinton has also argued that the brain may implement something like backpropagation: individual cells are complex enough that a comparable algorithm is biologically plausible. The recirculation algorithm, developed with Jay McClelland, offers an alternative way to train autoencoders without backpropagation. His invention of Boltzmann machines and restricted Boltzmann machines further advanced deep learning, and his idea of stacking restricted Boltzmann machines, trained on reconstruction error, to obtain discriminative performance was initially overlooked but later gained wide attention. Fast weights, which Hinton first proposed during his graduate studies, act as a short-term memory that enables true recursion; his recent work with Jimmy Ba demonstrated the concept in modern networks.
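To make the algorithm concrete, here is a minimal sketch of backpropagation on a toy problem. The 2-4-1 network, the XOR task, and all hyperparameters are illustrative assumptions, not details from the 1986 paper.

```python
# Minimal backpropagation sketch: a 2-4-1 sigmoid network on XOR.
# All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                 # hidden activities
    p = sigmoid(h @ W2 + b2)                 # predictions

    # Backward pass: chain rule, layer by layer (squared-error loss)
    d_out = (p - y) * p * (1 - p)            # dLoss/d(pre-activation 2)
    d_hid = (d_out @ W2.T) * h * (1 - h)     # dLoss/d(pre-activation 1)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```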
Innovations in Neural Networks
Hinton’s work on sigmoid belief nets and deep belief nets offered new perspectives on inference and learning in neural networks, and his development of variational methods, particularly variational Bayes, expanded Bayesian learning in neural networks. His championing of rectified linear units (ReLUs) marked another significant stride.
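Deep belief nets are built by greedily stacking restricted Boltzmann machines, so a rough sketch of the building block may help: one RBM trained with single-step contrastive divergence (CD-1). The toy data, layer sizes, and learning rate below are assumptions for illustration only.

```python
# Minimal RBM sketch trained with CD-1; greedily stacking this building
# block yields a deep belief net. Toy data and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
a = np.zeros(n_vis)   # visible biases
b = np.zeros(n_hid)   # hidden biases

# Two repeated binary patterns as toy training data
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(200):
    for v0 in data:
        # Positive phase: hidden units driven by the data
        ph0 = sigmoid(v0 @ W + b)
        h0 = (rng.random(n_hid) < ph0).astype(float)
        # Negative phase: one reconstruction step (CD-1)
        v1 = (rng.random(n_vis) < sigmoid(h0 @ W.T + a)).astype(float)
        ph1 = sigmoid(v1 @ W + b)
        # Move toward the data statistics, away from the model's
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1); b += lr * (ph0 - ph1)

# Reconstruction error: the cheap progress signal mentioned earlier
recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.mean((data - recon) ** 2))
```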
The Brain and Backpropagation
Hinton has long asked whether the brain could implement something like backpropagation, and while these notes do not capture his current hypotheses in detail, bridging insights from brain research with AI technology has been a constant thread in his work.
Capsules and Future Directions
Hinton’s ongoing work on capsules represents a significant departure from traditional approaches to object representation. Building on the idea of distributed representations, a capsule represents one instance of a feature together with its multiple properties as a small vector of activities, unlike the single scalar activity of a conventional neuron. Routing by agreement, in which lower-level capsules send their outputs to the higher-level capsules whose predictions they agree with, is the mechanism for grouping features into structured, hierarchical object representations, and Hinton expects it to yield better generalization from limited data and more efficient training. Discriminative training combined with iterative routing enables supervised learning in capsule networks, and Hinton leads a research team in Toronto working to make them perform.
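As a rough illustration of routing by agreement, the sketch below follows the dynamic-routing scheme Hinton’s group later published (Sabour, Frosst, and Hinton, 2017). The layer sizes are arbitrary, and the prediction vectors are random stand-ins for what learned transformation matrices would produce in a trained network.

```python
# Routing-by-agreement sketch between two capsule layers (dynamic
# routing). u_hat holds each input capsule's prediction for each output
# capsule; here it is random, standing in for learned transforms.
import numpy as np

rng = np.random.default_rng(0)

def squash(s):
    # Shrink short vectors toward 0 and long ones toward unit length,
    # so a capsule's length can be read as an existence probability.
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-9)

n_in, n_out, dim = 8, 3, 4
u_hat = rng.normal(size=(n_in, n_out, dim))  # predictions from below

logits = np.zeros((n_in, n_out))             # routing logits, uniform
for _ in range(3):                           # a few routing iterations
    c = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    s = np.einsum("io,iod->od", c, u_hat)    # weighted sum of predictions
    v = squash(s)                            # output capsule activities
    logits += np.einsum("iod,od->io", u_hat, v)  # reward agreement

print(np.linalg.norm(v, axis=-1))  # capsule lengths ~ existence probs
```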
Philosophical Shifts and Advice for Researchers
In conversation with Andrew Ng, Hinton traced how his thinking has evolved: an early focus on backpropagation and discriminative learning, a long period championing unsupervised learning, and a recent swing back to supervised learning because it currently works so well, while he still expects unsupervised learning to be crucial in the long run. He criticizes the excessive emphasis on sparsity and slow features, advocating instead for features that change in predictable ways. His guiding principle for modeling is to transform the observables into a state vector in which the underlying dynamics become linear operations, as sketched below. His advice to aspiring researchers: cultivate contrarian thinking and trust your own intuitions, replicate published papers to build practical understanding, and keep programming yourself.
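As a toy illustration of that state-vector principle (all details here are assumed, not from the interview): circular motion looks nonlinear in the raw observable, but in a two-dimensional state vector each time step becomes a single fixed linear operation.

```python
# State-vector sketch: motion that is awkward in the observable x(t)
# becomes one fixed matrix multiply in the state (cos θ, sin θ).
import numpy as np

theta = 0.1                                        # step per tick
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # linear dynamics

s = np.array([1.0, 0.0])                           # initial state
for _ in range(5):
    s = A @ s                                      # predict: just a matmul
    print(round(float(s[0]), 3))                   # observable coordinate
```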
Observations on Deep Learning in Academia and Industry
Hinton observes that academia has lagged in adapting to deep learning, in contrast to the proactive role of big companies like Google in training people. He commends Andrew Ng for pioneering the first deep learning MOOC on Coursera and acknowledges its impact on AI education. His advice to learners is to find an advisor whose beliefs align with their own and to consider working on topics such as capsules and unsupervised learning. He criticizes computer science departments for being slow to recognize the significance of deep learning, arguing that they must adapt to a new kind of computing in which showing computers examples is as important as programming them, and he hopes universities will eventually catch up and provide adequate training in the field.
Paradigm Shift in AI Thinking
Traditional AI focused on symbolic representations and logic-based reasoning; modern AI emphasizes neural networks and vector representations. Hinton has been at the forefront of the vector-based view, and he notes that early pioneers such as von Neumann and Turing favored brain-inspired approaches before symbolic AI came to dominate the field. Against the view that thoughts are strings of words or other symbolic expressions, Hinton proposes that thoughts are “great big vectors of neural activity,” and he stresses their causal powers: one thought vector gives rise to the next. This differs fundamentally from the standard symbolic-AI account, and Hinton acknowledges that much of the AI community still adheres to that paradigm.
Conclusion
Geoffrey Hinton’s contributions to AI are not just technical but also philosophical, leading to a fundamental shift in how we understand and develop AI technologies. His journey from exploring the brain’s mysteries to shaping the field of deep learning underscores the importance of interdisciplinary thinking and the relentless pursuit of innovative ideas in advancing AI.
Notes by: WisdomWave