Jaron Lanier (Microsoft Research) – Keynote Future of Text Conference (Sept 2022)
Abstract
In a recent talk, Jaron Lanier, a luminary in the tech industry and a scientist at Microsoft Research, tackled a spectrum of topics highlighting the interplay between text and computational algorithms, especially machine learning models like OpenAI's GPT. Lanier examined how these technologies could profoundly affect our understanding of language, creativity, human agency, and even consciousness. He also delved into the contradictions inherent in current tech culture's perspective on artificial intelligence (AI), questioning the industry's optimistic belief that AI will solve all human problems even as it devalues human creativity and consciousness.
—
Computational Text Analysis and the Limits of Understanding
Jaron Lanier’s presentation took a deep dive into the realm of machine learning models, particularly focusing on OpenAI’s GPT. He noted that while these programs are astonishingly proficient at various tasks, ranging from passing math exams to language translation, they exhibit occasional glaring failures. Lanier suggests that these missteps arise from the models’ lack of internal “understanding” of the topics they tackle. Their capabilities may say more about the redundancy of human language than about the machines’ competence.
—
Text, Sequences, and the Scale of Language Models
Lanier raised the point that text is fundamentally a sequence of words. He remarked that the success of machine-generated language depends on the scale of the computational model. Smaller models are ineffective, while excessively large models could be counterproductive or even useless. This observation echoes Jorge Luis Borges’ philosophical concept of the “infinite library,” suggesting that while technology can process and generate text, its understanding may be qualitatively different from human comprehension.
—
Global Redundancy and Questions of Human Creativity
Lanier also noted that the large-scale computational resources required to run these algorithms are accessible only to a handful of organizations. He stated that the success of machine-generated language reflects a lack of creativity in human language on a global scale, raising questions about the role of algorithms like GPT in eroding human creativity. Lanier challenged the notion in tech culture that creativity is simply a complex form of recombination, similar to Darwinian evolution, indicating a tension with the industry’s optimistic views on the creativity of these systems.
—
Darwinian Evolution vs. AI Models
Building on the issue of creativity, Lanier distinguished between Darwinian evolution and the workings of current AI models like GPT. In Darwinian evolution, there is a purpose, be it survival or aesthetic selection. AI models, by contrast, are not intrinsically “about” any particular subject and so lack the same intrinsic creative quality. This distinction calls into question the ethical implications of relying on algorithms to simulate creative or even conscious behavior.
—
Ironies and Contradictions in Tech Culture
One of the more compelling aspects of Lanier’s talk was his insight into the irony that pervades tech culture. On the one hand, there is the notion of the “singularity”: a future so altered by technology that it becomes incomprehensible. On the other hand, there is a strong belief that AI and machine learning will solve all of humanity’s problems, from climate change to infectious diseases. Lanier expressed skepticism towards this utopian viewpoint, suggesting it might be misplaced.
—
The Philosophical and Ethical Concerns
Lanier touched upon the deeper philosophical aspects, arguing that a societal shift is needed in how we understand and deploy algorithms. He stressed that these algorithms become harmful when combined with flawed economic incentives. He also warned against transferring mystical attributes from humans to machines, emphasizing the irreplaceable value of human consciousness and agency.
—
Visual Models and Imperfect Systems
Lanier briefly discussed the limitations of visual foundation models like DALL·E. Although they can generate visually impressive images, their ability to handle complex structures is still limited. This serves as another example of the discrepancies between machine and human learning, adding to the growing list of caveats that accompany these technologies.
—
Conclusions and Further Reflections
Jaron Lanier’s talk was a call for deeper contemplation on the complex interrelationships between technology, language, and human cognition. He highlighted the need for a philosophical and ethical reassessment of how we incorporate algorithms into our lives. While he acknowledged the potential benefits of these technologies, his primary focus was on their limitations and potential pitfalls, especially concerning their impact on human society, cognition, and even spirituality.
In a world increasingly reliant on AI and machine learning, Lanier’s nuanced insights serve as a cautionary tale. They invite us to question not just the capabilities of these remarkable technologies but also how they fit into our understanding of what it means to be human.
Notes by: professor_practice