David Cheriton (Stanford Professor) – Age of Incompetence (Nov 2017)
I don’t think, in some ways, this ends up being a major dislocation because if you believe all this stuff I’ve said, over time we developed tools that have made society more efficient. And to some degree, you know, it’s we’re not in command of our own existence as much as we think we are. I mean, the society’s not designed by people, it’s designed by evolution. And so I think this is just more of the flow, but I think that the thing that is a little different is that the magnitude of risk, I think, and the magnitude potential goes together with automation. So I think it’s incumbent on us in computer science to recognize we’re dealing with an extremely powerful technology, which poses a significant risk to how we survive. So I think that’s where maybe the nuclear guys have already been there. I think that’s really the trajectory.
– Cheriton @ 52:32
Abstract
As we witness the increasing dominance of automation and artificial intelligence (AI) in various fields, we find ourselves navigating what David Cheriton calls the ‘age of incompetence.’ Cheriton highlights the exponential growth of technology and the escalating impact of automation on human skill sets, which have widened the gap between human and machine abilities. He also points to the prevalence of human error as a significant concern in life-critical and business-critical areas. In addition, Cheriton emphasizes the changing landscape of software development, notably its growing trend toward automation. This article delves into these phenomena and their implications for our society.
In the ‘age of incompetence,’ humans are becoming increasingly outpaced by technology, their limitations in speed, error rates, and efficiency laid bare. Cheriton uses examples from fields like chess, Go, acrobatic maneuvers, and aviation to illustrate this point. In particular, he cites aviation as an area where operator error, due to confusion and poor decision-making, can have catastrophic consequences. This trend, Cheriton argues, underscores a broader decline in human competence relative to machines, as automation exhibits higher accuracy and reliability in task performance, often with “five nines” or “six nines” of dependability.
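For context, “five nines” and “six nines” are shorthand for 99.999% and 99.9999% availability. A quick back-of-the-envelope calculation (not from the talk) shows what each level allows in downtime per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years

for nines in (3, 4, 5, 6):
    unavailability = 10 ** -nines          # e.g. five nines -> 0.00001
    availability = 1 - unavailability      # e.g. 0.99999
    downtime_minutes = MINUTES_PER_YEAR * unavailability
    print(f"{nines} nines: {availability:.6%} uptime, "
          f"~{downtime_minutes:.2f} minutes of downtime per year")
```

Five nines works out to roughly five minutes of downtime per year, and six nines to about half a minute, a level of consistency few human operators can sustain.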
Cheriton asserts that technology’s evolution is outstripping human ability. This progress isn’t about replicating human behavior but rather optimizing the utility and effectiveness of machines in completing tasks. Cheriton uses the example of the 747 airplane, which, despite not mimicking a bird’s flapping wings, revolutionized travel with its efficiency and speed. He also engages with Ray Kurzweil’s concept of the singularity—where human and machine intelligence merge—questioning the relevance of human contribution in such a scenario.
In the context of software development, Cheriton proposes his ‘laws of automation’: everything that can be automated will be automated, and everything can be automated. This stance points to ever-greater dependence on automation and a corresponding decline in human proficiency. Cheriton counters the idea that computers cannot take over simply because humans program them, arguing that humans can and have built software that outperforms its creators at specific tasks.
Cheriton further likens modern software development practices, such as test-driven development and continuous integration, to evolution: most changes made to software are detrimental, akin to harmful mutations, and the challenge lies in filtering them out. This process has been labeled “monkey-oriented programming.” Despite ongoing software evolution, Cheriton acknowledges that innovation has plateaued in some areas, with existing operating systems, programming languages, and technologies seeing only superficial changes.
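A minimal sketch of that filtering idea, assuming a toy setup where the “program” is a lookup table, random edits play the role of mutations, and a partial test suite acts as the selection filter (the names and numbers are invented for illustration, not taken from the talk):

```python
import random

# Toy "program": a lookup table that is supposed to double its input.
CORRECT = {i: 2 * i for i in range(10)}

# Toy "test suite": like real tests, it only covers part of the behaviour.
TESTED_INPUTS = (0, 2, 4, 6, 8)

def passes_tests(program):
    """Continuous-integration gate: accept a change only if tests pass."""
    return all(program[i] == 2 * i for i in TESTED_INPUTS)

def mutate(program):
    """Random edit (the 'monkey' change): perturb one table entry by +/-1."""
    candidate = dict(program)
    i = random.randrange(10)
    candidate[i] += random.choice([-1, 1])
    return candidate

def filtered_evolution(attempts=10_000):
    program, kept = dict(CORRECT), 0
    for _ in range(attempts):
        candidate = mutate(program)
        if passes_tests(candidate):  # edits that break tested behaviour are discarded
            program, kept = candidate, kept + 1
    still_correct = sum(program[i] == 2 * i for i in range(10))
    return kept, still_correct

if __name__ == "__main__":
    kept, still_correct = filtered_evolution()
    print(f"mutations kept: {kept} / 10000")
    print(f"table entries still correct: {still_correct} / 10")
```

The tests reject every edit that touches covered behaviour, while edits to untested entries slip through, which is why this kind of filtering is only as good as the test suite behind it.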
While contemplating the future of software development, Cheriton introduces the concept of the ‘last algorithm,’ signifying an endpoint to significant inventions in the domain. At the same time, he highlights the rise of machine learning, and in particular unsupervised learning, in which systems learn and improve without direct human programming or intervention. This trend points to an intriguing possibility: the automation of programming itself, which is already occurring to some degree through machine learning and genetic programming.
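As a small illustration of learning without direct programming of the task, here is a sketch of k-means clustering, a standard unsupervised method: no labels or hand-written classification rules are supplied, and the grouping emerges from the data. The data and parameters are invented for this example; the talk does not prescribe any particular algorithm.

```python
import numpy as np

def kmeans(points, k=2, iterations=20, seed=0):
    """Minimal k-means: centres and cluster assignments are learned
    from the data itself, with no labelled examples."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest centre.
        distances = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = distances.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        new_centers = []
        for j in range(k):
            members = points[labels == j]
            new_centers.append(members.mean(axis=0) if len(members) else centers[j])
        centers = np.array(new_centers)
    return labels, centers

if __name__ == "__main__":
    # Two unlabelled blobs of 2-D points; the algorithm discovers them.
    rng = np.random.default_rng(1)
    blob_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
    blob_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2))
    data = np.vstack([blob_a, blob_b])
    labels, centers = kmeans(data, k=2)
    print("learned centres:\n", centers)
```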
Cheriton emphasizes that in many areas humans are already outmatched by computers, not just in specialized areas like games where machines outperform world champions, but also in broader domains such as teaching computer science. He suggests we have transitioned to a world predominantly run by computers, urging us to adapt and find contentment in this “automated zoo.” Drawing parallels with Darwinian evolution, he presents the continual progress of technology as a survival-of-the-fittest scenario and identifies our challenge as navigating these developments effectively to prevent human obsolescence. Acknowledging this reality, he stresses, is vital as we prepare for a “brave new world” of superior software.
Lastly, Cheriton discusses the implications of technological evolution and automation for human society. Despite concerns about control and potential risks, he posits that automation could help solve complex problems, improve living standards, and provide more leisure time, implying that a high-quality human life remains possible in an automated world. He further argues that we must actively identify vulnerabilities in our software, probing aggressively for weak points much as monkeys in a zoo test their cages.
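One concrete way to read that last point, which is my framing rather than anything spelled out in the talk, is random-input fuzz testing: aggressively throwing unexpected inputs at an interface and recording what breaks. A minimal sketch against a deliberately fragile, made-up parse_age function:

```python
import random
import string

def parse_age(text):
    """Deliberately fragile example: crashes on non-numeric input and
    does no validation (e.g. it happily accepts negative ages)."""
    return int(text)

def random_input(max_len=8):
    """Generate a short random string of letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

def fuzz(target, attempts=1_000):
    """Throw random inputs at the target and collect the crashes."""
    failures = []
    for _ in range(attempts):
        sample = random_input()
        try:
            target(sample)
        except Exception as exc:  # a crash is a 'weak point' in the cage
            failures.append((sample, type(exc).__name__))
    return failures

if __name__ == "__main__":
    failures = fuzz(parse_age)
    print(f"{len(failures)} of 1000 random inputs crashed parse_age")
    for sample, error in failures[:5]:
        print(f"  input {sample!r} raised {error}")
```

Real fuzzers are far more sophisticated, but the principle is the same: probe the cage for weak points before someone, or something, else does.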
In conclusion, Cheriton’s insights into the evolving landscape of automation and software development pose significant questions and challenges. Navigating the ‘age of incompetence’ requires a keen understanding of our limitations as humans and of the potential of automation and AI. Striking the right balance between human skills and automated efficiency may well determine our quality of life in the future.
Notes by: Systemic01