Mustafa Suleyman (DeepMind Co-founder) – New technology and the future of public policy | Bennett Institute for Public Policy launch (Apr 2018)
Abstract
A Comprehensive Guide to Ethical AI and the Future of Technology Policy
In an age where artificial intelligence (AI) and digital technologies are rapidly transforming our world, the call for ethical governance and responsible innovation has never been more urgent. This article synthesizes key insights from thought leaders like Mustafa Suleyman, co-founder of DeepMind, and Martha Lane Fox, alongside discussions at the Bennett Institute for Public Policy launch. It provides an in-depth analysis of the challenges and opportunities that AI presents to society, emphasizing the need for ethical standards, diverse and inclusive progress, and a collaborative approach to addressing the complex societal issues of our time.
Mustafa Suleyman’s Vision: Ethical AI for Societal Challenges
Mustafa Suleyman highlighted the critical need for ethical AI and effective policy institutions in addressing complex societal challenges like climate change. He emphasized the necessity of new institutions that merge academic rigor, social justice ethics, and the efficiency of the corporate world. Suleyman stressed the importance of ethical standards and governance in AI, advocating for vigilance and innovation in governance models to ensure AI benefits society.
Mustafa Suleyman is an entrepreneur and AI expert who co-founded DeepMind, an AI company acquired by Google.
Suleyman believes that AI has the potential to solve society’s most pressing challenges, but it also risks exacerbating inequality and social disenfranchisement. He questions whether AI will accelerate these trends or offer an opportunity for a new direction.
Suleyman stresses the importance of holding AI’s design and use to the highest ethical standards and innovative forms of governance. He discusses DeepMind Ethics and Society, a research unit dedicated to studying these issues, with Diane Coyle as an independent fellow.
Suleyman notes the recent sea change in rhetoric around the importance of ethics and governance in technology, especially AI. He observes the increased attention given to issues like fairness, discrimination, algorithmic control, accountability, and data governance.
Suleyman welcomes the Bennett Institute for Public Policy (BIP) and its interdisciplinary research agenda as vital for addressing the ethical and societal impacts of technology. He identifies three roles for BIP:
1. Ensuring that technology companies consider the diversity of users when building products, addressing the disconnect between tech companies and the communities they serve.
2. Explaining and translating how technology works and its impact on behaviors and attitudes, fostering collaboration between companies and research institutes.
3. Proposing new goals and incentives for the technology industry that yield better outcomes for society, moving beyond singular focus on business achievement.
Suleyman emphasizes the need for real-time engagement between policy institutions and technology companies to anticipate and share the impact of technologies on people’s lives. He highlights BIP’s crucial role in facilitating this engagement and addressing the societal challenges posed by technology.
Martha Lane Fox’s Emphasis on Accountability and Standards
Martha Lane Fox expressed concern over the rapid pace of technological change, particularly in AI and the internet. She emphasized the need for accountability and standards to ensure these technologies are used responsibly. Fox called on corporations to take ethical responsibility, criticized the lack of accountability in firms like Facebook, and highlighted the crucial role of civil society in shaping technology's impact. She stressed the importance of digital literacy and of public control over data and rights, referencing Churchill's vision to underscore the need for accountable leadership in building a better future.
Martha Lane Fox emphasizes the need for a comprehensive approach to understanding the impact of the internet on society. She highlights the importance of addressing issues such as gender, class, and race bias, as well as the accountability and transparency of algorithms.
Fox reflects on the astonishing speed at which artificial intelligence and other technologies have evolved in recent years. She emphasizes the need for rigorous accountability and standards to ensure that these technologies are used responsibly.
Fox stresses the importance of upskilling legislators and policymakers in digital literacy. She believes that public figures should be as well-versed in technology as they are in other aspects of their roles.
Fox commends companies like DeepMind for prioritizing ethics in their work but calls for stronger accountability and transparency from corporations. She believes that companies need to rebuild trust with users and empower them to understand and exercise their rights.
Fox emphasizes the crucial role of NGOs and charities in addressing the societal challenges posed by technology. She believes that these organizations need to be equipped to deal with the complexities of the modern world.
Fox expresses her belief that the UK has the potential to become the most digitally literate nation on earth. She sees this as an opportunity to shape the future of technology and give people more control over their digital experiences.
Fox concludes with a quote from Winston Churchill, highlighting his vision of rapid technological progress and the need to embrace it responsibly. She calls for collaboration and accountability to build a better future shaped by technology.
Challenges in Regulating AI and Data Protection
Companies need to be proactive in addressing the risks and potential harms associated with emerging technologies. The rapid pace of technological advancement is outpacing our ability to understand and regulate these technologies effectively, and the lack of explainability and accountability in software and AI systems makes oversight and regulation harder still. The discussion raised the need for "semi-permeable membranes" around companies' day-to-day operations, allowing outsiders to oversee and help shape technologies proactively.
The discussion also addressed the grip that companies like Facebook hold on social data and the potential impact of that concentration on democracies. Unpicking the Facebook problem is complex, and may ultimately require breaking up the company or separating its subsidiaries. Ongoing regulatory efforts in Europe, such as data and social media regulation, were cited as examples of progress towards addressing these issues.
Experts and the public are increasingly advocating for proactive regulation of AI and data protection technologies. Technical solutions like verifiable data audits are crucial for transparency and accountability. A convergence of regulatory approaches is emerging, driven by the global nature of these technologies. Political will is essential for effective regulation, with recent trends indicating a shift towards proactive measures. The urgency surrounding climate change serves as a model for addressing AI challenges, requiring global cooperation.
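The notes do not detail how a verifiable data audit would work, but the core idea is an append-only log in which each entry cryptographically commits to the one before it, so any later tampering with recorded data accesses is detectable. A minimal illustrative sketch, assuming a simple SHA-256 hash chain (the `AuditLog` class and its record fields are hypothetical, not any company's actual implementation):

```python
import hashlib
import json

class AuditLog:
    """Append-only audit log: each entry's hash covers the previous
    entry's hash, forming a chain that makes tampering detectable."""

    GENESIS = "0" * 64  # placeholder hash preceding the first entry

    def __init__(self):
        self.entries = []  # list of (record_json, entry_hash) pairs

    def append(self, record: dict) -> str:
        """Record an event and return its chained hash."""
        prev_hash = self.entries[-1][1] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append((payload, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        """Recompute the whole chain; False if any entry was altered."""
        prev_hash = self.GENESIS
        for payload, entry_hash in self.entries:
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if expected != entry_hash:
                return False
            prev_hash = entry_hash
        return True
```

An external auditor holding only the latest hash can later confirm that no earlier entry was rewritten, which is the kind of technical transparency mechanism the discussion pointed to.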
Barriers and Recommendations for Effective Sector Collaboration
Humans struggle to connect with consequences that are distant in space and time, making it difficult to emotionally engage with environmental issues like climate change. The urgency created by the Facebook affair presents an opportunity for tech leaders to address these challenges.
Achieving complete agreement among nations on issues like privacy is unlikely, and consensus may not be necessary. The relationship between companies and the state in China should be approached with caution, as it differs from Western norms. Balkanization of the web may become a reality due to differing regulatory approaches between regions.
Barriers to effective communication between public, private, and civil society sectors include insufficient public investment in AI, a lack of technical expertise in public agencies, poor coordination, challenges for small charities and entrepreneurs, and outdated public procurement processes. To overcome these barriers, governments should increase AI investment, foster technical expertise in public agencies, promote cross-sector collaboration, and streamline public procurement processes.
The Path Forward
The discussions at the Bennett Institute for Public Policy launch highlight the interconnected nature of global issues and the necessity for coordinated regulation. While achieving consensus is challenging, incremental progress and national regulatory approaches can drive positive change. The future of technology policy hinges on ethical AI, inclusivity, and collaborative efforts across all sectors of society. This approach is essential to harness the potential of AI for the greater good, ensuring that technological advancements enrich and advance humanity as a whole.
Notes by: OracleOfEntropy