2023 Update: Pivoting to Global AI Governance
Lessons from the early internet, Web3 and Canada's history of global diplomacy
The last year brought more change to my life than I can remember. On the one hand, a beautiful marriage, thanks to my wife and the hard work and amazing hearts of our family and friends, plus a month spent exploring Europe (see Instagram highlights scattered throughout this post and a summary of great stays here).
On the other hand, the disheartening reversal of global economic growth and the crash of tech, and particularly edTech. Perhaps most disappointing, the dominance of deceit, self-interest and unfettered capitalism over human rights, equality, inclusion and democracy. As examples, see the moral and financial destruction by and of SBF and so many crypto- and blockchain-related companies; the failures of SVB and Credit Suisse; the evaporation of value from so many frothy edTech companies. Or Russia's criminal war against Ukraine; increased tensions with China; and the backsliding of global trade and governance, and therefore stability.
Yet more than any other change, I've been rocked by AI and our desperate need for its regulation. Below I explain why I'm pivoting my research to focus on global AI governance.
Davos, Digital IDs and Verifiable Credentials
I began 2023 in awe of Switzerland's mountains and efficiency and left disillusioned by Davos, where nations appeared to be failing to compete with a promenade lined with big tech and crypto, and where parties were full of a global elite that seemed to have missed a crippling crisis.
I spent much of my time focused on two technologies that I have long believed crucial to a better internet and improvements in education, communication and wellbeing: 1. user data ownership and 2. software interoperability. If WhatsApp and iMessage or Slack and Teams were interoperable, like different email clients, our communal mental burden of task and app switching would be reduced dramatically and our economic and social efficiency increased. If we owned our own data and were educated about how and when to share it, our democracies and minds wouldn't be threatened by ads and disinformation that capitalize on our addictions at the speed of light and the scale of millions.
And while recent EU legislation is promising in both respects (see the Digital Markets Act and Digital Services Act), I worry that lobbying will result in these regulations being as impotent as the GDPR. This long-awaited and now firmly established privacy legislation has, in my opinion, benefited incumbent giants and inconvenienced end users with cookie pop-ups as much as it has increased privacy education or related rights.
More specifically, I spent significant energy at the World Economic Forum on digital IDs and badges, and specifically Learning and Employment Records. Progress on these technologies is advancing rapidly. They were catapulted back to the fore by the Web3 movement and, like democratic participation technologies, they are among the positive technologies that will outlast the crash of Web3. Secure verification of humans has similarly been reinforced by recent developments in AI, as fears of deep fakes become omnipresent. Nevertheless, even my enthusiasm for these technologies has been tempered by lackluster usage and disorganization: the US landscape of competing standards and organizations is so complex that MIT, JFF, the US Chamber of Commerce and others continue to struggle to map it.
Like digital verification, improved regulation of data ownership is another crucial response to recent developments in AI. AI relies on the data it is fed. This means that not only is the usefulness or quality of AI determined by this data, i.e., garbage in, garbage out, but also that by regulating who owns our data, we can in turn regulate AI. In the words of my friend Adam Leonard, “Axiom 1: whoever owns the data controls the AI.”
AI and the end of the world as we know it
AI is a technology that has been on the periphery of my expertise for many years. Yet it has now dramatically supplanted data ownership and interoperability as the foremost priority for global regulation and development.
I see three overarching reasons for rapidly shifting my and our attention to the regulation and development of AI, echoed in countless articles, including the recent open letter calling for a pause on giant AI experiments:
AI is being developed, deployed and used (including by relatively uneducated end-users) exponentially and faster than any technology we've built
if we continue AI development at our current pace it could end human existence
AI will change everything and could for the better: education (my Lighthouse Labs co-founder presciently asked me months ago if AI would disrupt all of education), health, climate change, war, business, etc.
Each of these reasons has long been discussed by experts. See, for instance, my still-favorite piece summarizing AI, from eight years ago: The AI Revolution: The Road to Superintelligence.
Quotes from a recent interview with "one of the three godfathers of AI" eloquently capture his fears:
"...we're about to sleepwalk towards an existential threat to civilization without anyone involved acting maliciously."
When a networked computer "learns something, all of them know it, and you can easily store more copies. So the good news is, we've discovered the secret of immortality. The bad news is, it's not for us."
"I don't know many examples of more intelligent things being controlled by less intelligent things"
"imagine something more intelligent than us by the same difference that we're more intelligent than a frog...and it's going to read every single book that's ever been written on how to manipulate people"
"my confidence that this wasn't coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better."
"This development, [Hinton] argues, is an unavoidable consequence of technology under capitalism...“[Google] decided not to release [generative AI] directly to the public. Google was worried about all the things we worry about...but...in a capitalist system, if your competitor then does that, there's nothing you can do but do the same."
Hinton raises examples like nuclear disarmament during the Cold War and Y2K as positive precedents:
"It was nothing like this existential risk, but the fact that people saw it ahead of time and made a big fuss about it meant that people overreacted, which was a lot better than under-reacting."
China's imprisonment of the scientist who created the first gene-edited babies is another example: we can prevent our own demise if we act together.
This urgency for collaboration over competition, and Hinton's argument that we have in part lost control of AI to unbridled capitalism, echo my experience at Davos with Web3 and crypto: we need to establish agency over AI via rules around its capabilities and uses. If we let AI be driven purely by markets and bank accounts, not only is democracy at risk; I would argue that our very existence is at stake.
A Role for Canada in Global AI Governance
I'm particularly excited to work on global AI governance as a Canadian with a background in international affairs. This is because Canada:
has amongst the most storied AI histories of any country
has a history of soft diplomacy and international convening on the most important issues of our time: from the Universal Declaration of Human Rights to treaties on world trade, landmines and acid rain
is a model of good governance and diversity with amongst the largest populations of new immigrants residing in a stable democracy with a high standard of living
Yoshua Bengio and Geoffrey Hinton are amongst the foremost contributors to machine learning and large language models, and the Edmonton, Toronto and Montreal-based institutes they founded remain global AI leaders. Yet Canada's ambitious history has not been matched with similarly ambitious foreign policy. Trudeau and Macron's Global Partnership on AI remains too slow-moving as developments in AI exponentially outpace national regulation.
Nevertheless, we remain one of the most trusted countries internationally. While this trust is in part the result of our small economic and population size, it also allows us to punch above our weight as a middle and soft power. See, for instance, the fact that Canadian passports are among the most powerful in the world, granting visa-free entry to 115 countries. Canada is uniquely capable of pulling together the key actors in AI governance:
nations: the OECD, the BRICS (start with India, and force China's hand if you can't show enough value for their participation before then...), the global south
companies: OpenAI, Google, Microsoft, Meta, etc.
civil society: Partnership on AI, top academic institutions, human rights and advocacy non-profits, unions, etc.
A new international governance regime that can keep up with the pace and scale of AI
Internet 1.0, decentralization, Web3 and particularly the new forms of governance these technologies facilitate offer the promise of real-time governance by the people and for humanity. Giving those without power agency is the only way we can regulate AI to save ourselves. We have granted technology, and those who build it, a sense of agency in our collective consciousness and in society that they have not earned and should not maintain. By definition we create technology and we program AI, and we may still have time to regulate it before, as most believe is inevitable if we continue on our current path, we no longer can. In debates about Uber and advertising technology, we have long granted technologies and the companies that sell them a false sense of authority and agency: Uber could not disrupt societies unless we built it to do so and then allowed it to do so. Similarly, young minds can only be corrupted by Instagram, and democracies threatened by disinformation, if these technologies that we built remain programmed in this way.
To catch up, we need new forms of governance that take advantage of the latest advances in govtech and democracy-tech. New civic engagement tools like citizen panels and quadratic voting provide an opportunity to more effectively involve citizens in the decision-making process around AI regulation.
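To make the quadratic voting mechanism above concrete, here is a minimal sketch. The core idea is that each voter spends "voice credits" from a fixed budget, and casting n votes on an issue costs n² credits, so expressing a strong preference is disproportionately expensive. The issue names, budget size and ballot structure below are illustrative assumptions, not part of any specific platform.

```python
def vote_cost(votes: int) -> int:
    """Quadratic cost: n votes cost n^2 voice credits."""
    return votes ** 2

def tally(ballots: list[dict[str, int]], budget: int = 100) -> dict[str, int]:
    """Sum votes per issue, rejecting any ballot that exceeds its credit budget."""
    totals: dict[str, int] = {}
    for ballot in ballots:
        # Total credits spent on this ballot is the sum of quadratic costs.
        if sum(vote_cost(v) for v in ballot.values()) > budget:
            continue  # over budget: the ballot is invalid and skipped
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# Hypothetical ballots on two illustrative AI-policy issues.
ballots = [
    {"ai-audit-rules": 9, "open-model-registry": 3},  # 81 + 9 = 90 credits
    {"ai-audit-rules": 2, "open-model-registry": 7},  # 4 + 49 = 53 credits
]
print(tally(ballots))  # {'ai-audit-rules': 11, 'open-model-registry': 10}
```

The quadratic cost curve is what distinguishes this from one-person-one-vote: a voter can signal intensity of preference, but doubling their votes on one issue quadruples the cost, which limits domination by any single interest.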
Know more about related policies and technologies? Hit me up!
Why me
I'm still working on my personal role here drawing on my experiences:
supporting the development of research in international criminal law through the creation of McGill's clinic in that field
advancing impact bonds and access to medicines by changing the way developed nations pay for some pharmaceuticals
evolving how technology is taught in Canada through the co-founding of Lighthouse Labs, and
more recently leading new ways for LinkedIn and Microsoft to partner with government in higher education and workforce development.
My latest thinking is that I could serve as some form of formal or informal Sherpa on behalf of Canada's AI institutes or the Government of Canada, helping bring together governments, the most important private-sector actors and civil society to build a new international governance regime that can keep up with the pace and scale of AI.
I struggle to understand why cables get entangled in my pocket let alone quantum mechanics, neural networks or string theory. I will never be able to support this movement through the depth of my understanding of its technology. But I can convene like few others and hope that our collective intelligence can overcome the perils of the artificial intelligence we have created.
Join me?
Epilogue: This is my perspective, not that of my employer
Even more than in previous years I want to be clear that I'm writing on my own behalf. My LinkedIn work on workforce development continues apace including:
LinkedIn Talent Solutions partnership with the US National Association of State Workforce Agencies complements our long-standing nationwide partnership with NASWA on the National Labor Exchange
Virginia Career Works Northern leads its industry in helping job seekers into the workforce
Government of Ontario partnership to support its unemployment services (press release; LinkedIn CEO's post)
I've also contributed to LinkedIn's recent work on verification and verified credentials, and on skills-based hiring and learning.
In addition, I published a book chapter with my colleague Maria Langworthy. Check it out: "Learning 3.0: Bringing the Next Education Paradigm Into Focus" in New Models of Higher Education: Unbundled, Rebundled, Customized, and DIY (2022).
See more on his Instagram here and mine here.