From Meds to Machines: Why I'm Rethinking Innovation in the Age of AI
More Humans, Less Harm: How to Keep AI Ethical and Open
Throughout an unorthodox career spanning law, technology entrepreneurship, and public-private partnerships, my common thread has been expanding genuine human prosperity through systems change. Whether enabling access to vital medicines, using data analytics to connect disadvantaged individuals with meaningful livelihoods, or advising governments on technology's role in education and workforce development, my focus has remained on bettering society and expanding opportunity. But the emergence of potentially society-altering technologies like generative AI demands careful governance, rooted in ethics that put human dignity first, to ensure they remain directed toward humanity's benefit rather than centralized power or profit. For the first time, I find myself questioning the adoption of a new technology and the pace of innovation rather than encouraging them. We face a choice: "innovate first, ask questions later," or proactively build guardrails guided by human rights.
There is understandable excitement, but also justified fear, around the rapid development of generative AI models like GPT. While some of the hype is overblown, I'm heartened that the renewed debate is prompting society to seriously reckon with our relationship with technology and its risks. More specifically, the generative AI hype cycle is reviving long-debated and still far-from-resolved questions about data privacy, reliability and bias, technological power and centralization, and information and misinformation. COVID pulled back the curtain on an overly tech-dependent world, from Zoom fatigue to the failures of remote learning. As the techlash grew, psychologists and governments began questioning Big Tech's influence on mental health and relationships.
Generative AI adds greater urgency to the need to address tech's existential threats, in part because of the opaque nature of how it works: the creators of the technology understand it conceptually but are unable to predict or precisely explain the answer a generative AI system will give to a prompt. That lack of transparency is made worse when the systems are concentrated in the hands of a powerful few. Web 2.0 companies refused to reveal how their algorithms worked for competitive reasons. Today's generative AI companies don't have to refuse, because they don't know how their products really work.
But dystopian futures are not inevitable if we build ethics aligned with human dignity into AI's architecture from the ground up. Core to this is recognizing that she who controls the data controls the AI. Responsible data regulation must balance openness, which fuels innovation, against preventing harmful uses that violate human dignity. We cannot treat AI as an autonomous force removed from human accountability; it is a tool shaped by our values. We have agency over technology if we claim it.
One model is convening an IPCC-like organization to monitor AI's risks, along with shared public compute resources to develop beneficial applications serving the public interest. Regulation should consist of flexible policies within a universal legal framework, not blanket bans, keeping AI development as open as possible while firmly limiting dangerous uses, much as we allow free speech unless it directly harms others. New decentralized governance models and distributed deliberative and participatory democracy technologies could allow faster and broader consultation and democratic input. Examples include citizen assemblies, quadratic voting, and platforms like Pol.is and Remesh.
Of course, no current AI possesses nuanced judgment over what is "good" or "bad." This is where humans must remain firmly in the loop, through oversight mechanisms that ensure consensus aligns with ethics and human dignity, much as Wikipedia's volunteer editors do. More humans in the loop, and a focus on the care economy rather than technical progress alone, will lead to generative AI that uplifts our communities. If developed and applied wisely, with care and intention, AI could profoundly empower humanity's collective potential. But only if it is guided by our highest shared values.
This post was made with an AI tool from Casper Studios, developed by my friend Jay Singh. If you'd like to do a 30-minute live chat with him, you can message him; he'll conduct the interview and create content like this for you. Thanks to Adam Leonard and my dad for their edits and additions.