Everyone’s talking about ChatGPT—but what if the real threat from AI is happening elsewhere?
Kristian Rönn is the co-founder of Lucid, a company helping governments track the movement and usage of AI chips globally. He previously founded Normative, the carbon accounting platform that helped shape EU and UK climate disclosure rules, and he began his career researching existential risk at Oxford's Future of Humanity Institute.
He’s also the author of The Darwinian Trap, a bestselling book that explores how misaligned incentives—and short-term thinking—can push systems toward catastrophic outcomes. It’s a framework he now applies to AI, arguing that without global coordination, the very infrastructure powering this technology could spiral out of control.
In this conversation, he explains why AI’s biggest risks aren’t in the models—but in the chips, supply chains, and silent diffusion happening behind the scenes.
We explore:
• Why frontier models are a distraction from the real governance challenge
• The one global policy move governments must make before it’s too late
• What a chicken sandwich teaches us about AI’s hidden complexity
• And what carbon accounting taught Kristian about building systems that actually scale
If you care about who’s really in control of AI—and how we avoid losing the plot—this is essential listening.
Chapters
00:00 – Intro: Welcome to High Net Purpose
00:29 – Kristian Rönn: From Philosopher to AI Safety Pioneer
01:31 – Early Life: What Shaped Kristian’s Worldview
04:29 – The Impact of Peter Singer and Utilitarian Thinking
08:31 – Exploring Existential Risks and the Road to AI
13:20 – Building Normative: The Startup That Changed Carbon Accounting
21:29 – Why Kristian Left Climate Tech for AI Governance
23:22 – The Darwinian Trap: Why Good Incentives Lead to Bad Outcomes
25:36 – Darwinian Demons vs. Cooperative Systems
34:47 – The Challenge of Global AI Governance
35:23 – Centralization vs. Decentralization in AI Control
37:33 – The Chicken Sandwich Analogy: AI and Hidden Value Chains
39:28 – Can Decentralized Governance Actually Work?
40:49 – Enlightenment Thinking and Biological Drives
45:11 – AI Assurance, Risk Management, and Value Chain Complexity
48:27 – Who Should Govern AI? Security, Policy, and Global Standards
53:09 – What the Future of AI Regulation Could Look Like
01:05:18 – Final Thoughts: Purpose, Power, and What Comes Next