Patrick Levermore
Projects
Research Direction: AI Policy and National Security
Whether and how investigatory powers laws should be updated to adapt to AI agents as sources or targets of intelligence
A critique of Mutually Assured AI Malfunction ("MAIM") - what conditions and assumptions are needed for the MAIM balance to hold, e.g. that datacentres will remain highly visible and monitorable?
Should the UK government accelerate or deter MAIM?
Why top secret compute capacity could be a valuable bargaining chip the UK can develop to maintain influence over global decisions about AGI.
Why top secret compute could be good for the world.
The top ten people the UK government should hire/contract to mitigate catastrophic AI risk.
What I’m looking for in a Mentee
I'd like to work with someone who really prioritises impact and is willing to think through which projects will be especially crucial for the world. An ideal match would probably be someone with a background in research skills, as this is a relatively weak area for me, but that doesn't seem essential.
What I’m like as a Mentor
I'm happy to adapt to you. I'd be up for spending 1-3 hours a week working with you. By default, I especially like: you having ownership of and drive for your project; spending most of my time on strategic questions and facilitating links between you and others; giving each other honest, constructive feedback in both directions. But I'll adapt to your preferences!
Bio
Patrick previously worked at the Department for Science, Innovation and Technology, where he managed AI risk advisors in the Central AI Risk Function and set up the Alignment Project, an international consortium funding alignment research. He also worked as a policy lead on frontier AI legislation. Before this, he researched AI policy at the Institute for AI Policy and Strategy and the Centre for the Governance of AI.
