Edward Kembery
Projects
Research Direction: International Coordination on AI Risks
How could states negotiate and verify an international agreement based on capability thresholds?
Should the US and China coordinate on minimum security standards for frontier AI companies?
Where can geopolitical rivals collaborate on technologies that increase resilience to AI-related risks?
What I'm looking for in a Mentee
I’m primarily looking for candidates who can learn fast and reason clearly under uncertainty. I’d also be excited to mentor candidates with a strong technical background in AI verification or security, or with experience in diplomacy or foreign affairs.
I benefitted a lot from fellowships early in my career, and I’d like to pass this on by helping mentees develop their intuitions for tractable research directions and by connecting them with career opportunities.
Bio
I’m a Research Fellow with the Safe AI Forum, where I work on concrete proposals for international agreements.
I previously worked with the UK’s Advanced Research and Invention Agency on societal resilience, and did fellowships with GovAI and ERA. I have an MPhil in AI & Ethics from the Centre for the Future of Intelligence, Cambridge.
