Edward Kembery
Projects
Research Direction: Identifying prospective US-China agreements regarding AI risks
This workstream approaches international AI governance from a pessimistic, political-realist perspective.
Specifically, it assumes that: 1) the diplomatic relationship between Washington and Beijing remains volatile; 2) economic and military competition makes actors reluctant to take costly action to limit their AI capabilities; 3) warning shots fail to change this; and consequently 4) states are interested in mitigating catastrophic risks only insofar as doing so serves their national security interests (by averting catastrophic terrorism, for example, or by mitigating the risk of unnecessary war).
The primary goal for this workstream is to identify, scope and roadmap concrete proposals for mitigating AI risks that the US and China might agree to under this worldview.
Projects we might work on include:
Fine-graining clauses for agreements. You might work with me to specify the exact phrasing or clause structure for an agreement prohibiting the public release of models with advanced biocapabilities.
Scoping new agreement ideas. You might explore whether the US and China should share information about control protocols, or strengthen their security standards to prevent non-state actors from obtaining model weights.
Building better models of the strategic context. You might seek to shape SAIF’s strategic direction by conducting deep dives into how rare earth metals, UAVs, different AI takeoff scenarios, or nuclear deterrence will impinge on key issues in AI diplomacy.
The success criteria for this workstream will be heavily weighted towards delivering concrete, actionable insights that can direct international AI governance work in 2026.
What I'm looking for in a Mentee
You like to learn fast and reason under uncertainty.
Bonus points if you have published work on international AI governance, a technical background related to AI verification or security, or experience working in diplomacy or foreign affairs, or if you speak Mandarin.
As a mentor, I have three main goals: to help you do work that will advance the field, to improve your ability to identify promising research directions, and to support you in moving towards a productive career in the field after the fellowship, should you choose to pursue one.
Bio
I’m a Research Fellow with the Safe AI Forum, where I work on concrete proposals for international agreements.
I previously developed an R&D agenda for societal resilience with the UK’s Advanced Research and Invention Agency, built a loss-of-control demo for the UK government with GovAI, and researched open-source policy at ERA. I have an MPhil in AI & Ethics from the Centre for the Future of Intelligence, Cambridge.
