Robert Trager & Charles Martinet

Projects

Research Direction: International Governance of Advanced AI

We're especially keen to mentor on one of the following:

Middle-power collaboration on frontier AI. Middle powers (such as the UK, EU member states, Japan, South Korea, Canada, Singapore, India) could decisively shape the international AI order if they coordinate, but the tractable coalitions, instruments, and points of leverage are under-mapped. A fellow could analyse historical analogues for lessons on how non-superpower coalitions shape the behaviour of leading states, or propose a concrete middle-power initiative.

Industry-led governance of frontier AI. Voluntary commitments, coordinated safety practices, and proposed frontier-lab agreements have proliferated, but we lack rigorous analysis of when such mechanisms produce real gains. Projects could include analysing how industry coordination can survive competitive pressure, or mapping the pathways by which industry agreements become de facto international norms or feed into binding regulation.

International standards development for frontier AI. Standards bodies (ISO/IEC JTC 1/SC 42, NIST, CEN-CENELEC) are doing governance work that will shape AI development for decades. Fellows could assess which standards matter most for frontier AI, examine how standards interact with existing legal frameworks, or propose specific standards that would meaningfully reduce frontier risks.

Tractable US–China bilateral agreements on frontier AI. The US and China are the two leading frontier-AI states, yet bilateral cooperation on AI safety remains minimal. A fellow could, e.g., develop a typology of possible bilateral mechanisms, conduct case studies of adjacent US–China agreements, and produce a "feasibility matrix" ranking agreement types by tractability and impact, with concrete near-term recommendations.

What we’re looking for in a Mentee

We want mentees to take full ownership of a research question and propose their own sub-questions and methods. Strong analytical writing, the ability to read across disciplines, and a "radar" for what's realistically achievable in AI governance are essential. Prior work on AI governance, international relations, arms control, or technology policy is valuable; technical backgrounds are welcome where relevant to the project.

What we’re like as Mentors

Robert will provide strategic direction and join at key decision points and regular check-ins. Charles is the day-to-day point of contact and will meet with each fellow weekly. We expect fellows to own the question and drive the work. We communicate via email and Signal.

Bio

Robert F. Trager is Co-Director of the Oxford Martin AI Governance Initiative, International Governance Lead at the Centre for the Governance of AI, and Senior Research Fellow at the Blavatnik School of Government at the University of Oxford. He is a recognized expert in the international governance of emerging technologies, diplomatic practice, institutional design, and technology regulation. He regularly advises government and industry leaders on these topics.

Charles Martinet is a researcher at the AI Governance Initiative, where he coordinates the international governance workstream. His work focuses on AI risk management, international cooperation, and the institutional frameworks needed to govern advanced AI systems. He was previously an independent expert for the EU’s GPAI Code of Practice process, and worked as a summer fellow at the Centre for the Governance of AI, at France’s Ministry of Economy, at the European Parliament’s Directorate-General for External Policies and at the German Marshall Fund.
