Q3 2025 Mentors

Alan Chan
Governance of AI Agents
GovAI

Erich Grunewald
US Export Controls
IAPS

Isabella Duan
International Coordination on AI Risks
Safe AI Forum

Herbie Bradley
Implications of a Highly Automated Economy
University of Cambridge

Joshua Clymer
AI Control
Redwood Research

Eli Lifland
AI Forecasting, Governance and Strategy
AI Futures Project

Lewis Hammond
Multi-Agent Safety
Cooperative AI Foundation

Stefan Heimersheim
Mechanistic Interpretability
Apollo Research

Tobin South
Trustworthy AI Infrastructure
MIT & Stanford

Tyler Tracy
AI Control
Redwood Research

Alexander Strang
Inductive Bias in LLMs
University of California, Berkeley

Jesse Hoogland
Developmental Interpretability
Timaeus

Ben Bucknall
Technical AI Governance
University of Oxford (AIGI)

Jacob Lagerros
AI Hardware Security

Jasper Götting
AI & Biosecurity Intersection
SecureBio

Peter Barnett
Technical AI Governance
Machine Intelligence Research Institute

Logan Riggs Smith
Mechanistic Interpretability

Thomas Larsen
AI Forecasting, Governance and Strategy
AI Futures Project

Alexander Gietelink Oldenziel
Inductive Bias in LLMs
Timaeus

Mentors from past fellowships include: Konstantin Pilz (RAND), Renan Araujo (IAPS), Marius Hobbhahn (Apollo Research), Nicolas Moës (The Future Society), Dane Sherburn (OpenAI), Oliver Guest (IAPS), Kevin Wei (RAND), David Manheim (ALTER), Stanislav Fort (Google DeepMind), Asa Cooper Stickland (UK AISI), Saad Siddiqui (SAIF), Mary Phuong (Google DeepMind), Matt Burtell (CSET), Michael Parker (Georgetown University), Elliott Thornley (GPI), Christian Ruhl (Founders Pledge), Max Dalton (Forethought), Matt van der Merwe (GovAI), Lee Sharkey (Goodfire), Jonas Schuett (GovAI), Alexandre Variengien, Jonas Sandbrink (FHI), Seb Krier (Google DeepMind), and many more.
