Projects

Research Direction: AI-Driven Extreme Power Concentration

  • Write threat models and scenarios describing the most plausible ways power becomes concentrated (similar in style to AI 2027)

  • I expect governments to become AGI-pilled by late 2027, producing huge Overton window shifts and policy windows. Project: write an ambitious policy proposal for the ideal laws that would prevent extreme power concentration and generally reduce AI x-risk.

  • Congressional oversight of the executive branch's AI usage

  • Figuring out how to govern the model spec and constitution

  • Figuring out how to prevent military coups after the military has been automated

  • Figuring out what an automated government would look like and how to prepare for this

What I'm looking for in a Mentee

  • I slightly prefer to work with mentees who have a technical background (physics, CS, math), but I am open to all backgrounds

  • The ideal mentee is high-agency and extremely curious. I am particularly excited to work with fellows who try really hard to get to the bottom of gnarly questions.

  • I would like to mentor someone who is truth-seeking

What I'm like as a Mentor

  • Research meeting: 1hr/wk

  • Reviewing docs and giving feedback: 1hr/wk

  • I am available async over Slack and generally respond quickly

  • I am very hands-on and will help my mentees become situationally aware of AGI progress and the broader field

Bio

Dave is a Research Associate at the Institute for AI Policy and Strategy. His research focuses on AI-enabled coups, AI security (preventing model weight theft and tampering), and compute governance. His background is in cybersecurity and ML.
