Lewis Hammond

Project

Research Direction: Cooperative AI

I'm generally interested in supervising projects on multi-agent safety, cooperative AI, and/or governing AI agents. Specifically, the major themes I am currently interested in are:

  • Detecting collusion, the emergence of ‘collective agents’, and novel, dangerous collective goals or capabilities;

  • Analysing the importance of differing capability levels in strategic interactions between AI agents;

  • Understanding how to measure the ‘cooperative intelligence’ of AI agents, as well as differential progress on cooperation;

  • Creating new proposals for governing AI agents, potentially inspired by existing domains such as trading algorithms in financial markets;

  • Developing new AI tools and datasets for high-stakes human cooperation, coordination, and institutional design.

What I'm looking for in a Mentee

In general, I look for mentees who have a solid technical background, but the precise skills needed will differ by project (e.g. depending on whether it is more theory-heavy or engineering-heavy). Given the constraints of a time-bounded programme like Pivotal, I also prioritise mentees who can iterate quickly and commit the large majority of their time to the project during the fellowship. Finally, I appreciate mentees who are self-organised and have good communication skills, as this makes everyone's lives easier.

Bio

I am co-director of the Cooperative AI Foundation and a DPhil candidate in computer science at the University of Oxford, and I am also affiliated with the Centre for the Governance of AI. My research concerns safety and cooperation in multi-agent systems, motivated by the problem of ensuring that AI and other powerful technologies are developed and governed safely and democratically.
