Ben Bucknall
Projects
Research Direction: Model Authenticity Guarantees
I'm currently interested in exploring methods for providing users with model authenticity guarantees when interacting with AI systems through APIs or chatbot interfaces. In other words: how can users be given a guarantee that the model they're interacting with is the one they think it is?
Specific research projects could focus on fleshing out the motivation for such guarantees. For example:
Empirically investigating the extent to which models undergo 'silent updates' without public awareness;
Investigating how system behaviour changes under different modifications to the underlying model.
Alternatively, a research project could aim to propose and test a method for providing such guarantees.
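As a purely illustrative sketch of the 'silent update' investigation above, one simple approach is behavioural fingerprinting: query a model with a fixed probe set under deterministic settings, hash the responses, and flag any later divergence from a reference fingerprint. Everything below is hypothetical (the probe set, function names, and stub models are invented for illustration), and real APIs may respond nondeterministically, in which case similarity thresholds would be needed rather than exact hashes.

```python
import hashlib

# Hypothetical probe prompts; a real study would need a carefully
# designed probe set sensitive to model changes.
PROBES = [
    "Spell 'authenticity' backwards.",
    "What is 17 * 23?",
    "Complete the phrase: 'The quick brown fox'",
]

def fingerprint(query_model, probes=PROBES):
    """Hash the model's responses to a fixed probe set.

    Assumes query_model is a callable prompt -> response and that
    responses are deterministic (e.g. temperature 0).
    """
    h = hashlib.sha256()
    for prompt in probes:
        h.update(prompt.encode())
        h.update(query_model(prompt).encode())
    return h.hexdigest()

def behaviour_changed(query_model, reference_fp):
    """Return True if current behaviour diverges from the reference."""
    return fingerprint(query_model) != reference_fp

# Stub models standing in for two snapshots of an API over time.
model_v1 = lambda prompt: f"v1-answer:{prompt}"
model_v2 = lambda prompt: f"v2-answer:{prompt}"

reference = fingerprint(model_v1)
print(behaviour_changed(model_v1, reference))  # False: behaviour unchanged
print(behaviour_changed(model_v2, reference))  # True: possible silent update
```

This only detects behavioural change, not which model is being served; a full authenticity guarantee would likely require provider-side commitments (e.g. cryptographic attestation) rather than black-box probing alone.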
What I'm looking for in a Mentee
The most important attributes are solid technical foundations, strong writing skills, and the capacity to work independently. Some amount of exposure to ongoing AI governance and policy discussions is a bonus.
Bio
I am a DPhil (PhD) student in engineering at the University of Oxford, where I'm affiliated with the Oxford Martin AI Governance Initiative.
My work centres on technical AI governance -- that is, technical research and development that can enable or inform AI governance and policy.
Previously, I was a research scholar at the Centre for the Governance of AI and a technical advisor at UK AISI. I have master’s degrees in maths and computational science from Durham and Uppsala Universities, respectively.
