Ben Bucknall
Projects
Research Direction: Model Authenticity Guarantees
I'm currently interested in exploring methods for providing users with model authenticity guarantees when interacting with AI systems through APIs or chatbot interfaces. In other words, answering the question of 'How can users be given a guarantee that the model they're interacting with is the one they think it is?'.
I'm particularly interested in projects aiming to construct a proof-of-concept implementation of a cryptographic scheme for proving model identity, though I'm open to other specific projects in this area.
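As a loose illustration of the kind of primitive such a scheme might build on (this is not a scheme proposed here, just a standard hash-based commitment, with the serialized-weights bytes as a stand-in for a real model), a provider could commit to a model's weights up front and later open the commitment to an auditor:

```python
import hashlib
import os

def commit(model_weights: bytes) -> tuple[bytes, bytes]:
    """Provider commits to model weights: publish the digest, keep the nonce secret."""
    nonce = os.urandom(32)  # random blinding value so the digest reveals nothing
    digest = hashlib.sha256(nonce + model_weights).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, model_weights: bytes) -> bool:
    """Auditor checks that revealed weights match the previously published commitment."""
    return hashlib.sha256(nonce + model_weights).digest() == digest

# Hypothetical stand-in for a real model's serialized parameters
weights = b"serialized model parameters"
digest, nonce = commit(weights)
assert verify(digest, nonce, weights)       # same model: commitment opens correctly
assert not verify(digest, nonce, b"other")  # different model: verification fails
```

A real scheme would of course need much more than this (e.g. binding the commitment to individual API responses without revealing the weights), which is exactly the gap a proof-of-concept project would explore.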
What I'm looking for in a Mentee
The most important attributes are solid technical foundations, strong writing skills, and the capacity to work independently. Some amount of exposure to ongoing AI governance and policy discussions is a bonus.
Bio
I am a DPhil (PhD) student in engineering at the University of Oxford, where I'm affiliated with the Oxford Martin AI Governance Initiative.
My work centres on technical AI governance -- that is, technical research and development that can enable or inform AI governance and policy.
Previously, I was a chapter lead for the International AI Safety Report, a research scholar at the Centre for the Governance of AI, and a technical advisor at UK AISI. I have master’s degrees in maths and computational science from Durham and Uppsala Universities, respectively.
