Product Liability as a Model for UK AI Safety

Gaurav Yadav, mentored by Peter Wills, researched product liability as a tool for AI safety.

Summary

Negligence, as a legal framework, will struggle to meet two 'core objectives': prompting frontier AI developers to internalise and mitigate the risks posed by their systems, and offering a robust avenue of redress for those harmed by them. This paper argues that a 'Frontier AI' Bill in the UK should implement a liability regime to supplant negligence law, giving stakeholders (such as courts, the public, and the developers themselves) greater clarity over where liability lies. To achieve this, the regime should adopt four mechanisms from product liability as foundations for governing frontier AI developers: the duty to warn, consumer-expectation tests, the development-risk defence, and supply-chain liability.

