2nd February 2023 Phoebe Mumby (2020, English)

Professor Michael Osborne speaks on the governance of AI at Commons Committee Hearing

Fellow in Engineering Science Professor Michael Osborne gave expert testimony about the governance of AI at a Commons Science and Technology Committee Hearing.

Last week Exeter Fellow in Engineering Science Professor Michael Osborne appeared at a Commons Science and Technology Committee hearing to speak about the governance of artificial intelligence (AI). Osborne is a Professor of Machine Learning at the University of Oxford and co-director of the Oxford Martin Programme on Technology and Employment. His technical expertise in Bayesian optimisation and probabilistic numerics underpins recent advances in automated and interpretable machine learning pipelines, and his research on the future of work has resulted in sustained media coverage and policy impact.

Appearing as a witness before the House of Commons Science and Technology Committee on Wednesday 25 January, Professor Osborne answered a number of questions largely concerned with the risks and regulation of AI in the modern world. Turning often to simple, amusing animal analogies, he discussed both the benefits and dangers of current advances in AI technology and its uses in our everyday world. He sees its place in the labour market not as a replacement for human labour but as an augmentation technology, and predicts it will increasingly take on routine work, handling repetitive, low-level decision-making, rather than social or creative work.

One recurring concern at the committee hearing was the effect of bias in AI on everyday life. Professor Osborne noted that bias is a huge issue in machine learning, as AI often doesn't achieve the right goals in the way we desire: it meets the goals we state, but not necessarily the goals we want. To expand the point, he told the story of a dog that rescued children from the Seine in 1908. One day the dog was seen to rescue a child who had fallen into the river in Paris, and was rewarded with a piece of steak. After this initial reward, the dog was seen rescuing children from the Seine remarkably often, almost every couple of days. Eventually it was discovered that the dog had been patrolling the Seine and pushing children in so that it could save them and claim the reward of the steak.

Professor Osborne highlighted how training AI is much like training an animal, relying on a positive-feedback model similar to rewarding a dog with a piece of steak. Usually this method works, but the danger of AI lies in the possibility that it could fulfil the positive feedback loop itself, bypassing human intervention, in much the same way as a dog that discovered the treat cupboard would rather help itself than sit for its owner.

Despite the potential risks, complexities and uncertainties surrounding artificial intelligence, the one certainty seems to be that AI technology will continue to develop rapidly in the coming years, making regulation key to its future. Perhaps the most immediate risk to mitigate, according to Professor Osborne, is an AI arms race, which he sees as already under way in the current geopolitical climate.

To find out more, you can listen to Michael Osborne at the Commons Science and Technology Committee hearing here.
