The AI Safety Summit 2023 is a major global event taking place on 1st and 2nd November at Bletchley Park, the site of the UK’s codebreaking work during World War II, some 40 miles from Oxford. (Reuben students were given a tour of its museums during the summer of 2022.) The Summit brings together governments, leading AI companies and civil society groups to consider the risks of AI and to discuss how they can be mitigated through internationally coordinated action.
Reuben College President, Professor Tarassenko, was one of the experts who reviewed the briefing papers for the Summit. He also chaired a meeting in Oxford which brought together researchers from engineering and computing with experts from the social sciences and the humanities, including Professor Kerasidou, an Ethics & Values Fellow of Reuben College, to discuss the risks associated with the latest developments in AI (‘Frontier AI’).
Professor Tarassenko commented: “Mathematics, which underpins all AI, teaches us that extrapolation is a very inexact process, leading to results which are often no better than random. It is therefore very doubtful that much can be gained by thinking about long-term risks without any supporting data.
One of the key risks with AI is endowing these systems with a degree of autonomy: data is currently being accumulated about the dangerous behaviours occasionally displayed by autonomous vehicles (robotaxis) in San Francisco (and other places). All this data should be investigated by teams of AI researchers (under the aegis of the ‘IPCC for AI’), independently of the autonomous vehicle companies. If the latter do not share their data, there may be a need for regulation.
We should be putting more resources into collecting and analysing data from other examples of existing and nascent risks arising from the application of today’s advanced AI systems. Learning about short-term risks and thinking about how to mitigate and contain them will enable us to develop strategies which may be usefully applied to longer-term risks.”
Professor Kerasidou added: “AI holds a lot of potential for good, but this potential will not be realised unless the risks are mitigated, and the harms minimised and properly addressed. What we need is an open and honest global discussion about what kind of world we want to live in, what values we want to promote, and how AI can be harnessed to get us there. I hope that this AI Summit is the beginning of that discussion.”
Professor Tarassenko gave a 30-minute TED talk (as part of the Reuben College academic programme) on the inner workings of Large Language Models such as ChatGPT.
Reuben College students have also been thinking about these risks. For example, Aleks Petrov’s DPhil involves work on AI safety and limitations, as well as fairness issues relating to the accessibility of language models. He also contributed extensively to the Alan Turing Institute’s response to the House of Lords’ recent inquiry on Large Language Models.