Tuesday Talks: AI and human empowerment

Let’s begin with a confession: I am an Artificial Intelligence & Machine Learning Fellow here at Reuben, but for a while I have been sceptical about whether AI is being used for good. I had feared I was alone in this belief, but based on Professor Chris Summerfield’s enthusiastic, engaging and informative talk, it does appear I was wrong. What was heartening to hear was that there is something that we can do about it.

A flash in the pan, or here to stay?

Whilst it may seem that AI has appeared from nowhere, thanks to the sudden omnipresence of Copilot, Grok and Gemini, Professor Summerfield ran through the earliest days of what is in fact a rather old field of study. Since the 1950s, researchers like Summerfield have wanted to understand how AI and human beings interact. With the development of deep learning around 2010, progress in AI skyrocketed in a way that reminded me of Moore’s law, though it was likened in the talk more to the exponential growth we all became familiar with in 2020. AI systems have now progressed to the level where even highly complex tasks can be attempted with a reasonable chance of success.

“I'm sorry, Dave. I'm afraid I can't do that.” - 2001: A Space Odyssey

The central thesis of Professor Summerfield’s talk was simple. Technology can be broken down into two broad categories: tools, which generally empower people (think about trying to drive a nail without a hammer), and artefacts of organisations, which generally empower the organisations themselves (imagine trying to change the mind of a government) to change society.

At the moment, it is quite clear that AI is a tool. It’s also clear that a great many people are using it to save themselves time: producing “boilerplate” code, generating slides to pad out a sales pitch or, in the case of some undergraduates, probably writing an essay or two. Whilst there may be side-effects, the case for empowering people can clearly be made. Where our current frameworks for AI will fail is if AI, or access to it, becomes entrenched within organisations, or if AI itself becomes empowered to make decisions about society. If we allowed an AI to be involved in governance, would it make the right decisions? Can we ever even know what the right decisions are?

You, Me and ChatGPT?

The talk wrapped up with a shocking look (another confession: despite being asked to, I didn’t close my eyes) at how much more convincing AI can be than a human being. It turns out that, even if you are a master debater, current state-of-the-art models can outperform you.

Regarding the future of AI and humanity, I will leave you with one paraphrased thought from a student at my table: “It takes 18 years for a human to become intelligent enough to be interesting. GPT-1 is only turning 8.”