Knowledge Distillation with Helen Byrne
Knowledge Distillation is the podcast that brings together a mixture of experts from across the Artificial Intelligence community.
We talk to the world’s leading researchers about their experiences developing cutting-edge models as well as the technologists taking AI tools out of the lab and turning them into commercial products and services.
Knowledge Distillation also takes a critical look at the impact of artificial intelligence on society – opting for expert analysis instead of hysterical headlines.
We are committed to featuring at least 50% female voices on the podcast – elevating the many brilliant women working in AI.
Host Helen Byrne is a VP at the British AI compute systems maker Graphcore where she leads the Solution Architects team, helping innovators build their AI solutions using Graphcore’s technology.
Helen previously led AI Field Engineering and worked in AI Research, tackling problems in distributed machine learning.
Before landing in Artificial Intelligence, Helen worked in FinTech and as a secondary school teacher. Her background is in mathematics, and she has an MSc in Artificial Intelligence.
Knowledge Distillation is produced by Iain Mackenzie.
Miranda Mowbray - Honorary Lecturer in Computer Science, University of Bristol: AI and Ethics
Miranda Mowbray is one of Britain’s leading thinkers on the ethics of Artificial Intelligence.
After a long and distinguished career as a research scientist with HP, she is now an Honorary Lecturer in Computer Science at the University of Bristol where she specialises in ethics for AI, and data science for cybersecurity.
In our wide-ranging conversation, Miranda breaks down the definition of AI ethics into its many constituent parts – including safety, transparency, non-discrimination and fairness.
She tells us that there’s probably too much focus on the dire predictions of AI ‘doomers’ and not enough on the more immediate, but less apocalyptic outcomes.
On a lighter note, Miranda reveals her personal mission to change the world, and shows off a sculpture that she had commissioned, based on the imaginings of generative AI.
You can watch a video of our interview with Miranda here: https://youtu.be/tbnHxbM5ZR8
Helen Byrne
Welcome to the Knowledge Distillation podcast with me, Helen Byrne. If you're interested in AI, it won't have escaped your notice that conversations about the ethics of artificial intelligence, and the policies governing its development and deployment, get as much media coverage as the technologies themselves. In this episode, we're joined by Miranda Mowbray, who lectures on AI ethics at the University of Bristol. She gives us some background on the various ethics principles, tells us who should be more involved in building and regulating AI systems, and the applications where she predicts we'll see the biggest benefits from AI. And for those who get all the way to the end, you can hear about a piece of art that Miranda co-designed with a sculptor and generative AI. I hope you enjoy it. Hi, Miranda, thank you for joining us. Great to see you. I was lucky enough to meet you earlier this year at a meetup in Bristol, where you gave a great talk, and at that point I was already planning to do this podcast, so I was very excited to meet you. Thank you for staying in touch and agreeing to join us today. I know you're affiliated with the University of Bristol, where you lecture on AI ethics, but I also know you've had an illustrious career leading up to this point. So could you start us off by telling us about your background?
Miranda Mowbray
My background as an undergraduate and graduate was in maths; I did a maths PhD at Cambridge. Then I looked around for jobs that would employ mathematicians, and it was clear even at that time, before the web, that the most interesting stuff was going to be in IT. So I joined an IT research lab for a large company, and I stayed there for decades, working on pretty much anything that a mathematician could be useful at and that I thought was interesting. When machine learning came up, I started doing machine learning for cybersecurity, and I also got interested in the ethics of it. One reason for this was that I was in charge of a huge and really sensitive database, and I looked at what I was allowed to do with this data, both by law and by company policy. We did have a policy within the project about what we would do, what we wouldn't do, and how we would look after the data carefully, and I rewrote that policy and made it more easily usable. Then I started doing the same for other groups, at the company level, and for other companies too. So that's how that happened. A few years ago the security lab was closed down and I went to join the University of Bristol, first as a lecturer. I'm now an honorary lecturer, which is lovely because it means I do the bits that I really like, teaching and research, and not the bits that bore me. At Bristol, I teach ethics for AI at undergraduate level, masters level and PhD level.
Helen Byrne
I'm interested because I was actually a teacher in a past life. My first job was as a maths teacher, did you always think that you might want to head into the teaching side? Was that always an interest in the back of your mind? Or did it just happen?
Miranda Mowbray
Firstly, congratulations, and thank you for doing maths teaching. I would hate that. At school level, I assume? I always thought being an academic was my fallback plan if the lab ever closed or my job turned nasty. The reason was that I had a rather unusual job for the commercial world: I could research pretty much anything I liked, as long as I could make a case that in 10 years' time it might make the company lots of money. And, you know, 10 years ahead is jolly easy to argue for, usually. I was unusual even within my own company in having that much freedom, and you don't generally get it in the commercial world. But you do as an academic! That's why it was obvious to go to a university.
Helen Byrne
That's so interesting. I used to work in industry research as well, and when we were trying to win candidates over from academia, one of their biggest worries was: "But I'm used to choosing exactly what I want to work on. I don't know if I want to work on lines of research that only affect your business." So it's amazing that you found a place where you could do both.
Miranda Mowbray
I was very lucky, I found a great company, and particularly a great manager who protected me.
Helen Byrne
So let's talk about AI ethics. I tend to say AI ethics and AI safety and use the two interchangeably, but I'm not sure if that's correct, so I'd love you to teach us. Could you introduce the landscape of AI ethics and/or safety, and maybe try to categorize the different areas, if you can?
Miranda Mowbray
I'm going to call up a list, if you don't mind, because I think it's helpful to make sure I don't miss anything. This is the list of requirements from the European Commission in their ethics guidelines for AI. So they say technical robustness and safety. That's one of the ethics guidelines, and they have others, so they would say that safety is a subset of ethics. Safety is making sure, and this is the alignment issue, that your program does what you wanted it to do and is robust, and of course secure. Then there's privacy and data governance, which I guess you could think of as part of safety, but it tends to be rather specialized: it's not about making sure the AI does what you intended it to do, it's about making sure the data is looked after as it should be. Then there's transparency. An issue with some types of AI is that they're really rather opaque, it's unclear what's going on, and there's lots of interesting research on how to make them more transparent. That's useful both for the designers and for the users, if something happens that they're unhappy with. Then diversity, non-discrimination and fairness; I think that's usually not included in safety, and it's a subject where there's been a lot of interest, a lot of focus, and a lot of problems. Then they say societal and environmental wellbeing. I would say that's the whole point, the whole point of this is societal and environmental wellbeing, but I guess it's within this bucket that the question arises of whether what you're trying to do is a good idea in the first place. If it's not a good idea in the first place, don't do it. Then they say accountability, and for me accountability is accountability for all the things we've just mentioned, so it's a means to an end. They also mention human agency and oversight, which actually I think they list first: the idea that you shouldn't have too much automation without human oversight. And that applies to more things than just AI.
Helen Byrne
So if we were to dig into those areas: which of them do you think is the most popular in research, and why is that? And, dare I ask, is any particular area over-researched? And which is the least researched?
Miranda Mowbray
There's a lot of research in non-discrimination and equity, and I think that's good. There's also a lot of research, because it's very well funded, into long-term existential risks: is AI going to become conscious? Will it kill us all? And I think there's maybe too much emphasis on that. I'm not saying you shouldn't have long-term research or shorter-term research, but because of the way the funding has gone, there's maybe an imbalance towards the existential stuff, which frankly I don't really believe in, and we have problems right now. Take the word "alignment", for example. The alignment problem is here right now. Part of the idea of machine learning, in some kinds of machine learning, is that you just specify what needs to be optimized, so you're only indirectly specifying what you want your code to do, whereas with traditional code you can go through it step by step. So there is a problem where what you ask to be optimized is different from what you really want, partly because of the difficulty of specification. That's a genuine problem with any high-level language, but particularly with the very high-level instructions that we're giving to AI. But sometimes when people talk about the alignment problem, what they mean is a future in which AI is conscious and has different goals from us, and how do we make sure it has goals that are close to human goals? I don't think that's the most helpful way of thinking about it.
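The near-term alignment problem Miranda describes, optimizing a proxy rather than the thing you actually want, can be shown in a few lines. This is a toy sketch with invented names and numbers, not anything from the episode: a recommender hill-climbs on clicks while the quantity we actually care about collapses.

```python
# Toy illustration of objective misspecification: the optimizer maximizes
# a proxy metric (clicks), but the true goal (satisfaction) peaks at a
# moderate value and collapses at the proxy's optimum.

def proxy_clicks(sensationalism: float) -> float:
    # The measurable proxy rises monotonically with sensationalism.
    return sensationalism

def true_satisfaction(sensationalism: float) -> float:
    # The real (unmeasured) goal peaks in the middle, then collapses.
    return sensationalism * (1.0 - sensationalism)

# Naive hill-climb on the proxy over a grid of candidate settings.
best = 0.0
for i in range(101):
    candidate = i / 100
    if proxy_clicks(candidate) > proxy_clicks(best):
        best = candidate

print(f"optimizer's choice: {best:.2f}")                     # 1.00, proxy maxed out
print(f"true satisfaction:  {true_satisfaction(best):.2f}")  # 0.00, goal destroyed
```

The gap between the two printed numbers is the specification problem in miniature: the code did exactly what it was told, and not what was wanted.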
Helen Byrne
Is this linked to the paperclip maximizer problem? That was Nick Bostrom originally, I think?
Miranda Mowbray
Will our listeners know what the paperclip maximizer problem is?
Helen Byrne
Maybe just explain it for the listeners just in case.
Miranda Mowbray
It's from the book Superintelligence, I believe. The idea is that a company gives its AI the instruction to maximize the number of paperclips in the office. So it turns everything in the office that's made of metal into paperclips, melting things down to make more. It stops people doing any work that isn't producing paperclips, it digs up the earth to mine more metal for paperclips, and so on. Now, this is silly, but it's making an important and sensible point, which is that something that seems a fairly benign optimization may have side effects that you don't want. The reason it's silly, I think, is that I can't see you letting an AI that manages your office take out a mining contract, and I can't see you giving it weapons to kill the people in the company who aren't making paperclips. Any specification of what an AI can do has the optimization, but it also has the actions that are allowed. And you're not going to say it's allowed to do anything.
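Miranda's point, that a sane specification pairs the objective with an explicit set of permitted actions, can be sketched directly. This is a hypothetical illustration with invented names, showing an allow-list enforced before any action executes:

```python
# Hypothetical sketch: an agent specification that carries both an
# objective and an explicit allow-list of actions. Anything outside the
# list (mining contracts, melting the furniture) is refused up front.
from dataclasses import dataclass, field

@dataclass
class ConstrainedAgent:
    objective: str
    allowed_actions: set[str] = field(default_factory=set)

    def act(self, action: str) -> str:
        # The guardrail: refuse anything not explicitly permitted.
        if action not in self.allowed_actions:
            return f"REFUSED: {action!r} is not in the permitted action set"
        return f"executing {action!r} in pursuit of: {self.objective}"

office_agent = ConstrainedAgent(
    objective="maximize paperclip output",
    allowed_actions={"order_wire_stock", "schedule_machine", "report_inventory"},
)

print(office_agent.act("order_wire_stock"))      # permitted
print(office_agent.act("sign_mining_contract"))  # refused at the boundary
```

The allow-list is the inverse of trying to enumerate everything the agent must not do, which is the worry Helen raises next: a deny-list can never be complete, but an allow-list fails closed.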
Helen Byrne
Interesting. So that leads me on to a question I have about agents. AI agents are, for me, where I start to see a greater AI safety risk. By AI agents, I mean AI systems that can take actions. We've seen these amazing capabilities with LLMs, and we're all really excited about them, but an LLM on its own can only give you a recommendation for how to do something, whereas an LLM agent can actually take actions and complete some open-ended task for you. I actually read recently that Open Philanthropy have just given out some grants supporting research into the benchmarking of LLM agents. Now, you mentioned constraining the actions that agents can take. But I guess it's a reasonable worry that you won't constrain all of the possible actions, or that the agent will take a different action that lets it get around your guardrails in some way.
Miranda Mowbray
I am concerned about the use of agents, and that's for cybersecurity reasons. Once you get much more complex interactions, there are many, many more places where an attack by humans can happen, and it's going to be more difficult to ensure that what's happening is okay. Some of the stories about attacks by AI I don't believe, but, because I've worked in cybersecurity, I am concerned about humans using AI as a tool for attack, which is already happening, as well as a tool for defense. Having said that, because I teach this I've kept a lookout for problems that have happened in real life, and almost all of them are unintentional. One that springs to mind immediately was Twitter's program for cropping pictures; this was Twitter before it became X. If you upload a picture to Twitter and it's not the right shape, it has to be cropped somehow. I think they bought in a cropping mechanism from another company, and they didn't test its effect. And it was discovered that if you put a black celebrity and a white celebrity in the same picture, wherever they are in it, it crops out the black celebrity. Twitter did a really good thing. Firstly, they fessed up: they said yes, we have a problem. They also said, we will release the data and the program for researchers to have a go at it, and we will give a prize for the best research on it. Amazing. And the prize went to a paper that led to the marvelous headline: "Twitter's sexist algorithm is also ageist, ableist and Islamophobic, researchers find". So this was completely unintentional, nobody wanted it, and it would have been good to do some testing beforehand. It might have been spotted earlier if their workforce had been a bit more diverse, maybe. But they did the right thing. And it's an example of how not being careful can lead to really embarrassing headlines.
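The cropping failure is the kind of disparity a simple pre-release audit can surface. Below is a hedged sketch of such a test, not Twitter's actual methodology: `biased_crop_focus` is an invented stand-in for the model under test, and the tally shows how a skew becomes visible over a position-randomized sample.

```python
# Hypothetical pre-release fairness audit for an image cropper: feed it
# paired images containing one subject from each group (positions
# randomized) and count which subject survives the crop.
import random
from collections import Counter

def audit_cropper(crop_focus, images) -> Counter:
    """Tally which group's subject the crop keeps, across many images."""
    kept = Counter()
    for image in images:
        kept[crop_focus(image)] += 1
    return kept

def biased_crop_focus(image) -> str:
    # Invented stand-in for the model under test: keeps group "A"'s
    # subject 90% of the time regardless of where they stand.
    return "A" if random.random() < 0.9 else "B"

counts = audit_cropper(biased_crop_focus, [f"img_{i}" for i in range(1000)])
print(counts)  # roughly Counter({'A': 900, 'B': 100}): a skew this large
               # should block release until the cause is understood
```

A test like this would not explain why the model is biased, but it would have turned the headline into a bug report before launch.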
Helen Byrne
So this is a nice segue into the policy side. We're reading a lot right now about different nations' plans to regulate AI: the European AI Act, the US executive order, and the UK recently hosted the AI Safety Summit. What do you think about the different policies from the various nations, and what differences do you see between them?
Miranda Mowbray
Well, very briefly, to give a caricature: the Europeans were really the first to get stuck into this, and they produced this very nice risk-based approach, which I like. You look at the risk of the application, some applications are not particularly risky, some are really quite risky, and you have different levels of requirements accordingly. But they got rather floored when general-purpose AI models came up, so now they're asking, help, what do we do about the general-purpose stuff? That's a disadvantage of being the leader. The UK wants to impose lighter restrictions, lighter-touch regulation than Europe, I think in order to attract firms to the UK rather than to Europe. The US has this executive order, and one thing I really like about it is the use of standards. I think NIST, the standards body, is a terrific, very reliable organization, and having them involved with standards for AI is a very nice idea. The risk is that some things can't really be done by standards, but we'll see. And the Chinese talk about ethics and ethical guidelines, and they want to make sure it's under the control of the Chinese government within their country. For example, the proposal is that training data will be inspected, and if it contains a certain proportion of either toxic or politically unacceptable content, it won't be used. And we know what the results of that will be, because there's a fascinating paper that tried training sentiment analyzers on data from either inside or outside the Chinese firewall, from Baidu Baike or from Chinese Wikipedia, and then looked at the sentiment of various sentences. It was exactly as you would imagine: democracy was a hooray word in one of them and a boo word in the other, and various other things. And there is some suspicion about Falcon LLM, I don't know whether you've heard of it, it's from the UAE, some suspicion that it's rather pro-UAE, possibly just because it was trained on data from the UAE.
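The corpus effect Miranda describes can be made concrete with a toy comparison. The lexicons and scores below are invented for illustration; the actual study trained sentiment analyzers on Chinese Wikipedia versus Baidu Baike rather than using hand-built word lists.

```python
# Toy illustration: two "sentiment models", standing in for models
# trained on corpora from either side of a content filter, score the
# same politically loaded probe sentences. Lexicon values are invented.
LEXICON_OPEN     = {"democracy": +0.8, "surveillance": -0.6}
LEXICON_FILTERED = {"democracy": -0.7, "surveillance": +0.5}

def score(lexicon: dict[str, float], sentence: str) -> float:
    # Crude bag-of-words sentiment: sum the scores of known words.
    words = sentence.lower().rstrip(".").split()
    return sum(lexicon.get(w, 0.0) for w in words)

for probe in ("Democracy is spreading.", "Surveillance keeps people safe."):
    print(f"{probe!r}: open={score(LEXICON_OPEN, probe):+.1f}, "
          f"filtered={score(LEXICON_FILTERED, probe):+.1f}")
# Systematic sign flips on loaded words are the fingerprint of the
# training corpus's embedded values.
```

The point is not the toy arithmetic but the audit idea: running the same probe set against models with different training data makes their inherited values directly comparable.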
Helen Byrne
Yeah, though I guess "pro" is... I mean, I know Aleph Alpha is a German LLM startup, and they are building European sovereign AI. So they've built LLMs, sorry, foundation models, which are multimodal, that are trained on lots of European data and are not skewed to North America. In one very funny example from their marketing years ago, which they probably don't use anymore, they asked the LLM, you know, what's the most popular sport? And instead of what you'd get from ChatGPT or Claude or whatever, which would be NFL or... so yeah, I can imagine how a pro-UAE model, and I don't know what "pro" really means here, but a model trained on a lot of data from that region, could be useful.
So thank you for that introduction to the different policies across nations and regions. Who do you think should be making these policies? If you had a blank sheet of paper and could decide who should be involved in putting together these regulations, what sort of people would they be? I know that you co-signed a letter ahead...
Miranda Mowbray
I was one of the signatories, yes.
Helen Byrne
Yeah. So ahead of the AI Safety Summit that was held in the UK. And part of that letter was about bringing more of the people who are actually going to be affected by these AI systems into the discussion on policy. So who do you think should be involved, and what perspectives would they bring?
Miranda Mowbray
Yeah. So I think to make decisions about the governance of AI, which is what we're talking about, you need people who know about policy and what can be passed as policy. You need people who know about the technology, obviously. And I think you do need some representation from the people who are most likely to be affected. One problem this letter pointed out is that the summit was, well, a summit, so there were only a few organizations represented, and they tended to be, not exclusively, but they tended to be technical organizations, institutes looking at the very long term, and politicians, obviously. There wasn't much from people thinking about the current effect of AI on citizens. I'd like to point out that AI Now was there, and they've done a really great job on this, and they weren't the only one: the Alan Turing Institute, who have an ethics board and have been thinking about that sort of thing, were there too. But really, you mentioned Open Philanthropy earlier. Open Philanthropy had recently given grants to, I counted, eight of the organizations that were there, for work on either existential AI risk or biosecurity, and AI as a biosecurity threat seemed to come up surprisingly often. And Open Philanthropy were there too. So it really did look like a lot of Open Philanthropy and a lot of their chums, plus some other people, and I think there might have been more of a balance. That said, it wasn't as bad as I first thought: the first publicity for the summit was all about existential long-term risks, whereas in the end there was discussion of other things, which is good.
Helen Byrne
Yeah. More generally, how important do you think it is to have people from diverse backgrounds involved in the building and regulation of AI?
Miranda Mowbray
This is something that I really care about. Something I tell all my students is that if they're doing an AI project, they should do an ethical assessment beforehand; I show them tools for doing this. Within this ethical assessment, they should invite people who are not part of their team, and who are neither techies nor managers. Ideally, if you can have representation from the stakeholder groups who will be affected by the operation of what they're building, that's wonderful, but it's quite heavy-handed for a small project, and you don't always have the time or the money. At the very least you should have somebody who hasn't already bought into the project. People talk about gender diversity and ethnic diversity, and that's good, but what really matters, I think, is having people who think differently, who are able to see things in different ways, who have different perspectives.
Helen Byrne
I believe that's true for every team that builds anything, ever.
Miranda Mowbray
I'm with you there. And my own experience is that when I've worked with very diverse teams, it's been fabulous. It's been so good. So I would recommend it just for your own personal pleasure as well.
Helen Byrne
I'm with you. So I wanted to ask how you feel about the polarization of the debate into two groups: the techno-optimists, as they're called, and the worriers, the ethics worriers.
Miranda Mowbray
The accelerationists and the Doomers.
Helen Byrne
The Doomers. Yes, exactly.
Miranda Mowbray
I don't know that it's as clean a split as that. I mean, if you look at the recent soap opera with OpenAI, the soap opera seems to have happened because a lot of people in Silicon Valley both think that AI will solve every problem in the world and think that it might destroy the world. So is that pro-AI or anti-AI? It's both. My own position is a bit milder at both extremes. I think there are wonderful things that have come out of AI already, and more will come in the future, but there are things that we need to worry about. I don't believe it'll destroy the planet anytime soon, but there have been difficulties with it already.
Helen Byrne
Do you think that the portrayal of this polarization in popular media is having an effect?
Miranda Mowbray
Yeah, absolutely, I do. There's an issue that, if you're a journalist, "AI will save the world" and "AI will destroy the world" are both better headlines than "this particular application is a bit pants" or "this application will improve things a little bit". And it's had an effect in the real world, in that some very smart people have gone to Silicon Valley because they think either that they will create a substitute for humanity, or that they need to work on this thing before it destroys us all. If you look at some of the people working on the foundation models we're getting, that's why they're doing it: they think the stakes are that high. I don't believe the stakes are that high, but it's had an effect. It also has the effect that the conversations that are not at the existential level, about smaller but still significant benefits and smaller but still significant risks, haven't had as much airtime as they might.
Helen Byrne
Shall we talk about something more positive? You've mentioned a few of the more extreme ideas of where AI could change the world. What do you predict will be the applications of AI where we'll see the biggest benefits? And could you also tell us if you have any thoughts on the ethics side of those applications?
Miranda Mowbray
Okay, so firstly, my intention is to change the world; it always has been, I don't know about you. And I do think that AI is going to be an important tool if you want to do that. So, areas that I can see as having very significant benefits. One is health and medicine. We've already had machine learning finding contra-indications between different pharmaceuticals, for example; that's a really old example. And I think AlphaFold is just marvelous, although I'm certainly not an expert in that area. One ethical issue there is whether we will use AI in elder care in a way that enhances the abilities of human carers, doing for them the jobs that really can be automated very easily and supporting them in the jobs where you still need a human, or whether it will be used to get rid of humans in eldercare, to displace them even more than they already are. Then there's agriculture. People don't tend to talk about agriculture for AI, but I think the use of AI in agriculture has tremendous potential: knowing exactly when it's best to plant and to pick. There are some very nice applications of it already. And a lot of farmers' knowledge is folk knowledge that's been handed down but hasn't been tested in a scientific way, and we can find out which of it is correct. An interesting ethical question that comes up in agriculture is that it's one of the few places where I have seen collective ownership of data by the farmers, rather than by the technology companies. You can do that because you have farmers' organizations, farmers' collectives, who do all the farming in a particular area or on a particular crop. I think that's a really interesting model for making sure that the people who are using AI, rather than providing it, get a good share of the value surplus. They're not really the data subjects; I guess the plants are the data subjects in this case. The third area is transport and logistics: making sure that we move things around efficiently. And machine learning is going to be one of the tools that we need for climate science; I have a great student who's been working on that. The question there is whether that will counterbalance the energy used by AI, because AI programs tend to use more energy than non-AI ones, just as a rule of thumb, and at the moment we're substituting AI for things that could be done in a simpler way, in some cases increasing energy use, which is not good. Having said that, I'd like to give a shout-out to Google, for example, who were a pioneer in reducing the energy use of their data centers; they've done a lot of work on that, and that's the sort of thing you should be thinking about. And the fourth area that I think is really great is AI as a tool for artists. I can see wonderful things happening. I know that's controversial, and of course the ethical issues around that concern copyright law: it's not clear whether it still works very well, and maybe we need adaptations to it. I'm thinking both of the copyright of data going in and of artworks coming out. And really, we need to make sure that human artists still have a way to make a living, which is not necessarily freezing everything so that it can't be done by AI. I don't think that would be the right approach. I think the right approach is for artists to use AI as this magnificent tool.
Helen Byrne
You mentioned art, and AI as a positive tool for generating art. At the meetup where I met you earlier this year, you actually showed us something, and I wondered if you have it with you today. Could you tell us about it, present it to the people watching on video, and maybe describe it for the people who are listening? Because I thought it was really... yeah.
Miranda Mowbray
I used some of the AI text-to-image tools. I asked them for an image of a small metal sculpture depicting harmony between AI and nature. And as you do when you prompt such engines, I did it multiple times and chose, I think it was my top 10 or 20 that I liked, and then I sent those off to a sculptor. The sculptor and I decided that we liked two of them, so we decided to combine the two, and she made a sculpture which I now own. I'm holding it up for those of you who can see the video, and I'll try to describe it for everyone else. I should say it's mainly like one of the pictures that came up, except that it has real moss on it (dead moss, but real moss), which was something that appeared in another of the images, and real lichen. So this couldn't have been 3D printed, not with real lichen. It's made of a sort of shiny steel. You can see there's a bit that looks a bit like a computer. It's on wheels; I'm not quite sure why it's on wheels, but that was in the image. There's moss growing on either side of the computer, and sprouting from it there's a flower made of stainless steel, holding out its leaves in a very optimistic, happy way, sort of greeting the future.
Helen Byrne
Brilliant, love that. Thank you for sharing it. I think it's nice to end on such a positive example of AI and art, a collaboration between ourselves and AI, which you've shown us.
Miranda Mowbray
I should say, because part of my job is to talk about problems, it may not be clear that I'm really positive about this technology. I think it's really exciting.
Helen Byrne
Which very nicely leads me on to my last question, which is more general. I know you've talked about the ways in which you see AI being a huge benefit, but more generally, what are you really excited about in AI at the moment, whether in the news or in research that you've read? What do you think is really exciting?
Miranda Mowbray
Oh, for a long time I've thought that... so traditional, old-fashioned AI was logic-based. It didn't have huge amounts of data, but it used logic. What's happened in the recent upsurge is data-based: you use a huge amount of data, and you don't try to work out what's going on, you just analyze the data. And I've thought for a while that if you could combine the two, you would get something really cool. It looks like some of the steps being taken just now are people, organizations, researchers trying to combine the two in order to get around some of the problems we know about in the current foundation models. And that is really exciting. It's a very techie answer, but that's what excites me.
Helen Byrne
Great. Thank you so much, Miranda, for today, and thanks for all of your answers and your really interesting take on everything. I really appreciate it.
Miranda Mowbray
Thank you very much for inviting me, Helen.
Helen Byrne
Thanks again to Miranda. AI ethics is such a big subject, and there's so much to talk about, so I'm sure we'll need to come back to this topic in future episodes. If you enjoyed this episode, please leave us a review, share it, and subscribe to the podcast. And make sure you follow us on social media @distillationpod. Thanks for being here. Please join us again next time.