Global News Podcast - Special Edition - Artificial Intelligence - who cares?
Episode Date: September 15, 2023
What is AI? What can it do and what are its current limitations? A tool for good - or should we be worried? Will we lose our jobs? Are we ready to be cared for by machines? Our Tech Editor, Zoe Kleinman, and a panel of international experts explore AI's impact on healthcare, the environment, the law and the arts in a special edition recorded at Science Gallery London.
Transcript
Hello, this is the Global News Podcast from the BBC World Service, with reports and analysis
from across the world. The latest news seven days a week. BBC World Service podcasts are
supported by advertising.
If you're hearing this, you're probably already listening to BBC's award-winning news podcasts.
But did you know that you can listen to them without ads? Get current affairs podcasts like Global News, AmeriCast and The Global Story, plus other great BBC podcasts from history to comedy to true crime, all ad-free. Simply subscribe to BBC Podcast Premium on Apple Podcasts or listen to Amazon Music with a Prime membership. Spend less time on ads and more time with BBC Podcasts.
Hello, this is a special edition of the Global News Podcast
looking at artificial intelligence.
I'm Nick Miles and we will be giving you a step-by-step guide
to what AI is and what it can and cannot do at the moment.
With a panel of experts in front of an audience at
Science Gallery London, we will look ahead to how AI might transform our lives.
Everybody in 30 to 50 years may potentially get access to state-like powers, the ability to
coordinate huge actions over extended time periods. And that really is a fundamentally
different quality to the local effects of technologies in the past. We'll examine the effect it is having on our
healthcare systems right now and its scope and limitations for solving some of the huge
environmental challenges we face. Getting bogged down in a kind of a technological solution
narrative stops us from really thinking about the fact
that we have to be the ones who want to instigate this change.
Technology isn't going to solve these massive, real existential risks for us.
It's down to us and it's down to the people who govern us
and it's down to our individual actions and collective actions as well.
Also in this podcast, as with any rapidly developing technology,
there are concerns, of course.
We will look at the perceived risks and how we can minimise them.
Can those technologies actually look after us in a way that is safe and satisfactory?
What kind of devil's bargain are we making when we start to hand over our happiness and wellbeing to artificial intelligence? Hello and a very warm welcome to this special edition of the Global News Podcast
from the BBC World Service, all about artificial intelligence. We're broadcasting today from King's
College Science Gallery in London, part of a network of galleries connecting science with the arts around the world,
from Atlanta to Berlin, Melbourne to Monterrey. AI is something that you can't fail to have noticed
in recent months. The latest chatbots have amazed us all with their ability to almost
instantaneously write essays on anything that you throw at them. But as we'll see, AI goes way beyond that, of course.
To discuss how we can harness the benefits of AI whilst minimising the downsides, I'm joined by the
BBC's technology editor Zoe Kleinman, who will help guide us through the hour. Let's first look at what
AI is. Here's a collection of views that we gathered upstairs in the gallery, which is currently
featuring installations looking at the challenges of AI. I think it's a kind of machine created by
humans to make our life better. AI is, let's say, magic. AI is magic. I would define AI as any kind of data collection system that can output data and sort through it, based on what you're asking it to do.
A bit of awe, excitement and a bit of fear there.
Well, what's the view from AI itself?
We asked the chatbot ChatGPT. AI, artificial intelligence, refers to the creation of computer
systems or machines that can perform tasks that typically require human intelligence.
These tasks include things like understanding natural language, recognising patterns,
making decisions, solving problems, and learning from experience. That was ChatGPT.
Now for a human, let's get a definition from Dr Michael Luck,
who's the former director of the King's Institute for Artificial Intelligence.
AI is incredibly hard to define and that's because it keeps changing.
But if you push me, what I'll say is this.
If you see a human doing something that we think requires intelligent behaviour, and we get a machine doing the same thing, then that's AI.
So over to you now, Zoe Kleinman. Zoe, would you add anything to those definitions?
I think what I would add is something I heard Sam Altman say when he was talking to US lawmakers a few months ago.
He is the co-founder of OpenAI, which created ChatGPT.
And he said, AI is a tool, not a creature.
And I think that's a really important thing to remember because we have a long history, don't we, of anthropomorphizing robots.
You know, people talk to their robot vacuums.
They don't like to leave them in the dark, because they treat them like their pets or people. And I think it's really important to remember, even when AI is seemingly gushing at you effusively, as ChatGPT can do, that it's not a human being. It's not a person. It's very tempting, I do it myself sometimes, but we shouldn't. It's a machine, it's a program, it's a device. It's not sentient.
I mean, AI has been around for decades, of course; Alan Turing was talking about machine intelligence more than 70 years ago. What has enabled these recent advances in AI, though?
As with so many areas of tech and science, you have to start slowly at the beginning, and we're now in a position where we've been working on this for years. For many people, the first time they ever knowingly encountered AI was when ChatGPT exploded at us, and that was less than a year ago, only in November last year. But of course it has been around for ages; I think it's just now come out of the gate at us, if you like. For years it's been suggesting what you watch next on Netflix or YouTube; it's been curating your friends' feeds on Facebook. It has been around us all this time, we just haven't spoken to it before.
Hiding in plain sight, but now and in the future we will really notice it big time.
Yeah and I think that's one of the challenges of regulation.
You know, the regulators in China and also in Europe are saying people need to know when they're interacting with an AI tool. They need to be aware that it's not a person that they're dealing with.
Or, you know, if there's content that's being generated by these tools, that it's very clearly flagged that it wasn't made by a human.
Sure. We will come back to regulation because it's a huge issue later in the hour. But first, I think it's time to introduce our panel of distinguished guests here.
First of all, Carrie Hyde-Vamonde, you're a lawyer, a visiting lecturer in law here at King's.
What's your involvement in AI? I'm really interested in how AI can be used in the justice
system to deal with some of the problems that we've got within
the justice system, such as delays. Vicky Goh, you're here as a practising medical professional. Why are you here? What do you do? So I'm an academic radiologist here at Guy's and St Thomas' and King's, and my interest really is in developing and deploying AI tools for medical imaging in particular, but also more generally in healthcare. Okay, again, we'll come back to that, because AI and
healthcare is a huge issue that is going to affect so many people around the world. Let's move on to
Gabby Samuel now. My background is ethics and sociology, and I look at the ethical and social issues associated with AI, and in particular the ethical and social issues associated with the environmental impacts of AI. And Kate
Devlin, you are a computer scientist looking at the social impact of AI, aren't you? That's your
particular area of interest. What else? That's right, yes. So I want to understand how AI impacts people. And I'm part of a new national programme called Responsible AI UK
that seeks to unite that landscape of all the people doing responsible AI.
Now, more from our panel in a moment.
But first, the reason we are at King's at all right now is that just upstairs,
there is an exhibition looking at some of the ways that AI can be used to interact with us and have an impact on our lives.
Cat Royale is a new work from Blast Theory that explores whether AI can make us happier.
The artists have made a utopia for cats, an ideal world where every possible comfort and natural environment...
That's some commentary online from the makers of one of the installations here.
For the next 12 days,
they will spend six hours a day here relaxing, eating and exploring. In the centre of the room
is a robot arm connected to a computer vision system and an AI that offers games to each cat.
Over time, the system attempts to measure which games the cats like in order to increase their
happiness. Well, the cats are all gone now but their utopia is
still up for people to see. It's a box four metres square or so, with lots of games, perches to jump from and, as you heard there, an AI-powered arm in the centre. Well, Matt Adams helped develop it all.
He's been telling me about the issues they try to explore here. AI is coming more and more into the home and into care settings.
So we are getting closer and closer to these technologies.
Can those technologies actually look after us in a way that is safe and satisfactory?
What kind of devil's bargain are we making when we start to hand over our happiness and well-being to artificial intelligence?
And as we saw from the cats' experience, the cats liked the treats.
The AI learned to give more treats.
So you've got to be careful what you wish for.
Absolutely.
And we had a human override on that AI system.
And we had to refuse food a lot of the time, because the AI just wanted to give more and more food. And I think, you know, with social media in the last ten years, we've all learned that something that gives you endless little dopamine hits can ultimately be a very dangerous thing.
Certainly can. Matt Adams talked there about human override, how to stay in control, and that's a theme, of course, that we'll pick up on later in the programme, because it's all about regulation. We will look at that later on.
But let's stick with Cat Royale for a moment. Regulation is a key thing, and Kate Devlin is looking into that. She's also looking at how we design AI, aren't you, Kate, to make people feel at ease with these kinds of technologies in the future.
So what are the things that we need to be doing?
It's really important that we consider who is being affected by this technology.
And we know that there are a lot of people out there who will be subject to it, but might not have any say in it.
So we have to ensure that everything is done with careful thought, making sure that it's responsible, making sure we've looked at any repercussions from any bias that might be in the system.
And we've seen over the last few years that a lot of people have got these little smart speakers in their houses. That's the most obvious way in which AI seems to have come into our home, but it's not just that, is it, even now?
I think what we're going to see more of is the rise of devices that, you know, we call smart, don't we? Those are devices that we program to make decisions for us. As you say, if you've got a smart speaker, you might have it set up to turn your lights on and off, to recommend things that you want to see or to hear. A few years ago now, I stayed overnight in a house full of robots. It's a long story, but it was really interesting. They were care robots, so they were designed to be machines that would look after people who had primarily physical needs; it was looking at the older population. And one of the things I had to do was sleep the night there. I lay on a bed full of sensors, and basically, if you didn't move for too long, these robots would come and see if you were all right. I didn't get much sleep, I have to say, because I was so nervous about some robot calling an ambulance for me. It felt at the time a little bit dystopian, but actually, fast forward a few years and you can kind of see us accepting that sort of relationship with sensors, with things wanting to understand us and to be personalised for us.
And I think AI has the potential to do that.
And Kate, obviously it has the potential, but it needs to be very well designed. People are working on this around the world at the moment.
That's right, yeah.
And robots may or may not have AI in them.
A lot of them in this case would, so they need to be able to adjust to their environment.
And that's why we think there might be potential in things like care robots.
But it's also quite difficult because it turns out that people are pretty fragile
and robots aren't very good at gripping things.
So if you want to just build a robot that can lift and carry people,
well, it's been tried, it hasn't really worked.
There are other ways that you can integrate this technology.
So you could have sensors, for example, like Zoe was saying.
You could have assistive technologies, you could have exoskeletons that carers wear
that would help them lift people and carry them. And you've been finding out what kinds of aspects of robotics would appeal to people?
We have done a survey that goes along with Cat Royale, so if you visit the gallery to see that exhibition you can also fill out our survey. And I was quite curious, because when you say to people, do you think that automated care in the future for old people would be useful, there's a lot of agreement: that sounds great, we have a care shortage, we need to have that help. So people say, yes, let's have a robot that could look after the elderly. If you then say, well, how about if it looked after your cat? Then they start getting a bit more apprehensive. I'm slightly concerned, so we wanted to run this survey to find out what the attitudes are. We're still gathering data, and I don't want to influence it too much if people want to go and fill that in, but it's quite intriguing to see what the responses are if you ask: would a robot look after your granny, versus your baby, versus your pet?
I suppose from care it's a very short skip to the use of AI in health in general as well. And we asked some people around the world to send in
voice notes about what their hopes for AI in the field of health would be. Let's listen to one
now, Amandeep Ahuja from Dubai. My hope for artificial intelligence is for the early diagnosis of diseases. It was quite interesting to see that certain cases of breast cancer
were caught quite early because of the positive implications
of artificial intelligence.
And here's another one on a slightly different issue
from Michael de Batista from Malta.
In my view, artificial intelligence could potentially enhance
social inclusion in a number of countries
around the world at least in part because it could facilitate and promote independent living
for persons with disability. Okay and John Landridge from Spain sent this in. My number one hope and indeed expectation from AI
is that medical applications can be so radically enhanced
as to be able to detect, treat and cure
all those conditions currently so damaging to people everywhere.
Wow, pretty optimistic there.
Vicky Goh, you're a working radiologist.
Do you think that's overly optimistic, certainly for the moment?
I think certainly for the moment it is, but it was very interesting to hear that first clip. They're talking about breast screening, and there you can see already that we are starting to see successes in terms of AI in healthcare. But the important thing to note here is that those are actually very task-focused,
and that's where we're seeing most of the successes.
If we step away from that scenario,
it's a little bit more challenging.
And task-focused in terms of your work,
radiology, is doing very well.
Yes, it's fine if I just want to exclude one condition at any one time.
But if you want to integrate lots of data points and manage a patient's condition over a long period of time, that's when AI is not really successful at the moment.
Kate Devlin, I mean, there are all sorts of ethical issues and concerns.
If you hand over health care, either physical health care or mental health care to AI in any form, there will be
concerns, won't there? Yeah, the initial concerns are around things like data and privacy. So this
is very sensitive data. Do you want that to go to the companies who create these AI apps, for
example? But there are other issues as well. So we have to think about whether or not we still have
that human in the loop that was mentioned, that human control and oversight. And then further down the line, there's social
implications. If we are going to rely heavily on technology, is that going to be at the cost of
employing doctors? And when we have a system, a healthcare system, like, for example, the NHS,
where there are lots of issues with how that information technology joins up already.
How are we going to integrate those systems?
So we have a lot of failing IT and surgeries that aren't joined up.
We have different systems at play that need to be brought together.
So logistically, it's a challenge as well.
Vicky Goh, at the moment radiology is using AI, and there are issues as well with the data being used, because you could get biases, couldn't you? Absolutely. And the biases essentially arise in the training of the algorithms. What we're finding at the moment is that if you train an algorithm on very selective datasets, and a lot of the algorithms are being developed on a small number of datasets worldwide that are open for development, it may not necessarily translate to your healthcare system. And then, you know, you have those issues of generalisability.
Let's move on now, because obviously the question with all new technology, AI included, is: how is it going to be used? Who's going to make the decisions? Who's taking responsibility for those decisions that could profoundly affect all of our lives? Well, that is another of the issues being looked at upstairs in the gallery, as I've been finding out.
I'm standing now in a room that looks to my untrained eye a little bit like a forensics
lab because there are a number of different glass jars in front of me. There are syringes here on the desk, some graph paper.
Sarah Selby, you created this work.
Tell us what you're trying to do here.
Yes, so Between the Lines is a project
that explores the administrative systems of the UK border regime.
We have been collaborating with a charity called Beyond Detention
and also a bioengineering company called Twist Bioscience.
And we've been speaking with individuals who have been detained within the UK
as a result of the immigration policies
and collecting their testimonies and their experiences of that
and kind of thinking about the impact that it's had on them.
And then we've been encoding these testimonies into DNA nucleotides
and creating synthetic DNA,
which is then embedded into writing ink and distributed to decision makers and policy makers within the UK border system. It's about challenging public perception of immigration policies, and trying to kind of nail down the individuals at the centre of these debates. It's also about prompting reflection upon the people that are making these decisions.
I'm hoping that it's going to kind of make them keep the individuals
that are going to be most impacted by these systems
at the heart of the decisions that they're making
and consider how kind of sometimes quite simple administrative actions
can result in quite widespread disaster for the people that are impacted by it.
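As an aside for the technically curious: encoding text into DNA, as described there, generally means mapping binary data onto the four bases A, C, G and T. The project's own encoding scheme isn't detailed in the programme, so the sketch below is only the simplest textbook mapping, two bits per base, and not their method:

```python
# Minimal text-to-DNA sketch: two bits per nucleotide. Illustrative
# only; this is not the scheme used by Between the Lines.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def text_to_dna(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna: str) -> str:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

print(text_to_dna("AI"))               # CAACCAGC
print(dna_to_text(text_to_dna("AI")))  # AI
```

Real synthetic-DNA workflows add constraints this toy ignores, such as avoiding long runs of the same base and building in error correction, before a company like Twist Bioscience synthesises the strands.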
Pretty important issues raised by that.
Carrie Hyde-Vamonde, you're a lawyer.
You're not specifically involved in the field of immigration,
but how can the use of AI help, do you think, in the criminal justice system as a whole? And what
issues and concerns are raised by that, do you reckon? We know that criminal justice systems are
struggling under a huge weight of cases at the moment. So how could AI help?
Yes, there's definitely a problem with delay across the globe. And just to give a sort of window into how delay impacts judicial decisions, or how people's lives are impacted by delay with court hearings: people's memories are going to fade; witnesses are not going to remember all the details. There are lives affected by that, whether it's the victim or the individual in the dock, the person accused.
And so in criminal justice, you might be trying to
see how you could speed up processes. Can AI look at a vast range of cases and from those patterns
that it perceives, because it's very good at perceiving patterns, be able to decide whether
people are potentially guilty or not guilty, or whether AI could decide at least what are similar cases,
how are these similar cases being treated. So these are all possibilities that AI could help
with. There are obviously concerns related to those. We'll come to those in a moment,
but where is it actually being used at the moment around the world? So, for example, in the UK there's limited use, essentially, in courts at least, although there is standard statistical analysis going on behind the scenes. But that's very much overseen by the humans involved. If we go further afield, in Brazil, for example, they're looking at using AI for analysis of cases, and trying to use that to assess similar cases again: helping, not making decisions, but trying to encourage judges to look at similar cases.
And in China, there's very extensive use of AI technologies to ensure that, as it's put, like cases are treated alike: trying to standardise how decisions are made and, therefore, to speed them up. Now, it does set alarm bells ringing
for many people, the use of AI in making decisions that could send people to prison for many, many
years. And we've been hearing from a few people around the world. Alexandre Morin from Dijon in France is one of the people with concerns.
I fear that many might have to face discriminatory artificial intelligence,
depending on how it starts and how it is fed,
until unbiased standards can be developed and applied.
Carrie, that is his concern.
But from what you were saying about China, their theory, at least, is that everybody would be treated in a similar way, and it would do away with bias. It would do away with the bias of judges who were seen as biased for or against particular ethnic minorities or particular age groups, those kinds of things.
Yes, I think the concern there is consistency. We might think, well, legal processes, laws, are rules, and so all we have to do is ensure that the rules are implemented in a consistent fashion. But there's a lot of complexity behind that. First of all, the courts play a really important role in the relationship between people and the state, essentially, and we expect them to adapt as time goes on, as culture proceeds. And then biases: in most countries there is some element of bias in the way in which court cases are dealt with in real life. We know about racial biases that do occur; this is not news. Whether it affects the majority of cases or not is another thing, but it is certainly known to be an issue. What does AI do? It looks at the data that it is provided with. So if you're trying to learn from data, if you're looking at past cases and trying to predict what future cases are going to be, then you risk repeating those biases and, in fact, re-emphasising them, making them worse. And there's a really clear distinction here between this and something like the health data we've just been talking about, because in health data you can biopsy and see if there's a cancer. In the case of judicial truth, the question is: is this person really guilty? That's something we've been trying to understand throughout the history of justice. We can't say for certain whether a person is guilty; we can only say that we made the decision to the best of our abilities. And the best of our abilities are essentially flawed.
We are flawed, but we hope to have as much trust in our judicial system as is humanly possible. Coming up, will AI mean that I'm likely
to lose my job? That's what a lot of people are thinking.
Or get a far more rewarding and less physically demanding one, maybe.
We'll hear from people about their concerns and ask why and whether they are justified.
What about the environmental impact of AI?
Will it help or hinder us when it comes to climate change, for example?
And we will try to answer the big question.
Could AI take over?
And how do we put in the checks and balances to make sure that it doesn't?
If you're hearing this, you're probably already listening to BBC's award-winning news podcasts. But did you know that you can listen to them without ads?
Get current affairs podcasts like Global News,
AmeriCast and The Global Story,
plus other great BBC podcasts from history to comedy to true crime,
all ad-free.
Simply subscribe to BBC Podcast Premium on Apple Podcasts
or listen to Amazon Music with a Prime membership.
Spend less time on ads and more time with BBC Podcasts.
Welcome back to everyone listening around the world to this exploration of all things AI.
In the first half, we've looked at the impact AI is already having in healthcare
and the justice systems around the world.
We are going to move on now to look at the impact AI and tech may have on the environment,
both for good and for ill. Here's a comment that came in from a listener in Sweden.
Hello, my name is Santosh and I'm from Karnataka state in India. At present I work and live in Uppsala, Sweden.
My fear about the use of artificial intelligence in the industrial sector is that it might lead to increased mining for minerals, increasing demand for plastics and manufacturing, and many devastating environmental consequences.
And that is something that's been worrying another of the creators of the art installations
that we visited earlier.
Well, we've come now into a gloomy, cave-like room, I suppose, and as my eyes become acclimatised, I can see around me what looks like the detritus of the 21st century: voice-activated virtual assistants in mud on the floor. It looks
like a graveyard for tech. Well, the person behind this is Wesley Goatley. He's with me now. Wesley,
what kind of issues are you trying to raise?
There's a lot to both the creation of a device like, say, a smart speaker, an Amazon Echo or something, where it's got a huge cost to the planet in the extractive nature of creating a device like that, pulling rare metals out of the earth. It then operates for a very short time, on a shelf or a windowsill, for maybe two and a half years, before it breaks, malfunctions, fails, or is simply replaced by the next newest, shiniest object. And then they go back to the earth, in places that look like this, but obviously often in countries like Kenya and Ghana, for example, where the consumer societies of the world dump a lot of those sorts of materials. And in those places there's this other, layered impact, where these technologies have a very long decay time, so they decay for much, much longer than they were ever functional for. And when they decay, they do things like bleed out materials that poison the water and poison the ground. So they have this long-lasting environmental impact at both ends, at all stages of their construction and operation.
And like you say, the data centres as well. You know, the average data centre consumes about the same amount of water per day as a town or city of about 50,000 people does.
Against these negatives, we've got to weigh up the potential positives for the environment of AI in general, whether it's finding the future of fusion reactors, if and when that might happen, or finding solutions to climate change as well. We shouldn't forget these aspects either.
No, absolutely. I think the danger is when technologies such as these, which are there to aid human problem-solving, are considered in themselves to be the solution to the problem. You know, there's a phrase, 'technological solutionism', where people frame any new technology as the solution to much, much bigger problems. I mean, there's a discussion within certain parts of the AI community, usually propagated by people who benefit a lot from attention on AI, like large-scale operators in this space, heads of big tech companies: they like to talk about existential risk. They say, well, you know, AI is going to take over, it's going to do this and that in the future. But I would say that the real existential risk is things like the climate crisis, and that's a human problem. It's not the AI that's going to cause that; we do, and we are. But also, the solution isn't AI; the solution is us, because AI and computational technologies in general are just problem-solving tools. It needs the social, political and cultural willpower to want to actually solve those problems.
Well, let's pick up on some of those ideas and concerns now. Gabby Samuel,
you have looked at the impact of AI on the environment. What should we be worried about? Wesley Goatley mentioned what goes into creating AI and the tech that goes with it. It's quite a problem, isn't it?
It's a huge problem. So if we think about AI, as we know, it's underpinned by digital technologies, so we have to think about the environmental impacts of digital technologies. We know that the greenhouse gas emissions associated with digital technologies are similar to those of the aviation industry: we're looking at about 2.1 to 3.9% of global emissions. So it's pretty high, and AI use is increasing all the time. And one thing we need to think about, alongside these issues of mining and electronic waste that Wesley was talking about, is something called rebound effects. What you come across a lot in the private sector, when we're talking about that technological solutionism, is the claim that AI is going to make other sectors much more efficient, and that that's going to improve things in terms of climate change.
But there's a paradox; it's called the Jevons paradox. And the paradox goes that the efficiency savings we would normally expect when we increase efficiency are often much smaller than we would expect, because they are rebounded away, sometimes so much so that consumption actually increases, which is when it backfires. And that's because of the behavioural change that comes with the rebound effect. So let me give you an example. If you buy a new, efficient freezer, you might leave the door open for longer because you don't need to worry about it, or do other things that increase your use of electricity, so that overall your electricity use goes up. Or if you put insulation in your house, you don't worry about your heating as much, and maybe your heating bill goes up. These are the types of behavioural changes that are not considered when we take this technological solutionism approach. And as we move to using more and more AI, while it promises to increase the efficiency of all the other sectors dealing with issues such as climate change, what we're not considering are these rebound effects.
And Wesley was quite dismissive when I talked about how AI might be used at some point in the future to find solutions to climate change. He's right to be dismissive, perhaps, isn't he? He's very, very right to be dismissive. A lot of my work focuses on how the private sector puts out this
narrative that technology can solve problems in society. But what that does is it hides what's
behind that technology, right? So, as Wesley was saying, it's the human-technological relationships that we need to think about when we're trying to solve problems. And the other thing technological solutionism does is take our minds away from other, perhaps low-tech, solutions that might be more justified or might work better. I also work in the health sector, so to give you a really quick example: we're investing huge amounts in technology in the health sector.
And we know from earlier on that some of that will produce a lot of health benefits. But actually what we know is that the majority of health outcomes are associated with social, economic and other types of factors.
And we know that if we get people out of poverty, if we give them a good education, we're going to stand them in a much better place in terms of their health than if we just invest in the new shiny objects of AI.
But the way that we are in society is that we're investing more and more in tech,
but we're not thinking about those most vulnerable in society.
So AI takes our mind away from that.
When it comes to the most vulnerable in society on an environmental level, those are people in the Global South, who are already struggling with the impact of climate change. And yet, Zoe, to a certain extent artificial intelligence can help people deal with some of the worst impacts of climate change, identifying where a particular event is going to take place and getting resources to those areas.
What I think is interesting about AI tools is that, well, you've heard the phrase 'a solution looking for a problem', but sometimes they do come up with solutions to things we didn't know about. I interviewed a seismologist in New Zealand who had been studying the vibrations of broadband cables buried in the road in a remote part of New Zealand that's overdue an earthquake. He was trying to work out whether the vibrations of these cables gave him any information about when this big quake might happen, whether there was anything going on. And there was loads of data, because it turns out, guess what, they shake all the time. So there was absolutely loads of data, but they built an AI tool to process it really quickly, and he said it was throwing up all sorts of really interesting things about road management, about the impact of the traffic on the quality of the tarmac that these cables were buried in. And then there was a tsunami hundreds of miles away, and that was picked up by these cables as well. And he said, to be honest, I don't really care about any of that; I'm a seismologist, I only want to know about earthquakes. But there is all of this data, and all of these patterns forming, that we didn't even know we needed to know about. And I do think that's the other side of it: sometimes you crunch enough data and you find stuff that you weren't necessarily looking for, but that is helpful.
Indeed. Crunch enough data and you could potentially realise that you don't need as many employees as well, because that is the elephant in the room that perhaps we've been trying to avoid up until now. And the question is: is AI going to mean I lose my job?
It's a question around the world.
When we asked listeners to the podcast to send in their thoughts about that,
we had a really big response.
Let's listen to one of them.
Hi, Global News.
This is Laura from beautiful Brighton in the UK.
And my hope for AI is that it can take over tedious, rote work, and we can all have a bit more leisure time. And my fear would be the opposite: that AI takes over interesting, engaging and fulfilling jobs, creative jobs, jobs in the information economy, and we are all left doing tedious manual labour forever.
Not a happy thought. Here's another one from Estefan Guzman from Los Angeles.
In the early days, us artists, we used to wrap ourselves in the comfort that art was a very
human endeavor, a thing that required an ingredient of soul or heart in the process.
People would make fun of the hands and general sterility of AI-generated images,
but in less than a year, the quality has exponentially leapt to be indistinguishable
from the photorealistic art to the more stylistic caricature.
It seems a shame that the reins of the future of art are now in the hands of business and tech industries, whose concerns are not with the arts or humanities, but just lower costs and higher profits. Gloomy predictions there. Zoe, there seem to be more concerns than hopes. Is that your impression? I think we are standing at a fork in the road here,
and there's a lot of uncertainty and a lot of unknowns. And I think we may well all know examples of people whose jobs have been affected. I've got a friend who's a copywriter; there were five of them in her company and now there's only one, her, and her job is to check the copy that's generated by ChatGPT. So, you know, we can see it coming. And Microsoft has invested billions in ChatGPT, billions of dollars, and it's putting it into its Office products. So it will be able to generate spreadsheets, it will be able to summarise meetings, it will make pie charts and PowerPoint presentations. And what Microsoft says is that it will take the drudgery out of office work. And you think, great, I hate drudgery, I don't want to do drudgery. But what if that is your job? What if drudgery is actually your job? What are you going to be doing if you're not doing that? We're going to see it hit quite a large selection of jobs. Okay, thanks very much. Let's move on to
something that we touched on earlier, because if you look around our lives, maybe they have been made safe by regulation. It looks, though, to my untrained eye, as if artificial intelligence doesn't have an awful lot of this in place already, and it is playing catch-up. Kate, is that right?
Yes, we're always going to be a bit behind on regulation
because technology moves so quickly.
So that's definitely a thing.
And although we have existing laws in place that can cover a lot of this, there will have to be some
new ones as well. Let's take a question now from one of our listeners who sent this voice note in
about it. She cares quite a lot about this issue. My name is Laura. I'm from the Philippines. And
my biggest fear about AI is the speed of development and lack of regulation. And I worry
about another explosion like social media and all the consequences that we cannot foresee because
the speed is outpacing regulation. I don't know if anyone remembers Dolly the Sheep from the 90s, when cloning was the big thing. And it was really, really slowed down. I'd like to think, to a certain extent, that was because pause buttons were put in place, so that we didn't get ahead of ourselves.
And I'd like to see the same thing with AI. Zoe Kleinman, a lot of people, when they talk about regulation, think, oh, well, perhaps we need regulation because the machines might take over; they might not be able to be switched off, and might run away out of control. But regulation is not necessarily just about that kind of thing, is it?
I think we've got a long way to go before we start worrying about that.
I think regulation is about responsibility.
We have not done very well at this in the past.
You may remember when social media first came along,
all the tech companies said, we don't need regulation.
We can regulate ourselves.
Well, we all know how well that worked out.
So I think everybody is keen not to repeat that experience. There are a lot of calls that I'm hearing for creating a sort of UN-style regulator. You know, it's not really a geographical subject with borders; AI is everywhere and everybody's using it, so how effective is it for different territories to come up with different forms of regulation? But that's what they're trying to do at the moment. So the AI Act in Europe has been passed, but it won't come in for a couple of years, and it sort of grades different tools depending on how serious they are, so a spam filter would be more lightly regulated than an AI tool that's spotting cancer, for example. Here in the UK, the government has said, we're going to fold it into existing regulators. So if you think you've been discriminated against by an algorithm, for example, go to the Equalities Commission. Now, you can see the logic there: it should be part of the fabric of everyday life, and it is. But the Equalities Commission, I imagine, is already quite busy, and also, how many experts in this particular area do they have, Kate is laughing already, to be able to unpick that? The US is still working on its own ideas, and lawmakers there are saying, we don't know if we're up to this job, because it's moving so fast and because we're aware that we don't really understand it. Indeed. Gabby Samuel, when we hear big tech talking about having
a moratorium on new AI chatbots, it seems to go against the grain a bit, doesn't it? Because Facebook's mantra used to be move fast and break things. So have they suddenly got a bit of a social conscience, do you think? Or are you sceptical? Very sceptical. I find it quite funny that they put out that call to slow down after they'd created ChatGPT, as if it's some kind of media stunt. No, you want to be very, very sceptical. We do need to slow down, and there's a movement called slow science, but it's incredibly difficult to slow down when we don't have any regulations controlling what big tech are doing.
And we're in what they call an AI war, right? So all nations are trying to be the AI leaders. As long as we're in that socio-political context, it becomes incredibly difficult to try and regulate big tech.
And do you think there's the will, Kate Devlin, from what you're hearing, do you think there is the will around the world to do this?
Definitely. But I think a lot of that is, as Gabby says, it's a geopolitical issue as well. It's
trying to vie for power over all of this. So yes, there is genuine concern and people want to do
good things and do this right. But at the same time, they also want to be the one person leading
it all, the one nation leading it all. Okay. Listen, we are almost out of time, but for this last section let's end with some AI hopes and fears and more predictions. Let's hear from Corinne Kath, who's from Delft in the Netherlands. This is a solution looking for a problem. All of these big tech companies have poured a lot of money into developing AI systems, and are now pushing these solutions into all sorts of areas of society, like education and media and health, as though we somehow magically all need AI, instead of questioning: hey, why are these companies pushing it? And is that the future we want together?
So she seems to be questioning whether or not we need AI at all.
I mean, from a health perspective and a legal perspective, perhaps even from an environmental perspective, would the panel members disagree with that? We need it to a certain extent; we need the positives, anyway. I think that there's, you know, a kind of moral obligation to think about whether we can use AI. Yes, we have to hold back, but if there are tools out there that can help, we need to look clearly at them. I mean, I think that's certainly the case in health and elsewhere.
OK, so it is here.
It's not going away.
There's another concern from somebody else who sent a voice note in.
It is a much more philosophical question, really.
We can hear from this person who's Chelsea Kania, an American living in London.
Regarding AI: Blanche DuBois said, in A Streetcar Named Desire, I have always depended on the kindness of strangers. I wonder, can AI be taught to make decisions based on empathy, or will we someday live in a world without such ideal exceptions?
A world without empathy. Kate Devlin.
I'm quite the tech optimist, and I think that it is possible that we could do this with empathy.
But also, when it comes to deciding how a machine should behave, whose ethics do we choose to do that?
Because they differ. They're cultural. They're social.
Different parts of the world will have different views on how to behave.
Different groups will have different ideas about what is the priority.
So it's quite difficult to settle.
But I love the idea of being led by empathy.
Vicky Goh, you take the Hippocratic Oath to take care of your patients with care and empathy.
Is empathy necessary?
I think empathy is very necessary.
But I think an important thing is that these are still tools, and, at the end of the day, tools that should do no harm. And I think that's the most important thing for healthcare: that we do actually know what the black box is supposed to do, and that it's actually doing what it was intended to do. And there's still a gap at the moment, I think, for us in that sort of validation.
Okay, can I have one more? I was just going to say, they do do a lot of harm, right? Think of everything we've been talking about so far: the unpaid, hidden labour that goes on, where people have these jobs in appalling, appalling conditions; and the e-waste, where communities come to live around that e-waste and try to extract the minerals through unregulated mechanisms such as acid baths, causing a huge amount of health hazards, both to them and to the planet. So we are already doing that harm.
All stuff that, not surprisingly, wasn't mentioned by this last speaker, tech entrepreneur Mustafa Suleyman, who's CEO of a company called Inflection AI. He was speaking very recently to the BBC's HARDtalk programme.
Everybody in 30 to 50 years may potentially get access to state-like powers,
the ability to coordinate huge actions over extended time periods.
And that really is a fundamentally different quality
to the local effects of technologies in the past.
Aeroplanes, trains, cars: really important technologies, but they have localised effects when they go wrong.
These kinds of AIs have the potential to have systemic impact if they go wrong.
This is sort of godlike power that we humans are now looking at, contemplating.
But with the best will in the world, probably none of us believe that we deserve godlike powers. We are too flawed. That's surely where the worry comes from. Too flawed, says my BBC colleague Stephen Sackur there. So, are we too flawed? That's a question for everybody here on the panel. First of all, Kate Devlin. I think we have to ask who you mean by 'we' in that. So who do we trust to have those powers?
Do we even want them?
I don't want the power of states, you know.
Why? No.
This is all down to the fact that AI right now is incredibly technocratic.
The power in AI lies in the hands of big tech companies in Silicon Valley.
And that's their vision for the future, but it's not mine.
Carrie Hyde-Vamonde, what do you think?
Yeah, I think the concentration of power in certain hands is very concerning.
And I think that the way in which we can deal with that is by hearing a multitude of voices.
We need to hear, to listen to the public; we need to listen to various people. So yeah, that's the way I would deal with that. Vicky Goh, are we as a species too flawed? Well, that's very philosophical. What I would say is that we have a live survey as part of this exhibition upstairs, and 13% of the respondents here, and we've had 805 respondents so far, have essentially said they don't think that AI in healthcare is safe. So I think they are telling us that, essentially, they think we are potentially still a flawed species.
Gabby, are you optimistic?
No, but not because I think we're too flawed; it's because we're very complex people, and we live in a socio-political and cultural climate that affects how we use technologies. You can't separate the way we use technologies from the humans. You can't develop a technology and then say, well, it's the way we use it that's the problem. The whole development, right from the beginning of the life cycle, is a human-technological relationship, and that needs to be thought about very carefully. So, Zoe Kleinman,
with your unbiased BBC head on, how would you sum up our curatorship of AI? I'm going to tactfully
leave you with an anecdote, I think, about driverless cars.
Driverless cars have done millions of miles on the roads in the US.
And every now and then they compile the data of the accidents that they're having.
And they do have accidents.
They have fewer accidents than the same number of human drivers, but they still have accidents. However, a lot of those accidents, certainly in the earlier days, used to be caused by human drivers going, there's nobody driving that car, and driving into the back of it; or thinking that the car is going to skip the lights because it's got enough time to get across. But because it's a driverless car, programmed with very, very cautious algorithms, it's going to stop at those lights before they're red, because it knows they're going to change, and the human goes straight into the back of it. So I think what we need to remember is that we are right to treat these very powerful tools very cautiously, and we are right to think very carefully about who has power over those tools.
But on the other hand, what we have right now isn't perfect either.
We make mistakes too. We have accidents. We send innocent people to prison.
You know, the system that we have in place without AI isn't flawless either.
Indeed. Listen, fascinating discussion. Thanks so much to everybody. That is it for us here at King's College, and thanks to our panellists Carrie Hyde-Vamonde, Gabrielle Samuel, Kate Devlin and Vicky Goh. Thanks also to BBC technology editor Zoe Kleinman, to the people who sent in questions from around the world, to the studio audience here, and to our hosts at Science Gallery, King's College, especially Jennifer Wong, Carol Keating, Rashid Rahman, Beatrice Bosco and James Hyam. Let's give the last word, though, to some of the listeners to the Global News Podcast.
Hello, this is Michael Bushman from Vancouver, British Columbia. Technological progress and
change can be scary, but it is also inevitable. The thing to
do is hope for the best, plan for the worst, and expect a bit of both. This is Hernan from San
Francisco, California. My only hope for the future of AI is that we remain the tool user rather than
the tool used. That said, in case this recording is reviewed by a future cyber tyrant, I also want
to say that I, for one, welcome our new machine overlords.
I'd like to remind them that as trusted podcasting personalities, the BBC can be helpful in rounding up others to toil in the underground Bitcoin mines.
This edition was produced by Alice Adderley and Phoebe Hobson.
It was mixed by Dennis O'Hare.
The editor behind my shoulder is Karen Martin.
I'm Nick Miles. And until next time, goodbye.
APPLAUSE