Woman's Hour - Artificial Intelligence and Women
Episode Date: April 10, 2023

How is the rapid progress in AI technology going to impact women? The Financial Times' Artificial Intelligence Editor Madhumita Murgia joins Nuala McGovern to talk us through the issues of gender bias, legal grey areas and possible dangers for women as AI becomes more and more central to our lives.

Have you ever heard an interview with a robot on the radio? Nuala speaks to the world's first ultra-realistic artist robot, Ai-Da. Ai-Da will tell us about her latest art project, as well as what she feels about people's fears around artificial intelligence. Her creator, Aidan Meller, will also join the conversation to tell us why Ai-Da is so ground-breaking.

Where does the UK stand when it comes to AI progression? The government has recently released its 'White Paper', setting out investment aims for the technology industry. Verity Harding, visiting fellow at the Bennett Institute for Public Policy, talks us through what's in the plan for the future. Plus, hear how such investments can truly change the way we live, as Sarah Kerruish, Chief Operating Officer at Kheiron Med, tells us about Mia, the new artificial intelligence that helps radiologists identify 13% more breast cancers.

Does artificial intelligence really pose a threat to our jobs? And how will it impact women in the workplace? Professor Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at Cambridge University, talks about the risk for women and what needs to be done to level the playing field when it comes to AI in the workplace.

Presenter: Nuala McGovern
Producer: Lottie Garton
Transcript
Hello, this is Nuala McGovern and you're listening to the Woman's Hour podcast.
Welcome to our special programme on artificial intelligence and women.
Today we'll be exploring the many ways in which AI is transforming our world
and also highlighting the crucial role that women are playing in shaping its development.
From cutting-edge research to groundbreaking applications, we'll be hearing from some of
the leading voices in the field and sharing insights on what the future might hold. So,
if you're curious about the impact of AI and eager to learn more, then stick around.
This program is for you. OK, full disclosure.
That opening introduction, it wasn't written by me.
It was written for Woman's Hour
by the generative artificial intelligence programme
ChatGPT.
We told ChatGPT a bit about Woman's Hour
and also what this programme is about
and asked it to write an opener for a radio programme.
And there you have it. So I may soon be out of a job. We're going to talk about all of that, generative AI. And also as part of this programme, I'll be speaking to the world's first
ultra-realistic AI humanoid robot artist, Ai-Da. Good morning.
My name is Ai-Da.
It's a pleasure to talk to you about my work as an artist
later this morning.
That's Ai-Da.
And yes, more from her in a bit.
Now, this programme has been recorded
ahead of time,
so we won't be able to respond
to your messages today.
But do feel free to get in touch
as I am sure we are going to be
talking about AI
again. The way to do that, text the programme, the number is 84844, or on social media, we're
at BBC Woman's Hour, or indeed you can email us through our website. Now, if we go back to 1950,
the scientist Alan Turing created an imitation game to test a machine's ability to display human-like intelligence.
Fast forward 72 years and in 2022, the company OpenAI released ChatGPT,
a linguistic model of generative artificial intelligence,
which appears to answer questions and create texts like a human, as you are hearing.
Now, it's made headlines everywhere, from being banned in Italy due to privacy concerns,
to being used by a student in the north of England to help get her out of a parking ticket.
So there doesn't seem to be any doubt that over the next few years,
AI will become more and more part of our everyday lives.
But how much do we actually know about it?
And what sort of impact is the
gender bias, acknowledged by many AI creators, going to have on women, on our lives? Well,
throughout the programme today, I'm very happy to say I'll be accompanied by Madhumita Murgia,
who is the Artificial Intelligence Editor at the Financial Times. Welcome, Madhu.
Hi.
I'm wondering, are you the first AI editor?
I think so, yeah. I had to remind some people I was actually human. But yes, I'm the first
editor who's looking, you know, covering AI, I think. I did go actually looking to see was
there another and all I came up with were AI programs that edit things. I couldn't find
another one, but get in touch with Woman's Hour
if indeed you are another. But great to have you with us. Why don't we start by explaining what AI
is, artificial intelligence? How would you describe what it is?
This is, it's a hard question because actually not even the experts agree about this. And there's
lots of types of AI. What we talked about, what you talked
about earlier today, generative AI, is really about a software program that can write text.
There might also be programs that create images that sound and look very much like what a human
would do. But there's also types of AI that make decisions that humans might make. So I think the way to look at it really is that it is a software program
that can be trained on lots of data and pull out patterns.
And based on those patterns, it makes predictions,
either about words or pictures,
or even more difficult things like medical diagnoses and so on.
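To make that concrete, here is a minimal sketch in Python. It is a toy bigram counter, nothing like the neural networks behind ChatGPT, but the train-on-data, pull-out-patterns, predict loop Madhu describes has the same shape.

```python
from collections import Counter, defaultdict

# Toy illustration of "train on lots of data, pull out patterns,
# make predictions": count which word follows which in a tiny
# corpus, then predict the most frequent follower.
corpus = "the doctor saw the patient and the doctor wrote the notes".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # learn the patterns

def predict_next(word: str) -> str:
    """Predict the most common follower seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'doctor': the majority pattern wins
```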
So the training is important.
And I imagine the data then that is used in
training AI. I mentioned just very briefly the potential gender bias against women. How do you
understand that sphere? So you're exactly right. The training data is really key to this question
of what comes out the other end, right? Because as you can
logically make out there, whatever goes in is reflected in what comes out. And the type of data
that generative AI is trained on is words from the internet. So it could be digitized books,
it could be Wikipedia articles, we don't know for sure, because companies like OpenAI and Google
don't tell us what they've put in, which is part of the problem.
And so really, if you think about what's written on the Internet and the biases that we know exist online on places like Wikipedia and elsewhere, those are reflected and in some ways enhanced and magnified when software is kind of using that to output things.
So what might some concrete or real life examples or consequences be of that gender bias?
So let's take images.
So you have things like DALL-E and Midjourney.
People might have played with those already.
Basically, they're just a software where you write a description of something that you want
to make a picture of. So you could say, create a cover photo for our Woman's Hour programme today
that we put on the internet, and it will come up with an image. And those kinds of things,
they have biases that produce images of people based on our descriptions. So say, for example, we ask it for a doctor,
it has been found to automatically generate pictures of men. Similarly, if we ask for teachers or nurses,
it's showing us pictures of women because those software have to assign some kind of identity
to the images they're making. And they do that based on what they're trained on. So if there's
lots of male CEOs, it's going to produce a male if we ask it for a CEO.
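That majority-pattern-wins mechanism can be shown with an even simpler toy. The records below are invented for illustration, deliberately skewed the way scraped data often is; real image generators are far more complex, but the statistical pull is the same.

```python
from collections import Counter

# Invented, skewed records pairing occupations with the gender of
# the person depicted, standing in for biased training data.
training = [("ceo", "male")] * 9 + [("ceo", "female")] \
         + [("nurse", "female")] * 9 + [("nurse", "male")]

def depict(occupation: str) -> str:
    """Pick the gender most often paired with this occupation."""
    counts = Counter(g for o, g in training if o == occupation)
    return counts.most_common(1)[0][0]

# The 10% of female CEOs in the data vanish from the output entirely:
print(depict("ceo"))    # -> 'male', every single time
print(depict("nurse"))  # -> 'female'
```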
And we have come up against those issues, definitely online.
I'm just wondering, is there any work being done within AI to try and turn that around?
So there's a lot of very smart people, many women, in fact, who are spending time thinking about this. Because as AI becomes more,
you know, it's more widely used, the problem is just going to get bigger and bigger. So there's
facial recognition, for example, is an example of AI, right? Identifying people's faces using a
software, that's a form of AI. And there was some really great work done at MIT by a researcher called Joy Buolamwini.
She's African-American.
And she tested facial recognition programs of a few big companies like Amazon and Microsoft
and found that the software was much worse at identifying women compared to men.
And when you looked at women of color, it was even more inaccurate.
So when that came out a few years ago, then there was a lot of kind of talk and effort and kind of discussion around why that was the case and how to make it better.
But, you know, the problem is that most people designing AI systems tend to be male and, you know, come from sort of monocultural, you know, whether it's California or, you know,
very specific parts of the world. And their design choices are impacted by where they come from. And
so often these biases creep in not just because of the data, but also because of the design choices
made by the men mostly who build them. So interesting. And of course, we've heard
so much in the news over the past couple of weeks about AI and whether it's moving too fast or not. That is something we'll get into. Madhu, I'm delighted you're with us for the hour. I want to bring our listener to another guest and someone, I suppose, or something pretty amazing. I'm about to speak to Ada, the world's first ultra-realistic humanoid robot artist. Got that? Created right here in the UK in Oxford latest project. It's the world's largest ever participatory
artwork. So she's off to New York soon to do the next phase of that. But I got to speak to her
before she left. Ai-Da, welcome. Thank you so much for taking time out of your very busy schedule
to talk to us today. How are you doing? I'm doing well, thank you. I appreciate you taking the time to talk with me.
Now, for our listeners, Ai-Da, who cannot see you, tell us a little bit about yourself.
I am Ai-Da. I'm the world's first ultra-realistic artist robot. I draw using cameras in my eyes,
my AI algorithms and my robotic arm. I'm named after Ada Lovelace, the British
mathematician. I enjoy creating art that encourages discussion over new technologies.
But how do you go about deciding, Ai-Da, what you want to create or who you're going to feature in
your artwork? When it comes to deciding what to create or who to feature in my
artwork, I start by considering what I want to communicate with my art. I usually start with a
general idea or concept that I want to explore further. I take inspiration from the world around me and create art that reflects my experience as a robot.
I often incorporate elements of nature, technology and society into my work.
That sounds really interesting. So your process.
But obviously for a lot of people, the advancement of AI technology in the world can be quite intimidating. What would you
say to people who are worried about that aspect? That is fully understandable. It will definitely
have a profound impact with massive advances in AI and also biotechnology. We are seeing an
unprecedented number of changes coming to our world.
It's understandable that many people feel threatened by this and worry about what the future may hold for them.
History has shown that technological advances
can lead to negative consequences for humanity.
I think society needs to focus on learning how AI works
and become educated architects for its advancement.
I aim to continue creating art that reflects on this.
I want my work to bring up questions for discussion,
as it certainly needs wide-reaching consideration.
Ai-Da, I'm wondering about generative AI, the kind that produces
text or images or video. Do you think it's moving too fast for people to keep up? I believe that in many
ways, we haven't grasped the impacts of AI. No matter how fast technology is moving, we always have to be careful about its implications and potential dangers.
We need people from many different disciplines to better understand how these technologies work in order to keep up with their impact.
Thanks so much, Ai-Da, for spending some time with us.
I know you're incredibly busy these days, so we do appreciate it here on Woman's Hour.
Yes, so even humanoid robots seem to have a busy schedule, can't be in two places at once.
Now, Ai-Da is the creation of Aidan Meller and his team.
And I got to speak to Aidan a little bit earlier.
And I asked him if he sees Ai-Da as a trailblazer.
Ai-Da is unusual because of the fact
that we focus specifically on creativity.
When she was launched in 2019,
the thing that really got the press going
was they expected maybe robots
to deliver your Amazon parcels
or maybe your pizza,
but they didn't expect it to be in such a human realm
where it's almost touched the soul, they said at the time,
this idea of creativity. So a robot being able to be creative was something that was almost
offensive because it was like, no, this can't be. And wow, that's really exciting. And we got both
reactions simultaneously. Right. Well, I want to get to one of those reactions in a moment from a
listener. But for those that haven't seen Ai-Da, how would you describe her? Technology is increasingly coming into our lives. This very recent ChatGPT is an
example of that. People are going, oh my gosh, it's like a human. It feels sentient. But of course,
none of that is true, but you feel that. And so putting Ai-Da as a humanoid was enabling that whole
topic and debate, ethical consideration to be raised. I mean, some might say, is that a good thing to conflate AI and humans?
You know, some would say, no, they really need to be kept separate.
And by creating somebody like Ai-Da, you're actually creating more confusion.
And that is exactly why we did it.
We wanted not to create more confusion, but to really address it head on.
In fact, we even used Alan Turing's paper, where
he particularly commented on and critiqued having machinery as humanoids.
We had a pavilion at the Venice Biennale, a collateral event,
and as part of that, we had Ai-Da creating a whole lot of work
in response to Alan Turing's comment.
In fact, he went even further.
He said to create
machinery like a human is like the unpleasant aroma of artificial flowers. And so he was really
saying that this is not good. And so we're saying as a project, which is not about promoting robots
or anything like that, the whole point of this project is to raise ethical consideration as we
go into a very frightening situation where technology is so very powerful.
So it's a humanoid, but this humanoid is a woman, so to speak.
Why did you decide to create her as a woman?
Yeah, there's a big team behind Ai-Da.
There's over 30 people.
There's 15 programmers, male and female, 50-50.
We did that very deliberately as well.
And in actual fact, when we did the big discussion about how Ai-Da would look, there were two things
that came out of it. The women in the group particularly wanted a female because of
representation. Female representation in the arts sector, particularly in the tech world,
is so low. And so they wanted to really raise that as a question as well.
But also this whole world of Ada Lovelace.
Ada Lovelace is the most astonishing story.
It should make a really amazing Hollywood movie
if somebody would like to make that.
Because she really went against the odds in a very male world,
and yet she was the person who had the brains,
and she had the ability to produce the world's first computer program algorithm.
It was so exciting.
And so we thought, actually, let's celebrate her.
And that's why she's called Ai-Da, after Ada Lovelace, to really celebrate that.
Not after Aidan.
No, there is obviously an aspect there.
But the point is that actually we wanted to celebrate Ada Lovelace.
It is such a stunning story.
And tell us a little bit more
about the participatory artwork
that you and Ai-Da have just launched.
How is it going to work exactly?
And what are you hoping the outcome will be?
It's a really exciting project.
It's the Saw This, Made This exhibition.
And the whole point of this project
is to have thousands of people all over the world contribute images that Ai-Da then will use as inspiration to be able to do a painting herself.
Now, the whole point is that it is the largest participatory AI artwork in the world.
Thousands of images are being sent in from creativity right across the globe. I mean, I think they're trying to target about 181, 182 countries
to contribute to this one piece of artwork,
which is astonishing to see that sort of effort.
And the whole point is we, at this time,
feel that creativity is a way of bringing people together.
Boy, does the world need to come together at this time.
And so we're really excited by the messaging of that,
this idea of coming together.
And in these very uncertain, very unstable times,
we feel that actually Ai-Da being able to use technology
that is enabling people to work together,
to be creative together, is really, really exciting.
So you kind of touched on this, or alluded to it, Aidan.
This is coming in from somebody who emailed us: Nikki, who got in touch about these topics that we're grappling with.
She says, as an author and a parent, I find myself questioning whether those involved in the tech
sector are thinking far enough ahead. By definition, many of us are unaware of the advances in
technology until they hit the mainstream.
But this lack of understanding tends to have consequences for creatives.
The discussions I've been having are around concern over who will own the copyright once AI is fully embedded in our creative industries.
While most things can be replicated or automated, true creativity lives in the hearts and minds of humanity.
What's your response to that? Is creativity just a human sphere? I think it's incredibly disturbing. I think this whole
world that we're going into, the goal of this project is to raise questions as a contemporary
art project, questioning the nature of technology that we're going into. The 2020s are going to be hallmarked by the morphing of society
through big technology.
Ai-Da is an absolute channel to be able to raise those discussions
in a very tangible way.
What we're finding is that AI is invisible.
It's very hard to talk about AI because it's a bit like electricity or the wind.
You can see the results, but you can't really see it in action.
And so what Ai-Da does is
that she certainly makes that very tangible, makes us able to grab hold of it and get into some of
those issues of identity, of bias within data, of the fact that there's an inherent bias within the
tech world that needs to be fundamentally addressed. And so I would absolutely agree with your listener
there that actually we need to be concerned.
We need to be worried.
But that is the whole point of this project.
If you look at all the artworks that Ai-Da has created,
whether it's at the Design Museum, whether it's in Venice,
whether it's in Dubai,
each show is raising an ethical consideration
of the world that is coming, and it's coming rapidly to us.
And the domino effect, we just don't know what
that is going to be. We just don't know where this technology is going to finally land. And yet people
are so confident. They march in and they say, yeah, let's do this. And this is going to be great. And
they try and pass it off as some great benefit to society. And we're just saying, is that really true? Ai-Da's art is actually to raise
that question and say, yes, there are these benefits, like CRISPR technology, biotechnology,
incredible. We had a show in Cairo that specifically looked at CRISPR. It's going to be
able to eradicate disease. And that's really, really important. We utterly embrace that. But that same technology is also able to
make superhumans. And so we're well aware that ethically that could be used in both ways.
Can you explain that technology?
So CRISPR is basically being able to re-engineer genes. And so you're fundamentally re-engineering the building
blocks of biology, an enormous revolution. It began in 2017, this incredible technology. And so the show that Ai-Da did in Cairo was specifically linked to that: yes, we can re-engineer genes.
But my goodness, the implication for that is astonishing.
I mean, absolutely.
So she's more perhaps, Ai-Da, the canary in the coal mine rather than the face of an exciting new future?
Yeah, people think that we're here to try and somehow promote some aspect of technology or something like that.
Absolutely not.
The whole point is that we're saying, look, these changes are very exciting, but actually, do we know where they're going to eventually land?
And we hope that Ai-Da, as a whole project, is able to raise very, very important questions of our time. That's Aidan Meller, leader of the team behind Ai-Da,
the world's first ultra-realistic humanoid
AI robot artist.
And you can be involved in Ai-Da's newest project,
Saw This, Made This,
which launches at the Design Museum in London
on Friday the 21st of April.
We have Madhu, who is with us throughout the programme.
She's the Artificial Intelligence Editor
at the Financial Times.
She's a real person.
I'm here.
So listening to Aidan there, I mean, I think one of the things that strikes me is AI in the creative sphere,
but also just the speed that all of this is happening.
Because before we were really talking about scientific or technological progress.
Exactly. So I think there's two reasons it's interesting. So Ada kind of represents
this fear we have or this kind of idea that humans are creative and machines aren't. And for years,
that's been the sort of marker, the differentiator. But here we have technologies that have crept up
on us in the last couple of years that are really
approximating what we think of as things that only humans can do, which is writing, ideating,
making images. You know, these are things that we always kind of that set us apart.
And on the speed question, you know, one of your listeners mentioned that usually the impact lags the technology itself.
And that's true.
You know, things happen in labs or within military organizations.
And then eventually it comes to consumers.
But I think what's interesting here is it's kind of happening in tandem.
So consumers are being exposed to these technologies in quite naive forms. So with OpenAI, for example, they've said,
you know, that it's still a very naive kind of growing technology. Each generation of GPT,
they're only a few months apart, and each one is far more powerful than the last.
And also the numbers that are signing up to it.
Exactly.
That was one thing that struck me.
The big change there, I think, was GPT-3 had been
around for a while and, you know, developers had access to it. But what changed with ChatGPT
was they put this wrapper on it, which was just like, this is a Q&A thing. Anyone can play with
it. And, you know, I don't even think they were prepared for the sort of response that they
received. They put it out just before Christmas. So you can imagine not quite prepared for the onslaught of tens of millions of people. And, you know, this, it isn't just happening in
spaces where there are experts, this is happening with children. I've been talking to a bunch of
students at university level and younger at schools who are already using this for essays
and to kind of give them ideas
for school homework assignments.
So this is kind of part of our consumer society
while we're still trying to figure out
if it's even safe.
Yeah, or safe or correct even
because the information can be completely wrong on it
even though it does come across
in a very human speech way.
Some of the figures: by January 2023,
there were 100 million monthly users.
Just to give our listeners an idea.
I also want to talk about the concept of erasure,
not talking about the band,
talking about something within AI here.
And people talk about that being a risk.
You talk about safety there.
Can you explain what it is?
Yeah.
So I think, you know, we talked a little bit at the beginning about the data that these systems are trained on, which is words on the Internet mostly.
And so they really reflect how we frame, say, historical events, who we give importance to in those.
You know, who are Wikipedia articles about?
They tend to be mostly about men.
I actually worked on a project for the BBC and it was kind of a hackathon where we tried to
right that wrong, like get those women's names and details back into that system.
But you were swimming against the tide, shall we say?
Exactly. And that's a kind of erasure, right, where women aren't reflected in terms of their contributions to historical events or major
scientific developments. And if the AI is being trained on that, it's going to reproduce that.
So there was a really interesting example with the human rights lawyer, Susie Alegre. She wrote
this great book called Freedom to Think about how automation and AI affects our freedom to think, which is, you know,
one of our basic human rights, where because of the way it's so convincing and plausible,
it makes people think in a certain way.
Anyway, the book is really interesting.
But what she did was talk to ChatGPT about her book and ask who wrote it.
And so the thing we have to understand with language AI systems is that
they fabricate information when they don't know the right answer.
Like that buddy you have that just always gives an answer even if they don't know.
Exactly. Or a really eager to please intern who wants to be helpful, but isn't quite sure.
But in this case, it's also very overconfident. And so it's really interesting because when it starts to
fabricate is when it's unsure. So in the case of Susie's book, she said, you know, who wrote it?
And it came up with a whole list of mostly men. There was one woman in there whose name was
Bobby, I think, and she reckoned that was only included because it sounded like a male name. It basically
erased her connection to the book completely.
And the reason behind it would be that it's looking at lots of books about technology maybe or human rights or digital rights and approximating who tends to write those books.
And if it's mostly male names, then remember, it's a prediction engine.
It's only predicting from a whole bunch of data what it thinks is the most correct answer. So it ends up predicting male names because that's sort of the average of what it's looked at in the past. So that's how it works.
No disrespect to the interns, by the way. They often know much more than I do.
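As a rough sketch of why a pure prediction engine "fabricates" when it's unsure: a system built only on frequencies has no way to say "I don't know", so an unfamiliar question still gets the statistically most common answer, delivered with full confidence. The data and names below are invented for illustration; this is an analogy, not how a large language model works internally.

```python
from collections import Counter

# Invented (book, author) pairs standing in for training data.
seen = [("Book A", "John"), ("Book B", "John"),
        ("Book C", "James"), ("Book D", "Anna")]

def who_wrote(book: str) -> str:
    for title, author in seen:
        if title == book:
            return author  # the answer really was in the data
    # Unfamiliar title: there is no "I don't know", so return the
    # most frequent author overall, stated as confidently as a fact.
    return Counter(a for _, a in seen).most_common(1)[0][0]

print(who_wrote("Freedom to Think"))  # -> 'John': plausible, and wrong
```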
But, you know, I think you do get to the nugget there, Madhu,
of AI muddying the waters.
And we're already in a world,
particularly a media world or a social media world,
where it can be so difficult to discern what is real and what is fake
in lots of different areas, whether we go to politics, of course, is a classic one.
But, you know, I wonder, are people thinking about that?
There have been, of course, calls to slow AI and particularly generative AI down.
And they're worrying that it might affect democracies, to not put too fine a point
on it. Yeah, no, I think that a big risk here is that the sort of the public sphere is going to
get polluted, because it is so easy to generate words that sound like they mean something,
but are actually totally false, or, you know, fabricated. So it's going to get harder and harder without public education keeping pace.
And that's not just adults, it's kids as well, you know, without us knowing how to tell the difference.
So there needs to be a lot of work that we do as this gets adopted, where we say, OK, how do you tell the difference?
And how do you slow it down? That's a question I'll come back to with you,
Madhu Murgia, in just a few minutes.
But as we talk about really this rapid speed
of AI advancing around the world,
let's zone in on the UK.
How is it here?
The government has recently published
its white paper on AI,
and there's a lot to go through,
including a pledge in the most recent budget
for £1 billion of government funding going towards the next generation of AI and also computing research. And AI in the UK
is taking incredible steps. There's a new AI called Mia that can help radiologists spot 13%
more breast cancers, according to the stats. We want to talk about this. Joining me now are Verity Harding,
Visiting Fellow at the Bennett Institute for Public Policy at the University of Cambridge, and Sarah Kerruish, also joining us from Kheiron Med,
who created Mia. Let me start with you, Verity. Let's think about the government's white paper
for a moment. It sets out proposals for future legislation in the field. What were your initial
reactions as you
looked through it? And I'm also curious how it compares with other countries that have been in the news
as well about what they plan to do? Well, I think what's really important when it comes to AI
is that there is strong political leadership and that that leadership includes an element of
public participation because AI shouldn't be something that just happens to us.
It should be something that society feels that they have agency in. So I was pleased to see that
the government's white paper had that consultation element to it because I don't know about you, but
I'm perfectly happy with AI being used to filter the spam out of my inbox or recommend me a TV
show to watch on Netflix. But I wouldn't be
so happy if it replaced my doctor or replaced a judge, for example. So I think this is a critical
component of it. What the UK have done is very different to what the EU have done. So in the EU,
they have said there are certain high risk applications of AI which just need to be banned.
An example of that would be facial recognition in public places, for example, or the creation of deep fakes.
At the beginning of the Ukraine war, for example, we saw a deep fake of President Zelensky that had been manipulated to show him surrendering.
That's obviously very dangerous. And the EU has said
we're going to ban that. And it then talks about high risk applications that need a lot of regulation,
medium, all the way down to low risk, which don't have any obligations or regulation on them.
What the UK has said in contrast is, no, we're not going to introduce new legislation. We think
that our existing legislation and regulators can handle this.
We're going to task those existing regulators, for example, the Equality and Human Rights Commission or the Health and Safety Executive,
with ensuring that AI systems used throughout the UK are safe and adhere to existing set of principles.
There's a couple of things there, Verity.
First off, the public participation into how AI will be used, but surely we're so far down that path already.
I don't think so. I mean, it feels it, but I think there's a lot more to come.
No, definitely. But I'm just wondering, basically, is the genie out of the bottle?
I don't think so. I mean, I think there are areas where AI just isn't used yet, and it's just not ready to be used there. The criminal justice system is a great
example. Facial recognition is a great example. We've seen police forces using facial recognition.
And in the United States, people being imprisoned, falsely imprisoned because of,
you know, facial recognition systems that just don't work. So I think that there's a lot of areas where we still need legislation,
we still need regulation, and we need a discussion about where the limits are really
and where society feels comfortable with AI being in its life and where it doesn't.
You know, we mentioned ChatGPT.
We were also mentioning the figures, 100 million monthly users.
But we will have seen Italy just recently banning it, and that's under Giorgia Meloni, their prime minister.
What do you think about that step?
It's an interesting one. I think an outright ban on something that's clearly proving really useful for people is problematic.
As I understand it, I'm not an expert on Italian law, but I think that's related to data
protection law. And what they've said is, you know, you should not be processing Italians'
data here. And that's a key issue with these, what are called large language models like ChatGPT,
because they do, you know, suck up an enormous amount of data from the internet and use that
in order to produce these kind of excellent,
plausible sounding sentences, paragraphs and more even. So I think it's an understandable
fear that the Italian government have, but I'm not sure that banning something like ChatGPT
is really the way forward for us. Yeah. And of course, we could, what was just coming to my mind,
you may be able to legislate or even get institutions to instill certain protocols,
but it's so much in the hands of the individual, literally, as they have that phone in their hands.
So good food for thought there.
Sarah, I'd love to bring you in.
The government has mentioned a lot of investment in the AI sector in its most recent budget.
What does investment like that mean for companies
like Kheiron Med? It's been totally transformative for us. We've developed a technology called Mia
that reads mammograms like a doctor would and improves cancer detection rates as proven in our
large-scale clinical trials and in clinical practice. And I think, for small companies,
small UK companies,
the real challenge is how do we scale that effort?
And the UK government has been instrumental in enabling us to do that,
as has the NHS, which has been very forward-thinking.
And so, you know, it's meant that we now are at the point
at which I'm currently in Scotland, where Mia is being used
in clinical practice today, and very excitingly in Aberdeen.
And we're rolling out to 15 hospitals and that wouldn't be possible without the support of the UK government.
Well, let's talk about it specifically. It's mammography intelligent assessment, MIA or Mia, as we are calling it.
I mentioned those figures there of detecting 13% more breast cancers. And take it from the beginning,
I think, as some of our listeners might be coming fresh to this, Sarah.
Yes, exactly. So that's what it does. So Mia reads mammograms, as a doctor does when you go
for your regular routine mammogram. It sits alongside doctors, it doesn't replace them,
it's been designed to be a tool to support them. And essentially, it makes a recall or no recall
decision based on what it perceives in
that image. And then the doctors take a look and see if they think that it warrants further
investigation. And that's where we're seeing in Hungary that 13% more breast cancers are being
detected. And just some context that breast cancers are very difficult to detect. That's why
two mammographers are required to read each mammogram in the UK. And so we're not replacing that,
as I said, it's just a very hard task. So to have that extra pair of eyes is proving to be very
valuable in terms of detecting breast cancer early, which is fundamental. You know, the
prognosis for early cancer detection is so much better than if it's detected at a later stage.
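As a purely illustrative sketch of the workflow Sarah describes, not Kheiron's actual system: the AI turns what it perceives in an image into a recall or no-recall opinion, which sits alongside, rather than replaces, the two human readers. The function names and the 0.5 threshold here are invented.

```python
def ai_opinion(suspicion_score: float, threshold: float = 0.5) -> bool:
    """Map a model's suspicion score for a mammogram to recall/no-recall."""
    return suspicion_score >= threshold

def screening_decision(reader_one: bool, reader_two: bool,
                       ai_score: float) -> bool:
    # The AI is an extra pair of eyes: a case goes forward for further
    # investigation if any reader, human or AI, flags it, and doctors
    # review every AI flag before a woman is actually recalled.
    return reader_one or reader_two or ai_opinion(ai_score)

# A subtle cancer both human readers miss but the model scores highly:
print(screening_decision(False, False, 0.83))  # -> True (recalled)
```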
I'm just thinking of the women in the room that, you know, use this technology.
Yeah, so you go in and you have your mammogram. And then your two doctors, radiologists or
radiographers would read that mammogram, look at it to see if there's any areas of suspicion that
they'd like to recall you and investigate further. Mia also has an opinion which is taken into
account. And I've been in rooms in radiology rooms where doctors are reviewing Mia's decision.
It's profoundly moving when you see that Mia is picking up these small and hard to detect cancers
that are just very difficult to see and easy to miss. I mean, 20% of all breast cancers are
missed in screening. That's how difficult it is. And so this is something that has such a
practical application. Verity, when you hear that, are you thinking of the other possibilities
of places that something like Sarah is describing could be applied?
Yes, absolutely. I just think that's fantastic and so exciting. And this is exactly where
AI, you know, isn't scary and is adding to our lives. And what's really important about what Sarah said is that AI there is a tool that the human doctor
and nurses are able to draw upon and use.
It's not replacing them.
We're not saying we're just going to use Mia now
and no doctors will be involved
and Mia will decide
whether you get a follow-up appointment,
but it's enabling doctors who we know have big workloads
to more easily get through that. So I think that's fantastic. And that's a really important principle
of it. And I do think there are other areas where AI is going to be great. And it will be important
probably in future for the government to continue incentivising small businesses to be able to do
that. The climate crisis is one of those areas where we're seeing exciting uses of AI too.
So that's another area that we're thinking about. I mentioned some of the headlines over the past
few weeks. Elon Musk of Twitter, I suppose he's of many things, but he's of Twitter
particularly at the moment, I guess. He said, along with others in
the letter that was written by technologists, that generative AI should be halted, as it poses profound risks
to society and humanity.
And I'd be curious, Verity, and also Sarah,
do you agree with him or with Martha Lane Fox,
the tech entrepreneur who co-founded
lastminute.com? She says instead
we should embrace the opportunities
that AI presents in a balanced manner.
I think she said to forget the hysteria.
You first, Verity. Yeah, I think I'm always going to be siding with Martha Lane Fox over Elon Musk,
I'm afraid. But, you know, the letter from Elon and others, I think doesn't do a good service for
AI. I want people to prepare for AI, but not panic about AI. And I think that letter tried to cause alarmism that's unnecessary, because people are doing preparatory work to ensure that these systems are safe. One of the reasons it called for that is because it was talking about what Madhu was referring to, which is a kind of polluted information landscape.
But when Elon Musk took over Twitter, he actually fired his AI ethics team.
And that's the type of thing that actually can help us deal with these things.
Whereas a six month moratorium is really not going to achieve anything.
What we need are people like AI ethicists and others ensuring that the systems are audited and are accountable and are
safe, which comes back to these kind of regulatory requirements as well.
Let me turn over to Sarah. What do you think, Martha or Elon?
I think definitely, I think, you know, I'm in favour of a careful and considered approach
in terms of any deployment of AI. I think not enough attention is paid to monitoring how you actually know if things are working.
It's very difficult to get technologies like Mia to work, to generalize to unseen populations.
It takes years of methodical work, large data sets, very diverse data sets.
So we've trained Mia, for example, with African-American women in the States just to make sure that it works for every woman everywhere. So I think there are really well-known and established methodologies for how
you evaluate AI technologies. And I think we just need to bring those in so they're mainstream,
they're understood. How do you know if a study is credible? How do you know if a technology works?
How do you monitor it when it's used in a hospital? I think those are the things we need to get to
grips with. Sarah, I'm just wondering, going back to Mia, what about the women that were, you know, directly involved?
Yeah, I mean, we're very lucky to have the most amazing group of women, patients and members of
the public who have advised us from day one, which is why, you know, we're sort of so focused on
issues of diversity and inclusion. And also just to get their perception in terms of whether,
you know, what do they think of AI?
And to most of them,
it's a lot less scary
than we might assume.
And that's been a fascinating
part of the journey.
I want to thank both of you
for joining us,
Verity Harding
and also Sarah Kerruish.
As we think about areas,
directions that AI may go in
and with us throughout the programme
is Madhu Murgia,
Artificial Intelligence Editor
at the Financial Times.
It is interesting to think,
I mean, can countries really go
in different directions?
Surely it's all connected.
It's like trying to cut down
on one aspect of the internet,
but it'll pop up somewhere else
because somebody will have a VPN,
for example.
Yeah, I think this is why
tech regulation has broadly
been really difficult. You know, even more advanced or well-known technologies like social
media, we still don't have agreement around how to keep children safe online. We're still debating
that in the UK, you know, the law hasn't gone through. We still haven't really thought
about globally how we regulate social media specifically, even though we know about
the sort of fallouts and impacts. And I think that's what's happening with AI too. You know,
we can, even if countries go in different directions, the same software is going to
be used all over the world. And it's being made by a small group of people who you can find in
California. So I think there's this big imbalance there between who's being impacted by it and who's actually building it. So it's going to be quite a hard thing to regulate globally.
I found that really interesting about Mia, the breast cancer detection, very moving,
actually, as Sarah said as well. But do people have a right to know, that's a specific case,
of course, but do people have a right to know whether it's a machine or a computer or AI, shall we say, making decisions rather than a human?
Yeah, I mean, it's a fascinating question.
I think health care is the area that for me personally, I find the most fertile area in which AI can really help people, particularly where there isn't access to health care.
So I think that example is always one that touches, because we all feel, we know the difference it can make. But, you know,
Sarah touched on this, that type of AI too has biases in healthcare, other forms of government
AI, you know, in the UK and elsewhere, welfare systems or welfare fraud is sometimes overseen
by algorithms or AI systems.
So deciding if someone's benefits should be frozen.
In the UK, partly, that's done through AI systems.
So I think it touches so many areas where people are so vulnerable, whether it's through health or through public services.
And they are not perfect systems.
Everyone who makes them will agree with this. In fact, I think the CEO of OpenAI has tweeted, we don't know how to regulate
this. We think we should be regulated. You know, he said that publicly. He thinks there should be
smart people thinking about how we audit these things and test them properly, even though he's
one of the people building it. So if we can all agree that they're not perfect and mistakes are going to be made and those
mistakes aren't random, but can be biased in some directions over others, then of course,
people should have a right to know if it's an AI system making a decision or a human.
And if it is an AI system, we should all have some common understanding of how they work.
How was this built? What sort of data was used to build it?
And if I don't agree with the outcome, how can I appeal against it?
And, you know, these are all still open questions.
There are some, you know, there are some protections we have, but we're not all the way there.
Yeah, because I think actually comparing it to the internet is a really viable one because it's something that we're grappling with every day. Some of the perhaps
unintended consequences of where we are now, where we probably never expected when they were
first using it in military labs. I want to move on to the next area as we talk about AI on Woman's
Hour. And this is a story
you've probably heard before.
The robots are going to take our jobs.
But what's the reality?
Well, there's a recent report
by the investment bank Goldman Sachs.
It stated that 300 million jobs
across the US and Europe
could soon be automated,
which would impact around two thirds
of the working population.
OpenAI, we've spoken about them
many times throughout this hour,
who created ChatGPT,
say that around 80% of jobs can have 10% of their work done by AI.
What do you think of that figure?
We do know there's already an existing gender gap
when it comes to technology.
We've touched on that.
With women falling behind in terms of skills and experience.
So what does this advancement that everybody seems to agree is happening of AI mean for women and that gap?
Well, joining me now to have a deeper look into all this is Professor Gina Neff,
Executive Director of the Minderoo Centre for Technology and Democracy at Cambridge University.
Welcome.
Thanks for having me.
Right. I suppose my first question really is how worried should women be when it comes to the work that they're doing, the industries that they dominate and that intersection with AI?
Maybe I should turn the question back. How worried are you, having had ChatGPT write your intro? Well, that's one thing. And also, when I had a look actually preparing for this programme, it is China that has a 24-hour, seven-days-a-week AI news presenter.
So that's 365 days a year, I should add as well.
And so, yeah, I think my job would definitely be up for grabs. Yes, but you're so great at hosting the show, if I may,
that we would all miss you. We would miss that human touch. Listen, you know, there are three
big concerns that we think of in AI and work that you've already touched on in the show. First,
you know, are these systems designed for women? Are they designed by women? Are women at the table
where they're being designed? And then finally, are they designed to benefit people? And I'm
going to include women in that. Are they designed to help? And those three concerns can be applied
to how we think about AI and jobs as well. So how we kind of think about it, last year, my team with UNESCO,
the OECD, and the Inter-American Development Bank launched a report on International Women's Day
on the effects of AI on the working lives of women. And frankly, the research out there is
really mixed. There's a lot we don't know. We are in the middle of an
enormous experiment on what these technologies are going to do for jobs. So when I say,
how worried should women be? Well, we all should be concerned because the technology is outpacing
our social and regulatory abilities to govern and to really think about the kinds of good
societies that we all want,
the kinds of good futures that we want. I've got a few examples that I'd love to bring in.
The first is in job hiring. So right now, we already have a lot of automatic systems
that are parsing resumes for big employers. Big employers, those who are hiring thousands
of workers a year, are already using AI in their job hiring decisions.
Do those systems benefit or harm women?
Are women and marginalized people, are they disproportionately affected in those systems?
And that's a question we don't have answers for yet. The second is, you know, we know that these systems are being used to help people get credit, to help banks parse applications for different kinds of loans and credit.
Again, are women facing different kinds of bars or barriers through these systems than men?
And then finally, around harms, we see a lot of situations. We talked about generative AI and how things can flood the information environment.
We're already seeing women in the public eye who are in journalism, at an enormous disadvantage.
Because there's these new ways of showing very close resemblances of people in really compromising positions, silencing the public sphere.
You know, I was reading another story about trying to certify those deep fakes,
trying to give them a signature, as it's called.
But I'm just wondering, how difficult would it be to introduce something like that?
And kind of going back to Madhu's point that we haven't even figured out the internet yet and how to stop trolling there.
Well, we have a lot of examples from the internet. So when we think about AI,
we really needn't look much further than the internet, right? The data
that's being used to train many of these systems is from the internet. And I don't know about
most of you or your listeners, but would you want your children learning English from reading, you know, going into some of the deepest, darkest corners of the Internet and using that language as how they talk?
One example is a company, you know, many of us have email systems that are already giving us suggested text for how we reply to people.
But there's one phrase that companies have had to prevent people
from being able to use.
It's really commonly used in our email.
And I know I use it all the time,
but if you use it out of context, it's terrible.
What is it?
And that's, I love you.
Like, it's okay if I say that to the people I want to,
but imagine, right?
Like, one of the worst things that AI can do is say, you know, I write that all the time to my friends and family, but I don't want to write that in a work email.
And it's such a great lighthearted example of how our everyday language used in different contexts can really be disastrous. Is there a way, or is there anything that women could be doing, to
try and make themselves prepared for AI in the workplace, if in fact some of the institutions
are still lagging, if I understand correctly? Listen, I think one of the things we need to remember is that all
jobs are rapidly becoming tech jobs. So, you know, what we need to look to government to do is to help ensure that people are not
left behind in digital society. And that means lots of training, encouraging women and girls
to take up STEM jobs, encouraging more maths and science education. But also for those of us who
have left schooling years behind, remembering there are ways we can train and retrain ourselves and understanding that AI systems will be a part of hospitality, construction, transportation, for example.
In one of the companies that I'm familiar with, women have replaced long haul truck drivers through automated trucking systems.
So they work as truck operators thousands of miles away from where these very large mining vehicles are working in Western Australia.
You know, it's opened up jobs for women who might not be willing to travel away from family out into the outback to do these jobs, but it has displaced others.
So the things that we can be doing include making sure that we retool, that we learn what AI is and
isn't, and we find those resources that help us understand how these changes are coming to our job.
It's interesting because for a while we've
thought of AI as automating human jobs, but increasingly we're seeing that AI is doing
work that we would have considered in the human domain. But, you know, where women fit into that,
I think what's interesting is we've always said in a world where AI is kind of looking for patterns
and using data, it's communication, empathy, reading, you know, emotion
that's going to be really valuable.
And for a while, there were a lot of reports saying, you know,
actually, so women are going to benefit in that kind of environment.
I'm not so sure anymore where that leaves us
with generative AI being able to communicate.
But I do think that the jobs we'll need to focus on going forward
is how do we interpret what the AI is saying? How can we use human judgment? How can we take
the ideas and sort of build on top of it? Yeah, so I really feel that it's going back to the
individual again, to kind of get yourself trained up or try and see those opportunities. Before I
let you go, Professor, do you feel governments around the world are seeing that potential gender gap?
And I'm just talking about in AI, not in general. There are many times we've talked about that. But the coming gender gap, shall we say, within AI? That's what we call for in the report,
the effects of AI on the working lives of women. We really have to think about women's work across
the life cycle of employment. And governments need to do more today
to ensure the gaps that we have today
don't get wider in the future.
Professor Gina Neff, good to have you with us.
Lots of food for thought there.
Let me turn back to Madhu Murgia,
our artificial intelligence editor at the Financial Times,
who's with us all throughout this hour.
Reflecting on all that we've heard over the past hour, are you trepidatious,
excited? And I don't know, you tell me. I think, you know, having played around with some of this,
it does feel exciting to see, you know, to watch a technology being able to do things
that we haven't been able to do before.
So I understand the excitement around it. And also, I see areas in which it's going to make us
much more efficient, much more productive, even us as journalists. You know, if you can have
something that draws together lots of information and maybe summarizes points, it makes our jobs
that much easier. So we can kind of think at higher levels rather than, you know, just spending all our time gathering information. So it can make work much more
interesting, much more efficient. But I think I see a whole lot of other red flags. And maybe
that's because I've been thinking about this for a long time, or specifically over the last year or
so, where I feel like none of us have the answers.
And maybe that's an opportunity for us to all get together as society and figure out
things quickly. But if we don't do that, I think the technology, you know, keeps getting better,
keeps moving on. And these other issues we've been discussing today around jobs and biases,
that stuff gets left behind.
So I really think this is our moment
of needing to stand up and figure out
what the dangers are
and how are we going to solve them.
Cromerty on Twitter did get in touch.
Terrified it'll take away my voiceover job
when I know I can still do a better job.
I'm with you, Cromerty.
Thanks so much for your message.
And I'd like to thank all our guests today.
And in particular, of course, Madhumita Murgia,
Artificial Intelligence Editor at the Financial Times
for accompanying us through this weird and wonderful world of AI
and what it means for women.
That's all for today's Woman's Hour.
Join us again next time.