Woman's Hour - Woman's Hour special: AI and women's health
Episode Date: March 27, 2025

Technology journalist and author Lara Lewington asks how artificial intelligence can improve women's health, and what are we ready for it to do for us? From prevention and diagnostics to testing and... tracking, we speak to female experts, scientists and practitioners.

Contributors:
Madhumita Murgia, AI Editor of the Financial Times
Nell Thornton, Improvement Fellow, The Health Foundation
Dr Ellie Cannon, GP and author
Dr Jodie Avery, Program manager, IMAGENDO
Meriem Sefta, Chief Diagnostics Officer, Owkin AI
Marina Pavlovic Rivas, Co-founder & CEO of Eli Health
Dr Lindsay Browning, Sleep expert and chartered psychologist

Producer: Sarah Crawley
Transcript
Hello, I'm Lara Lewington and welcome to Woman's Hour on Radio 4.
Hello, I'm a technology journalist, broadcaster and author, and today I bring you a special
programme looking at AI and women's health. We've an expert panel to help us understand
the potential of AI to transform women's healthcare. Could it mean better diagnosis?
Might it help us look after ourselves better? And if data is AI's fuel, how prepared are we to hand ours over?
Artificial intelligence can come in many forms,
but in simple terms,
it's a computer simulation of human intelligence
that can perform tasks like learning,
reasoning or solving problems.
But, and this is an important but, AI isn't human.
So how do we deal with issues of trust and make
sure we use it in the right ways, including freeing doctors up to better
provide empathy and oversight? You know, those human bits. In research for the
government's women's health strategy, 84% of respondents felt that women's voices
had not been listened to by healthcare professionals. So how can women and
other underrepresented groups be heard and diagnosed?
Some are calling for fairer inclusion in data and research, others for greater emphasis
on training doctors in women's health issues, and how conditions can affect the sexes differently.
Women in the UK may live longer on average than men, but a significantly higher proportion of our lives is spent in poor health. ONS figures for England show it to be around 25%. That's a quarter of
our lives. And we know there are disparities across the country too. So if something needs
to change, the question is, can AI drive that? The government knows we need to embrace it
in many areas and says it's already being used in the UK in health settings.
Diagnosing breast cancer quicker and earlier. Spotting pain levels for people who can't speak.
Helping discharge hospital patients faster. The list goes on.
But we've reached a point where the possibilities are growing exponentially.
Many experts long to switch from a system they refer to as sick care to one
of health care. To a place where we can predict and even prevent disease. So what does this
mean for women's health? As always, we want your thoughts. You can text the programme.
The number is 84844. Text will be charged at your standard message rate. On social media
we're at BBC Woman's Hour and you can email us through our website or you can send a WhatsApp message or voice note using the number 03700 100 444. Now I have the
perfect panel here today. Joining me, Madhumita Murgia, AI Editor at the Financial Times and
author of Code Dependent: How AI is Changing Our Lives, Dr Ellie Cannon, GP, author and women's
health specialist, and Nell Thornton, Improvement
Fellow at the Health Foundation. Madhumita, let's start with you. Can you describe to
us AI in the context of health?
Sure, yes. AI has become this umbrella term thrown around for everything from ChatGPT
to our kids' education. But in this context, really, we just think of it as software
that is trained on a huge amount of data, and in health that would be health data, and
it's able to spot patterns in that data. So if it's a picture of your chest, it's
able to see, for example, if there are early signs of some kind of cancer or other illness.
So essentially, it's just statistical software that learns these
patterns over time and then can be applied to predict very accurately, in many cases
more than the average human, whether somebody is ill or not.
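To make that concrete, here is a minimal sketch in Python of the kind of pattern-learning described: a model trained on synthetic, made-up health data that outputs a probability rather than a certainty. The features, numbers and library choice (scikit-learn) are assumptions for illustration only, not any real clinical system discussed in the programme.

```python
# Minimal sketch of "statistical software that learns patterns": synthetic,
# made-up health data only; the features and numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 1000
age = rng.uniform(20, 80, n)                       # made-up feature 1
biomarker = rng.normal(1.0, 0.3, n) + 0.005 * age  # made-up feature 2
# Hidden "true" risk used only to simulate noisy outcomes.
risk = 1 / (1 + np.exp(-(0.05 * (age - 50) + 2 * (biomarker - 1.2))))
ill = rng.random(n) < risk

X = np.column_stack([age, biomarker])
X_train, X_test, y_train, y_test = train_test_split(X, ill, random_state=0)

# The model learns the pattern linking features to outcome...
model = LogisticRegression().fit(X_train, y_train)

# ...and then outputs a probability for new cases, not a yes/no certainty.
print("predicted probabilities:", model.predict_proba(X_test[:3])[:, 1])
print("held-out accuracy:", model.score(X_test, y_test))
```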
You use the word patterns there, which is absolutely crucial to this, both in terms
of us tracking our own health and understanding at a personal level, and that uses AI too,
and what AI is doing at a healthcare level. How crucial are patterns to all of this and
why does this make the difference?
Yes, I think this is why I feel kind of most optimistic about the potential of AI in healthcare
because when you, from a kind of scientific academic perspective, and I'm a former student
of immunology here, so it's something that I'm kind of really interested in, you do see patterns over time,
you know, across genders, across specific ethnicities, and if you have a tool that is
able to find these sometimes subtle patterns that aren't picked up by us, even human experts,
you're able to kind of solve problems and issues
that have gone years without being found, right? So the potential here is to go beyond
to augment what humans have been able to do so far and to sort of co-evolve alongside human
experts to provide much better outcomes for women.
The possibilities here are enormous. Nell, would you describe this as the
early days and how excited are you about where we might be heading?
Yeah, so we know that there are lots of challenges around women's health and so as powerful
technologies like AI become available, it's right that we're asking the question about whether they
can help us address some of these challenges, but it is still early days. And so as we're exploring the use of these technologies,
it's going to be really important that we're not just
making sure that they're safe and that they're not
making things worse, but that we're actively taking steps
to use AI to make things better.
And a critical part of that is going to be talking to women
and understanding how they feel about AI being used
as part of their care.
And our research at the Health Foundation has shown that women do broadly support the
use of AI for use in their care, as do the general public.
But it does change when we kind of break things down by gender.
And we see actually women are consistently less supportive of AI when...
Why do you think that is?
So what we see in our data is that women have a bit less faith in the accuracy of AI systems,
so a third think that perhaps AI systems aren't yet accurate enough and that incorrect decisions
might be made and they're actually less convinced that AI is going to improve care quality when
compared to men.
We also need to have data that's good enough and we've had years of not enough data from
women. So how do we overcome that now?
Because we also need data that's collected for purpose. We need good data or none of this is any
use. Yeah, yeah, absolutely right. I mean, the kind of the classic term that's used is garbage in,
garbage out. So if we use poor data, we're going to get poor results. It's absolutely right that
women need to be represented in that data. And our survey work at the Health Foundation has actually found that around
75% of the public are happy for at least some of their data to be used to power
AI systems. So it's a really positive finding,
but crucial to that is going to be making sure that we're being trustworthy.
And that's about having the right rules in place for the access to data and also
being seen to use those and enforce those. Because the more people trust it, the happier they will be to give it.
Absolutely. So let's get from you, Dr Ellie, a bit of what this means on the ground.
How much AI are you using in your surgery and what's the reaction that you're having?
So I'm possibly an early adopter as a GP and I use an AI transcription tool in all of my consultations. That basically
means that my AI, and it's a specific health scribe, listens to our consultation when you
come in. Obviously I get permission from my patients to do that, and then rather than me
either writing notes after the patient has left or while they're there, I can just concentrate
and listen and the AI scribe
is writing all of my notes. And the beauty of that is not just a sort of quick thing to save me time; what it means is a lot of the
subtleties and a lot of the data and the actual narrative, which can be so
important in women's health, is captured. Obviously it's all checked
by me, because I firmly believe, as I'm sure the panel does as well, that AI needs
humans to work very well. And that just means that all of that proper data is there for
that patient, which means that referrals are quicker, investigations are quicker. And for
example let's say in difficult
circumstances where we know there are problems with women's health, in mental
health and in things like endometriosis, you've got good, good narrative there to
help that woman and to move things forward. This is very clearly augmenting
you and I think people will find that reassuring. But also these platforms
we're using, these are built for purpose,
with privacy and safety built into them.
And I think that's probably quite important to note
that this isn't just use of AI platforms
that anyone can go online and do.
The AI is not being trained on people's personal data.
You are using things that are for this purpose.
There is still worry from a lot of people.
Partly, people don't necessarily understand how it works or what's happening.
They know what data is going in, they know what comes out, they don't know what's happening in the middle.
What concerns do you think people are likely to have and are they justified?
Yeah, so there's definitely downsides, right?
No technology is perfect, particularly one that's so new. And we have a tendency to trust when machines make decisions for us. We treat them like a calculator, which is either right or
wrong. AI is not like that. It's predicting a probability, a risk of something. So the
main risks, I think, are twofold. One is when it makes errors, which it will because
it's a statistical prediction engine; those errors
can get multiplied very quickly and scaled up if the same tool is being used on 60 million
people compared to a human error, which is always confined.
And so it's really important to audit these tools.
And I think people are justified to worry that it can make mistakes because it's not
going to be perfect.
It's not a calculator.
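As a rough back-of-the-envelope illustration of how a single error mode can scale when one tool serves a whole population (the 1% figure below is invented purely for the arithmetic, not a claim about any real system):

```python
# Back-of-the-envelope only: the error rate is hypothetical.
population = 60_000_000          # the "60 million people" mentioned above
error_rate = 0.01                # hypothetical 1% of predictions wrong
print(int(population * error_rate), "people affected by the same error mode")
```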
And we need to have a process in place for when it does go wrong. How do I, how do I as a patient
come back and say, that's not right. Can you fix it? We don't want a situation like with
the Post Office scandal, where nobody's responsible ultimately. And those errors get sort of multiplied
over time. And the second big issue is bias. We've talked about, you know, the quality
of the data, right? And you can have gender bias
when you're looking at healthcare as a whole, but even within women's health, you can have
ethnicity bias, socioeconomic bias, age bias. And we want to make sure that this is the
right outcome, no matter what your age or where you're from or what the color of your
skin is. And we've seen, for example, this exists with maternal mortality.
Black women are more than three times as likely to die
within a year of giving birth compared to Caucasian women.
And so if that gets multiplied in an AI system,
then you're gonna have worse outcomes
for one group over another.
So those are very much justified concerns, I think,
and need to be kind of accounted for
within the design and implementation and kind of the policy around these systems.
Nell, how do you think we can deal with this to make sure that fair information is available
to everyone so everyone gets the best out of AI and nobody's left behind?
Yeah, it's a really good question, and obviously the data bias, and making sure that the data is right,
is an absolutely critical part of that, but there's another element of this, which is what happens at the point of implementation.
You can have the most perfect AI system in the world but if it's not implemented in the right way
that can lead to bias and what we see at the moment within the NHS is that AI is largely being
explored in organisations that are kind of pockets of excellence that have the resources and the skills to do this well. And what we risk seeing is a growing
gap between the parts of the system that can afford it and can do that properly and
those that can't. And that's another element we need to be looking at is how can we support
the country as a whole to move forward with AI and not widen the gap.
Quick question to you, Ellie. What problem would you like to see AI fix?
Oh, that's a great question. I'd like to see in terms of women's health, obviously
we have huge issues with very prolonged diagnostic journeys for women in
conditions like endometriosis, conditions like ovarian cancer and we've already got
case finding within the NHS for targeted
lung health screening and I think we could use AI to actually find women who
are showing early signs of endo, early signs of ovarian cancer and actually
save some lives. Well actually you've just touched on exactly where we're going
next we're about to talk endometriosis. We're going to talk to someone who's making waves in this field. She's Dr Jodie Avery, Programme Manager of Imagendo,
an Australian project that's analysing scan images with AI to identify endometriosis earlier,
a condition that on average takes eight to ten years to be diagnosed. Jodie, hello, tell us about Imagendo. Thank you very much for having me. So we have a project called
Imagendo which we're hoping to roll out in the next five years. We're combining
endometriosis transvaginal ultrasounds which are very specific
ultrasounds for endometriosis that follow specific criteria. We're combining those with
MRIs using artificial intelligence and we hope that this would reduce the diagnostic delay of
endo, which is about six and a half years in Australia, down to one year, so that young girls
can get back to school, get back to work and just have a better quality of life by finding out they've got
endo and not thinking they've got cancer or something like that. This is an incredible ambition. How are you seeing this working so far? It's really interesting to
see the patterns in different types of scans being able to make each scan individually better
and that's effectively what you're doing, isn't it? Yes, so we're using what, in the AI world, are called algorithms, and we've got
some really clever AI scientists at the Australian
Institute of Machine Learning in Adelaide and we're looking at
seven different signs on ultrasounds of endometriosis and the first one we
looked at is a thing called pouch of Douglas obliteration.
So the endo actually hides up behind the uterus and a normal ultrasound wouldn't be able to detect this.
That's why you've got to have the special ultrasounds.
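As a purely hypothetical sketch of the general idea of combining scores from two imaging modalities into one prediction for a single sign, here is a small Python example; the weights, probabilities and function name are invented and this is not IMAGENDO's actual algorithm.

```python
# Hypothetical illustration of fusing two modality-specific outputs into one
# score for a single sign; weights and names are invented.
def combine_modalities(p_ultrasound: float, p_mri: float,
                       w_ultrasound: float = 0.5, w_mri: float = 0.5) -> float:
    """Weighted average of two per-modality probabilities for one sign,
    e.g. pouch of Douglas obliteration."""
    return w_ultrasound * p_ultrasound + w_mri * p_mri

# Example: the ultrasound model is fairly confident, the MRI model less so.
print(combine_modalities(p_ultrasound=0.82, p_mri=0.61))  # 0.715 with equal weights
```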
And luckily yesterday the Australian government, in their budget for 2025-26, brought in a new Medicare rebate for these scans.
So we're all very excited.
Congratulations.
Thank you so much.
We've been lobbying about this for a very long time.
So now we also have to teach all the sonographers
to undertake these scans
because not many know how to do them.
I think there's probably about 10 groups in Australia
that know how to do it.
And only about three people in Adelaide, where we're from, who actually know how to do it. So, yeah. And of course, that's crucial. The training
is so crucial here. You can build AI systems, but you've got to get them into healthcare and you've
got to have people understanding how to use them and also trusting them. How do you think the medics are feeling about this?
So we just have to, I mean endometriosis is so hard to diagnose in the first place and many, many doctors, general practitioners or family physicians, aren't really even aware
about endometriosis. So if we can democratise the diagnostic tools that we're using and
increase awareness, maybe the doctors will gain a bit more trust in this kind of system.
And like when a young girl who's 12 or 14 goes along to the doctor and says, I've got
very bad period pain, the doctors aren't just going to go and tell them, oh, go and get pregnant,
it'll fix it all up. So hopefully when they gain a bit more trust, they can use this kind
of tool to help them diagnose.
Well, of course, going through a laparoscopy to be tested for endometriosis in the regular
way is really intrusive. And then if people need to have surgery, well, it's probably
not even going to be happening at the same time, is it? So this is preventing a lot of points of friction
as well. So you're more likely to test more people.
Yes, and that brings its own problems because then if we find more cases of endometriosis,
we're going to need more surgeries. And at the moment, it's at least a two year wait
in the public system in Australia to get to not only see a gynaecologist but then also see a
laparoscopic surgeon. And is everybody symptomatic or are there asymptomatic cases?
Oh, we actually did a, well because we're screening so many women we decided to
screen a whole lot of women who haven't got any symptoms of endometriosis.
And we actually, out of the 45 that we screened through MRI and ultrasound, we found about 16 cases of asymptomatic endometriosis.
So that's incredible.
Really. And this is one of the big things that we're seeing with AI is this almost inverse pyramid model where you're able to test a lot of people
at a pretty seamless level and then if it seems like you need to do the next level of testing you
can. So I suppose something like this is perfect for that. What other areas of health do you see
this sort of technology, the idea of maybe looking at two different types of scans being useful for?
We haven't really looked into that too much. One of our scientists has been looking into skin lesions
to help build this kind of model,
because we haven't actually got enough scans
through endometriosis to build these models,
because endometriosis isn't screened like something like lung
cancer or breast cancer. So big AI models for lung cancer use thousands and
thousands of scans, but for endo we've built our model on basically a hundred scans
for one of the signs and another hundred scans for one of the other signs. So we really have to build this database up as well.
And that data is so critical to all of this.
Your data is all from Australia at the moment, is it?
Yes, but we are about to get 3,000 scans from Dr
Mathew Leonardi in Canada.
And we've just received funding,
$2 million worth of funding actually to go to the UK,
to the US maybe, to Canada and Europe to get more scans.
And the reason we're doing that is so that we can eliminate
some of the biases that we would find
in a very small population like Adelaide, for example,
where the population is very middle class,
it's not very multicultural.
Yeah, so we need this data from other countries,
especially to get things like FDA approval
or CE mark approval in the UK.
Yes, and testing like this is so important,
not just for the diagnosing,
but also the
ruling out of conditions. And I think we're seeing that through a lot of these new ways
of being able to screen, that it's very useful to early on be able to know what isn't the
problem as well, isn't it?
Definitely. I mean, it just alleviates the fear that it might be something like ovarian
cancer or, you know, these women have got something that's not just in their
head, which is one of the main problems with endo, because people are being told it's just a bad period,
you just have to deal with it, you're a woman, you can do that kind of thing. But once they know
they've got endo, they're validated. Well, we have Ellie nodding here in the studio throughout various
parts of that interview.
Ellie, how do you feel about this? You obviously come face to face with a lot of women who
are in this situation of struggling for diagnosis.
Yeah, and I think one of the issues with something like endo or ovarian cancer is that, you know,
at the outset, the symptoms can be diffuse. They might not even be related to periods.
They might be related
to your bladder or your bowels, it might be sort of different. You might see different
healthcare professionals, you might come in quickly and go to an urgent care centre. The
beauty of AI is that we can sort of pick up all of those bits of the pattern before we
can even know that we're looking for a pattern. And that's, as I say,
what we've been doing with targeted lung health screening, and we should be able to do that.
I see a future where we can do that with ovarian cancer, with endometriosis as we've heard
and sort of lots of other things.
And we're seeing so many proofs of concept where things work, even trials in the NHS
that go well. But the question is to you, Nell, what happens after the trials?
Yeah, it's a great question. And one of the things that we see in the NHS is they are very, very good at
piloting things. And one of the things we say is they kind of have pilotitis, so a habit of kind of
piloting and then not rolling it out any wider than that. And so one of the key things to getting this right is going
to be starting from asking ourselves, what is the challenge
here that we are trying to address with this technology,
and then working to find and build and work with the public
and patients to build a technology that's going to solve
your specific problem rather than treating AI as a silver
bullet, because often we're kind of piloting things and they're
showing promise, but they're not solving a really specific
challenge. So we think at the Health Foundation, the key to this is going to be articulating the challenge and
then work backwards, not start from what technologies have you got in your hands already.
Yeah, understood. Well Jodie, thank you very much for joining us. I shall be coming back to my panel
in a little while. You're listening to a special edition of Woman's Hour with me, Lara Lewington.
We're looking at how artificial intelligence could change the way we approach women's health.
You can text the programme on 84844. Text will be charged at your standard message rate.
Check with your network provider for exact costs.
On social media it's at BBC Woman's Hour or you can email us through the website.
Now here's a quick message from Nula.
Hello. Did you catch our interviews with Anna Maxwell Martin, Sarah Lancashire and Daisy Edgar-Jones? They're all on BBC Sounds, the home of BBC Radio and podcasts. Download the BBC Sounds app on your phone and not only can you listen to Woman's Hour
live, anywhere you like, you can also catch up with any episode that you may have missed.
Just search for Woman's Hour in the app and all of our episodes will appear.
If it's a specific episode that you want, type in Michelle Yeoh Woman's Hour for example,
or you can just have a browse.
You might like to listen to our feature series, Forgotten Children, which explores the impact on families when one or
both parents are sent to prison. There's so much more of Woman's Hour to explore on the
BBC Sounds app, so why not download it today and discover a whole new side to our programme.
We've talked about how AI may be able to better diagnose disease, but how about predicting
it before it's even happened?
Meriem Sefta is Chief Diagnostics Officer at Owkin, the French-American biotech company
that's developing a tool called RelapseRisk.
It aims to better predict the risk of breast cancer recurring.
She joins me now from Paris.
Meriem, hello, what is the recurrence rate? The recurrence rate depends on, so
there are about 55,000 cases of breast cancer in the UK a year and about 10 to 20
percent of those patients will in fact recur. So that's a large percentage but
that also means that a large percentage don't recur.
And it's a very kind of important question
to be able to accurately know
in what category patients basically fall,
so as to be, I would say, more aggressive
in treatment and more attentive to the high-risk
patients, send them to expert centers, put them in innovative clinical trials, give them
chemotherapy on top of their initial treatment so as to minimize that risk of relapse.
And for the remaining 85 percent, on the contrary, reduce the therapy so as to improve basically
quality of life after that initial tumor has been removed and treated.
And today, in fact, you could say that there are three categories, obvious low-risk patients,
obvious high-risk patients, but also a big, big category of medium risk where we don't really know.
And in doubt, most patients get aggressive treatment. And so there was a
first generation of tests that came out to be able to determine, or try to
predict, what was the risk of relapse for breast cancer patients.
These tests were either based on standard clinical variables,
such as, for example, size of the tumor.
And there was also a type of test that was used
based on kind of the DNA information of the tumor.
Now, these tests are not perfect
in the sense that they don't have perfect
predictive power, so it's kind of an indication as to what could be the risk, but there's
no definitive way of knowing if a patient is in fact going to relapse. And for molecular
tests, their ability to scale has been shown to be quite limited in the sense that they have high costs, they
require, you know, capex, specific labs, trained technicians, they have high
turnaround times, they require a biosample, so all in all, especially in
Europe their use has been limited and in fact patients have been prioritized and
not all patients basically benefit from these tests.
Okay, so there's a lot to unpack there. Let's take this back to what happens now.
How is the recurrence risk assessed of a patient today?
So today, basically, it's molecular tests or standard clinical variables. And most patients in Europe are on standard clinical variables
because the molecular tests which have a better predictive power
are in fact too expensive to justify systematic reimbursement
for all patients. Okay so let's look at how
RelapseRisk would work for an individual patient. Can you talk me through the
process? So our product basically leverages the digital pathology image.
So basically the image of the tumor on that pathology slide
that's been digitized, and through AI we basically analyze
the kind of the wealth of information that is contained in these images
to make a better predictive score.
So directly from that diagnostic image, we're able to predict the risk of relapse of these patients with basically product features that are more interesting in the sense that it's just software.
So it's a lot more cost efficient. It can be rolled out anywhere and used by anyone with very minimal training. It's very fast.
We can get results in half an hour, so we don't have to wait days or weeks to get molecular
test results back. And we have results indicating that our AI is actually more predictive than
the tests that are currently being used.
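For a sense of what an image-to-risk-score pipeline can look like in code, here is a heavily simplified, untrained sketch in Python using PyTorch. The tiny architecture, input size and random example tile are assumptions for illustration; it is not Owkin's RelapseRisk model.

```python
# Heavily simplified, untrained sketch of an image-to-risk-score pipeline.
# The tiny network, input size and random "tile" are assumptions only.
import torch
import torch.nn as nn

class TinyRiskModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(               # small conv feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)                  # single relapse-risk logit

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))            # probability between 0 and 1

model = TinyRiskModel().eval()
tile = torch.rand(1, 3, 224, 224)                     # stand-in for one slide tile
with torch.no_grad():
    print("illustrative risk score:", float(model(tile)))  # random weights, so meaningless
```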
OK, so just to make a little bit of sense of these results, you talked about people
who are at low risk or high risk of recurrence being obvious. It's those ones in the middle
that it seems really tricky for because we don't want to be putting them through unnecessary
treatment. We also don't want them to not be having the treatment that they might need.
How does the AI come back with something that isn't still a 50% risk? And what if it does,
because then it hasn't helped?
It's always, it's never perfect, right?
You can't predict 100% who will relapse.
At least not today, nobody in the world can do that.
The idea is to become more and more predictive.
And so here we're able to basically more accurately
predict the ones that will relapse from the ones that won't.
So better sub classify that big intermediate risk category of
patients and basically improve the chances that they'll get the types of
treatments that they need for their tumors. And I suppose what will happen
over time as we learn more about genetic impact, genetics will become a bigger part,
even lifestyle maybe, the air that people are breathing. There's so much data that could be
built into this longer term. Is that how you look at it or are you planning on sticking with the
factors you're currently working with? So both, that's kind of the beauty of AI is one, AI gets, the tech gets better and better.
So even with the current data modalities that we have, it's, you know, probably in five
years we'll have even better predictive power because I know that we have today better predictive
power than we had two years ago.
So the AI is getting better even on the data modalities that we're working on today.
And then we can also add in new data modalities. I think here the key question is making sure that this
data is always routinely available. We can get very sophisticated, very
expensive data and be more predictive, but in practice that cannot scale and
cannot be rolled out to 55,000 patients a year. So that's
why we focused on pretty basic but information-rich data, so as to be
sure that the test can be rolled out to all patients. So the patient isn't
experiencing anything different, they're just getting an extra bit of feedback
from the oncologist who would explain to them what the AI had found, that comes
from a human I take it. So the oncologist would be basically getting extra information and taking
that information with everything else that they know about the patient to
basically sit down with their patient and make the best kind of informed
decision together. I'm just going to come now to Ellie in the studio. Well I think
I mean this speaks so much to me because, you know, having gone through a cancer journey with family members and sat in those sort of
oncology clinics and also obviously with patients as well, patients going through
cancer treatment are always given percentages and risks and as humans
it's very, very difficult to weigh up a risk, we're all terrible at it. And it's
very, very clunky: you're high risk, as
we've heard from Meriem, you're low risk, you're medium risk. And so actually
really, really honing in and saying to people, actually, you know, we don't think you need
the chemotherapy is very, very valuable. As you mentioned, you know, it's not just about
treating people, it's also about not treating people. We've seen with breast cancer screening sort of over the decades, we've been in
situations where we've over treated people and so actually to be able to
really really get to the specifics of somebody's genetics, of somebody's cancer,
I really think you know both personally and professionally I've seen how valuable
that can be. AI being used in breast cancer is already
happening in the UK. Mia has had huge success in diagnosis, hasn't it? Nell, do you just want to
talk about that a little? Yeah, so we've seen a lot of AI systems being used to try and detect
breast cancer earlier and try and improve kind of women's journey through that treatment. But what we're seeing with prevention
is that to get this right in the health service, we need to think about what the knock-on effects
might be. So if we're suddenly detecting lots of cases of cancers or other diseases that wouldn't
have been detected till later, we're going to risk overwhelming the health service and not being able
to get these patients seen. So it's really important that we're thinking about that entire pipeline.
Well, that's a huge issue.
How on earth do we deal with that?
Because if there's a lot more people
being diagnosed much earlier,
and we're saying, well, this is the brilliance of AI,
we can see this, we can see what's happening.
Well, how on earth are we going to be able to deal with it?
Yeah, so this is the, I think for me,
the core issue with AI in any sector.
So we can talk about it in
healthcare with women's health, but it's the same with government services using
AI for giving out welfare payments for example, or in any of these
situations. The issue is, once you diagnose the problem, what are we
going to do with it next? And that's a social, human problem, which is kind of why my
book focuses on real people: what happens next? So with doctors
too, if they've been given this information about high risk or
not, you know, trust becomes a major issue there as well. What if I as the patient want
to know is that your opinion or the AI's opinion and how do I trust that it's correct? So I
think so much of where the AI works
in practice has to do with humans and human factors that include, you know, what do we do
next with these people? How do we build up trust between the doctor and a patient? And can we save
the people that we know need saving, you know, in this context? Otherwise, you know, it's just
creating anxiety, stress, and in some ways we're
worse off than we were before. Well, anxiety is something a lot of people
raise because knowing too much about your health, especially if you're going
to be waiting a long time to get that issue resolved, that's really not a great
way to live, is it Ellie? How do people react to that? Yeah and I was sort of
thinking that when we heard about sort of the endometriosis AI and
understanding that you have a diagnosis which may not have impacted you, you know, that's a question that
we've sort of gone through with things like sort of direct-to-consumer genetic
testing. There are issues with having too much information. Living with risk is
very difficult. Living with a family history is very difficult and actually
making sure, as we've said, there's no point sort of diagnosing these things,
labelling, unless we have the tools to help people, and that's why it will always boil down to the
human factors. So there's so much at play here, we've got to bring together the human behaviour
with the AI, what's plausible to enforce because we're going to end up having to pick and choose,
that's the reality here unfortunately. When is your Owkin tool going
to be available for people more widely? So we're in the process of basically finalizing
clinical validation for our product which will allow us to. So we've actually trained this model
on 5,000 patients from five different countries, 10,000 images.
We've done basically a testing of the product, analytical validation, and we're in the last
kind of stretches of testing and validating again on independent cohorts from academic
centers across multiple countries to really try to get as much unbiased evidence that the model is in fact performing the
way that we claim that it does and hopefully once that's done we'll be
ready to go for IVDR submission to get basically CE marking. I can't commit to
a specific timeline but we're you know we're hoping that this will happen
within two years. Okay, we're heading in the right direction. Meriem, thank you
very much. I've got a government spokesperson's
statement here. We're trialling the use of AI to speed up
diagnosis and treatment for a range of women's health issues,
including diagnosing breast cancer and endometriosis,
detecting pregnancy complications and offering personalized menopause treatment.
These pioneering initiatives will improve treatment, expand patient choice and save lives. As we deliver our plan for change,
AI will be the catalyst needed to transform healthcare, moving from
analogue to digital and creating an NHS that is fit for the future. Now, in a
world where we can track more and more of ourselves, one thing I can imagine
many of us will be interested in
is hormone tracking. And that's what Marina Pavlovic Rivas, co-founder and CEO of Eli Health
in Canada, decided to set her mind to. She's created an at-home hormone test powered by AI.
It's called the Hormometer and should be available to buy online later this year.
When I spoke to Marina, I asked her what prompted her to develop the product.
It started initially from a personal need. I wanted to have access to this information
myself, experiencing various symptoms around hormonal health. As a data scientist, I'm
biased in the way that I love to have data to make important decisions. But when it came
to my health, that data was missing.
So with my co-founder, who's also my life partner,
we realized that we each had a part of eventually
what became the solution.
Well, well done for managing to work with your life partner.
That's good stuff to start with.
How do you measure this?
Because it's clearly something there's a need for,
but if it was that simple,
it's something we would have seen happening a long time ago.
Exactly. So how it works, there's a test, the saliva test that you receive directly at home.
You put it on your tongue for a few seconds, you pull on the tab, and after a couple of minutes,
take a picture with your phone and receive it directly on your smartphone, on the mobile app.
Similarly to how you would receive biomarkers from a wearable, like a smartwatch, seeing
how well you slept, seeing your heart rate,
your steps. We're bringing a similar concept, but for hormones.
And as you mentioned, the big blocker to make this happen was the technology itself.
It was five years of R&D across the chemistry,
the microfluidics, the hardware component,
the AI component, and bringing all of that together
to have a test that provides reliable results.
You have tests to be able to measure progesterone
and also cortisol.
Why these two hormones?
We started really from the angle of which hormones would provide the biggest impact
for the biggest number of people.
And when we look at a hormone like cortisol, it affects nearly all bodily functions.
And it's true for women, but it's also true for men.
And by being able to have access to this data, it then enables users to
make different decisions across key areas like sleep, nutrition, exercise, weight management,
and much more. So having access to this information then unlocks a wide variety of needs.
And I suppose cortisol, often referred to as the stress hormone, plays into a lot of
other bodily functions, and how stressed you are can affect your hormones, and vice versa.
This works both ways. So it's clearly really important data. But how often would you be
measuring it? How often are you using one of those tests?
So the protocol we recommend at the minimum for cortisol, it's four times throughout
the month and two times per day.
So two different days, one in the morning, one in the evening.
And the reason behind that is that the cycle for cortisol is daily.
So it's high in the morning, low in the evening.
So you want to see at least over two days in the month how that shape evolves based
on the different
actions you're making. And for progesterone, it's a similar concept, but
the fluctuation there happens throughout the month instead of happening throughout
the day. Okay, so for progesterone there seems a little bit of a clearer reason
why in doing it just twice a month you could learn something useful. But for the cortisol, would you not need to be doing it daily to really gather some sort of meaningful
picture?
Ideally, that's really the vision to have something that is daily and even continuous
so that at any moment you can understand how your environment, how your thoughts are influencing that biological cortisol level. The limitation
here is price-wise, the more you test, the more it costs.
How much does it cost?
It's starting at $8 per test. And to give some perspective, currently for people who
want to measure their cortisol, most of the options out there require you to go physically to a lab or to order a sample collection kit that you ship back to
the lab, and cost between $100 and sometimes $700 for one data point to sometimes four data
points.
What role does artificial intelligence play in this process?
So for us, it plays a very critical role at different pieces of the process. The first
one is to detect the hormone levels. When we take a hormone test, depending on the level
of hormones, the test has a different color intensity. So the artificial intelligence
and more specifically the computer vision algorithms detect that image, translate that image into a hormone level.
And then AI also plays a role in interpreting the data and providing the insights and recommendations to users.
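As an illustrative sketch of those two steps, reading a colour intensity from a photo and mapping it to a concentration through a calibration curve, here is a small Python example; the region coordinates, calibration points and file name are invented, and this is not Eli Health's actual pipeline.

```python
# Illustrative only: region coordinates, calibration points and file name are invented.
import numpy as np
from PIL import Image

def intensity_from_photo(path: str, box=(100, 100, 160, 140)) -> float:
    """Mean darkness (0-255) of an assumed test-line region of the photo."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=float)
    x0, y0, x1, y1 = box
    return 255.0 - grey[y0:y1, x0:x1].mean()         # darker line -> higher value

# Made-up calibration curve: darkness readings vs known cortisol concentrations.
# Competitive saliva assays often get fainter as concentration rises, hence the
# decreasing concentrations.
darkness_points = [20.0, 60.0, 120.0, 200.0]         # from calibration samples
concentration_points = [25.0, 12.0, 6.0, 1.5]        # nmol/L, illustrative only

def cortisol_from_intensity(darkness: float) -> float:
    return float(np.interp(darkness, darkness_points, concentration_points))

# Usage (hypothetical file name):
# print(cortisol_from_intensity(intensity_from_photo("hormometer_photo.jpg")))
```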
And then when you receive that information, what do you do with it? So we focus really on that lifestyle intervention approach, which means different types
of exercise, nutrition, sleep interventions like light exposure. And that can sound straightforward.
For example, for exercise, it can sound simple to say, let's just do more exercise, but it's more
complicated than that. For example, for someone that has high cortisol in
the evening, we would recommend against doing high-intensity exercise later in the day.
Conversely, for someone who has low cortisol in the morning, when it's supposed to be high,
it could be advised to do high-intensity exercise earlier in the day.
So this has the ability to bring in some real personalization of your actions, but how about
of what hormones you want to take?
Because if we're looking at progesterone here, the natural question a lot of people
may ask is, well, what does this mean for HRT?
Could I be told when I should take it and how much?
Are you looking into that area?
Where are you with that right now?
We're receiving a lot of interest from physicians to use the technology in that way,
because currently there are a lot of gaps. Many physicians reach out to us saying that they
feel it's limiting to base their approach on the symptoms alone. For example,
if I feel that way, then increase the dosage of this hormone or that hormone.
They want to have a more data-driven approach and personalized approach. So this is certainly
something we're considering for the future working in partnership with those physicians.
It's one thing knowing this information, but actually the ability to be able to act on
it. How do you see this really playing out in making a transformation in women's health?
We see a major impact.
We spoke to countless women who told us that they felt something was off,
they felt different symptoms,
but they were not able to understand what was the root of those symptoms
and more importantly what to do about it.
So there's so many layers in terms of how that shifts the entire approach for women's
health and first being able to advocate for yourself, know that you're not crazy.
Many women told us unfortunately that they felt dismissed.
So seeing what's really happening biologically and what you
can do about it on a continuous basis is something that
can improve the health of millions of people on a daily basis but also prevent different
conditions for the long term.
There's also many correlations that you can look at in greater detail like progesterone
and sleep. What are
you learning on that front?
So there's many people that call progesterone the calming hormone. And when there's an imbalance
on that front, it does eventually lead to sleep disruptions. So being able again to
have access to this data can enable you to understand what those imbalances are and
how to intervene in order to come back to what is optimal.
Well, you're clearly going to be collecting an enormous amount of data here in a way that's
probably never been collected before. Are there any drawbacks to doing this for users? So for us, privacy is really at the core of our model.
Users own their data.
They're able to delete it at any moment from the platform
and can opt in or opt out of research.
So for us, that's very exciting because it enables us to,
yes, push science forward,
but in a way that puts privacy at the core.
And when will it be available?
Mainstream availability later this year, so coming soon.
Marina there from Eli Health, and she brought up lifestyle.
I think every interview I ever do on this goes back to lifestyle.
We talked about the link between progesterone and good sleep.
Sleep, of course, is key to good health and
well-being and we know that women are twice as likely as men to suffer from insomnia. There's a
huge market out there for devices and I've tested many over the years and from my non-scientific
experiments I have found that they've become a lot more accurate, but I think there is still some
difficulty in defining sleep stages. But people who use these devices, people who care about their
sleep, find it incredibly important and can even plan when they go to bed based on getting
the best sleep at the best time. In fact, I met a load of people in California who set
their alarm clocks to go to sleep rather than to wake up in the morning.
But how do these AI-powered products fit into our existing scientific knowledge on sleep habits?
Well, I'm joined now by Dr Lindsay Browning, sleep expert and chartered psychologist.
Firstly, why are women more likely to be insomniacs?
Well, a couple of reasons. First of all, hormonal fluctuations that women have that men don't
around the time of menstruation, getting pregnant, perimenopause, menopause, as well as potentially
excess stress through a burden of extra caring responsibilities that women tend to have fall
upon them.
So how might a test like the hormone one we've just heard about prove useful?
Well, the more information and data we can have, the better. With things like menopause and
prescribing HRT, because the fluctuation of hormones changes so much throughout the day,
it's really difficult to take a test at one point of time during the day and know how
someone's hormones are. So, continual testing can help find out our progesterone levels
in a much more accurate way if you test them throughout the day. And then we could use that potentially to give us indications
on better sleep, because we know that clearly progesterone, estrogen, they start to
decline around menopause, and women's sleep especially, around menopause,
starts to become significantly worse than it was pre-menopause.
Our body is all working as one
with lots of different factors playing into each other,
our health affecting our sleep,
our sleep affecting our health.
So how useful is it to be tracking it?
Are there points where we need more sleep
than we do at other times?
And how individual is it?
So how useful is this tracking, given all the variation? Well, sleep
trackers themselves, like wearable devices that can tell you how much sleep
you're getting, how quickly you're falling asleep and give a guide as to the
sleep stages, they have some significant benefits, but they also have some
drawbacks. So sleep trackers that use data that looks at your oxygen content
during the night and track your heart rate can pick up things such as
obstructive sleep apnea, which is a condition where people stop breathing repeatedly during the night, and it's often something that we don't know
is happening, and women especially,
post-menopause, are at a much greater risk of developing sleep apnea, and
sleep trackers that you wear during the night can sometimes pick up things like that. But sleep trackers can also cause great anxiety for people, and
it's almost like giving people too much information can not actually be very helpful sometimes.
And you can wake up feeling absolutely terrible and have a really high sleep score. It's very
annoying.
Yeah. So again, people should really trust how they feel when they wake up rather than
checking their phone and the phone saying, oh yes, your sleep score was 85%. Good job. We really
should be trusting our own bodies and how we feel because that's much more important
than what a generic app tells you.
And fatigue can be caused by a lot of other things. So just because you've had enough
sleep doesn't mean that you're going to feel great. Now, there's lots of conditions where
sleep changes as that condition's approaching.
Some experts even suggest that you might be able to see the onset of some diseases by
recognising them in sleep patterns. Pretty new research on this. But what do you think
of this concept?
Well, the more data that we can have, the better. We definitely have a link between
Parkinson's and REM sleep behaviour disorder, for example, and, as I mentioned already, the sleep apnea. These are conditions that, if we
can get insight into them before we realise we have a problem it enables us
to start treating those things so it's really the more data we can get the
better as long as we're not getting stressed out by it as long as the data
is of good quality. And we need to be getting enough of the right stages of
sleep. REM sleep is...?
Rapid eye movement. So that's the part of sleep where we tend to dream more frequently.
And yeah REM sleep means that our eyes move rapidly beneath our eyelids. We also have light
sleep and deep sleep. So everyone's sleep is made up of kind of light sleep, deep sleep and REM sleep.
And a lot of the tracking and the ideas of things we can do to sleep better are fine
if you're not an insomniac, don't work shifts and don't have young children. But obviously
for a lot of people, they don't have much choice about when they're going to be able
to sleep and the interruptions to it. What's your best advice for that?
Well sleep trackers can be helpful if you aren't really prioritising your sleep. So
for lots of people, we can't get enough sleep
because we're being woken up through the night,
and that's just something we have to put up with
in a particular season of our life, when we've got young children.
But if you are in control of your own schedule,
then sleep trackers can be a great way of reminding you,
you know what, you really should be going to bed
more like at 10.30, rather than scrolling through your phone,
through social media, and before you know it,
it's midnight, one in the morning,
and there's no way you can get the recommended seven to nine
hours sleep every night if you're not even trying to go to bed before midnight
and you have to be up at 6 a.m. for work. They also bring together all of your
data to see how your activity is influencing your sleep, your heart rate, so
much information. What do you think we might be heading towards that could
become even more useful with this? So that's great. When we're integrating data about food, exercise levels, we can have a look at how the changes we make during the day affect our sleep
and it pulls together a holistic look at our life and that can be a really good thing because lots of people tend to look at only one aspect
but really to sleep well, to live well, we need to be eating well, we need to be exercising regularly and getting enough good quality sleep.
They're all linked together and they all influence each other.
Lifestyle, that all comes back to lifestyle.
Madhumita and Nell are still here in the studio.
Madhumita, what do you make of this personalisation of how we look after
ourselves? Is there a risk here that we're just putting self-care behind a
paywall?
I think it's really interesting what Lindsay said about trusting yourself versus looking
at a score.
I think that actually the same thing can be applied to AI in lots of different contexts.
So I've interviewed social workers, for example, who have been asked to use AI systems that
score whether somebody should, for example, get a childcare benefit or have extra welfare
support. But, you know, they don't agree with it necessarily, but they kind of feel like
because the AI system has put the score on it, they should go with it. And you sort of
start to not trust your own instinct and or experience that you've built up over years.
And I find this a lot with technology generally, but with AI systems in particular, because of how in the public consciousness it's seen as a brain in some way or an intelligence
and in some ways more intelligent than human.
So I think that's, for me, kind of a big risk as AI systems diffuse into society
more into our sleep even, you know, into our children's lives that we start to rely less on ourselves,
on our instincts, experiences and sort of the human aspects that make us who we are
and start to be guided, you know, even for something as small as like a recommendation
for a song that you, you know, you're supposed to like, but maybe you don't really like it,
but yet your Spotify is making you...
Yes, we're doubting ourselves. It's the opposite of listening to your body. It's not listening
to yourself at all anymore. Now, at a healthcare policy level, we are often thinking, well,
we can collect all of this data. It'll be so useful when we take it to the doctor. But
how realistic is it that anyone's ever going to have time to look at it and that it's going
to play into our bigger picture of healthcare?
Yeah. So, you know, to move, as you were saying earlier, from this kind of national sickness
service to a national health service, prevention is really key to that. And key to prevention is
understanding yourself, understanding your body. And, you know, part of that could be tracking and
collecting a lot of this data. But you're right that sometimes actually it just gives us more data
than we know what to do with. We realistically don't have the time to understand it.
And what we know is that in order
to be able to use data in the right way
and use it to get healthier, it's
not just about having digital literacy
and working the device.
It's about having the right health literacy.
When people are looking at that data,
do they understand what it is telling them?
Do they understand what they need to do?
Or are we just driving the worried well to GPs?
So policymakers really need to think about
not just getting the devices in people's hands,
but helping them really understand what it is telling them
so they can make good decisions about their care.
Yes, to avoid the healthiest people doing the most tracking
and actually to make sure we democratise
the ability to track.
We have a listener comment here.
Fiona says, my query is, who is programming the AI?
If it's mainly men, as I fear it will be,
then forget any progress.
It will be business as usual.
Please disabuse me of this fear.
I'd be so glad to hear that we had sufficient female programmers
to make it a worthwhile tool and reduce the ignorance
around female health issues.
I have ME, which is woefully misunderstood,
even by health professionals in my experience. Well, which one of you would like to take this? I think it might be one
for you, Madhumita.
Yeah, I mean, I'm sorry, I can't disabuse her of that notion because it is true that,
you know, in general, there is a disparity, a huge disparity, in technology companies. It's mostly men who work in the sort of engineering
and core coding parts of tech companies
and now AI companies.
And so yes, there is an imbalance when it comes to
who's coming up with the ideas for what products to build,
how those products are being built.
And so far we've talked a lot about data
and the quality and type of data and how that affects the outcomes. But actually another really important part
of these systems is also the decisions made by the people building them, as the listener
says. And the decisions they make about what types of solutions or even which problems
to solve, which diseases should have the most focus on them or how they should be approached can be very much influenced by, you know, their sort of
male way of thinking about things. And so I think it's so important, you know, not
just because of these kind of DEI reasons of we need diversity and we don't know
why, but, you know, to improve the quality of the AI, to have more diversity,
particularly in gender and socioeconomic background.
Great. Madhumita, Nell, Lindsay and to all my other guests, thank you very much.
And tomorrow, Kylie Pentelow speaks to the Oscar nominated actor, Lily Gladstone. She's the first
Native American woman to be nominated for a Best Actress Academy Award and the first Indigenous
woman to win a Best Actress Golden Globe, both for her role as Mollie Burkhart
in Killers of the Flower Moon. Now she's starring in the romantic comedy The Wedding
Banquet. Don't miss that at 10 o'clock tomorrow.
That's all for today's Woman's Hour. Do join us again next time.
Hi, I'm Izzy Judd. Have you actually breathed properly yet today? If things are a bit hectic at the moment,
if you're struggling to switch off from work,
or if you're generally just feeling a bit stuck in life,
I've got just the thing for you.
Join me for the Music and Meditation podcast
on BBC Sounds and Radio 3 Unwind.
It's a place where we press pause
with the help of some inspirational guests,
wonderful guided meditations and stunning music.
Honestly, I think you'll love it, so why not give it a go?