Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x19: How AI Can Help in Medical Care and Pain Management with Sara E. Berger
Episode Date: January 25, 2022

Among many surprising applications, AI can be used for pain management and medical care. Neuroscientist Sara E. Berger of IBM joins Chris Grundemann and Stephen Foskett to discuss applications of machine learning in medical care. Pain management is a deeply personal field, but there are so many different data points that it can be difficult to see patterns that lead to positive outcomes. Machine learning can assist in sorting and selecting treatments, bringing in different sensors and data types to help patients. The more we see pain through a multidisciplinary lens, and the more understanding we bring, the better the outcome for patients.

Three Questions:
Chris: How small can ML get? Will we have ML-powered household appliances? Toys? Disposable devices?
Stephen: Is it possible to create a truly unbiased AI?
Sriram Chandrasekaran, Professor at the University of Michigan: What do you think is the biggest AI technology that you can think of that will transform medicine in the future?

Guests and Hosts:
Sara E. Berger, Research Staff Member at IBM and Neuroscientist. Connect with Sara on LinkedIn.
Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris at ChrisGrundemann.com or on Twitter at @ChrisGrundemann.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Tags: @IBM, @SFoskett, @ChrisGrundemann
Transcript
I'm Stephen Foskett.
I'm Chris Grundemann.
And this is the Utilizing AI podcast.
Welcome to Utilizing AI, the podcast about applications for machine learning, deep learning,
data science, and other artificial intelligence topics.
Chris, every time we have this podcast, we're trying to bring in different people who
can come up with different ideas, different angles. And one of the things that I enjoy most is when we can
have people come in and talk about practical applications and maybe surprising applications
for machine learning and AI. And I think that's what we've got in store for you today.
I think so. Yeah, we're definitely taking a little bit different tack than we have in the past
and really diving into kind of the medical world and the world of pain management, which is
going to bring wearables, other environmental sensors, and then interpreting that through
machine learning to get a better bead on how people are feeling and how people are progressing.
So I'm super excited to talk to our guest today. Yeah, so thanks for that. And thanks for suggesting our guest.
So we have Sara E. Berger, a PhD and research staff member at IBM.
Sara, thanks for joining us.
Yeah, it's lovely to be here.
Yeah, like you said, I'm a research staff member, a neuroscientist by training.
I've been at IBM Research for about five years now.
And I'm doing work at the intersection
of neuroscience, digital health, and then also good tech, broadly speaking.
It was great to get to know you because you come to machine learning and AI not as one
of us nerds, but as a neuroscientist, as somebody who is deeply interested in helping people
and helping people specifically with managing pain.
How did you get here? Well, I should clarify that I'm also a fellow nerd. I just came at it
from a different angle. I was a precocious kid and I was the kid that knew I wanted to do
neuroscience when I was like in my teens. I applied for an undergraduate degree in neuroscience, like early decision.
I knew that this is what I wanted to do. I would say in part because I had a lot of
family history that led me to neuroscience. So my grandfather had Alzheimer's. I saw that when
I was really young, made me really interested in like memory and cognition. My mother came out to me as a lesbian when I was also really young. And so I
was really interested in like emotions and love and identity. And these things all kind of led
me to neuroscience as a field. But, you know, pain was both personal to me from a research standpoint and also kind of serendipitous.
So all three of my parents have some form of chronic pain or have had some form of long-term pain.
And so I saw that growing up and I saw the struggles they had.
I saw how they coped with it.
I saw how they identified or didn't identify with their struggle. So that was
something that I was like, I really want to change this. I want to understand this more.
But also in my undergraduate degree, the neuroscience program there, the two neuroscientists
at Macalester actually were both studying pain from different perspectives, one from an animal perspective and one from a human perspective. And so I was able to actually kind of jumpstart my research
in undergrad through the lens of pain and multidisciplinary even.
Wow. Yeah, I was going to have the same reaction as soon as Stephen said that. I said,
neuroscience sounds pretty nerdy to me, maybe not IT nerdy, but definitely a domain of nerds.
So that's awesome. It makes a ton of sense, right? With those family histories kind of bringing you
into this field and that, and fueling that, that kind of passion from a young age, where did
neuroscience and machine learning, artificial intelligence intersect? I mean, it's just,
is it just that you're at IBM and they're throwing Watson against everything or was there something more personal? Yeah. So first of all, going back again to my undergrad, it was a very interdisciplinary major.
So it introduced me to biology, psychology, chemistry, AI and coding.
That was my first experience with that link, that computational neuroscience aspect
and also philosophy. So I'm just bringing that up because it was very clear that that was kind of
where the field would go in one arm. But then in grad school, we weren't really using machine
learning. It wasn't until probably my second to last year where that started becoming
more part of the field. And so it was like, if you were going to continue down this with
complicated data, you need to learn how to do this, right? So a couple of colleagues and
I started learning it. We started, you know, basically teaching ourselves as much as we could. And that ended up leading me
to IBM, getting an interview, getting a job offer, this type of thing. So
I'm coming at it from a self-taught lens, but also from
a practical lens, because I saw that this is where at least my field was going.
Yeah. Well, I stand corrected because that is a classic nerd journey. I think many of us had maybe different stops along the way, but that sounds an awful lot like how I got into
computers as well. It was self-taught and personal interests. So how exactly can machine learning be used specifically with pain management?
Where's the intersection? Oh, there are so many. Where isn't there? So I want to first say
that pain management, in my opinion, should always be personal. There are so many people
out there focused on automating things, and I really feel like interpersonal
relationships in pain management are critical. So I don't want to say that AI should
take over that or anything. But there's so much data, right? And there are so many different kinds
of data that it really is hard to find patterns in people's experiences, their sensations, their perceptions, their needs. And so
what we're using AI for is really uncovering some of these hidden states of people and doing so by
mixing different kinds of data. So a lot of times when you think of pain, you think of a number from
one to 10, or you think of maybe one of those weird, you know, smiley face scales that in my opinion, don't mean anything, but, you know,
and these are important, right? This, this subjective experience of pain is important,
but it's only one aspect, right? It's often only intensity, it's only location. But what does it
mean for pain to affect quality of life, right? So sleeping habits, your mood, your mobility, your sociability.
So if you can start using other sensors or start using other ways of exploring that,
and you can mix the data together to uncover new insights,
that's where AI is really taking us, at least in this domain.
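As a rough sketch of what mixing different kinds of data might look like computationally, here is a hypothetical example (not from the episode): standardizing several pain-diary modalities and clustering patients into subgroups. All feature names, values, and the choice of three clusters are invented for illustration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical multimodal pain diary: one row per patient-week.
rng = np.random.default_rng(0)
data = np.column_stack([
    rng.uniform(0, 10, 60),       # self-reported pain intensity (0-10)
    rng.normal(6.5, 1.5, 60),     # hours slept
    rng.uniform(0, 10, 60),       # mood score (0-10)
    rng.normal(4000, 1500, 60),   # daily step count (mobility)
])

# Put every modality on a common scale first, so that step counts
# don't dominate the 0-10 ratings when the features are mixed.
features = StandardScaler().fit_transform(data)

# Unsupervised grouping can surface patient subgroups ("hidden states")
# that no single measure reveals on its own.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels[:10])
```

In a real study the features would come from wearables and questionnaires rather than a random generator, and the number of subgroups would be chosen by validation rather than fixed at three.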
Yeah, and that's, I think, really the key to so many applications of machine learning is uncovering patterns that would be difficult for a human to see because there's either
too much data or too varied data or personal biases that might stand in the way of seeing
the trend in there.
And I think that that's what you're saying here, that machine learning can help do that.
Yeah, again, and help is the key word.
I think too, and this is not necessarily machine learning, but it's more like the digitalization
of things. This is something that I talk a lot about: there are so many people who can't
participate in a clinical trial, you know, today and even before, right?
There are so many barriers to doing that, whether it's educational barriers or
mistrust, or barriers such as: I don't have time,
I can't afford to go to the clinic to do this,
I don't have insurance. There are a lot of different things that keep people from going
inside of a clinic. And so one element of this, and there's
plenty of nuance in this statement, is that by giving people access to things at home and
you know, allowing people to have sensors on themselves, allowing them to answer from a
smartphone and integrating that, it also in some ways democratizes part of this process, right?
It gives people a way to interact with the clinical environment in ways that maybe they couldn't
have otherwise. And also, there's so much literature on this, but when you interact
in a clinic, right, your pain perception, your memory of your pain, it changes,
right? It becomes significantly different than the pain
that you might have measured or mentioned at home. And that's where you mostly are, right? In
your natural environment, that's what we want to study. So I think too, by being able to digitalize
some of these things and also collect them in different environments that people are normally at, it also exposes more, I'm not going to say more truth, but exposes more of the experience people
have on a day-to-day basis. And that's, so it's more than just AI. It's also about like digital
tech, you know, digital environments and such. Yeah, that reminds me of a story that has
nothing to do with pain or AI, maybe,
but I think is in line.
I use an Oura ring to track my sleep.
And as the COVID pandemic swept across the world, there was a research study.
I won't even mention which university
was running the research study, because I forget, but basically I was able to opt in and allow my data to
be used by this research study. The ring tracks movement and temperature and heart rate and all these things. And so then they had a big sample size and were able to watch what COVID actually did in that data. I think that was really interesting. It definitely opened my eyes to the idea that we could be combining these kind of, you know, sometimes frivolous
digital accoutrements into a really more serious and useful medical context.
Yeah. And you can imagine that, again, we know that the medical system is biased.
We know that there are a lot of problems with this, especially for pain management. But you can also help people collect data.
And again, I keep stepping back a little bit, because pain is
often hidden, right? It's often a hidden difference or disability. And so I don't want to say that you
need to objectify something or
visualize something in order to legitimize it. I really don't believe that at all,
but I view it as a tool to help patients show their physicians, their care providers,
their loved ones, all the things that they're experiencing. Because it's so hard to explain,
right? It's so hard to explain that you were totally exhausted
and you're in eight-out-of-10 pain and you don't feel like seeing your
grandkids, for example. But if you can actually go,
this is what this is like over time for me, again, I view it as a way
of communicating to some extent. What you just said also reminded me,
too, of something that we did at the start of COVID. So at IBM, we were working, actually,
we still are working, with the medical device company Boston Scientific. So we have a joint research
agreement with them. And we had been, as part of our study, doing this kind of digital health monitoring and data collection. Because, one, people couldn't come in to
get their device calibrated. I should say they do spinal cord stimulation, and people couldn't come in
to get their device calibrated. Some people, because it was a voluntary operation,
couldn't get their device at all. And so we already had the infrastructure in place to pivot,
which I thought was really important. And we, you know, we could say, okay, we're already collecting all of this information
from people. How can we use that system to ask them questions about how they're doing? And then
how can we actually see how they're doing in the pandemic? And, you know, one of the things that,
that we found out was, you know, you might think because people don't have access to pain
medications, maybe they're delayed in getting
stimulation parameters changed. Maybe they can't get their surgery, right? You might have a very
grim picture of, you know, pain management in this case, but what we found through analyzing
all these different sensor data and subjective data was that there were multiple different kinds
of patients, right? You could subgroup them. And some patients actually got better during the
pandemic, because they were spending more time with their loved ones, because maybe they weren't moving
around as much, or they were moving more: some ended up gardening more, doing all these
things that affected their quality of life. So it was through data that
they were actually able to say, look, this is not a one-size-fits-all narrative, right?
This actually has a differential impact on people. Yeah. People are just incredibly complex. And as you point out, there's the sort of physiological,
but there's also the psychological aspects.
And it's very, very difficult to really kind of see the whole picture when you're dealing
with a patient that may have a disease or may have stress in their lives or may have
pain that they can't even describe adequately.
And yet maybe, as you say, maybe we can have technology
help us to sort of break down those barriers. Also, as Chris is saying, too, I'm really excited
about the idea of the advent of sensors and data as a way to discover things that are hidden from
us. So, you know, like Chris was pointing out with,
you know, wearable devices and so on, we're collecting more data, we're collecting more
data over time. And for you as a researcher, that must be pretty exciting too, because
you can go back in time in some cases, or you can go sort of sideways and look at different
data types in order to identify things that you
never would have been able to track before the advent of these digital sensors, right?
Well, yeah, yes and no, actually. So I think, you know, for some of these, you know, we have
wearables like, you know, watches and heart rate monitors and these types of things. But other
things have always been there. And we as humans have
intuitively known about them, but we haven't really considered a sensor. So voice is a really
good example of how, as I'm talking, right, my intonation changes. Somebody who didn't have any
sort of visual cues here could tell that I'm excited about something, that I care about something.
You can hear all of this, but how do you quantify that, right? And I think people do this all the time.
I mean, this is what a psychiatrist does in their office, for example. We
really think that language is often a window into the mind. And one of the things that
I've been working on a lot is how do we understand what somebody says and how they say it and how
that affects how they view their own pain? And can it say something about how they're going to
respond to treatment, their likelihood of, you know, staying in a trial, this type of thing. And so capturing like the
acoustic properties, the content properties of speech as a sensor is also something that, again,
it's not really novel. We do it all the time naturally, but like, how do you actually put
that into a tool, for example? And also, how do you do it in a way that is robust to the diversity of humanity and human
voices, right? Whether it's dialects, whether it's accents, whether it's different
genders, different ages, you know. So it's a complex problem in and of itself,
technically, but then you can apply it to different patient populations and
it becomes really interesting. I'm sure. And that kind of reminds me or brings me to the thought of
just wanting to understand a little bit more about how you interact with machine learning,
kind of almost on like on a personal kind of day-to-day level. I mean, obviously you're
coming to this as the domain expert, you're the pain expert, you're the neuroscientist. And I assume that means you're working with data scientists
and folks who are building these machine learning models and training them. What does that interaction
look like and how does that take place? Yeah. So I can only speak on behalf of like my previous
team, cause I've switched roles relatively recently. But, you know, for us,
so I had training in machine learning, again, self-taught, but on a day-to-day basis, I can do
some of those model runnings as well. But our teams are very interdisciplinary, especially when we work
with clients, because we want to be able to bring as much different kinds of expertise as we can.
So we have the subject matter experts, we have the
clinical experts, you know, who work with the patients, we have the technical experts who work
with the devices, all in the same room. We have the data scientists, we have engineers,
we have mathematicians. So it's very much this
interdisciplinary team that then
goes and says, okay, like, here's this model. Did you think about this cognitive aspect? How would
you build that in, you know, to this, or, you know, here are the, here are the types of variables
that I think that we've collected that I think you should use in this model. So it's like a give and
take, I guess. But sometimes it's also me running, you know, the machine learning
myself as well. So it totally depends on the amount of data and the amount of time and all the things
that you're working on at a given point. Absolutely. So reflecting back on that experience,
and obviously knowing that you are a little unique in that, you know, not all the time as a domain
expert, you're going to also be able to dig into the data science yourself. But even so, I wonder if you have any advice, either
things to avoid or things to definitely focus on, when working across a team like
that and bringing machine learning into various fields. Because I think this
is something that a lot of enterprises are facing right now when they're going down the path of, okay,
we want to introduce machine learning, artificial intelligence into something that we're doing in our digital transformation or our interaction with customers or our product development.
And we see on this show quite often that there tends to be a disconnect.
Sometimes data scientists kind of live off in one group over here and the people who actually know what's going on are in this other group over here and they're supposed to be kind of working together but they're
not really. So that's why I want to dive into that, since it sounds like you've done that successfully.
Any lessons you learned or things you could pass along? Well, I mean, this is what IBM
does. And again, this speaks to the leadership of the team, not
necessarily me, because I was just a player in this.
But I mean, we have so many projects, whether that's internal, external clients, partnerships, that we mix people together all the time.
And it is absolutely essential to have that from the start.
I think having a vision of where you want to go, being in the same room.
We had multiple team meetings a week to establish those connections. I think, you know, what you're getting at here is something that
is really frustrating for a lot of my colleagues who are in academia who don't necessarily do
machine learning. Maybe they do traditional, you know, neuroscience research, whatever that
even means. Actually, I don't know what traditional neuroscience research means. But oftentimes you have so much data
that you can get a model with 90% accuracy, you know, hooray. But
the variables maybe don't make any sort of sense, right? That doesn't mean that they're not
real, but it defies any sort of logic if you can't explain it to another
subject matter expert or a clinician or a patient. So it's kind of like, what is the
utility here? But then also, you know, it's often, again, going back to explainability,
especially in a clinical realm, physicians need to understand how you got that decision, right?
They're not going to trust, you know,
some sort of opaque box that is often machine learning, right? They need to see it. They need
to see how it affects their decision-making, their, you know, their clinical knowledge that
they've had, because they're also, you know, again, we're humans and we are naturally pattern
identifiers. So if a machine learning algorithm puts something new out there,
they have to see that it's robust before they want to use it on their
patient population. So I think even getting, you know, multiple stakeholders too. So people who
maybe aren't necessarily creating or behind the scenes, but people who are going to use your tech
or be affected by your tech is, again, absolutely essential.
And I feel really lucky that the project that I was on for four years allowed me to have
that environment where I got multiple perspectives multiple times a week, right?
So that disconnect can be avoided from the start if you actually have people talking to
each other, a shared vision. And that sounds cheesy, and again,
it's a leadership thing, but that's what worked for us. I wonder, do you have any stories to share
of results and success of your work, maybe with specific people even?
Right now, I wish I did. We have a couple publications that are being submitted. We have a
lot of, I guess, let me backtrack. With the Boston Scientific Project, a lot of things are underway.
So we have a lot of conference presentations and these types of things. And we have a publication that is
being submitted hopefully next week. So I can't really comment on that yet because of the nature
of that relationship. But in terms of previous successes, there was one project that I did in grad school that essentially used
language,
quantitative language analysis, in patients to quantify and predict who was going
to be a placebo responder and who wasn't.
And we have since actually taken this model and applied
it to a totally unseen, brand-new cohort
of people. We actually reduced the amount of data considerably, so it went from a 45-minute
interview to like a five-minute interview, and we actually replicated it. So there's this
really cool signal in voice that can actually capture and predict, with relatively high
accuracy, placebo response. Which for me, I think, is really cool, because
one, placebo, for me, shows the power of our own mind, right? And I love
that somebody, through the belief that something is going to work for them,
can actually better themselves. So I love that. But I also think that it's really powerful
to be able to know ahead of time, who's going to respond to something like this, because then you
can avoid giving people medications that might have negative side effects, that might
have addictive properties, these types of things. You could also theoretically make clinical trials
more accurate, because if you knew ahead of time who was going to be a responder or non-responder,
you could make sure that was well balanced between your different arms, right? You could
try to counter expectation, to counter the placebo effect, in both or multiple arms of the
clinical trial.
So for me, even though it's not one of my most cited papers, it's probably
one of my favorites, because I think I was one of the first people to say,
well, let's try this with pain patients. And the fact that it worked, and now it's working again in a separate group,
I'm really excited about it, because physicians talk to their patients
all the time. So if you could have a device that recorded a conversation,
automatically analyzed somebody's voice, and then gave some
sort of likelihood of responding to a placebo in a doctor visit, I don't know,
I think there's so much potential there. Yeah, that sounds like it could be hugely powerful.
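The voice-analysis idea can be sketched in code. Nothing below reflects the actual IBM or Boston Scientific models: the two acoustic features, the synthetic clips, and the responder labels are all placeholders, used only to show the shape of such a pipeline, extracting features per recording and fitting a classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def acoustic_features(signal):
    """Two toy acoustic features: RMS energy and zero-crossing rate."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    return np.array([rms, zcr])

rng = np.random.default_rng(1)
# Synthetic stand-ins for short interview recordings (1 s at 16 kHz).
clips = [rng.normal(0, 0.1 + 0.05 * (i % 4), 16_000) for i in range(40)]
X = np.array([acoustic_features(c) for c in clips])
y = np.tile([0, 1], 20)  # placeholder responder / non-responder labels

# A linear classifier keeps the decision explainable: each feature's
# weight shows how it pushes the predicted placebo-response likelihood.
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])  # predicted response likelihoods
```

A real system would use far richer acoustic and content features and, as the conversation stresses, would need validation across dialects, accents, genders, and ages before clinical use.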
You know, one of the reasons I suggested you as a
guest on this podcast was I saw you featured in MIT Technology Review as one of the 35
Innovators Under 35.
And there was a quote you had in that piece that I'd like to ask you about.
You said pain isn't linear and our assessment of it shouldn't be either.
And at first glance, that seems like
a really simple statement and kind of makes sense. But then as I think about it more, I'm not sure I
totally understand. So would you mind explaining what you mean when you say pain isn't linear?
Yeah. And then how we can respond to a nonlinear feeling, I guess. Yeah. So the quote, and this is my bad, but the quote should have been: pain isn't linear or unidimensional, and our measurement of it shouldn't be either. So I think we've kind of touched on the latter part, the unidimensionality. It's not just a sensation. It's not just an intensity. It's not just a location. Pain is a complex perception that has sensations, emotions, memory, all these different
elements, right? So the first thing is to say, okay, we need to make sure that we're
measuring more than just one thing, right? That's where you get the sleep quality, that's
where you get mood, you get mobility, you get all these other things, right? And mixing them together
to get a more holistic view of the patient. The linearity part is, I think, super interesting. You know, we tend to put
things on linear scales, one to 10, and in reality we're seeing all
the time that our perception of pain is not linear, meaning that I can give you a stimulus that ramps up, right? I can give
you some heat that goes up one ramp and you might respond exponentially to that, right? If I continue
giving you that heat, you could have sensitization. We experience
this all the time in the summer:
if you get sunburned,
you have tactile allodynia from the sun.
Normally a touch doesn't hurt you, but once you get sunburned, the
slightest touch on your arm, which shouldn't hurt, is exponentially
more intense, right? Another example would be even with pain relief; there's something called
offset analgesia. So you're giving a painful stimulus, for example, and you
offset it just a little bit during the stimulus. Let's say
it's an eight out of 10 and you go to a 7.5 out of 10. If pain were linear,
you would expect your pain to go down just slightly. Instead, you see something totally
opposite: it goes down a lot, right? So how our brains actually combine the signals that we come to call pain is not at all linear.
And so you can start figuring out how to model that from a machine learning perspective,
right? You can use nonlinear models. You can start thinking about cognition, about things
like the peak-end rule. I don't know if either of you are familiar with that, from Kahneman.
You can start saying, okay, if I know what this person's worst pain was
in the last week and I know their most recent pain, can I incorporate that into my model?
Right? So there are all these different dynamics you can start thinking about when you know that
pain isn't just a notch on a linear scale. Does that answer your question?
Absolutely. No, that was amazing.
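The peak-end idea above translates naturally into feature engineering. The sketch below is a hypothetical illustration, not anything from the work described in the episode: summarizing a series of pain ratings by its worst and most recent values, which could then feed a model alongside other features.

```python
import numpy as np

def peak_end_summary(pain_ratings):
    """Summarize pain ratings the way memory tends to, per the peak-end
    rule: remembered pain tracks the worst moment and the final moment,
    not the duration-weighted average."""
    ratings = np.asarray(pain_ratings, dtype=float)
    peak = float(ratings.max())
    end = float(ratings[-1])
    remembered = (peak + end) / 2  # simple peak-end estimate
    return peak, end, remembered

# Hypothetical daily 0-10 ratings over one week's check-ins.
print(peak_end_summary([3, 4, 8, 6, 2]))  # prints (8.0, 2.0, 5.0)
```

Note how the duration-weighted mean of these ratings (4.6) differs from the peak-end estimate (5.0); features like these are one concrete way to encode the nonlinearity discussed above.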
Okay. So what do you think, just to kind of sum up, what makes you optimistic? What makes you excited?
What gets you up in the morning about this field? And what are we going to see in the near future?
I think more and more people are starting to see pain in this multidisciplinary lens. So
the International Association for the Study of Pain,
IASP, like a couple of years ago, actually changed their definition and it talks about
all of these things, right? And it also highlights things we haven't touched on,
which is that pain is also cultural, and that you have to trust the subjective experience
of the patient, you know? And I think
10 years ago, when I was just starting in this field, I didn't see that at all. So the
fact that we're now starting to understand this multi-dimensional aspect of pain,
I think, is going to help patients a lot more, right? It's going to help inform treatments more.
It's going to change doctor-patient relationships more. So I'm really excited about where this is
going. I mean, I wish I had a crystal ball. I get asked that all the time. And, you know,
this is not a fun answer, but I'm going to say I don't know, because I'm constantly surprised. I really am. And I also am tentatively
optimistic. But I can also see where some of this goes wrong, because people aren't
getting the right people in the door and at the table. So I can see where we
end up with digital health and a new infrastructure
that doesn't work for people, because our healthcare system is still broken. So it
doesn't matter that it's digital, right? That's why I say I don't know where this
is going, but I think there's a lot of promise. Wow. Well, thank you so much for
joining us and thank you so much for sharing the nuance and this really exciting field with our listeners.
We've now reached the part of our podcast where we shift gears and ask our guests three questions.
This tradition started in season two and we're carrying it through here.
Note to the listeners: our guest has not been prepped for these questions ahead of time, so you're going to get some off-the-cuff answers and hopefully a little personality as well.
We're also changing things up a little bit this season, in that I will ask a question and our co-host will ask a question, but a third question comes from a previous podcast guest. So Chris, why don't you go first?
Sure. So Sara, we talked a little bit today about sensors and the intersection of pulling that data into machine learning.
And I wonder what you think is the smallest that machine learning can get.
And what I mean is, will we have, you know, ML powered household appliances?
Will we put machine learning and AI into toys?
What about even disposable
objects with machine learning embedded in them?
That's an interesting question. Well, don't we already have that, to a certain extent? That's what I find interesting about this question: machine learning can happen in the cloud, and if we have the Internet of Things, don't we already have ML-powered appliances? So I would say that's a given. But if I'm being hypothetical with this, let me actually turn the question back (I don't know if people turn the question back on you): what should we have machine learning in, and what shouldn't we? You say toys, and I can see the potential there, but at what point do we not need it? At what point do we need to make sure that we have humans solving human problems and not relying so much on this? So as for the smallest, it's already here; it's already attached to everything that we're doing, whether we know it or not. I don't know if I can say smallest, but I think we're already inundated with it. So for me, I'm asking: where can we have less of it? Which is ironic, because I'm a technologist, but at the same time, on my smartphone, for example, I have something monitoring my time and making sure I don't go on it as much. So I don't know.
Well, yeah, that's a really interesting point. And I wonder if maybe that's a question we should be asking as well: what are the limits of the technology?
So my question is: is it possible to create a truly unbiased AI?
No, hands down, absolutely not. At the end of the day, at least with how we conceive of AI, machine learning always starts with people and ends with people. And people are always biased. The solutions that we put out into the real world are always going to have nuances; they're always going to have bias. And sometimes bias is good. Sometimes things need to be biased. In the pain literature, for example, more women suffer from chronic pain than men, so your data should, in general, be biased towards women. Certain people are more likely to have a certain disease or a certain symptom than other people. So with this whole concentration on bias, we need to be aware of it, and we need to practice de-biasing where we can, but that doesn't solve how the bias got there in the first place, and it doesn't solve how it's biased afterwards. In my opinion, it's always going to be there. It's: how do we account for it? How do we make it better?
Yeah.
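The point about being aware of bias and de-biasing where you can has a common concrete counterpart in practice: reweighting samples so an under-represented group isn't drowned out during training. This is an illustrative sketch from the editor, not a method described in the episode; the cohort labels are hypothetical.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to the
    frequency of its group, so under-represented groups contribute
    as much total weight to training as over-represented ones."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Weight = n / (k * count); a group at exactly average frequency gets 1.0.
    return [n / (k * counts[g]) for g in groups]

# Hypothetical cohort skewed towards women, mirroring the higher
# prevalence of chronic pain among women mentioned above.
cohort = ["F", "F", "F", "M"]
weights = inverse_frequency_weights(cohort)
```

Note this only rebalances the data you have; as the episode stresses, it does not explain how the bias got into the data in the first place.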
And now, as promised, we've got a question from a previous guest. Sriram Chandrasekaran is a professor at the University of Michigan who works on AI in healthcare. Sriram, take it away.
Hi, I'm Sriram Chandrasekaran. I'm a professor at the University
of Michigan. I work on AI and healthcare. And my question is, what do you think is the biggest AI
technology that you can think of that will transform medicine in the future?
So again, AI is in so many things and is driving a lot of things, so that's where I'm coming from when I say this: I actually think that neurotechnology powered by AI is going to change a lot of what we see. And when I say neurotech, I'm using it very broadly to mean anything that collects, interprets, infers from, or even modifies neurological data or the nervous system itself. So it could be things like spinal cord stimulation or deep brain stimulation, but it can also be direct-to-consumer products, things that are either on the market today or coming. We're already integrated indirectly with artificial intelligence, but through neurotechnology it's going to be much more of a direct interaction. And just the potential to change and affect neurological diseases in the healthcare space is going to be phenomenal. I'm really excited about that. We'll see how it happens in the consumer space; I think there's a lot more work to be done there, but again, I think it holds a lot of promise.
Excellent. Well, thank you so much for joining
us. It's been wonderful speaking with you and learning from you here on Utilizing AI.
We do look forward to what your question might be for a future guest. And if our listeners want
to play the game and be part of this, you can just send an email to host at utilizingai.com
and we'll record your question. So Sarah, thank you so much for joining us today. Where can people
connect with you and follow your thoughts on AI and other topics? Yeah, I actually don't use Twitter.
So I guess probably LinkedIn. I'm pretty easy to find. And I'm always open to people shooting me an email.
I will talk to anybody.
So you can email me at sara.e.berger@ibm.com. That's Sara, E as in elephant, Berger, B-E-R-G-E-R.
Excellent.
Well, thank you so much.
How about you, Chris?
What's new with you?
Yeah, lots of new things.
Everything can be found on my website,
chrisgrundemann.com, or we can chat on Twitter at @ChrisGrundemann. And also I love a good
conversation on LinkedIn as well.
Excellent. And as for me, you can find me at @SFoskett on most social media sites, as well as on LinkedIn and Twitter. Also, I will call out that we have set a date for AI Field Day 3. Officially, it is May
18th through 20th for AI Field Day 3, so please do keep an eye on techfieldday.com or the Tech
Field Day social media channels for more news about AI Field Day. Thank you for listening to
Utilizing AI. If you enjoyed this discussion, please do subscribe in your favorite podcast application
or on YouTube and give us a rating and review if you can, since that does help. The podcast was
brought to you by gestaltit.com, your home for IT coverage from across the enterprise. For show
notes and more episodes, go to utilizing-ai.com or find us on Twitter at utilizing underscore AI.
Thanks, and we'll see you next time.