Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x06: Fighting Nightmare Bacteria Using AI with Sriram Chandrasekaran
Episode Date: October 12, 2021
It is sometimes hard to see how AI technology benefits society, but applications like drug discovery really bring the power home. Sriram Chandrasekaran, Assistant Professor of Biomedical Engineering at the University of Michigan, is using machine learning to assess the properties of drug candidates to fight antibiotic-resistant bacteria. Presented with millions of different potential drugs, machine learning can identify the few most useful to be tested clinically. Because it tries everything and anything without preconceived biases, ML can uncover novel combinations that researchers might never notice. We also discuss specifics of the AI environment, including the preference for random forests over deep learning, privacy concerns, bias in datasets, and the interplay between domain expertise and data science.
Three Questions
Stephen's Question: Can you think of an application for ML that has not yet been rolled out but will make a major impact in the future?
Chris's Question: How small can ML get? Will we have ML-powered household appliances? Toys? Disposable devices?
Zach DeMeyer, Gestalt IT: What's the most innovative use of AI you've seen in the real world?
Guests and Hosts
Sriram Chandrasekaran, Assistant Professor of Biomedical Engineering at University of Michigan. Connect with Sriram on LinkedIn or on Twitter at @sriram_lab. You can also email Sriram at csriram@umich.edu.
Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris on ChrisGrundemann.com or on Twitter at @ChrisGrundemann.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.
Date: 10/12/2021
Tags: @sriram_lab, @SFoskett, @ChrisGrundemann
Transcript
I'm Stephen Foskett.
I'm Chris Grundemann.
And this is the Utilizing AI podcast.
Welcome to another episode of Utilizing AI,
the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics.
Each time we meet, we discuss how AI comes to the enterprise.
And one thing that occurs to me as somebody in enterprise IT is that often enterprise people don't really understand what the products and projects that they're supporting are useful for.
But sometimes those things that we're supporting, sometimes those AI applications are really doing great things for society.
Chris, what do you think about this?
There seems to be a disconnect between the practitioners of AI and the people that are
actually using it.
Yeah, well, there's two aspects of this, I think, Stephen.
One is what we've talked about before in the podcast, which is having domain experts with
the data scientists and other IT pros, to really be able to understand and make sure you eliminate bias and treat data the right ways and things like that.
And the other one is just the heads down work of IT operations and how it's sometimes easy to lose sight or maybe not even know what the company you're working for even does,
or especially the people who are using the technology that you're enabling when there's a couple layers removed, right?
If you're working at a service provider or something like that.
Yeah.
And when you came to me and said, you know,
hey, I've got somebody who's using AI for something that's really cool and practical.
I was like, yes, yes, let's do this.
Let's kind of inspire the audience by talking about the potential benefits
of AI and ML applications
in a way that benefits all of society.
And that's why I want to introduce everybody to our guest today.
We've got Sriram Chandrasekaran, who is, well, I'm going to let you speak for yourself.
Sriram, introduce yourself and tell us what you're doing with AI.
Sriram Chandrasekaran, Ph.D.: Thanks, Stephen and Chris, for having me here.
So I'm Sriram Chandrasekaran. I'm a professor at the University of Michigan in Ann Arbor,
and my lab uses AI for drug discovery and healthcare applications. So I'm really excited
to join you guys today. Yeah, as you can see from the title of this episode, what they're doing with AI and ML is basically fighting
off the nightmare bacteria that are attacking all of us. And, you know, as somebody who has
benefited certainly from the development of new drugs and antibiotics and technologies, I mean,
frankly, all of us have. It's exciting to think about this.
So maybe tell us a little bit more about the problem that you are attacking with AI.
What's going on with these nightmare bacteria?
Sure thing, yeah.
So, you know, antibiotics, as you said, are one of the greatest discoveries in modern
science.
And everyone's taken antibiotics.
And, you know, for the past 50 to 100 years, billions of lives have been saved due to antibiotics.
But unfortunately, the pace of discovering the antibiotics has slowed down and there's
a rapid spread of bacteria that are resistant to currently used antibiotics.
So what I mean by resistant is that
usually when you give an antibiotic to a bacteria, it dies in a few days and the infection clears up.
But recently the bacteria have evolved resistance to these antibiotics and what I mean by that is
that they can survive and grow even when there are antibiotics around.
So then the infection no longer clears and it could once again result in loss of life or limb due to these antibiotic resistant bacteria.
And why now it's a big problem is that on one hand, we have these resistant bacteria spreading worldwide.
On the other hand, we don't have any new antibiotics to treat them. And so if someone gets infected with these nightmare bacteria, we have no
treatment options for these patients. They have to rely on their own immune system, which
in many cases might not be sufficient to clear infection. So that's the nightmare
bacteria we are talking about. So what does that process look like as far as starting to
dissect that problem and really dig in? Are you studying known antibiotics and trying to find
chemicals that look similar or are you studying the bacteria themselves or is it the interaction
between the two or I mean how do you even attack a problem like this? Yeah that's a great question.
So it is a combination of both. We have to take into account both the properties of the antibiotics and the
properties of the bacteria to design effective treatments.
Different bacteria are susceptible or sensitive to different antibiotics.
We need to understand what makes some bacteria sensitive to some drugs and what makes some
bacteria resistant to others.
And so we use machine learning and AI
to learn from known examples.
Like you said, where we know some bacteria
are sensitive to some drugs or combinations of drugs,
and then we take into account the properties
of the bacteria and the drug,
and then we say, you know,
we let the AI algorithm train on these known examples,
and then we use AI and machine learning to come up with
completely new drugs that these bacteria could be sensitive to.
So then we provide data on drug properties
for thousands of candidate drugs and say, you know, like,
tell us which drugs would be the most effective
against this specific bacteria.
So it's a combination of both.
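The train-on-known-examples, then score-unseen-candidates workflow Sriram describes can be sketched in a few lines of scikit-learn. Everything here is an invented placeholder — the feature set, the synthetic data, and the model choice are illustrative, not the lab's actual pipeline:

```python
from sklearn.ensemble import RandomForestClassifier
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row combines drug properties
# (e.g. molecular weight, lipophilicity) with bacterial properties
# (e.g. gene-expression features); label = 1 if the bacterium was
# sensitive to that drug in a known experiment.
X_known = rng.random((200, 6))
y_known = (X_known[:, 0] + X_known[:, 3] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

# Score unseen candidate drugs against the same bacterium and keep
# the few with the highest predicted probability of sensitivity --
# these are the ones worth taking into the lab.
X_candidates = rng.random((1000, 6))
scores = model.predict_proba(X_candidates)[:, 1]
top3 = np.argsort(scores)[::-1][:3]
print(top3, scores[top3])
```

The payoff is in the last three lines: the model ranks a thousand (or a million) candidates in seconds, and only the handful at the top ever need an expensive wet-lab experiment.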
And is the problem really that each specific bacteria needs a specific drug to attack it,
or is the problem more that we just need new ideas for drugs?
In theory, I would say we need completely new ideas
because recently people have
been talking about these good
bacteria and bad bacteria. There are these
bad bacteria that cause infections
and there are these good bacteria in your body
that do helpful stuff.
They provide valuable
nutrients, vitamins to your body.
When you are
taking an antibiotic, it kills everything.
It kills both the good and the bad bacteria. So we are looking into more precise drugs
that target only the bad bacteria and leave the good ones alone. But that's a huge challenge,
because most of the bacteria are very similar, and what the antibiotics target or block in these bacteria
is present in almost all of them, both good and bad. So it is a big challenge. It's
sort of finding a needle in a haystack, where you find this really precise drug or combination of
drugs that kills only the pathogen, the bad bacteria, and leaves the rest of the body alone. Yeah, this
to me, once you
describe it that way, it helps me to understand why this is a good candidate for machine learning.
Because as you say, it's not finding a needle in a haystack, it's that we've got a lot of needles
in a lot of haystacks, and we're trying to find the, you know, to me, it reminds me almost of,
if you ever, have you ever been in a situation where you had like a whole bunch of padlocks
and a whole bunch of keys, and, you know, you're trying to figure out which ones go to which.
And, and so, you know, what you're doing is you're, you're taking advantage of the unique
aspects of machine learning, which is basically trying a lot of keys and a lot of padlocks.
And in a way that would be very challenging in a typical clinical environment where a doctor
or a researcher would have to basically manually kind of go through and test things out. And
instead you're kind of using this parallelism of machine learning. Is that right?
Yep. Yeah. That's a great analogy. So we need, I guess, instead of a bunch of keys,
you have millions of keys, and you're searching through
millions of options to figure out which ones will be the most effective.
Then also, doing this testing is very expensive.
Some of the bacteria we work with, like tuberculosis, are dangerous bacteria, and so doing any experiments
on those can only be done in very specific labs, which are very expensive and
time consuming.
So using AI and machine learning, we are screening through millions of candidates and then coming up
with two or three that are potentially the most useful, which you can then go to the lab
and test.
So I think the analogy was perfect.
Are there any really unpredictable results from using AI?
And what I'm thinking about when I ask that is I've seen some interesting mechanical engineering actions taken by machine learning and where basically really novel structures come out, right?
Like almost alien looking, just things that people just wouldn't have come up with,
right, the forms.
And I don't know if anybody's seen this,
but you can probably go on your favorite search engine
and look around for some of these mechanical engineering
and AI contraptions, support beams and things
that are optimized for strength and weight.
And I wonder if the same thing is true
in molecular biology.
Yeah, no, that's a great question.
Again, like we let the AI loose in our case for drug discovery and asked it to find any drugs, for example,
for treating tuberculosis.
We screened millions of combinations and said, pick anything that the AI thinks is optimal,
not what a clinician would pick.
And one of the top hits that we found was actually an antipsychotic drug.
You would think it is just focusing on the brain,
but this actually had very high potency against TB bacteria.
We also found an anti-malarial drug that we could repurpose
for treating tuberculosis.
So definitely, I think there's a lot of surprising hits
that you could find using AI.
Whereas conventionally, as a scientist, you would think,
this doesn't look like an antibiotic, I'm not going to use this. But, you know, AI doesn't care about these boundaries
and it searches through everything and we found some really surprising ones using our study.
Yeah it's funny that that almost reminds me Chris of the times that we've talked about sort of ethics
and bias in machine learning and how machine learning can sometimes actually overcome the sort of human biases, human prejudices that we bring to
the table. Because as you say, as a mechanical engineer, you might be like, no, no, no, it's got
to be an I-beam. And as a, you know, a virologist or epidemiologist, you might say, oh, no,
no, no, it's got to be one of these classes of traditional antibiotic drugs that we're going to try.
Whereas the AI, I mean, they've got no such preconceptions.
They're going to just do what they do and find a solution that works.
And that can be pretty interesting, right?
Yep.
Yeah.
Another thing I would say in favor of the AI is that we also create
cocktails, that is, combinations of drugs. And there, you can imagine, the combinatorial space
is huge, right? Like if you have just a hundred drugs and you want to create a cocktail of
four different ones, that's already almost four million combinations. And so it's sort of,
once again, like kitchen recipes, putting them together,
right, but only certain ones actually are tastier and work well. And so that's the same thing here.
And the AI doesn't have that concept, but if you give it an objective, like in this case,
find combinations that are the most potent, it can really screen through all these combinations, which is impossible to do
experimentally or intuitively. So those are like higher-order things, which are
not possible at all otherwise, but machine learning or AI can actually make that happen.
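The combinatorial blow-up Sriram mentions is easy to quantify with the standard library: four-drug cocktails drawn from a 100-drug library already number close to four million (math.comb(100, 4) = 3,921,225), which is why exhaustive lab testing is off the table. The drug names below are placeholders:

```python
from itertools import combinations, islice
from math import comb

library = [f"drug_{i}" for i in range(100)]

# Number of distinct 4-drug cocktails from a 100-drug library.
n_cocktails = comb(100, 4)
print(n_cocktails)  # 3921225 -- millions, even for a modest library

# Enumerate lazily so we never hold all cocktails in memory; a
# screening model would score each tuple and keep the best few.
first_two = list(islice(combinations(library, 4), 2))
print(first_two)
```

Note how fast this grows: five-drug cocktails from the same library would be comb(100, 5), already over 75 million, so any brute-force approach has to give way to model-guided search.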
Yeah, and that comes back to again, you know, Steven's earlier question around just playing to the strengths of AI and really using it in the right places.
I think that's something we've seen on the other side of the table, right?
Like in the IT world with AI ops and things like that, there's just areas where AI doesn't make sense and there's areas where it does.
The other thing that comes up in that same conversation then is, you know, what algorithm are you actually using?
What models are you using? And I'm guessing that there's, you're applying different algorithms
at different stages of the research
or just in parallel to kind of compete against each other
or does that algorithm selection
come into play here as well?
Yeah, yep, that's a good question.
So, the algorithms we choose need to serve two purposes.
One is obviously we need to have high accuracy.
Another thing
we also really care about is the transparency or the way we can actually understand how the
algorithm works. So this is really important because when you're going to a clinician down
the line and saying, here's our AI algorithm, this is what it recommends, they want to know
why it made that decision. If I can't explain that, then it's really hard for them
to trust the AI algorithm.
So we try to pick something that solves both these problems,
that it has high accuracy, and we can actually
understand how it works.
And so I would say, in terms of accuracy,
usually the neural networks really do very well,
deep neural networks, but it's really hard for us
to understand or explain how these deep learning algorithms
work, and so we sort of hit a middle ground
where we are now using random forests
or boosted decision trees, which are pretty transparent.
And clinicians can understand decision trees.
They've used them a lot.
And so that's sort of the middle ground
where we use these bagged or boosted decision trees
that have very high accuracy.
And we can actually explain to people.
And the third thing is that a lot of these medical data
and biological data have a lot of missing values in them.
So once again, we need to have algorithms
that can work with missing data.
Once again, random forests usually excel at that task.
So we sort of narrowed down to random forest
for most of our applications.
But there are specific cases where
we have to use deep neural networks.
But usually, we try random forests first
before we try anything else.
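Sriram's middle ground — tree-based models a clinician can actually audit — can be illustrated with a toy scikit-learn example. The feature names and data are invented; the point is only that the fitted model prints as plain if/else rules rather than as opaque network weights:

```python
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

rng = np.random.default_rng(1)
feature_names = ["drug_logP", "drug_mol_weight", "efflux_pump_expr"]

# Toy data: sensitivity driven by one drug property and one
# bacterial property, so the learned rules stay readable.
X = rng.random((300, 3))
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.5)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole model is a handful of threshold rules that a domain
# expert can read and challenge, unlike a deep network's weights.
print(export_text(tree, feature_names=feature_names))
```

A bagged or boosted ensemble of such trees trades away some of this single-tree readability for accuracy, but each constituent rule set remains inspectable, which is exactly the compromise described above.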
Yeah, I really appreciate that, the nuances
of the different algorithms.
What other lessons have you come away with
now that you've been working in,
you know, specific to ML,
have you come away with?
For example, why does Deep Forest work better
than Deep Learning or sorry, Random Forest?
Anyway, you get what I mean.
Why is that the answer?
And are there any other lessons that you think
that other researchers can take away?
Yeah, definitely.
So I would say another reason is that, you know, compared to online environments, the medical environment is
very messy, in the sense that each person is unique, right? The data you get from one patient might
be different from another, and data from one hospital in the Midwest might be different from data
from a different hospital in California. So there's a lot of heterogeneity involved.
And so that's another challenge when we are working with all these drug data or clinical data.
And so we need algorithms that once again work with all this noise and heterogeneity and put
them all together. And so those are other considerations we take into account
while we pick an AI or machine learning algorithm.
And once again, a lot of steps go in before
you even feed the data into machine learning.
And some algorithms are more robust
to these variations than others.
And I would say the underlying structure
behind decision trees and random forests,
where you create hundreds of unique models and aggregate them together, sort of like a
wisdom-of-crowds scenario, works very well for this case.
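The wisdom-of-crowds effect behind these ensembles can be checked with a quick binomial calculation: if each of many models is right just 60% of the time, a majority vote over them is right far more often. This is a simplified sketch — it assumes the models err independently, and real ensemble members are correlated, so the gain in practice is smaller:

```python
from math import comb

def majority_vote_accuracy(n_models: int, p_correct: float) -> float:
    """Probability that a majority of n independent models
    (each correct with probability p_correct) votes correctly."""
    need = n_models // 2 + 1  # votes needed for a strict majority
    return sum(
        comb(n_models, k) * p_correct**k * (1 - p_correct) ** (n_models - k)
        for k in range(need, n_models + 1)
    )

print(majority_vote_accuracy(1, 0.6))    # a single weak model: 0.6
print(majority_vote_accuracy(101, 0.6))  # an ensemble of 101: ~0.98
```

This is why aggregating hundreds of individually mediocre trees, as bagging does, can produce a model that is both accurate and robust to noisy, heterogeneous inputs.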
Now what about the data itself? I mean you talked about the heterogeneity and the fact that there
might be missing pieces. I think that's really interesting, right? And that's a topic we've talked about
before about just, you know, how you take data and prepare it to be used in AI. And in doing so,
you know, one, you know, make sure you're not introducing bias somehow or leaving bias that's
already in the data. And then also, I think there's probably, especially with clinical trials and things,
there's probably some privacy concerns
as far as you don't want to be exposing data
to researchers that they shouldn't have.
So I'm guessing that those areas of data
kind of cleanliness and bias and privacy
are a big deal here.
Oh, yeah, definitely.
I think that's one of the cutting-edge research areas
in the AI field.
And my lab works on some of these, but there are other labs here whose main focus is on reducing
bias in AI.
I would say that is definitely a huge problem in that some of the data sets that we get
are usually from certain populations of people and may not generalize worldwide.
That is one issue.
And then obviously there's the issue of transparency and, you know, confidentiality of the
data. Normally, people handle it by hiding the patient names
or identifiers, but people have shown that, despite removing that information,
if, for example, the DNA sequence of that patient is there, you can always connect that
patient somewhere else. You know, maybe they did a 23andMe test or saved their
data somewhere else, and we can always connect them. And so that is still an issue, and that's still an
active area of research. So I don't have an answer for that,
but you're usually trying to make sure
you use aggregate data and not patient-specific ones.
Like we just look at, say, this population of people,
in which this drug performed well versus another population.
So that's one way we can avoid this issue.
But if you want to move more and more
towards personalized medicine,
where you want to make predictions for a certain patient infected with a certain
pathogen, then you do get to know the individual patient's details, and people are still working
out how we can sort of hide that information from AI and prevent it from leaking out later.
Yeah, because of course,
bias isn't just a moral or ethical concept. It's also a key concept in the scientific method.
And scientists, I think,
are keenly aware of how their own biases,
and I don't mean like, you know, racism or something, but their own
just sort of preconceived notions and biases can impact science. Do you think that that's something
that AI is helping with in all areas of science? Because as we said before, if you've got sort of a
black box, it can help scientists avoid their own preconceived notions.
On the other hand, of course, do you think, I guess that's question number one, and then
question number two is the opposite of that, which is, do you think that scientists are
overly trusting the AI algorithms and not realizing that bias can even creep in there?
Yeah, that's a great question again.
And that's sort of why personally I like to pick
AI algorithms that are transparent so that I can actually see how they work so that at least I can
go in and check if these are hard-coded in the algorithm. Like if it's a black box, maybe whatever
biases are in the data that I'm providing it are sort of encoded in the algorithm. But if it's a transparent
method, then I can at least see the rules that it's using. And for example, if sex is a feature
that's being picked up, or race or gender, you know, those things, then we can be like, hey,
these should not be a factor, we should remove those, you know, or we could penalize those things.
And so definitely I think one huge frontier for this field is trying to create more mechanistic
or transparent AI so that all these issues, I mean, I'm sure there'll be more, but at
least you can start addressing those.
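The audit step Sriram describes — inspect which features the model actually relies on, then drop any that should not drive the prediction — can be sketched as a small pipeline. The feature names and data are illustrative, and feature importances are only one crude audit signal, not a complete fairness check:

```python
from sklearn.ensemble import RandomForestClassifier
import numpy as np

rng = np.random.default_rng(2)
features = ["dose_mg", "pathogen_load", "patient_sex"]

# Toy data: the outcome is driven only by dose and pathogen load.
X = rng.random((400, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Audit step: which features is the model actually leaning on?
for name, imp in zip(features, model.feature_importances_):
    print(f"{name}: {imp:.2f}")

# If a sensitive feature carries real weight, drop it and retrain.
keep = [i for i, name in enumerate(features) if name != "patient_sex"]
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:, keep], y)
print(model.score(X[:, keep], y))
```

In real clinical data the harder problem is proxies — other features correlated with the sensitive one — which is why transparent rule inspection, not just importance scores, matters for the audit.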
And then I guess, you know, the last thing that I'd really like to dig into a little
bit here is just kind of what I started the show with, which is this idea of the interplay
between domain expertise and data science itself. And I think we've talked about a lot
of that throughout the whole course of the show already, a lot of the things you're talking
about, about algorithm selection and data selection and these things.
But I wonder just more directly,
how have you experienced that interaction?
And do you have any tips there
for how to ensure both sides,
that data scientists come to the table
ready to talk to domain experts
and that domain experts come to the table
ready to talk to data scientists
and just the importance of that communication?
Yeah, definitely.
So I guess one big challenge is terminology and
language. Every field has its own jargon, and it might mean very different things to different
people. And a lot of times, when I write papers, I need to write them for both audiences. Data
scientists would want to see certain things. Medical scientists want to see a different set of things.
Sometimes they might be opposite requirements.
I think learning the same language is probably the first thing I would recommend people.
Maybe if you're an IT person and really want to work on healthcare,
I'd say maybe spend time learning some of the
terminology and what the actual requirements or challenges are.
Like, you know, as someone outside the field, you would think, you know, the biggest problem
is finding the drug for curing cancer or, you know, like for infection.
But in reality, it could be that we do have a cure and the diagnosis is the problem or
grouping patients into categories is the problem. So now, definitely, you know, we have,
I think, computer scientists have many solutions, but they don't know what the problem is. You know,
so I think if they understand what the medical need is, then they'll have a much better way of coming up with solutions.
And it's also a challenge for medical scientists who are not trained in computational methods to
communicate the problem in certain ways to computer scientists. They will not be able to frame it as a
machine learning problem. They might think, hey, this is a problem for me, but
they will not be able to say that this is
a classification we need to do, or here's a regression method we need to apply.
They don't know these terms.
And so that's where I think spending time, maybe as a computer scientist, it's easier
for us to spend more time in healthcare just to learn what the challenges are.
And then I think new discoveries would easily happen once we cross the language barrier.
Yeah, and that was my experience as well. I actually did some consulting earlier in my career.
I was an IT consultant, and I spent a lot of time consulting with people outside the IT space, like
lawyers and doctors and I found that you know what I didn't know about their fields was matched
in a way with some of the preconceived notions and the things that they didn't understand about
the capabilities of my field. And I found, as you said, that often, honestly, it didn't take
as much as I would think, but it took a conversation. It took an open-minded
conversation where you sit down with a researcher and you say, here's what we can do with machine
learning. This is kind of what we can do. This is the box of things we can do. This is the box of
questions we can't answer with it. Do you have things that we can put in this box that would be
useful to you? And I think that what can happen is you can break down those barriers. And similarly,
as a technologist, I had to listen with an open mind and not be like, oh, these people don't know
anything about tech. I had to listen and say, what are the questions they're asking? And how do I translate that into nerd terms? How do I translate that into the capabilities that I can do? And I guess
for me, Sriram, that would, I think, be kind of the ultimate question for me is,
how do we bridge this gap? And how do we get people like our listeners to spend more time
with people like you
who are trying to use these tools? Now, I would say, you know, recently, more and more,
the barrier to entry has been lowered, I'd say, through these science channels, and also there's
been a lot of these open-source competitions that anyone can join.
Like for example, you know, like the pandemic, a lot of organizations had these open challenges
for computer scientists where the problem was framed from a machine learning perspective. So
people who are outside the field could apply their best algorithms to, you know, like the COVID pandemic, like
figuring out how long it's going to take to spread or how many people it's going to infect
or drugs to find.
So if these all could be framed as a machine learning problem, you know, there are people,
smart people out there who can attack this.
So I think with this recent pandemic, people are sort of understanding
what the challenges are so that, you know, as we said, when the nightmare bacteria actually starts spreading, which I think will happen, it's a ticking time bomb, at least we'll have
the AI tools and the interested people ready to tackle this challenge.
Yeah, that preparation means a lot, I think, and just generally those
conversations, not to put too fine a point on this one topic, but yeah, I even recently had
a conversation with one of my friends who's more of a developer, and we were looking at a problem,
and it took us a while, two IT pros, to come to common language on, you know, between networking
and Kubernetes, and between networking and systems, and it took us a while, so, you know, I'm just extrapolating that out, you know, if you and Kubernetes and between networking and systems, and it took us a while.
So, you know, I'm just extrapolating that out.
You know, if you've ever had a conversation like that, extrapolating that out to, you know, going from a medical science to data science or IT can definitely be a big bridge.
And to your point, I think that having those conversations now and early and often is going to mean a lot in the coming years and decades. I know that for one example, right, Moderna and their vaccine,
I don't know if it was AI related, but definitely the scientific work
that went into that was years and years in the making
and they were able to be on the spot because that groundwork was laid.
So just to underline what you just said, Sriram, I really feel that.
Exactly. I think one thing probably now the medical community appreciates
is the amount of data we need to build these algorithms.
It's not that you can have an AI algorithm ready right away because you need to spend years and years collecting the data so that you can train the AI algorithm.
For that you need good quality data, which doesn't have all these biases. And so you need to make sure a lot of care
and attention is taken so that the AI algorithm is learning on unbiased, representative data;
then it can make these accurate predictions. So if the problem is, say, AI doesn't work,
it could be because the algorithm is bad or the data is bad. And so I think now,
so many biologists and scientists are understanding that you need to have both together to discover new drugs or vaccines.
And I think that's, once again, only possible if both the communities start talking to each other.
Well, this has been a wonderful conversation, but we've reached the point in our episode now where we kind of transition into our next phase,
the fun part of utilizing AI.
This tradition started in season two,
asking each guest three unexpected questions.
And we're continuing it here in season three with a twist.
So our guest has not been prepped for these questions
ahead of time, but we always love
the creative and interesting answers they come up with.
This season, we're also shaking things up a bit.
I'm going to ask a question, Chris is going to ask a question, and then we're going to
bring in a question from somebody outside the podcast.
So let's start off, I'll go first.
One of our traditional questions, I thought it would be fun to ask somebody outside the tech space.
Can you think of an application for machine learning
that has not yet been rolled out,
but will make a huge impact in the future?
So something that's happening in the future
with machine learning.
Well, I'm gonna say like, you know,
I don't know if this has been done,
like matching clothes to people would be a nice one.
Then that takes time out of shopping.
You can be like, hey, here's a personal recommendation
based on your height, weight, or your interest.
This is what you should be wearing.
So I don't know if that's been done.
So that would be my guess.
That would save me time every day.
I like it.
And some of us need it more than others.
So another one that's kind of more AI focused,
but I'd love to get your perspective on,
we've asked this before,
how small do you think AI can get?
And so, you know, I think it's fun to ask this
because we're talking about virology and bacteria,
but, you know, is AI gonna get down into children's toys
or something you can carry in your pocket
or how small do you think AI can get?
Oh, yeah.
I think with all these Raspberry Pi and devices,
I'm sure people are, in the future,
they might be like medical implants
that sense chemical changes in your body
and alerts your physician.
So I can definitely see AI being everywhere.
It doesn't have to train inside your body,
as long as it collects data
and there's like a server somewhere else
that learns from it.
I think I can see AI being used in a lot of applications.
As I said, from toys
all the way to, you know, maybe you want to monitor your toddler, both
physically or biochemically inside. I think that is definitely possible with smaller chips.
You can monitor anything. Cool. Yeah, that's actually an interesting thing. I hadn't thought about how small implants
and nano AI could get. Okay, that's going to scare people. Not scary nano AI, not like monsters that
are eating you from the inside. No, that's not what we're talking about.
So there are these micro bots, sorry, there are these tiny nanoscale micro bots that have
chips within them and you can obviously program them
with basic machine learning
and they could, you know, float in your bloodstream.
If it finds a cancer cell, then it could,
it's probably five, 10 years away,
but I can definitely see that as a possibility.
I think you just answered both Chris's and my question
with one answer, and that's kind of cool.
So as promised, now we've got a question from a special guest.
The following question is brought to us by Zach Demeyer, a writer here at Gestalt IT.
So Zach, take it away.
Hi Utilizing AI.
I'm Zach Demeyer, writer here at Gestalt IT and I have a question for you.
What's the most innovative use of AI you've seen in the real world?
Oh, well, some of the coolest things,
I think, which are always challenging,
are identifying birds or microbes.
You take a picture and say what bird it is.
I think, especially if you can figure out,
not just saying this is a bird,
but here's the species, or this is the species
of bacteria you're looking at. I think it's always amazing to me how most of the bacteria and
animals look so similar, even to the human eye, and just for AI to predict those things, I would
say that is some of the coolest things I've seen. I also love watching nature documentaries. So probably look at, you know,
like using AI to see, hey, this is this bird versus this animal. And that's always cool.
Yeah, I like that. I like that. What's the bird? What's that bird? Okay. So thanks, Sriram.
Or even the individual birds, right? Or finding tigers based on their stripes.
It's like, there's not just saying a tiger,
it's tiger number 15, and that's tiger number 16
from a different forest.
You know, like, so they can identify
individual animals using AI.
Cool, yeah, that's actually, I love that.
That would be so much fun.
Yeah, that's, this is, of course, you know,
that tiger names are all like Grr and grr grr.
Yeah, they don't call themselves Tiger 16.
That's a human construct.
Well, thank you, Sriram.
It's great to have you.
We look forward to hearing your question for a future guest. And if anyone listening wants to be part of this, you can.
Just send an email to host at utilizing-ai.com and we'll record your question
for a future guest. So Sriram, thank you for joining us today. Where can people connect with
you and follow your thoughts on AI and other topics? So my lab has a Twitter account,
it's also active on LinkedIn, and there's also a website, so I'm happy to share all these. You can always shoot me an email with questions too.
Yeah, so I'm on Twitter at ChrisGrundemann.
Online, chrisgrundemann.com.
And also having conversations on LinkedIn pretty often as well these days.
And as for me, I'm going to change all my social media accounts to Grr
because I'm actually a tiger.
No, you can find me at SFoskett on most social media.
And I would love it if you would tune in for Utilizing AI every Tuesday, but maybe check out the Gestalt IT rundown on Wednesdays where we go through the tech news of the week.
That's another fun thing that we're doing here every week.
So thank you for tuning in here for Utilizing AI podcast. If you enjoyed this discussion, please remember
to subscribe, rate, and review the show in iTunes. Even, you know, give us five grrs if you're a
tiger. And please do share this show with your friends. This podcast is brought to you by
gestaltit.com, your home for IT coverage from across the enterprise.
For show notes and more episodes, go to utilizing-ai.com, or you can find us on
Twitter at utilizing underscore AI. Thanks, and we'll see you next time.