Business Innovators Radio - Adjusting to AI

Episode Date: March 18, 2024

Artificial Intelligence (AI) is becoming much more commonplace. It can build your playlist and send your emails, write a book report or a shopping list. And it is also becoming much more common in the world of health care. While it's amazing to see the innovations that result from AI use, it also opens the door to a lot of concerns and questions. What does AI mean for patient care?

In this episode, Dr. Dan, Angela, and Dr. Riley discuss the emerging role of artificial intelligence (AI) in healthcare. They review current and potential applications of AI, analyze both opportunities and concerns regarding care quality and personal interactions, and propose some additional studies and features that could be integral in achieving quality patient care. If technology intrigues you, or has you a little worried, this episode is for you!

To learn more about this and other hot health topics, follow us on social media and subscribe to our WTH podcast. If you have a specific health question or would like to find out if we can help you with a personal health challenge, check out our office page or contact us at 412-369-0400 / info@turofamilychiropractic.com. As always, our mission is to help you Get Healthy and Stay Healthy for a Lifetime!

What the Health?!
https://businessinnovatorsradio.com/what-the-health/

Source: https://businessinnovatorsradio.com/adjusting-to-ai

Transcript
Starting point is 00:00:00 Welcome to What the Health, where anything health is fair game as we tackle the trends and bust the myths about health and wellness. Here are your hosts, Dr. Dan and Angela Turo. And welcome to another episode of What the Health. I am here with my two co-hosts today, Angela and Riley. Almost doctor. Almost. We are like three weeks away now? It'll be Dr. Riley by the time this is out.
Starting point is 00:00:29 That's right. It'll be Dr. Riley. Very good. Very good. Um, today we are talking about adjusting to AI, artificial intelligence. Um, and you want to give our, yeah, our normal disclaimer. So as you guys know, we're here for informational purposes only and in no way offering individualized medical advice, and definitely offering no recommendations based off this. This is truly, uh, today is just us here talking through an article that we found and read and
Starting point is 00:00:59 just trying to keep up on the way the world is quickly going at this point. I am very interested to see where this conversation goes because, again, it's like we have, unless you did some research in a past life that I don't know about artificial intelligence, but I know very little about artificial intelligence outside of what it is in movies. You know, you've got like, I just think like Jarvis and the Marvel movies with Iron Man, you know, and then of course you had what's his name, Altron. It took over Jarvis in the one Marvel movie. So, and then, um, I always go to Irobot.
Starting point is 00:01:35 I, Robot is a great example. Yeah, it's like, yeah. So, you know, artificial intelligence, it, um, it's here. I think that's, you know, that's the one thing we can all agree on. It's here. And, you know, again, the world is going to, you know, have opinions on, is it a good thing? Is it a bad thing? And as all things, I think it's probably going to be a little bit of both, you know.
Starting point is 00:01:56 There's going to be, um, people who use it for absolute evil, and there's going to be, you know, people who hopefully use it for very good. And I hope that's what we can kind of discuss here: what the heck, you know, how is it going to look? It's not what or if, it's how is it going to be implemented into the future of health care practices. You know, and since we have a health care practice, I thought it would be a great conversation. So, but anyway. Do you want to reference the article that we're? Yeah, so the one article we came across, and using it as, again, just a reference point.
Starting point is 00:02:36 But it is in BMC Medical Education and came out in 2023. So I thought that was pretty relevant because it was new, one of the newest articles we found. But it was called Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice. So we just kind of used this as a bullet point. We wanted to reference a couple things on there. But at the end of the day, I think artificial intelligence is going to be great for a lot of things. Like we talk about decreasing error, being able to read massive amounts of data, you know, going through points. That's the one thing the article talked about was, you know, AI programs being able to read things like x-rays, you know, mammography,
Starting point is 00:03:24 you know, chest x-rays and being able to, you know, analyze and come up with a better diagnosis than even some of the best radiologists. Yeah. And more quickly as well. Quick. The other thing is just, I mean, if anyone's had a had to wait on a radiology report recently, I mean, you could be waiting a couple weeks to hear back. And, you know, so just, yeah, knowing that people are getting results more quickly, more accurately. with the potential of, yeah, less of the false, false positives or false negatives. Because that's, you know, that's huge with these, you know, we talk about it all the time with, you know, the quote unquote early detection test.
Starting point is 00:04:04 It's like, you know, again, there's pros and cons to them because the rate of false positives can cause a lot of issues as well. So we can cut down on that. Yeah. And really, you know, increase the benefits of catching things early. And then just again, it's the direction that where do we go from there once we're, once we're catching these things early. And that's, well, that's a fantastic point because I, that's reading the article, the article was very much wrapped around medicine. Yeah. And again, it's like, we're looking at this from a health care approach.
Starting point is 00:04:37 And the one thing I did. But we consider health care. Well, and again, that's, that's right. Yeah. We consider health care. You know, medicine, we still argue and maintain is not health care. Yeah. It's sick care intervention.
Starting point is 00:04:48 It's sick care intervention. And it has an absolute place. And I think it's actually gotten significantly better with the advancement of a lot of immunotherapy drugs for, you know, for cancer treatment. It's gotten more targeted. In fact, there was a whole section in there that talked about, right, the targeted treatment. Targeted treatment, precision medicine, they call it. And I'm like, well, that's exactly what we do. We don't do medicine, but we do precision customized health care.
Starting point is 00:05:13 I mean, that's one of our core values because we understand that everybody is a little bit different. You know, so we're taking into account those differences. And so, you know, from our perspective, I looked at it like, you know, how much cooler would it be that, you know, we're able to look at stuff like that and do it at an even deeper level to really customize, you know, come up with a customized approach. Now, again, time-wise, that's difficult to do on a mass scale. But something like AI could be able to really help us, you know, help us dive deeper to that. Like, we have friends over this weekend, right? and they were talking about their sugar monitors that they were wearing.
Starting point is 00:05:55 I think that's a beautiful example from a biomarker perspective. You know, so it's a sugar. Continuous glucose monitor. Yeah, continuous glucose monitor. It's just a patch that you wear. And it goes directly to your phone. And it's basically, you know, kind of monitoring the sugar throughout your days and specifically trying to see how, you know, different meals, what you're eating is impacting
Starting point is 00:06:18 it. And then alerting you to say, hey, you're on a, you know, quick upscale, you know, you could do this in terms of like a walking or, you know, squatting or, you know, trying to get your activity up to help curb that sugar spike. So that's a direct like, you know, bio-intervention from an artificial intelligence perspective that I think is kind of like the future of where we're headed. Because rather than wait until you get sick, right, wait until you have, you know, you what it was where insulin resistance, you know, and needing to, or, you know, having a doctor
Starting point is 00:06:56 wanting to put you on a medication, now you're able to, you know, help curb that before it ever gets there. Yeah. And it's like one of the cool things I talked about in the article, too, was the, because, again, I didn't really realize how much stuff was already being used, but or where they're trying to go with things. But, yeah, monitoring vitals and things like that. So, again, you know, blood sugar is one of those, but, you know, blood pressure, heart rate,
Starting point is 00:07:20 you know, all of these things. And again, like the ability for people to be able to do that from home. Yeah. And then it's being connected to their healthcare professionals. So again, but they're getting feedback in real time. So instead of, you know, I mean, I personally have had to wear a heart rate monitor for, you know, 14 weeks and for 14 days and, you know, send it in. And then, you know, however long until you hear back, it's like versus if that was something. Yeah.
Starting point is 00:07:48 Everybody in that hard tag in there, right? Yeah, exactly. It's like, okay, so obviously I didn't die. So there's that. But, you know, you're waiting to hear. So as opposed to like, you know, if something like that, I see that as something being that AI would be involved with, you know, where you're getting real time. You know, hey, you just had a, you know, you just had a spike or an abnormality. What were you doing?
Starting point is 00:08:09 What, you know? And again, being able to like track all that. I just think that's going to, yeah, really add to, again, some of the precisions of that as opposed to, you know, a month later you're getting those results. And they're like, oh, at this time, on this day, you had an abnormality. Do you remember what you were doing? Yeah, right. You're supposed to log everything while you're wearing those. But who's sitting there, you know, you're going through life.
Starting point is 00:08:28 You're not thinking you did something that was quote unquote episodic. Yeah. Yeah. So it's just things like that. I feel like that kind of came to mind just as something where, again, it could be really beneficial getting that real time. You know, hey, your heart rates. And a lot of the, like, smart watches and stuff are kind of doing them now. Like, your heart rate's going up.
Starting point is 00:08:47 Maybe you need to do some deep breathing. Maybe you need to... You can get off the treadmill. Yeah. It's like, are you exercising? No, you're sitting at work. Okay, maybe you need to take a... Step away.
Starting point is 00:08:58 Yeah, maybe you need to take a breath. Step away from whatever you're working on. Step away from the children. But, yeah, so I think in that way, there could be a lot of positive, yeah, positive benefits of it. And as far as what we're looking at, which is, again, using lifestyle interventions. Yeah. To be able to show that. And again, I think it's going to help to prove and show that, right?
Starting point is 00:09:22 Because even like, you know, I talked a little bit about mental health care and, you know, and, you know, different, you know, medication interventions. And again, it's like, what can we do? We know how much exercise increases, you know, certain, certain hormones that just, you know, really elevate your mood. And it's like to be able to show that in real time would just be, again, And so beneficial because then it's like, again, look, this activity stimulated this. And if you can, it's like the Pavlovian dog thing, right?
Starting point is 00:09:55 It's like, you know, hey, this increased your mood. So let's, you know, try and, you know, continue that pattern and increase, you know, increase that, you know, pattern behavior. You know, I think it can really help, you know, help people through some really difficult situations. Yeah, see the things that we don't necessarily see or see right away. but yeah, I do. I thought another part that was really interesting. They talked about using AI for patient education.
Starting point is 00:10:24 And I know that's something that we're really, really big on is making sure people understand. You know, it's not just, okay, come in and do what we say. We want to, again, show people why, you know, here's the changes you're getting. This is, you know, why you're feeling better or not feeling better. You know, again, really making sure people understand the process. And at first, that's something where I thought I was like, well, you know, before, obviously, before reading the article, I was kind of thinking, oh, AI is going to take away from the personal aspect of that. But again, I mean, they're talking about, again, AI has come such a long way. And, you know, again, they're talking about it's interactive.
Starting point is 00:11:03 And it's, you know, again, it's, they're able to, you know, put things at different reading levels or different languages. So, again, take, you know, we've had, you know, we've had language barrier issues with some. patients, you know, patients are coming in with their mother or spouse because they don't speak English well or at all to that, to the point. So again, having that where you can, you can really bridge some of those, you know, we talk about all the time, you know, it make, all this makes sense to us, but we forget that not everybody did, you know, four years of undergrad and X many years and, you know, post, you know, post, you know, post undergrad education where these things just make sense to that. And then, you know,
Starting point is 00:11:41 So it's just, yeah, having that, you know, kind of having that ability to have that, at least do like the front, you know, a portion of it where you get that. And the part that really jumped out to me when they're talking about just, again, some of the messages that you're giving on a daily basis that can become kind of monotonous. Again, the AI is always going to have the same level of enthusiasm. And, you know, AI is not going to have a bad day. AI is not, you know, AI is not dealing with, oh, I didn't sleep enough or I just can't say this message for the million times. so like it's always going to kind of have that same level of you know delivery yeah it's like you can only tell you so many times like stop being in the chocolate go exercise put the cookie down and go for a while
Starting point is 00:12:26 maybe it needs to do that yeah maybe this is now the 10th time I've told you today maybe it will I don't know maybe I will have an attitude I don't know I'm a lot I know is it wasn't that happened in something like the Siri or like I do sometimes. Sometimes I feed. Yeah. There are some times where I'm like, Alexa, you're kind of get an attitude. I'm not like in your tone.
Starting point is 00:12:52 Some people are hardheaded. Right. Feed them some sass. Maybe they do feed off. That's right. I mean, that's another great point is like, you know, the, is AI going to be able to start, again, you know, it already starts to recognize our patterns of behavior, right? So like auto correct or, you know,
Starting point is 00:13:11 hey, we're talking around their phone and now all of a sudden we're getting, you know, advertisements for, you know, this. You talk about this. Talk about this. And it's like, what the heck stop listening to me? So in another way, you start to wonder, it's going to be able to, you know, look at, you know, again, personalization based on your environment, your lifestyle, your biomarkers. But my question is, is it going to take into account our belief systems? Because like, you know, how much someone, you know, who has like, you know, a Christian background, or, you know, Muslim background or, you know, it's atheist. It's like, you know, do those belief systems come into account?
Starting point is 00:13:50 Because, you know, because I back it up a step further, it's like even AI, you know, to a point, at least in the beginning, is responding to, you know, the data that it's getting, right? So, again, just coming at it from this whole perspective of medicine, like, you know, medicine's, you know, belief system is, hey, you have this, you know, this what we consider dysfunction or, you know, dysregulation, we're going to treat it, right? You know, it's something is too high or too low outside the normal range. So we want to treat it with, you know, a drug, a surgery, some sort of intervention to bring it back into that normal range. But asking AI specific questions or giving it data around that, you know, I would say like, you know, someone who has a
Starting point is 00:14:41 very medical background versus someone like us who has a very, you know, health oriented background, behavior oriented background, and how do we, you know, establish that so that we don't need the medical intervention as much. I think if you ask those questions, you know, you ask the same question to, it's going to be answered differently, right? So, You know, I just wonder, I don't know the answer, but like that comes to my mind. It's like, you know, is that going to be taken into account? Or could that? I'm sure it could be.
Starting point is 00:15:12 But is that something that's taken into account as we move forward? Yeah. I think it's really dependent, Angel and I talked about this yesterday, based on what data is put into, you know, the software. You can hopefully roll out biases based on what goes in. But at the end of the day, you know, if you have a medical. A.I. They're not going to get, you know, that holistic as much, that holistic approach in there. And that could alter, you know, the AI's belief systems as well. Well, and that's just it Going back to the Ultron, the Age of Ultron example, it's like, you know, he did all that.
Starting point is 00:15:52 If you haven't seen the movie, you need to go watch Age of Ultron and say Marvel, and I don't get paid for that. But again, it's like when the new AI was created, you know, he went and he was analyzing all the data. And, you know, he got, he got to that's like, he was trying to figure out like what is the goal. What is my purpose? You know, the AI was trying to figure out what is my purpose. And he came across peace in our time. And it was like a clip from, you know, Iron Man saying, you know, peace in our time. And so he was so set on, you know, on trying to accomplish that.
Starting point is 00:16:25 Well, his only way that he determined that all the Marvel heroes were the ones causing all the destruction. So then, you know, he was so targeted on destroying all the heroes in the movie because of that. And it's like, so, you know, if you get a belief system into an artificial intelligence, like a medicalized belief system into an artificial intelligence, well, that's going to skew. You know, it's output,
Starting point is 00:16:51 right? And that's, you know, you go back to like the simplest answer. Tony Robbins always said, you know, the quality of our life is dictated by the quality of our questions. Okay.
Starting point is 00:16:59 So how, you know, how we are asking something, you know, someone has high blood pressure, um, well, you know,
Starting point is 00:17:06 do we, do we lower it with artificial, you know, medication or do we try and change, you know, the lifestyle, the underlying reasons of why the blood pressure is high, you know, do we have to change our work environment? Do we have to, you know, implement some deep breathing? Do we have to get adjusted?
Starting point is 00:17:23 It's like, so all those underlying factors. But if that question is never asked, then you're automatically going to go to, you know, the research and going to go to, again, what's been most researched? Medication has been researched over the last, you know, 100 years. I mean, that's what it's been. It's like, you know, you have this disease, take this medication, you know, and it's, It's very, very linear. So, yeah, I think some of the positivity that comes out of that is, you know, maybe it does start to broaden our horizon. You know, maybe it looks at interactions a little bit better. Yeah, that's what it's, yeah.
Starting point is 00:17:56 And even, you know, from the medication standpoint, because at first when I started reading, it's like, oh, this is just going to be, you know, one more way to push medications because it's, you know, okay, we're going to track, you know, people have X, Y and Z. So put them on this med. But they did even talk about it can, one of the things it can be used for is better. tracking post-trial adverse events and trying to, you know, lower that. Because again, when people don't realize, again, people who don't have a research background don't realize, you know, when people talk about, you know, things that have been clinically tested, that's in a very controlled, randomized situation. So these people are literally, like, I know somebody who's the work in drug test. I mean, these people were literally kept, like, in a room,
Starting point is 00:18:38 put on this drug, watched, monitored, you know, it's their, you know, they're, you know, they're controlled for, you know, they're weeded out if they're on any medications, if they're a smoker, if they're, so basically you're taking super healthy people and putting them on this medication to just basically see what happens to them. That's how you get that long list of adverse events. You know, if somebody, if one person passes out, that goes on as a possibility. And one person, you know, gets a headache that goes on as a problem. So it's just, it's trying to figure out what would this do to a healthy person. And then it gets tested in the actual population. So there's this whole, like, you know, things that it needs to go through.
Starting point is 00:19:15 But when you take that and then you put it into somebody who actually is sick out in the real world with all, it's just there's no controlling for, you know, what's going to happen. And so that's where, again, you'll see, you know, the people that get really sick or have interactions with things you didn't know you were going to have an interaction with. So there's still so many unknowns when you're taking those medications because it's just, again, they were tested in a very control environment. and now you're taking it out in the real world where, you know, you can't control, you can tell somebody not to smoke and drink alcohol while they're taking this medication, but are they going to listen to you? Yeah, exactly.
Starting point is 00:19:50 And again, maybe that's something where, you know, that's something AI monitors. Like, hey, you're on a medication and we notice that you're, you know, there's not nicotine in your system. You're not supposed to, that's a potential interaction. There's alcohol in your system. That's a potential interaction. Like, so if people can again get that, you know, again, the end of the day, people are going to do what they're going to do.
Starting point is 00:20:08 Yeah. But if, you know, if that. helps, you know, cut down on some of those things by, again, increasing the education for people or helping them really reinforce the seriousness of some of those things because it can be really overwhelming when, you know, you're at an appointment and they say, okay, you know, but here's what you need to do or not do. Okay. See you in the year, seeing six months. And then all of a sudden, and you're like, well, how serious were they without that? So again, if it can help cut down on some of those those adverse events that happen or again cut down on unless yeah exactly like
Starting point is 00:20:45 okay maybe about dose optimization yeah like the antibiotic specifying you know what I know we're trying to be better about that but I think there's still just so much of like okay we'll start with this one see if it works if it doesn't work go to this one go to this one I've been through that myself versus oh no based off what you have this is and you and you're your background and how you've responded to things in the past. This is your best chance of. Yeah. And even like something like analyzing a microbiome, you know.
Starting point is 00:21:13 Yeah. It's possible to do that. It's like, well, you know, based on this, you know, maybe we need this targeted. So it's like almost like a customized. It's like a customized drug, you know, based off, you know, the concentration of, you know, what's going on in your system at the time. So. So, yeah, it's a lot of.
Starting point is 00:21:32 And again, the article did say, I mean, there's still so many. things that need again you know cybersecurity continues to be a huge issue and huge concern and then just the again the
Starting point is 00:21:48 I kept laughing up because I underlied they're like yeah the big thing is that the you know scientific world needs to work with the medical professionals and they need to work together I was like that seems like two interesting groups to put together because again
Starting point is 00:22:02 you have scientists that are so data driven. So data driven. Yeah. And then, you know, ultimately you're going to need health professionals in there that are patient oriented and people oriented. And I'm like, oh, good, try to get those two groups to work to, yeah, to work together. Because at the end of the day, like you said, it's all what you put into it. I don't want people that only think data and numbers creating how these things work, because we know as, you know, I would say, you know, again, going into the line of work that we go into, where much more highly empathetic. We're much more people oriented and driven. So it's like, I don't always necessarily care of the number. Say there's, there has to be that, that human,
Starting point is 00:22:46 like what makes us human is that we can connect, we can feel energies. We can, we can connect with people. Numbers don't walk into the office. Exactly. And you can sense, yeah, I mean, you can sense somebody the second they walk in the door, what kind of, what kind of mood they're in, what kind of energy they have. And that, that can totally empath. how they respond to, you know, to what you're telling them to how their, how their adjustment goes, how their day goes. Yeah, I've had those people that are just so, you know, they're in such a bad mood or maybe that's just their general demeanor.
Starting point is 00:23:21 And it's like they've already made up their decision about, you know, your process. And, you know, but they were, they're there because their wife, you know, told them to come in or a neighbor is like, oh, they told me to see it. Their daughter. I was thinking about meeting yesterday. Yeah, they're there because they were told by a family member or a friend, but they have absolutely no interest in being there. So, you know, that, yeah, that all needs to be taken into account, like you said.
Starting point is 00:23:49 You know, and then again, going back to something you just said earlier in terms of the science and the cybersecurity side of things, now we're getting into a whole level of big brother, right? You know, the whole idea of, you know, what does, who gets this information? Who can solve this? Because, I mean, you know, I could see it. And we've had these conversations before, you know, about universal health care, right? I don't believe that we should have universal health care based on how it is right now, right? Because, you know, universal health care right now is defined as, you know, you get, like, all these different services.
Starting point is 00:24:27 But I personally don't think that, you know, cancer treatment or, you know, maybe like even heart attack intervention. Like, you know, and again, but those are ethical questions that we have to ask. But now you start bringing in something like an AI that has tracked your, you know, behaviors over the past five, ten years. And they realized, well, wait, you have, you know, you've been consuming alcohol on a daily basis. You've been smoking on a daily basis. who or why should we do an intervention you know and I mean those are the time and again I'm just bringing this up as a for instance but like you know do you know do you even do the intervention but I mean that's taking it to like a whole like the end of the line like ethical question do we treat you or not
Starting point is 00:25:13 treat you but you know taking it a step before that is like what's your cost you know do you start getting charged more for you know you're your tax more if we had universal health care right it would be a tax So do you start getting taxed more on an individual level based on your patterns of behavior? You know, because again, it's very well proven and documented that there are, you know, lifestyle factors that lead to, you know, more cancer, more heart disease, more diabetes. Like those are a lifestyle factors that lead to that. So if they're able to track over, you know, 5, 10, 15, 20 year time frame, what your behaviors have been now, do you start getting taxed more?
Starting point is 00:25:53 you have to start paying more into the pot because of those behaviors. I'm not to way above my pay grade, but that's a question that we have to look at. And again, that's why I don't necessarily agree with universal health care because, again, it's not individualized. It's a, you know, here's the stamp. Here's the one size fits all. We could start leading towards more of an individualized, you know, plan and talk about what that may look like. You know, maybe I'd be more open to it. But yeah, that's the...
Starting point is 00:26:23 And I think that was the other interesting thing. The article mentioned with AI is with some of these home interventions can actually cut down on emergency department. Absolutely. And I think that's huge. Sure. Because I think that is such a huge cost, especially in our country, because, again, people who don't have insurance, don't have a PCP, don't have a first line. So that is their first line. That's where they go for everything.
Starting point is 00:26:51 A sniffle or if their arm got chopped off. Like, I mean, it's so it's that, which one of those you should go to the emergency room, just to be clear. Your arms off. Yeah. Just a pleasure. Have you seen that movie? No, we don't. No, my gosh. You're so young.
Starting point is 00:27:16 But anyway, so yeah, being able to cut down on that, you know, making sure people who don't need to, or people who do need to go. I mean, I think about like the stubborn men. Her cousin used to work in triage as a nurse. And I remember telling the story, like the number of times that she would answer the phone and person would be like, sounds like you're having a heart attack. It's like, okay, I'll drive. No, you hang up the phone. You call 911 and they take you to the hospital.
Starting point is 00:27:43 You do not get behind the wheel of a car. Yeah, have a heart attack or stroke and kill someone else. Yeah, and cause a 15 car pilot. So again, if it's like a weekend, if we can do. do a better job of getting people who need to be there there and keeping people who don't need to be there. And they're kind of already going that way a little bit with some of the telehealth and things like that. But again, if AI can step, because we know healthcare workers are just overwhelmed. Absolutely. And again, a lot of it is because we are so sick. We just have very
Starting point is 00:28:15 poor health. And again, we can call it a choice or not, but, you know, our food supply, we've talked about this and so many other things. Our food supply options are so limited and terrible in this country. Our water quality, our air quality. I mean, those are the environmental factors, and a lot of them we don't have control over. Now, again, we do have control over our lifestyle, the choices that we make, whether or not we choose organic. But again, sometimes people don't even have access to organic fruits and vegetables and meats.
Starting point is 00:28:49 So, you know, that comes into effect as well. But I think at the end of the day, the coolest blend I can see is, you know, you have an AI that is available on the personal level. We're going back to that continuous glucose monitor, and, again, there are going to be more biomarkers that will be able to be measured. Even my Fitbit watch, it measures heart rate variability, it measures my sleep. Now, how accurate some of that stuff is, you know, is another question, but it at least gives me something to go off of. And then I can track, like, well, wait a minute, what was I doing leading up to that? So I have it for personal use. And you even have an AI, like you mentioned, on the patient education side of things. Like, hey, what does this mean? You have an AI that could literally pop up with a whole 15-minute presentation, or five minutes, given our attention span. I heard our attention span in the United States is, like, less than 90 seconds. It's disgusting.
Starting point is 00:29:52 And here we are doing a podcast for 30 minutes so far. So for the two people that are still listening, thank you. But it's like, you would have all of that for personal use. And then if I'm feeling ill or something comes up, rather than all of that information being monitored by a central system, I would have a choice to say, okay, let me
Starting point is 00:30:22 plug into this AI, or, you know, this local emergency department, or this community healthcare organization. Let me now choose to send them my data, or have the AI analyze it and communicate. And now that AI can be working in conjunction with a person, you know, like myself, Dr. Riley, or you. And now we can use the AI to gather and interpret all that data and then come up with a decision on what to do. But again, it's at the request of the individual rather than constant monitoring. Yeah. Yeah, I think that would be interesting. You know, again, that's going to be a big... That's a whole ethical privacy discussion. That was the big conversation that came out when, like,
Starting point is 00:31:08 genetic testing started becoming common, I think. And again, that was my big thing reading through a lot of this: a lot of the capabilities that AI will have are going to be dependent on genetic testing, because they're talking about a lot of people having that done before they're sick, because they need to have that information available to figure out what medication might be best and all of that. So I've always been a little weird about that, just personally. You know, I remember learning about that. It was a lot of what I studied back in undergrad, taking ethics classes.
Starting point is 00:31:46 And again, those same questions. It's like, if somebody gets to the point that they're sick and now you have their entire genome on record, you can go back and be like, oh, well, it was because of this, it's not worth treating you, or things like that. You know, again, we're not saying that's happening, but there's that potential. Like, how much information do you want to give, and where does it go, and who has it, and how is it going to be used? And now you're talking about putting that in conjunction with AI. And it's kind of like, okay, well, who has access to this? You try to get, you know, disability insurance or life insurance,
Starting point is 00:32:24 and all of a sudden it's like, oh, no, you have this genetic marker and we see that you do X, Y, and Z in your daily life, so, nope. Yeah. And it's like, okay. So it's just, yeah, you know. Because, again, I don't think anybody's perfect. I mean, I consider myself a healthy individual, but I'm sure there are things where, if you just plug it into a computer, it could easily go, nope. I actually had that happen recently.
Starting point is 00:32:50 I just got denied for disability insurance because of something, and that was the other part I found scary, because, again, you're talking about relying on health records being accurate, which I know mine are not. I mean, I've been with UPMC for years,
Starting point is 00:33:04 and I look back on mine and I'm like, that's not true. So now you're talking about an AI system having access to anything that has ever been input on you, and doing its calculations, and, you know, are you high risk or low risk for whatever? So I'm just not crazy about that idea personally, you know. I agree.
Starting point is 00:33:31 It's one where, again, you're looking at the risk, right? Yeah. Which is all predictive. Yeah. You know, it's like, yeah, you could have a very high risk of cancer or heart attack or stroke, but you may never have one. Yeah. Like, you know. And then we see the athletes.
Starting point is 00:33:45 That's right. You know, someone who you would never expect, you know, dropping dead of a heart attack in their 20s, at the peak of their career. Like, it happens. So, yeah, and that's the individual side of it. If you start playing the numbers game, though, then somebody who could benefit from an intervention might be denied it because the numbers don't, you know... Like you said, it's not numbers that are walking into the office. It's a human being whose life has value to them. So, you know, we don't get to sit there and play, you know, playmaker. We're here to help people the best that we can. And deciding if somebody's a good person or a bad person, or who gets to live or not, I don't personally think that's for anybody to decide. So, yeah, it is. It's a scary thought, just having all of that. Plus, I just have to say, has anybody ever watched these movies? The AI always goes evil. Can I just say that?
Starting point is 00:34:47 The Matrix. Right? I'm like, have we ever seen an "and they all lived happily ever after" AI movie? Just saying. Just saying.
Starting point is 00:34:55 I feel like the outcome was okay at the end of I, Robot. They had to destroy the central system because she went rogue. Yeah, but there were one or two robots that were left over. Yeah, Sonny saved the day. Sorry, we just ruined it for you.
Starting point is 00:35:12 Oh my God, we haven't seen that movie. You've had 20 years.
Starting point is 00:35:33 But it does. Like you said, I mean, I know personally this is not at all my realm or area of expertise, or even interest. So it does get scary when you just start hearing all this stuff, and you're like, okay, well, great, robots are just going to control everything. Which is where my mind goes. I realize that's not exactly how it's going to play out. But it is. At least not in our lifetime. It's coming. I know.
Starting point is 00:35:47 It's coming fast, though. But I do think this was interesting. I appreciate you finding this article and having us read it, because it's definitely more than I would have delved into on my own. Yeah, it just gets us thinking, you know. And again, taking it back to really, you know, what is our use for it? Again,
Starting point is 00:36:06 because we're not going to be making ethical decisions; we're going to see and help any person who walks into our office the best that we can. But, you know, even something like reading a motion x-ray. Could it get to the point of, like, okay, here's the program, boom, they're all the way in flexion, head down, and boom, can it pop up all the angles for us? And could we see before-and-after percent changes? Like, that's the kind of stuff. Pre-post. You talked about area under the curve yesterday with the infrared thermal scans. So it's like, can it analyze the patterns and the percent change in those patterns? Yes, that's all stuff. And give it to you immediately. Yeah. It's like, boom, it pops up, and you guys are looking at it quickly. Yeah, we're making a quick clinical decision, but you're right. It's like we can't, you know... It could go back and look at all of them and see that. Yeah. Well, and then even being able to pick out a pattern, too.
Starting point is 00:37:02 Like, you know, here's all of them. Kind of like how, for training, I'll hear you say to him, based off what you see on the thermal scan, oh, make sure you check their T1, or make sure you check the Atlas, or whatever. So you have the program actually doing that, and then it becomes an education tool for future interns. Well, a Dr. Dan AI. Yeah, it would definitely have to be in your voice. It'll be you, Dr. Dan. Yeah, Aaron has already said that neither one of us can be cloned.
Starting point is 00:37:32 She cannot handle any more. Any more Toros. Well, if you keep letting Dr. Riley hang out with all of us, he's getting there. What do they call that? The hive? The hive brain, and we're all plugged in. Oh, man. No, it's definitely interesting.
Starting point is 00:37:48 That was, it was a good article. Yeah, I mean, it was a review article, so I thought it was really well laid out as far as the different areas of health care that have that potential. So, yeah, if anybody is in health care, or isn't and is just interested in the way we're kind of moving... Yeah, we'll put the link in the podcast notes underneath. But again, it's "Revolutionizing
Starting point is 00:38:13 healthcare: the role of artificial intelligence in clinical practice." That was out of BMC Medical Education, 2023. And then again, this was just a lot of our understandings, and our lack of understanding, and just an open, honest discussion about our thoughts on adjusting to artificial intelligence in the clinical role. So if any of that interested you, or you have more questions... well, don't ask us about AI, but we're happy to hear from you. If you have any other references, like if you've seen a good article or something, we always like to hear what other people are seeing and reading and following. Absolutely.
Starting point is 00:38:52 Absolutely. So with that being said, I am Dr. Dan. I'm Dr. Riley. Angela. And we will see you on the next What the Health podcast. Thanks so much, everyone. Thanks, guys. You've been listening to What the Health with Dr. Dan and Angela Toro, brought to you by
Starting point is 00:39:10 You've been listening to What the Health with Dr. Dan and Angela Toro, brought to you by Toro Family Chiropractic. To learn more about the resources mentioned on today's show, or to listen to past episodes, visit www.turofamilychiropractic.com.
