Daybreak - AI is learning healthcare from a broken system

Episode Date: January 15, 2026

AI is learning healthcare from systems that are stretched and uneven. In this episode, hosts Snigdha Sharma and Rachel Varghese discuss what tools like ChatGPT Health and Claude for Healthcare could mean in India. We talk about how people already use AI to understand symptoms and reports, how hospitals deal with data and paperwork, and how bias and privacy shape these tools. Tune in.

Daybreak is produced from the newsroom of The Ken, India’s first subscriber-only business news platform. Subscribe for more exclusive, deeply-reported, and analytical business stories.

Transcript
Starting point is 00:00:01 Hi, this is Rohin Dharmakumar. If you've heard any of the Ken's podcasts, you've probably heard me. My interruptions, my analogies, and my contrarian takes on most topics. And you might rightly be wondering why I'm interrupting this episode too. It's for a special announcement. For the last few months, I and Seetharaman Ganesan, my colleague and the Ken's deputy editor, have been working on an ambitious new podcast. It's called Intermission.
Starting point is 00:00:28 We want to tell the secret sauce stories of India's greatest companies. Stories of how they were born, how they fought to survive, how they built their organizations and culture, how they managed to innovate and thrive over decades, and most importantly, how they're poised today. To do that, Seetha and I have been reading books, poring over reports, going through financial statements, digging up archives, and talking to dozens of people. And if that wasn't enough, we also decided to throw video into the mix. Yes, you heard that right. Intermission has also had to find its footing in the world of multi-camera shoots in professional studios, laborious editing, and extensive post-production.
Starting point is 00:01:15 Seetha and I are still reeling from the intensity of our first studio recording. Intermission launches on March 23rd. To get alerted as soon as we release our first video episode, please follow Intermission on Spotify and Apple Podcasts, or subscribe to the Ken's YouTube channel. You can find all of the links at the-ken.com slash IM. With that, back to your episode. This week, artificial intelligence has now formally decided to step deeper into one of the most sensitive and private parts of our lives, which is healthcare.
Starting point is 00:01:55 So OpenAI has launched ChatGPT Health, which is a version of ChatGPT that is specifically designed to answer medical questions. And ChatGPT Healthcare, which is a separate offering, is for medical and healthcare professionals to help them with personal health data, like test results and wellness records. And also, after this happened, Anthropic announced Claude for Healthcare. And Anthropic is specifically positioning Claude as a tool for hospitals, insurers, patients, really focusing on more of the admin side of clinical workflows, documentation, and patient understanding.
Starting point is 00:02:33 Yeah. And you know what I found really striking about all of this is how carefully both Anthropic and OpenAI are framing these products, the way you're talking about it. You know, they're saying that these are tools meant to support doctors and patients and not replace healthcare. They're repeatedly saying this in all kinds of messaging, like blog posts and advertising. They're saying it's basically to help people make sense of medical information, reduce paperwork, like you said, and overall improve how healthcare systems function. But of course, they understand that health data is very sensitive, right? So they're also talking about privacy and security and,
Starting point is 00:03:20 you know, human oversight. So, yeah, that's very interesting, actually, the way they are framing all of this. Yeah, I mean, it's important that they cover their bases, right, because of the nature of this data. Yeah. But I think this was something that was going to happen, because it really reflects something that's been happening for a while now, right? Since ChatGPT launched, and even before ChatGPT, when we used to Google, like,
Starting point is 00:03:45 random symptoms to see what the underlying problem could be. Yeah, and then Google would tell you, oh my God, you're going to die now. Yeah, every headache, stomachache, everything is cancer. So now millions of people around the world, and especially in India, because we have a huge healthcare system with a lot of gaps, have been using AI chatbots to ask health questions, understand symptoms, or even prepare for doctor visits, so you know exactly what to ask them and you don't waste time. But of course, because again, we keep coming back to this, healthcare is very sensitive.
Starting point is 00:04:22 This behavior has existed in a gray area. But with these launches now, major AI companies like Anthropic and OpenAI are making all of this very formal, and signaling that this use case is, you know, kind of here to stay and only going to grow. Exactly. Yeah. And in some ways, it raises very important questions, right? Particularly in the Indian context, because we know exactly how our healthcare system falls short: you know, uneven access, language issues, shortages, like remember COVID, you know, huge,
Starting point is 00:05:02 also like crazy differences between public and private healthcare. And now with AI in the picture, this whole system is going to get affected in some very, very complicated ways, I would say, right? And because this is quite nascent, I mean, there's still a lot left to see. A lot of this is still going to play out, and it's difficult to predict what exactly would happen. But today we just wanted to take a step back and ask, as AI becomes more present in healthcare, what does that mean for India? And where might it genuinely be useful, especially
Starting point is 00:05:38 since, as we discussed, there are a lot of gaps in the healthcare system? But also, where do the risks, you know, show up, and how should patients, doctors, and policymakers think about this before these moments become, you know, routine and ingrained in our lives? All right. So, before we actually get into the discussion, I think it's worth pausing to understand why this moment really matters so much. Because AI in healthcare is creating both opportunity and also a lot of complexity, like we mentioned earlier, right? For example, in India, we know we have some really big gaps in our healthcare system. You know, look at access, look at the doctor-patient ratio in our country. It's terrible.
Starting point is 00:06:26 Exactly. We have, what, one doctor for more than 800 patients. And there's also the issue of medical language. And I think we see this every time you go to a hospital. So many people struggle to understand basic medical information, right? Make sense of diagnoses, make sense of reports, right? So yeah, these are the reasons why I think this discussion is very important. Yeah, true. I mean, even I don't think I can really look at medical reports and
Starting point is 00:06:59 understand what's happening. I always have to call, like, a doctor I know or a family member to discuss and understand what's happening. But still, on the other hand, healthcare is very much an area where error and bias and lapses in privacy or misplaced trust can have very serious consequences. So we wanted to look at a few specific claims that are being made around these AI healthcare launches. We'll talk about privacy, about bias that exists in medical data, about whether AI can actually realistically support doctors, and about who benefits first when these tools roll out.
Starting point is 00:07:37 Yeah. Basically, we are not here to give you a verdict on, you know, whether AI in healthcare is good or bad. We just want to understand what's really at stake here, right? I'll just start by saying that generally, I feel like this is a step in the right direction. And it's because of what Rachel mentioned earlier, right? People have already been using ChatGPT and other AI tools for medical advice,
Starting point is 00:08:04 whether these companies, like OpenAI and Anthropic, were endorsing it or not, right? The difference now is that both of them are openly acknowledging this use case, they are building these specific healthcare-focused products, and they are telling you about the limits. They're telling you about the guardrails, about what kind of expectations you should have, you know. And I think this creates very clear lines of responsibility, because once a company publicly positions its product for healthcare, I think it is going to become easier for regulators, governments, journalists, even doctors and also users to kind of
Starting point is 00:08:44 look at their claims and also demand safety standards, right? So I think it's now going to be easier for us to hold these companies accountable for harm, because now it's a formal product and there will be some kind of traceability. So I agree, for the most part, that it's best to formalize it and make sure that you're building a product that has involved healthcare professionals. I think they had about 260 doctors and medical professionals help build their dataset. But even though it is a more formalized thing right now,
Starting point is 00:09:25 there's still no, like, federal regulatory body that's actually overseeing this kind of information, right? Like what's being uploaded to AI chatbots. Yeah, I mean, it is available only in select parts of the US right now, but there is a waitlist in India, from what I know. So OpenAI tells us that it's worked with, you know, healthcare and medical professionals to provide this platform, but who's really checking or holding them accountable at this stage, you know,
Starting point is 00:09:54 to make sure that the quality of their data is up to the mark and whatever responses come for each prompt are accurate, right? There's no real-time checking that's being done. And I mean, we've seen a lot of cases in terms of mental health, like people who go to AI chatbots in mental health scenarios, and those have had some adverse outcomes. And those lawsuits are still, you know, going on. So it hasn't entirely been held accountable so far.
Starting point is 00:10:28 Like, we still have to see what comes out of it. And honestly, 260 medical professionals, I think, is not a lot when this is a dataset for a product that's being used by millions of people across the world. Yeah, but I think they also said these 260 medical professionals were from very diverse places, selected from many different countries, not just one or two, I think. Yeah, I mean, I hope it is like a well-rounded, fleshed-out product,
Starting point is 00:11:01 but it still has other risks. Like, you know, healthcare information is incredibly private and sensitive. And even though there haven't been any major leakages of prompts or any sort of interactions with OpenAI's LLMs, again, there is no regulatory oversight here. So every individual user is entering into a very private contract with OpenAI. So any data breach or lapse in information is on your discretion of use and not really on the service itself. But a patient-doctor relationship, on the other hand, comes inbuilt with confidentiality and accountability.
Starting point is 00:11:38 We know exactly who to go to and who to hold accountable if there is any sort of lapse. But how will someone take responsibility for an accidental breach of privacy with the service? And I think my biggest problem is that it still feels like a waiting game. We are waiting for things to go wrong, and then we hold them accountable, and then they fix it. But then, you know, what about all the possible misdiagnoses and inaccurate information that happen along the way? Will that all just be collateral damage?
Starting point is 00:12:09 Yeah, no, I agree with you on most of this, you know, very fair. But again, coming back to what both of these companies are very clearly saying everywhere: that these products are meant to support healthcare systems, not replace them. Like Claude, for example, right? It specifically says that it will show contextual disclaimers, it will acknowledge uncertainty, and it will also direct people who are using it to healthcare professionals, to doctors, for proper guidance. And, you know, another thing that really strikes me, especially in our country, is how people just don't have the privilege or the education, like I said earlier, to understand medical language, right? Every public hospital you go to, you see doctors are stretched beyond their limits.
Starting point is 00:13:00 They just don't have the time or the patience to sit and explain the diagnosis or the treatment to, you know, their patients. You'll just hear them barking instructions a lot of the time. And even, for example, when you get a test result, it's very hard to understand what it exactly means. You know, I remember going to AIIMS for my mother's treatment many times. And there are so many people, Rachel, like, you can see they're not from the cities. It must have been very hard for them to get to AIIMS, to get their family members treated. And the way they are interacting with the doctors, or the way the doctors are interacting with them.
Starting point is 00:13:44 Yeah, and like you can see, they're blank. You know, when the doctor's talking to them, they have no idea what they're saying. And even a very simple example: medicines, right? We know for a fact that so many doctors just prescribe medicines to patients because they're getting paid by pharma companies. This is a very common thing in India, right?
Starting point is 00:14:06 Also, not everybody has a doctor in the family to go back to and ask these questions, right? So I really think in these kinds of cases, you know, these tools can be very helpful to make sense of what's going on. Yeah. I agree with that for sure. Like, medical language is not easy for everybody to understand. And like I mentioned before, even I need help. So I can only imagine that people who are even less comfortable with this struggle a lot more. But I still feel like the problem becomes that when the narrative is that you can use ChatGPT when you don't have access to a professional healthcare provider, then it becomes a stopgap solution, right? You still have to maintain a sense of discernment to know that,
Starting point is 00:14:51 okay, this is something that I can use ChatGPT for, this possibly not, and this is where I need to get in touch with a professional and get, like, a second opinion. Yeah, but can I just cut you there? Because you're saying you still have to maintain a sense of discernment, and I think that's not too much to expect from a user of the internet in general.
Starting point is 00:15:15 Like, with the way the internet is, I think some sense of discernment is required. It's good to hone. Yeah. So, yeah, there's also that. Anyway, sorry. Yeah. No, that makes sense. But I'm just thinking,
Starting point is 00:15:29 if you use ChatGPT first and then, you know, something doesn't seem right, and then you still have to contact a healthcare professional, then why wouldn't you just do that in the first place? You know, like, I don't think it's really solving a problem of access there. And of course, there are some studies that show that LLMs specifically are more likely to lean towards being helpful instead of medically accurate.
Starting point is 00:15:54 Like, these bots are trained to make you feel good about yourself. And they are actually trained to generate answers that you as a user are likely to respond to, so that, you know, you keep talking to them and you keep staying on that platform. So if a chatbot's agenda is inherently to keep you in this discussion loop, and it's not entirely incentivized to be accurate, then I think you're bound to end up with information that might be a false positive or just straight-up inaccurate. For example, I think last week, some Google AI Overviews had to be taken down because they were giving straight-up wrong and dangerous information. Like, they were saying that people with pancreatic cancer have to avoid high-fat food, but the actual advice is the exact opposite, because avoiding high-fat food could increase the risk of patients dying from the disease.
Starting point is 00:16:54 So, of course, there are a lot of cracks in this because it's a new technology. But I'm just concerned that these cracks will only be filled up after several people have already fallen through them. Yeah, true. But, you know, another area, I think we mentioned this before, is just reducing paperwork, right? I think in that space it's going to be really beneficial, because, you know, we know hospitals in India, for example, are just drowning in paperwork. Patient records are all over the place, you know, there are handwritten notes. And doctors, actually, especially in public hospitals, where record keeping is so important,
Starting point is 00:17:39 so much time is just going, you know, into documenting instead of treating, right? So, yeah, in these cases, AI could just structure these notes, it could help with discharge summaries, follow-ups, you know. And this would actually help free up a lot of the doctors' time. And another thing that I realized when my mother was
Starting point is 00:18:05 undergoing treatment: continuity is so important, because we were going from one hospital to another, and this is talking about a span of years, right? So it's very important for the doctor to know exactly what happened in the past, what kind of treatment you underwent, what kind of surgery, exactly what medicines you were taking. This is years of information, and continuity is very important in such cases, right? So even there, I feel like AI could be very helpful. Another thing is also insurance, you know. It can help organize things better, reduce claim rejections, and that way help patients, you know. Yeah, so I think, in this regard, AI in healthcare may not be so bad. You know, smoother data management and less paperwork across the system could really do us a favor.
Starting point is 00:18:59 Yeah, I think this is definitely an area where it is helpful, because the more stress can be reduced in admin work, the better for any sort of healthcare organization. But I would still like to point out a few things. Since you mentioned that continuity is so important, some studies have actually shown that LLMs tend to hallucinate or produce inaccurate information when incomplete data is provided to the model.
Starting point is 00:19:34 So I think over here there's a huge responsibility on the staff that is using this LLM to make sure that whatever information is going into it is accurate, and that whatever information is coming out of it, however it's arranged or put together, is also accurate and there aren't any errors. Because, like you mentioned, if it's something that could help on the insurance side, and somebody were to lose their insurance or lose out on their premiums or something like that, then that's a huge problem. And again, a lot of people in India can't afford to recover from those kinds of mistakes. And in these situations, do you hold the AI accountable? Do you hold the staff accountable?
Starting point is 00:20:23 There's very little clarity on that. But overall, I think it could be a good thing. Right. You know, actually, speaking about data, one thing that could be a huge risk is data bias, right? Right, yeah. Because we know how modern medicine is already biased, right? Like all the clinical trials, well, not all, at least most of them.
Starting point is 00:20:48 So much of medical research has historically focused on the Western white male population, right? And this obviously shapes how diseases are treated, how they're understood. And AI is learning from the same sort of medical literature and health data, right? So these gaps are not going to get filled; it's just going to reinforce them, right? And in a country like ours, this is very worrying, because our population is much younger. We are more diverse.
Starting point is 00:21:22 We're genetically very different from each other. We're exposed to, I don't know, all kinds of different nutritional conditions that shape our bodies. You know, diseases work differently, medicines work differently on us. So if an AI's default reference point is Western data, then yeah, this is a huge problem. And then add to this all the gender bias also, right?
Starting point is 00:21:46 Where women's pain is under-recognized, it's not acknowledged as much as men's is, right? And also, what about the poorer, rural, marginalized communities, right? They are barely documented at all, even now. So if that data doesn't even exist, those populations are basically going to disappear from the AI's perspective, right?
Starting point is 00:22:12 Yeah, that's true. That is actually a really huge concern. But I think an alternative for that is really deep research into rural areas, building smaller datasets that are very locally based and for very local use, based on the languages that are spoken in that area. And not every culture, you know, especially in smaller rural areas, has a written culture, so to say; they have a more vocal, talk-based culture. So having voice
Starting point is 00:22:46 chatbots and things like that, these are the kinds of products that would really help, instead of these mass consumer-facing products that inherently come with huge holes. Yeah. I think there are already some AI companies who work in the healthcare space. Yeah. Who are doing this, right? Yeah. Working with smaller, more rural, marginalized communities.
Starting point is 00:23:11 Yeah. Right. So, yeah, I think we're coming to the end of our discussion now, and clearly AI is already a part of how many of us think about our health. You know, it's the same thing, right? We check our symptoms, we read our reports, and we try to make sense of, you know, all of this and prepare for our doctor visits, and none of this is going to go away. But the concern is that, yeah, these tools are going to reflect the systems that already exist, you know, and these systems are full of gaps
Starting point is 00:23:46 and full of biases. And yeah, so these tools can help for sure, I think, but they should not replace judgment and I think that's really on us. So if you're using AI for health advice, treat it like a second set of notes, you know, not a second opinion. Use it to understand, to ask better questions when you're finally in a room with a doctor, but know when to stop and see a professional. Because in healthcare, convenience should not be the thing that becomes care. True.
Starting point is 00:24:19 And with that, this is the end of our episode, and I hope you guys liked it. If you have any thoughts on it, please do write to us. I'm sure you have a lot to say about AI in healthcare. Write to us at podcast@the-ken.com, and in the subject line you can say Daybreak Fridays, and we will get back to you. Looking forward to hearing from you, and that's a wrap.
