WHOOP Podcast - Scaling Access to Healthcare: The Future of AI and Wearables with Dr. Trishan Panch

Episode Date: March 18, 2026

In this month’s episode of the WHOOP Podcast Longevity Series, Emily Capodilupo sits down with physician, entrepreneur, and AI educator Dr. Trishan Panch to explore what artificial intelligence actually means for the future of medicine. From ICU algorithms to everyday primary care, Emily and Dr. Panch unpack why the healthcare system is structurally built for acute rescue rather than prevention, and how AI, wearables, and continuous monitoring could fundamentally shift that model. Dr. Panch presents what it would take to responsibly integrate AI tools into clinical care, and how technology might finally make proactive, personalized health scalable. But this conversation goes deeper than algorithms. Dr. Panch argues that true optimization brings together the physical and emotional wellbeing of patient populations. Longevity, resilience, trauma, and self-worth all play a role in long-term health outcomes, and Dr. Panch gives his insight on how best to build algorithms to reach the long-term goals of clinicians and patients alike.

(00:33) Introduction to Dr. Trishan Panch
(01:15) The Intersection of AI and Healthcare
(04:54) Can Healthcare Modernization Be Left To AI and Wearables?
(11:20) Integrating Data From Wearables Into The Healthcare System
(18:00) Implementation of Preventative Care For Long-Term Health
(21:31) Breaking Down The System: What Is The Future Of Healthcare?
(28:38) How Can Clinicians Use AI?
(47:06) What Are The Things People Need To Know About AI?
(54:18) Key Takeaways For Listeners Looking To Optimize AI Use

Follow Dr. Trishan Panch: LinkedIn

Support the show

Follow WHOOP: Sign up for WHOOP Advanced Labs | Trial WHOOP for Free | www.whoop.com | Instagram | TikTok | YouTube | X | Facebook | LinkedIn
Follow Will Ahmed: Instagram | X | LinkedIn
Follow Kristen Holmes: Instagram | LinkedIn
Follow Emily Capodilupo: LinkedIn

Transcript
Starting point is 00:00:00 You even need doctors. What determines your health is what is going on for the other 364.5 days of the year that you're not anywhere near a doctor. Could it all just be done with wearables? If you look at improving people's health, there is a big need to look at optimizing a health state over time by monitoring data on more of a week-to-week month-to-month basis. But it's impossible to answer these questions without at least some first principles understanding of modern AI algorithms. Can this just be done with wearables and AI? Or is there still a role for doctors? So what I would argue is...
Starting point is 00:00:31 Hi, everybody. I'm Emily Capodilupo, WHOOP Senior Vice President of Research, Algorithms, and Data, and today I am joined by Dr. Trishan Panch. Thank you so much for being here. Dr. Trishan is a physician entrepreneur. He has a very cool mix of background. He's an MD,
Starting point is 00:00:50 but he's recently really embraced all things AI and become a very fascinating entrepreneur. So he is CEO of Lunar Studio, he's executive chair and chief science officer of Lumen Health, and he's co-founder of Wellframe. And he brings this very cool experience bringing technology to high-risk populations and merging that with all of his clinical expertise. So Trishan, thank you so much for being here. Thanks for the introduction. Thanks for having me.
Starting point is 00:01:15 Why don't we just start with the sort of easy basics? Sure. Tell me about this intersection of AI and your clinical work. Yeah, fair enough. I mean, it's a very topical question, of course. I'm sure lots of your listeners are kind of thinking about this. I mean, most tangibly, what I'm working on at the moment is we run a course at Harvard School of Public Health. And it started off actually when I was kind of more on the software side myself.
Starting point is 00:01:39 And we were working a lot with health plans and large hospital systems. And the people who run those organizations, on the clinical side at least, are typically physicians. And, you know, we were trying to sell products that had AI in the middle of them, so to speak. you know, it's kind of a part of the software stack. And most of the people that I was working with on the other side of the table didn't really understand what any of this stuff was. And they were hearing more and more about it. And they were like, well, is there somewhere I can go to learn more?
Starting point is 00:02:08 And at that point, I'd just taken over as president of the alumni association. So one of my remits there was to try and think about the needs of the alumni, right? And so there's like 13,000 alumni. They're in leadership in healthcare organizations across the world. So I spoke to them. And I was like, well, okay, like, are you interested in this area? And everyone pretty much said yes.
Starting point is 00:02:29 And what are the kind of things that you'd like to know? And everyone also said the same kind of thing, which is like, I want to know what is real in this area. I don't necessarily want to be first, but I also don't want to be last and left behind. So we started a course initially, and that was five years ago, and that's become a certificate program. And it's for clinical and business leaders of healthcare organisations who are trying to think through how to put AI into their existing organisation
Starting point is 00:02:54 and/or to start new things. Most of these people by training, they're MDs, right? Yeah, exactly. So in medical school, even a very good one, you're not learning anything about AI. And now all of a sudden they're hearing it's going to come take their jobs and all this kind of stuff, and they're trying to see through the hype. Yeah. So that's where you come in.
Starting point is 00:03:11 Yeah, no, that's right. And it's really interesting, right? So for example, if you look at like software, digital health, it was really through like building software that I got into entrepreneurship and stuff like that. So digital health in a way is like quite easy for like clinicians to get their head around, because software is basically a pre-written set of rules, kind of like a recipe, whatever. And then if you can describe the world in rules
Starting point is 00:03:33 and you can put those rules into a computer, then the computer can do some stuff and it does fairly much the same thing every time. And I think people found that easy to understand. I think the problem with the AI paradigm, and it is definitely paradigmatically different, if you really want to understand it from first principles, then you need to have some understanding of statistics
Starting point is 00:03:50 and also some understanding of how computers work. And it's fairly abstract. And then even if you have that, certainly like modern AI algorithms, the kind of foundation models, large language models now that pretty much everything's based on that is gaining traction, they're impossibly large and complex and what's called non-deterministic, i.e. the same input doesn't always lead to the same output. So that's inherently really tough for clinicians to understand. But they're having to make a decision about how to introduce these technologies,
Starting point is 00:04:19 not just for their patients now, but, like, you know, the future of their organisation. I mean, if you want to think very existentially, like, what is the future of the profession, right? Do you even need doctors? Could it all just be done with wearables and advanced blood tests and patients doing more things themselves? I mean, these are the things in the mix. But it's impossible to answer these questions without at least some first-principles understanding of, like, how we got here with AI and roughly speaking what is going on. So that's basically, like, you know, the educational side of what I do, which is like
Starting point is 00:04:50 it's a minority of my time, but that's the aim at least. So just to tease where this conversation is about to go, and to really put you on the spot and painfully oversimplify things: can this just be done with wearables and AI, or is there still a role for doctors? Okay. Well, look, I mean, I think in theory, yes. Actually, not to be too academic about it, it kind of depends what it is, right? So we're here in Boston, right? Yeah. Boston is obviously in Massachusetts, an incredibly well-off state. It's also the biggest town in Massachusetts. It's one of the biggest healthcare hubs in the world, right? But for all of us, even here, we've both got commercial insurance. Everyone in this
Starting point is 00:05:28 building obviously has commercial insurance. We have the highest rate of commercial insurance in the state. But for most people, if you want to try and get a primary care appointment, like today, to get something sorted out, it's pretty tough. If you have a psychological problem and you want to get access to psychological therapy, it's next to impossible. You're either paying a lot out of pocket, which basically excludes essentially 95% of the population, and/or you're basically waiting for some time. Or you're incredibly sick. Yeah, yeah, exactly.
Starting point is 00:05:58 Or you're so sick that the imbalance is so massive that you can't deal with it in your life and you have to be in a kind of facility or hospital, right? So what I would argue is the effective supply of healthcare for most people, most of the time, is zero. And, like, we can kind of create some abstractions in the medical field to make ourselves feel better that, like, yeah, but there's all these buildings and hospitals and all that kind of stuff.
Starting point is 00:06:20 But I think, you know, you guys here have definitely pioneered this understanding in the broader population that, like, what determines your health is what is going on for the other, like, 364.5 days of the year that you're not in, like, a medical facility, for most people. And in the other 23 and a half hours a day, on average, that you're not anywhere near a doctor. And I think in that area, the supply is basically, effectively, zero in the medical system,
Starting point is 00:07:04 at least. I think if you look on the consumer side of things, then you really see a huge amount of leadership. I mean, obviously I'm here with you guys. So, like, that kind of approach I can really see scaling. But I think broadly, like, the IT component, can AI, like, do it? I think if you look at that as improving people's health, I think unequivocally the answer is yes. And there's some stuff we should discuss about figuring out how. If it's about changing the way, like, the medical system itself operates, I think the answer is also yes, but it's much more nuanced there. And I think the reason for that is related to two things. One is the nature of the problems that we deal with in the medical system. And two is the people and what their needs are. So yeah, we could discuss that. But, like, I mean, I think the answer is yes, but it's a very kind of long way of saying yes. Yeah. And it sounds like part of why you're saying yes is that the healthcare system just simply is not set up to meet most people's needs most of the time. And so, you know, in that
Starting point is 00:07:48 white space, technology is going sort of step in. Yeah. But I'm curious, like, to what extent are you saying, and that technology is going to be a lot better than what you're getting right now, which is nothing, versus are you saying, the technology will get to a point where if I could get my PCP on a telehealth visit, whatever they would say to me is not going to be any better than what chat GPT is going to be capable of saying to me. Yeah, yeah. Well, okay. So that, okay, there's a bunch of very interesting things. So, all right. So I think the first thing I want to mention just to qualify what I said before as well
Starting point is 00:08:18 and to qualify what you said. I don't think, you know, either of us are saying that it's not the intent of people working in the healthcare system to address these problems. It's just that. Exactly. It's not set up for that. They're not incentivised for that. It's not really the problem the healthcare system is trying to solve.
Starting point is 00:08:32 It's like, you've got an acute need. You present somewhere. It may be so acute that you need, like, essentially hospitalization; you're, like, physiologically unstable. Versus, like, you've got something going on in your life that may need something, and may need it over 10 or 20 years. The delivery model of, like, fee-for-service, insurance-reimbursed healthcare, where it's all done in facilities, just from first principles is clearly not optimized for that, right? I think we kind of need to qualify all of our answers with that.
Starting point is 00:08:59 I mean, fundamentally, I think the problem that kind of we're all trying to solve in a way is, like, the latent or hidden problem for most people, right? So, like, I think conceptually this is just the way that I look at it. And then I'll kind of introduce another thing, which is this thing about preemptive medicine that you implied there, versus, like, waiting for a problem. And where does AI fit in? You know, and this kind of goes back to why I was interested in digital health. I think it's where, like, WHOOP plays a huge role as well. So, like, when I first started in this area, I'd been a primary care physician for about 10 years at that point.
Starting point is 00:09:31 And, you know, office-based practice. I had a panel, all this kind of stuff. And then I joined a lab at MIT. I did a master's and then joined this lab at MIT, the Laboratory of Computational Physiology, and it's really pioneering. I think it is like the Homebrew Computer Club for healthcare AI
Starting point is 00:09:48 at MIT. This guy Roger Mark did it; he's retired now, but he was working till, like, his mid-80s. And basically what they were doing is taking physiological signals from the ICU, right? You know, in the ICU someone's obviously unconscious. Everything is instrumented. They have a very rich, very granular data set. So lots of modalities, and lots of
Starting point is 00:10:05 measurements per modality. So then back in 2009, when I kind of joined, they were saying, well, these, like, machine learning approaches, which had not got to anywhere near what we're talking about at the moment, these could be useful in figuring out, like, given these physiological signals, how do we predict events? And the vision was, if we could do that, could we build evidence-based medicine at the local level? So, i.e., each organization uses its own data, builds its own algorithms, with data scientists, which wasn't a thing, but, like, someone who understands this stuff, and clinicians working together to produce, like, prediction rules for a specific organisation
Starting point is 00:10:45 given their resources and their stuff. And that's kind of initially what I started working on. What became clear to me, and I think is where you all kind of come in here, and it's where we played as well in this area, is that, like, well, that's great for patients in the ICU. That is possible. You can start using AI to, like, look at the data coming in and the events at the end and see if you can predict them and then intervene earlier. But the problem is, at home, no one's instrumented and you haven't got a clue. So we felt we had to kind of address that. And I think that is part of this movement.
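The ICU workflow Dr. Panch describes, granular physiological signals in, early event predictions out, can be sketched at toy scale. Everything below is a hypothetical illustration: the feature set, the cutoffs, and the threshold rule are stand-ins for a model actually trained on ICU data, not the lab's methods.

```python
# Toy sketch of ICU-style event prediction from a physiological signal.
# Feature choices and cutoffs here are illustrative assumptions only.

def window_features(samples):
    """Summarize one window of heart-rate samples (bpm)."""
    mean = sum(samples) / len(samples)
    # Mean absolute successive difference: a crude variability measure.
    variability = (
        sum(abs(b - a) for a, b in zip(samples, samples[1:])) / (len(samples) - 1)
        if len(samples) > 1 else 0.0
    )
    return {"mean_hr": mean, "variability": variability}

def predict_deterioration(samples, hr_limit=115, var_floor=1.0):
    """Flag a window as concerning when the rate is high AND beat-to-beat
    variability has collapsed (both thresholds are hypothetical)."""
    f = window_features(samples)
    return f["mean_hr"] > hr_limit and f["variability"] < var_floor

stable = [72, 74, 71, 75, 73, 76]            # normal rate, healthy variability
concerning = [118, 118, 119, 118, 119, 118]  # tachycardic, flat signal

print(predict_deterioration(stable))       # False
print(predict_deterioration(concerning))   # True
```

In a real pipeline the threshold rule would be replaced by a model fit to labeled outcomes, which is exactly the "data scientists and clinicians working together" step described above.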
Starting point is 00:11:14 Now, people are increasingly, of their own volition, using products like yours. They're increasingly instrumented at home. Yeah. And it's funny that you bring that up, because that's exactly WHOOP's origin story. Our founder and CEO, Will, playing squash at Harvard and overtraining. Yeah. And sort of got punished for working too hard. And so asked this question that, you know, when you want to talk about
Starting point is 00:11:36 happy accidents. It's probably the best thing that ever happened to him, to go overtrain. But he asked this question of, you know, why is it, in the wealthiest university in the country, in a D1 program, where everybody kind of was watching me overtrain and praising me for working so hard, did nobody realize what was actually happening, and, like, nobody could stop it? And he started reading all these physiology papers and talking to anybody who would talk to him, just a teenager at the time. And, you know, just sort of got obsessed with this idea of HRV and heart rate monitoring and all of that kind of stuff. And from there, WHOOP was born.
Starting point is 00:12:09 I've probably told this story too many times on the podcast. That's cool. I'll leave it at a high level like that. But it's in a lot of ways kind of the same thing you're talking about. But what I'm curious about is, sort of, you're talking about the ICU, right? Everybody's hooked up to loads of monitors, and it's, in its weird way, very similar to WHOOP, where it's like, let's just kind of monitor everything so that the second something starts going south, we can intervene.
Starting point is 00:12:31 We don't need to wait until you're halfway through an awful season to go, maybe you should do less. Yeah, yeah, yeah. I'm taking the rest day, maybe. Yeah, yeah. But when we talk about the role that WHOOP can, like, step in to fill, it's very much outside of the health care system. It's people spending their own energy and resources and whatnot to kind of fill a gap that they're finding where, like you said, they sort of don't have unlimited access
Starting point is 00:12:56 to their PCP. And frankly, their PCP's training doesn't really extend to, like, degrees of wellness. And so then they're getting these supplementary products. A lot of what's very fascinating about your research is you're actually trying to bring this stuff into the health care system. And so I want to talk a little bit about the differences there, or, like, how does something similar sort of become part of the healthcare system, versus this, like, privileged thing that some people elect into. Okay. So I think to answer that we've got to get a little bit conceptual. So kind of, what is the task, right?
Starting point is 00:13:25 Conceptually, what's going on? So basically, like, in the ICU, let's use that one. It's very simple. You know, you have different physiological systems, right? You've got, like, the cardiovascular system, you've got the respiratory system, et cetera. Let's just say there's, like, a number of physiological systems in the body. And then each one of those systems has some things that you can measure.
Starting point is 00:13:48 And so basically, at any point in time, one person's health state can be kind of articulated by where they are on each one of those dimensions, as defined by each one of these measures. And if you know that, you know what health state they're in. And hopefully what happens over time is their health state goes from, like, one level to an incrementally higher level, to an incrementally higher level, to a much higher level, and at that point you're kind of good enough to go out into the community, right? So, like, in that state of incrementally moving one's health state forward, the edges of that, the connections between each state, are basically medical practice, the interventions of the health system, right? So therefore, what the task is here is basically translating that into the community. And the problem with that, of course, is that we know
Starting point is 00:14:28 what the dimensions are, but we just don't know what the data is on each dimension in a granular way. So what we first started working on: if you look at healthcare expenditure, it's predominantly concentrated, in this country, in people with what's called multimorbidity. So that's more than one concurrent chronic disease. In the US, it's the majority of people, the majority of adults at least, that have this. Now, for those people, basically, what we focused on is: the people that the healthcare system really is most incentivised to look at are those people who are being admitted into hospital, because hospital is where all the costs are being driven up. So what I started looking at is, with this approach that we were looking at of using AI in the ICU, could we look at, like, other areas of the healthcare system where there was an alignment of incentives, for want of a better word, which is basically that the health plan really cares, because if they are not managed well, they end up driving
Starting point is 00:15:22 up a lot of cost. And there's a window of kind of motivation. You know, there's a teachable window, a teachable moment, which is basically after someone has been discharged from hospital: preventing them getting back in again. And then what we put in place was basically: each person gets a healthcare checklist of all the things they need to do and look out for, for all of their conditions. And then all of their health state is computed, and then, given that, a coach, in this case it's a nurse, basically has a panel of patients, and that is filtered to who needs what kind of input, and then some of that input is automated. So we then rolled that out with a bunch of health plans.
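The post-discharge program described here, a per-condition checklist, a computed health state, and a nurse's panel filtered to who needs input, can be sketched in miniature. The conditions, checklist items, and priority rule below are hypothetical stand-ins for illustration, not Wellframe's actual logic.

```python
# Toy sketch of checklist-driven panel triage. All conditions, items,
# and the scoring rule are hypothetical examples.

CHECKLISTS = {
    "diabetes": ["check blood glucose", "schedule eye exam"],
    "heart failure": ["weigh daily", "take diuretic"],
    "copd": ["use inhaler", "monitor oxygen saturation"],
}

def checklist_for(conditions):
    """Combined checklist across all of a patient's conditions."""
    return [item for c in conditions for item in CHECKLISTS.get(c, [])]

def outstanding(patient):
    """Checklist items the patient has not yet completed."""
    return [i for i in checklist_for(patient["conditions"])
            if i not in patient["done"]]

def triage(panel):
    """Filter the panel to patients with outstanding items, most first,
    so the nurse's (or an automated coach's) attention goes there."""
    needs_input = [p for p in panel if outstanding(p)]
    return sorted(needs_input, key=lambda p: len(outstanding(p)), reverse=True)

panel = [
    {"name": "A", "conditions": ["diabetes", "heart failure"], "done": ["weigh daily"]},
    {"name": "B", "conditions": ["copd"], "done": ["use inhaler", "monitor oxygen saturation"]},
]

print([p["name"] for p in triage(panel)])  # ['A']  (B has nothing outstanding)
```

The same shape scales from "thousands of hand-written rules" to a learned prioritization model without changing the surrounding workflow.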
Starting point is 00:15:58 But that core idea is fairly tractable, right? Like in primary care, people are also trying to figure out the same thing. And I think there is a big need, and you're seeing primary care move in this direction. I think it's more the concierge medicine, you know, I would call it performance health. I think, you know, that would be more of a consistent term with the work that you guys do. but like that is moving more in this direction of basically trying to look at optimizing a health state over time and doing that by monitoring data on more of a maybe not day to day for all but certainly like week to week month to month basis. So this is a very like human in the loop system.
Starting point is 00:16:36 So you have an AI that's recommending what the coaching is. Yes. Yes. But a nurse who's delivering. Correct. When it started off, it was basically just me writing it all out. So I wrote out thousands of lines of code covering basically all the permutations of patients and their conditions, which was fun,
Starting point is 00:16:54 if that's your idea of fun. It was tedious but fun. But, yes, that's ultimately what it boiled down to. And that same space is very rich. And so this is very much focused on those patients who are at the end of the journey of, like, cardiometabolic disease. You know, they've had an ischemic event, either cerebrovascular or cardiovascular typically, have got some end-organ damage of the kidneys from, like, long-term diabetes, and we are trying to optimise their health. And there's a lot of room for, like, physical conditioning, physical activity, as well as medication compliance, testing for secondary complications, all that stuff, right? But that same idea does definitely apply earlier on for all of us, everyone here in this room. But the issue is that the medical system is less incentivised, because the thing that drives up cost, which is hospitalisation, is much, much further in the distance.
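The earlier picture, health as a point on several physiological dimensions, with interventions as the edges that move someone between states toward a target, can also be made concrete. The dimension names, scores, and discharge threshold below are hypothetical illustrations, not anyone's clinical model.

```python
# Toy sketch of "health state as a point on several dimensions".
# Dimensions, scores, and thresholds are hypothetical.

health_state = {
    "cardiovascular": 0.4,   # 0.0 = critical, 1.0 = fully recovered
    "respiratory": 0.6,
    "renal": 0.7,
}

def apply_intervention(state, effects):
    """An intervention is an edge between states: it nudges one or
    more dimensions. Scores are clamped to [0, 1]."""
    return {
        dim: min(1.0, max(0.0, score + effects.get(dim, 0.0)))
        for dim, score in state.items()
    }

def ready_for_discharge(state, threshold=0.8):
    """'Good enough to go out into the community' once every
    dimension clears a (hypothetical) threshold."""
    return all(score >= threshold for score in state.values())

after = apply_intervention(health_state, {"cardiovascular": 0.5, "respiratory": 0.3})
print(round(after["cardiovascular"], 2))  # 0.9
print(ready_for_discharge(after))         # False (renal is still below threshold)
```

Monitoring week to week, as discussed above, amounts to re-estimating this state vector from wearable and lab data and choosing the next edge accordingly.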
Starting point is 00:17:39 But the issue is that the medical system is less incentivised because the things that drive up cost, which is hospitalisation, is much, much further in the distance. And have you had any success there? Because I think that's what's really challenging, right? It's hard. Yeah. Right after you've had a heart attack, you're really spooked and fear is a great motivator. Sure.
Starting point is 00:17:58 You can get people to do things. But I think where a lot of the cost savings that have been traditionally harder to find incentives for are: how do you take the 30-year-old who's, like, not at risk in the next five years for a heart attack, but sort of doing the behaviors that put them at risk in their 60s, and, like, make the system recognize that it's worth investing in them 30 years before? Yeah, yeah, sure. So that's the kind of, I mean, more-than-a-million-dollar question, isn't it? I mean, it's like multiples of tens of billions, I guess, in terms of preventable costs.
Starting point is 00:18:28 But, like, it's a core problem. I think there's ways of doing it for, like, short periods. No one has cracked a way of doing it for a decade or something like that. And that's really the key here. I think, like, you know, from kind of a theoretical point of view, there's an externality; if you can internalize it, then people can optimize to that. So I think, you know, these long-range health scores, health age, like you guys do,
Starting point is 00:18:54 that makes a lot of difference to some people. But I think what we're talking about is people who already have a lot of internal motivation and agency to do these things. And that's a relatively large number of people. I'm super encouraged. And so when I started looking at this Emily, so after we sold the company, right, I started kind of thinking about what I wanted to do with the rest of my life kind of thing. And basically, you know, one of the areas I really felt, you know, I kind of feel that we're in the midst of these four areas where we have like these kind of 10x solutions that are cresting or maturing at the same time.
Starting point is 00:19:25 And I just felt I wanted to kind of focus my energies on each of those areas and then just see what comes. And one of the areas I thought was this kind of area of, like, longevity, performance health, performance medicine. And I was also looking at, like, well, what is the approach here? And I think the device and device-associated services, like you guys are doing, clearly makes a lot of sense. And obviously, you're a very successful organisation doing it. I think the medical services side is much less clear. And it really does seem to be in the area of, like, membership medicine, basically, you know, performance health, which, basically, just because doctors' salaries are a certain number.
Starting point is 00:20:03 And there's only a certain number of patients you can deal with, even with a lot of tech leverage. It ends up being fairly costly and beyond what most people can handle unless they're in, like, I guess, the top 10% of income earners. But I think that's where it is now. So I actually got involved with a physician friend of mine, and he does this stuff for me and a bunch of other kind of entrepreneurs who've had exits in this area. So there's, like, a lot of people like that here. I think in other major cities you have that. But it's not the norm of primary care for most people. So these are, like, things that are going very
Starting point is 00:20:37 separately. And I think to shift that, what you'd have to do is find, like, the trigger, or find the population that's motivated. And I definitely got some thoughts there. And then basically tie all of the stuff in the wearable domain to a healthcare metric that matters. And unfortunately, the ones that matter most are utilization, and occur over, like, a very long period of time. So otherwise, I think the way of getting this stuff in is moving more towards, which I think a lot of people are advocating for now, that your health insurance and all that stuff is for catastrophic risk, and you basically have a different, more concierge-type model, enabled by technology, both devices and AI, for the more kind of lifestyle-related component. But then there's a lot of work that's needed
Starting point is 00:21:22 in improving the public's awareness that stuff is going on way before you have an event. And it's in that period that you kind of need to do something. What does that system actually look like? Because I think one of the things that I get encouraged by is exactly the technology that you've built, where you can have a lot of it automated. Maybe there's a human in the loop. Maybe even eventually that human in the loop is not necessarily in every loop. You should be able to scale the sort of ratio of doctors to panel size, or the number of patients that can be served, if you actually had really, really good triage and all that kind of nursing-level, simple question answering happening by the AI. How far away do you think we are from that?
Starting point is 00:22:07 And what do you see as the big barriers? Okay. All right. So the first part of it's easy. The second part is very hard to answer. I never said this was going to be easy. Fair enough. It's a good question.
Starting point is 00:22:18 So with the first part, we're definitely there now. I think the issue that we have in medicine is really a difference of, like, the population level versus the individual level, right? So, like, I can kind of explain in some specific words. You know, where we're already at now, this kind of area that you're describing, is clinical reasoning. Yeah.
Starting point is 00:22:41 So basically, given an individual patient's needs, which they sometimes feel subjectively, and sometimes are expressed objectively in, like, things you can measure: a clinician listens to those, and examines, and looks at some data, and figures out what's going on with the patient, this kind of health state. And then, given that, relates that to, like, what's in medical knowledge. That's making a diagnosis. And then, based on the diagnosis, a management plan: you know, try and get the patient involved in doing that, measure if it's working, do that in this kind of
Starting point is 00:23:13 game loop almost, where you're going backwards and forwards, measuring, trying, measuring, retrying, et cetera, until the patient gets to, like, kind of what they have as a health goal and what the clinicians have as a health goal. In studies on somewhat abstract populations, the team at Google, so you met Vivek, Vivek Natarajan, we work pretty closely with them on some research stuff. Yeah, and he came on the podcast too. It's going to be great. Yeah, I mean, so him and Alan Karthikesalingam, I work with a lot. They've done, you know, a prospective, randomized, blinded clinical trial of clinician versus AI in clinical reasoning, and the AI comes out as superior, right? And that's across a number of conditions. I think it's a few thousand cases.
Starting point is 00:23:44 So at a population level, I think the mental model for this is, it's kind of like self-driving. Like, self-driving, even in its rudimentary state at the moment, is probably safer than humans across the whole population, right? And, like, I think, you know, every day there's like a jumbo jet's worth of people dying in road traffic accidents in the US alone. But of course, if a jumbo jet dropped out of the sky every day, then within, like, a couple of days, no one would be flying.
Starting point is 00:24:10 But now, of course, it's an acceptable risk; it's base rate neglect, basically. Like, it's the same thing with healthcare: it is hugely variable. And clinicians, you know, we're fallible, inconsistent, all that kind of stuff. AI is demonstrably better in the kind of research setting, with relatively robust methodology. Now, of course, you could have larger populations; like, all of that stuff is true. But the signal is directionally correct: at least in silico, in vitro, outside of actual clinical practice, yes, what you're saying is correct. Like, AI should be able to do this better.
Starting point is 00:24:46 Now, the question, of course, is, well, how do you introduce that into actual clinical practice? And that is much more complicated. Great, that's super complicated. And it's interesting. There's data, it's a year old now, so I don't even know how it's changed. But there's data from Nielsen last year that showed that 80% of millennials and Gen Z
Starting point is 00:25:00 trust ChatGPT more than they do their doctor. And I think probably post-pandemic, there's, like, an all-time low in terms of, like, the rate of people trusting the health care system. And so in some ways, it seems like we're really primed for this. In other ways, it's like, but I want to talk to a human, you know, even if they're good. And then, of course, as of right now, when we're recording this podcast, we're less than a week out from OpenAI's big announcement that they're no longer going to give health advice or legal advice. And so it does seem like there's both this, like, desire for AI to fill this role. I think the regulatory space, as far as, like, how the FDA is going to handle all of it, it's really scary. And I think there's something maybe also a little scary, if you think about it too much, of, like, do we want OpenAI to, like, decide what kind of care I get? Sure. You know, there's sort of humans behind that. So it'll be really interesting to see how all of that comes together.
Starting point is 00:25:52 But I'm curious, from your vantage point, what is this regulatory framework going to look like? What is the process of getting acceptance to actually try this stuff? Because I fully believe that the technology is better than the average clinician, right? Like, that's not that hard to imagine. And when I think about, you know, teenagers driving, I would so much rather they be in self-driving cars. It's a good example. A 16-year-old on the road. But how do we actually get the system over there?
Starting point is 00:26:21 Yeah. But I think at scale what you're saying is definitely true, right? But for each individual, it's not always true. So, for example, you know, on average, self-driving cars are safer, but they're also sometimes going to fail. Yeah. And someone's going to be harmed. And I think that's the same in medicine.
Starting point is 00:26:39 You know, it won't be better for every single patient, but across a large population of patients, probably we have enough evidence now, certainly directionally going forward as the models get better, et cetera, et cetera, that's going to be unequivocally true. But there are still going to be some people who are harmed. It's really complicated, and that's essentially a social decision.
Starting point is 00:27:00 That becomes then a political decision, right? And I think we did a thing on this, if any of your listeners are interested; it's on the Harvard School of Public Health YouTube, where we did a pressure point on AI and healthcare. And we talked a lot about regulation, because that's kind of more of the lens there. And essentially, in the US at least, and it's pretty similar everywhere, governments have essentially delegated this to individual healthcare organisations. They're like, well, we don't really know.
Starting point is 00:27:23 and part of that is we have a bunch of regulators who take our course. Many of them come from legal backgrounds or policy backgrounds. They don't have like the technical expertise to vet these technologies. Even if they did, they're developing so fast. And, you know, the foundation model paradigm are these like kind of generally intelligent but hugely variable performance models that are non-deterministic and operating at massive scale and hugely general is an inherently difficult thing to regulate. That's the situation we're in at the moment.
Starting point is 00:27:52 We don't have a national statute that says this is okay and that isn't, when it comes to healthcare AI at least. So what it's left to is individual healthcare organizations to figure out what they think is safe and how they're going to manage it, right? What's up, folks? If you are enjoying this podcast, or if you care about health, performance, fitness, you may really enjoy getting a whoop. That's right. You can check out whoop at whoop.com. It measures everything around sleep, recovery, strain.
Starting point is 00:28:21 and you can now sign up for free for 30 days. So you'll literally get the high performance wearable in the mail for free. You get to try it for 30 days, see whether you want to be a member. And that is just at whoop.com. Back to the guests. Could a clinician just sort of put everything in chat GPT and then sort of do what chat GPT said? And then is that like defensible? Like the AI.
Starting point is 00:28:48 Well, so, as you might imagine, they already do that. In theory, there's a human review. Yeah, sure. But, I mean, they do it at universities. There are faculty who are using these technologies extensively, even though they shouldn't be, because the gains are so overwhelming, right? Like, in terms of productivity, everyone's doing that. So clinicians absolutely are doing that at the moment. So I do a bunch of work at Boston Children's Hospital with John Brownstein, the chief innovation officer there. And he's led a bunch of work basically getting enterprise versions of ChatGPT. He does it with OpenAI.
Starting point is 00:29:24 It's on that local Azure instance and it's HIPAA compliant. And none of the data leaves anywhere. It doesn't go to like OpenAI servers. It's all run locally, what locally on their on their Azure cloud. And clinicians are encouraged to use it, not just for clinical reasons, also for research and also for like service development and innovation. And also for like administrative stuff, like stuff you don't see in the background of healthcare that actually is like there's a lot of waste.
Starting point is 00:29:48 that goes on there. So, I mean, I think clinicians using ChatGPT is happening a lot. I think probably the spirit of your question, though, is that a lot of them are also using it off the books, and they kind of shouldn't be, because it is protected information. So, people using their own personal ChatGPT accounts. Sure. So I think there are things, like local hosting, that are going to address the HIPAA and privacy concerns, and those are very real. So glad to hear that Children's Hospital is on top of that. But as far as just the technology, and what it is and isn't, but maybe appears to be, capable of, what are the concerns that you see as doctors rely on this?
Starting point is 00:30:27 I mean, I think medicine and law are actually pretty similar now in terms of this stuff, right? So lawyers use these tools a lot. And, you know, law has actually probably been better than medicine in terms of task shifting, to paralegals and then offshoring and all that kind of stuff, to manage costs. So the problems arise when lawyers or doctors are using these tools without any review. And that is definitely, unfortunately, happening. And it's basically just straight-up bad practice. Same as if you just Googled it, you know, copied what you found on Wikipedia
Starting point is 00:31:01 and then pasted that into the medical record. Obviously, that would be no, that would be indefensible in everyone's guys, right? And I don't think we're in a fundamentally different situation here. I think what's interesting, of course, is that in many situations, especially with the reasoning models, that most people don't have access to. they're hidden behind the kind of $200 a month tier firewalls here in the US, they are better. Like the reason it's just much better. Like, it's really interesting when GPT4 came out.
Starting point is 00:31:28 I had like a preview version. It was called Prometheus through the research team at Microsoft. And so we had it. You had to install a version of the edge browser and then the dev version and set it up. So I had it. And I was blown away by how good it was. And I had one of my former students is their head of pathology at National Institutes of Health. And so I was basically posting stuff on LinkedIn
Starting point is 00:31:49 that was like, this is produced by AI. And he just didn't believe me. And so I basically said, all right, well, let's just get on a call. And he sets the questions for the pathology boards, for pathologists in the whole country, right. And he has a team of like 10 pathologists working on this year-round to produce these hard questions for board certification. And he just gave me one of the questions,
Starting point is 00:32:09 which there's no way it could have seen because it'd like written it the week before. And it answered it beautifully, but more importantly, the reasoning was. like watertight. And then he gave me another one. And when I first did it, he thought I was basically making it up that I was like writing these things. But it's obvious that like my level of pathology knowledge is no way near sufficient to do this. And it was tireless and it just kept going and the reasoning was watertight and the answers were correct. And of course it's going to make some errors. And the models are much better now than when the like the test version of GPT4
Starting point is 00:32:39 came out. But that core logic, I think clinicians have embraced. And I think, you know, they are like, the models are definitely unreliable and it still requires professional judgment. So it doesn't obviate that. But each clinician should be much more productive. You know, I think certain things, obviously like the area of medical scribing transcription has already become like populated with AI. I think lots of stuff like referral letters, other forms of documentation, care planning, all of that stuff is coming in short order. That means that clinicians should be able to spend more time with patients. So I think that's all happening. But what you're asking about is much more ambitious, which is that like, well,
Starting point is 00:33:18 what are other things that they're not currently doing that they could do because of AI? We've been thinking about this a lot. So basically, I think there's two areas that are kind of non-obvious that we're seeing with AI moving very fast. And we actually have some courses at Harvard that we're teaching for clinicians in this area. And this is broadly in the area of vibe coding. I don't know if you've had much exposure to. Yeah. For people who aren't familiar, vibe coding is this idea of like in natural language explaining. to program like cursor or even chat chv-t can do it where you say like here's with the code that I want it to write and then it'll just generate sometimes thousands of lines of code based off
Starting point is 00:33:55 of one or two natural language sort of sentences and this is you know horribly popular because you can build mobile apps in an hour but also horribly dangerous because they tend to totally avoid things like security and all those kinds of things on the surface it looks like it achieved the thing you wanted to. But to a software engineer, it's like, yeah, it's totally frightening barf code. Yeah, yeah, yeah. That should not just be deployed blindly. Yeah, exactly.
Starting point is 00:34:20 So, like, you know, I think the vibe coding approach, it's very interesting. Like, if you take, like, so professional engineers are also vibe coding now, right? They're just vibe coding a very different way to the rest of the public. And I think my point is that looking at the ways professional engineers, software engineers, have used these tools to increase their own productivity and teaching that approach. And they use it very differently. They don't just write, I want a mobile app that does X, Y, and Z. They will say, like, okay, I have this idea for a mobile app.
Starting point is 00:34:47 And then they'll write all of that out in think about the users, think about all the things they do, think about the risks, think about the infrastructure, implementation, security, et cetera. That becomes like a massive requirements document, which they write with AI, typically these days, I think. I don't know why you wouldn't, but maybe some people don't. But like, that's certainly what I'd recommend. And then from that, you get a set of technical requirements.
Starting point is 00:35:09 and then from that you break it down into like individual smaller chunks and build each chunk with what's called a unit test. Like how do I know if this thing's worked? And then build each one at a time. That's very unglamorous and deliberate. But then you get something that it's modular. So if it breaks, you know which bit is broken. And you can test each bit as you go forward.
Starting point is 00:35:29 So when you get a result at the end is much more likely to be stable. So I think that movement is incredibly powerful. What that means is that like what we're working on now is if you could teach the way of doing vibe coding correctly, quote unquote, to clinicians, could they develop their own first versions of products? Just MVP's, because obviously to deploy something at scale, there's a whole different set of considerations. It's massively transformative for clinicians who've got many who've got lots of ideas, but they're basically kind of hamstrung by the fact that they can't necessarily develop them themselves. So then the effective cost of developing an idea
Starting point is 00:36:05 is massive, they've got to go and find an engineer, most hostels don't have engineers, or if they do they're massively restricted and people just kind of give up and don't do any of this digital innovation. And I think a lot of great ideas I've seen for years just died on the vine. But the second thing with that is vibe coding as applied to data science. So could a clinician look at their own service data from their own practice and basically try and figure out what's working, try and figure out what they need to improve themselves? And so I work with Heather Matty is she teaches, she's the executive director of the Health Data Science Master's program. And her and I are taking like that one year program and teaching it in a much shorter amount of time with like
Starting point is 00:36:41 vibe coding doing all the work. But you're getting the statistical intuition. And basically what we feel, and I think it's kind of relevant to like the consumer work here as well, is that all clinicians will be their own data scientists and will look at their own data from their patients and from their practice and make inferences to improve their own practice themselves. And so for people listening who aren't quite following what you're saying, what would that mean for me as a patient? How would I experience? That would mean because at the moment, like, your doctor is providing care and seeing you and they're submitting a bill and they're getting paid.
Starting point is 00:37:13 But they typically don't have the bandwidth necessarily to see, well, is that care working? Like, I think it's working when I see you and you feel like you're getting better. But like what about the wait time? What about the like, if I'm asking you for like how you feel about this and whether you'd approve it to a friend, what about the significant events? Like all of that, because it's so difficult to look at all the data, they typically get an external agency and wait to do it on like an annual cycle. But managing something, I mean like a company like this is definitely not managed on looking at the metrics once a year, right? Of course. So like,
Starting point is 00:37:45 you should be looking at these things once a week or maybe even every day. But that becomes possible if you have access to the data and you have some tools of inference, these like vibe coded data science tools I think to start off with. But then of course I think it's going to just shift to like monitoring products and you're not involved with making it yourself. But I think in this very early phase, we're really interested in engaging the leaders of organisations who are very interested in improving the quality of their organisations. The data exists, but they just don't have the ability to analyse it themselves to use AI and figure out how to use AI to do the analysis. But we think that also applies to patients and, for example, their wearable data at the moment,
Starting point is 00:38:25 for example, right? And I think this is the way this field is going. You have, I wanted to say this earlier, but I've just kind of lost the thread. Like it's like, in terms of like the future for like this kind of AI-enabled primary care or performance health. I think actually, honestly, it's all pretty clear that, like, what you're going to have is you're going to have some very high-resolution view of what your baseline risk is. And I think that's determined going to be by whole genome sequencing or exome sequencing, which has come down and cost massively. But the ability to interpret still has a lot of work to do.
Starting point is 00:38:56 And then probably some high-resolution imaging. So whole-body MRI or, like, some. some kind of blood panel that's a very high resolution. Then you have basically monitoring on a day-to-day basis that's going to come from wearables like whoop and other modalities and also like more low latency tests. So proteomics is probably the one that's furthest along. Still not necessarily quite there.
Starting point is 00:39:20 But then you need AI to make sense of all that. And your clinician helps set the strategy. Like what's the long-term plan for your health? And therefore, like what things need to be measured? and given the inferences that come from the measurement, can you just do something more yourself? Or do you need to speak to a clinician and then you're referred into like the health system?
Starting point is 00:39:41 So there's a somewhat rosy picture that you're painting here where like as doctors become more tech enabled and like have more access to AI that they're going to start to get into sort of this wearable and wellness and preventative health data that right now they're largely unequipped to handle and therefore ignoring and start to serve a function that I'd say arguably today they're not at all.
Starting point is 00:40:06 You think that's where we're headed? Because that's a very exciting future. I mean, yeah, sure, sure. But I mean, I think some are, right? I mean, I think it would be the same with Wu. Who was using Whoop at the beginning. It was like college athletes, of course, right? In exactly the story you told.
Starting point is 00:40:18 Or like people who are already in like a high performance environment. Then if you look at it now, of course, there's more people like me who are not necessarily college athletes, to be blunt. I think it's the same difference. I think you're seeing that in the way that primary care is being done. there's primary care towards optimization, towards finding like the best version of yourself now and also living as long as you can, that is, I think it's a secular trend. Now, of course, to get it to everyone, I think depends on the price point coming down along,
Starting point is 00:40:46 which means much more technology and much less people. So I think you've touched on this a little bit, but just to make it really practical for any clinician who's listening, what should clinicians be doing right now to prepare themselves for the way that their field's going to change in the next five years. Okay, well, so that one is easy, actually. Okay. Like, it's like you need to be getting educated. Like, this is the wrong thing to, like, hide one's head in the sand about.
Starting point is 00:41:13 Because it's theoretically fairly dense to get your head around. The primitives, i. the basic conceptual fundamentals of understanding this, are not related to anything else in medicine. So, like, it's going to require intentional study, which unfortunately a lot of clinicians do, which is that, well, there's this thing I don't really understand. but I have a strong opinion that it doesn't work, even though I don't really understand it, and I'm just going to ignore it.
Starting point is 00:41:38 The growth rate is unprecedented, and obviously just anything that grows exponentially, it's difficult for human beings to predict. Yeah, we're bad at that. Yeah, totally, exactly. So I think that's the first thing. So the second thing, beyond that, is start playing. And by that, what I mean is start developing things yourself.
Starting point is 00:41:56 The great news there is it's never been easier, right? Prompting, writing stuff in natural language into a model is akin to like programming for some tasks. That will get you so far. Some things you need to build. But then the news is also good there. You have like methods where like anyone, including myself, can like functionally behave as like a very junior software engineer.
Starting point is 00:42:18 And I just think there's no, just to be blunt, there's no excuse. I don't, I mean, say you're busy, this, that, whatever. But like this is like a fundamental moment for the profession. I think really, in my opinion, like, I'm good friends with Alan and Vivek at Google. So, like, what I'm about to say is coloured by just the fact that, you know, I know them and I know them outside of work. But I think their paper, the Amy paper that they produce, I like, I firmly believe
Starting point is 00:42:44 it's a strong statement that, like, at that point there was like a new era of medicine that started. Basically, that problem with clinical reasoning that we always thought only people could do that. It was proven, at least in the early phases, like, that computers could do it. And I think the rest of it is now inevitable. It's just early. That was what, like 18 months ago, which is nothing in kind of getting things into clinical practice, right?
Starting point is 00:43:07 It usually takes 10 years. We're in the early phases of proving that out. And it seems like absurd now. I kind of find it difficult to see another eventuality here. Like I think it would take time. I think there's other areas here, Emily, which we haven't kind of gone into a load of pushbacks going to happen from the profession, which you've already seen in like things. I think that's only going to increase.
Starting point is 00:43:27 But it seems like that's going to be more. just sort of protectionist than actually good. But it's a very powerful lobby, right? Definitely. Yeah. I mean, you heard, did you hear what happened in like Illinois? Basically, the state of Illinois, the governor produced some decree about essentially it being illegal to use AI tools, psychological therapy or psychological counseling or psychological
Starting point is 00:43:49 support, I think they call it, which is obviously unpoliceable, right? Because that means that anyone using chat GPT to like think about their feelings or whatever is basically breaking the law. and then obviously dedicated vendors can't operate in the state. But that was in response to a lobby from the National Association and Social Workers, who are psychotherapists, to basically say that, like, this is not safe and whatever. And at the same time, you have massive waiting lists and most people have zero supply of, like, psychological therapy. But all the professions, and it's not just in clinical, right, every profession is going to be like this,
Starting point is 00:44:23 are going to respond in a defensive way. And, you know, maybe that's their job. They should do that. And there's a broader like discussion for society to have to be that, well, the net benefit to people counter balances the interests of the individual professionals. Yeah. And, you know, I definitely agree with everything that you just said. And I would add to that that this is true times every profession. And anybody who's sticking their head in the sand and saying like, this is a fad. You know, I think we're kind of past the place where that's a reasonable point of view. And, you know, you know, it's just going to knock off one case after another. And I do believe that that doesn't mean that every doctor is about to lose their job. It's a transformation, not a loss. But, you know, some people are going to figure out how to ride that way and other people won't and retire early.
Starting point is 00:45:13 And I think that, yeah, everybody should be playing with these tools. I mean, they are free versions of all of them. I think another part of what I wanted to say about where this is going, right, basically these two economies, right? There's the kind of economy of needs and there's the kind of of economy of wants, right? And in the economy of wants, which is basically like products, the digital things that are on the table, you know, anyone listening to this, the things they'll be using, those are all getting more and more powerful and cheaper and cheaper and cheaper, right? The actual price may be staying the same, but the features for that price is obviously massively increasing. And that's kind of happening in a lot. Then there's the things that you actually need,
Starting point is 00:45:50 which is like healthcare, education, housing, those are all getting way more expensive. And so the question is why, right? Like that's a kind of, it's a bit weird. Like the cost of education is steadily going up. The cost of healthcare is steadily going up. Now, you can make a couple of arguments here. You can make an argument, a kind of more libertarian argument, that like, well, that's because government gets involved in these things that you need
Starting point is 00:46:10 is socially important and introduces some distortions possible. You can make a different argument, which is that like, well, the essence of these things are two humans interacting. And so it's not necessarily automatable or, you know, the technology revolution that is in everything else doesn't apply in these areas, right? You can also make another argument, which is that basically, well, as societies get richer, people spend more and more money on like the core necessities. They spend more on housing. They spend more on education. And certainly in the US, you've seen
Starting point is 00:46:38 that. And in richer cities like Boston, you spend more on those things. Therefore, if you follow that through, if you buy this kind of AI thesis that society overall is going to get richer, it's going to get more productive and richer, then you'd actually expect that the amount spent on healthcare as a percentage of GDP, obviously as well as absolute terms, is going to increase considerably. And that means that there's lots of opportunities for different forms of healthcare without substitution for the medical system. And that's basically the argument I'm making. I want to shift gears a little bit before we wrap. Yeah. On average and at a population level, AI, augmenting the healthcare system is going to result in higher quality care. Yeah. We also acknowledged
Starting point is 00:47:21 that that doesn't mean that it's going to be perfect. For every individual who's listening and who is likely, I'd guess close to 100% of people, are consulting Dr. Chat, GPPT, just like we've consulted Dr. Google for a long time, not really a new phenomenon. I wonder if you can just put on your clinician hat for a moment. What are the things with the state of the technology today that people who don't really understand AI should know in terms of how to make that helpful, when to not overrelevant? lie on it, when do you need to go and do the headache of actually getting a real human doctor?
Starting point is 00:47:57 And I feel like this answer could change quarterly. No, no, no, no. Worth noting that we're recording this. But, you know, with GPG5 or whatever your favorite LLM is, what's the like safe, responsible way to be using these tools and where would you say don't go there? Okay, all right, great. So there's a great, very practical question. So basically here in November 2026, this is 2025 even, sorry, this is what we think, right?
Starting point is 00:48:20 So like I think the core thing is, I think, personally speaking, like intelligence probably is not something to economize on. So spend as much as you can on the highest tier of model possible. So it sounds a little bit flippen, but I'm kind of, you know, the performance of the advanced reasoning models, pro models if you want to look at Chad GPT or the equivalent tier and the free models are just night and day. And if you are using these for professional reasons, pretty much everyone are using a professional reasons, pretty much everyone is using a professional reasons uses like the best models for obvious reasons, right? And I think if you're more of kind of using it for your own personal health, there's a more complicated decision. If it was me, I would definitely pay as much as I could to get the best raw horsepower intelligence to
Starting point is 00:49:09 play. And especially with these kind of problems, these, without kind of going into the technicalities, there's these reasoning models that basically just use much more computation to think about what you're putting into them. And the way they operate internally is different as well. But it may take up to 10 minutes to produce an answer. Just like with a human being, if you went to see a doctor and you told them like, you know, your entire life story and they gave you an answer in like five seconds, you'd be a bit skeptical. You want them to go back to the library. Exactly. Exactly. That's the reasoning is doing. Exactly. So that's exactly what reason is doing. That's the first thing that I would do. The second thing is put as much useful stuff in context as possible.
Starting point is 00:49:48 And so that is basically, if we're getting very practical with either of these things, is like create a project, put as much health information as you can in there. I think like this baseline imaging, bloods as much wearable data as you can. Family history. Yeah, even like screenshots is fine. You guys do data exports as well. So like that's much better. But yeah, put as much as you can in.
Starting point is 00:50:11 I strongly endorse and do myself recording of the transatlantic. of recording, excuse me, with transcription of any interaction with a clinician. With consent, they're basically fine with that. Most of them are recording it themselves anyway for ambient scribing. So that's not really a problem. And take those transcripts in and basically critically evaluate everything. Ask questions. When you come up with issues, try and get feedback.
Starting point is 00:50:35 Also ask it, what should I do now? And that same approach also works for investing. If one's interested in public market investing, it also works there. which is actually architecturally a quite similar problem to solve. But for an individual, that's the way. Pay as much as you can for the models. Give them as much context as possible. And then basically interactions with the healthcare system,
Starting point is 00:50:56 take the kind of stuff that's not digital and make it digital and put it into the thing and then ask questions. Okay. So your clinician, so when you say give it context, I think you know what the relevant context is. But for the average listener who maybe doesn't know how to determine whether or not something is relevant and doesn't know if you're asking to also include, you know, what they're wearing and what they ate yesterday and all those things. Like, is there like a little checklist or a couple of things or it's like, here are the things that could be relevant, right? Like just off the top of my head, family history, you eat current medications, all your supplements. Yeah, yeah.
Starting point is 00:51:28 This is a problem, Emily, and I don't have an answer to it at the moment. I think it's going to be answered. But basically, right, these are like tools that are not meant for this purpose. Yeah. And the outputs, certainly using the pro models are, I think, unpalatable for the first. most people. It's like getting like a very academic doctor or or a PhD to answer your questions and it's very dense and I think for most people much too much and it will take you like at least half an hour just to understand what it's what it's giving you. And for most people that's not user-friendly
Starting point is 00:51:58 it or so I think for like unless one is very motivated, which of course everyone should be motivated on their health and wellness and all that stuff but like you know time is limited. I think it's probably this is like where we're going to see startups, basically using the models in the background and doing that translation piece for you. Yeah, coming up with like templates and things. And so I'm working with a few biologists who are looking at this area through one lens. There's a genetics group of children's where we're looking at it through a totally different lens for kids. But I think that translation component, at the moment you have to do it yourself. I hope there'll be dedicated products for that that are just using the AI in the background
Starting point is 00:52:34 and you don't see it, right? Yeah, I definitely imagine that, you know, there's a big white space there for startups to come in, and, you know, anybody listening, feel free to jump in there and take our ideas. For those people who don't want to wait and are motivated, a great hack in general for writing a good prompt is to ask ChatGPT, or ask your LLM, what makes a good prompt. So you can say, if a doctor was working this up, what questions would they ask, and then answer those questions. It's a great place to start, and, you know, really treat it like a conversation.
Starting point is 00:53:06 It's not a Google search. It's a different thing. And so you could say, what information would a good doctor ask for as a follow-up, and try to get it to ask you the questions to answer before you just ask it what's the most likely diagnosis. Because if you just say, hey, I've had a headache for two days, what's the most likely diagnosis, it's probably going to tell you something like dehydration, which, given what you've given it, which is nothing, is a pretty good answer. But you would never have an interaction with an MD like that.
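The two-stage pattern Emily describes, asking the model what a doctor would want to know before asking for a diagnosis, can be sketched as code. This is a minimal illustration: `ask_llm` is a hypothetical stand-in for whatever chat API you use, and none of the names below come from a real library.

```python
# Sketch of the two-stage prompting pattern described above: first have the
# model interview you like a clinician would, then supply that context before
# asking for likely explanations. `ask_llm` is a hypothetical helper.

def build_intake_prompt(symptom: str) -> str:
    """Stage 1: ask the model to act like a careful physician taking a history."""
    return (
        f"I have this symptom: {symptom}. "
        "Before suggesting any diagnosis, list the follow-up questions "
        "a careful physician would ask (history, medications, supplements, etc.)."
    )

def build_assessment_prompt(symptom: str, answers: dict[str, str]) -> str:
    """Stage 2: supply your answers as context, then ask for an assessment."""
    context = "\n".join(f"- {q}: {a}" for q, a in answers.items())
    return (
        f"Symptom: {symptom}\nRelevant context:\n{context}\n"
        "Given all of the above, what are the most likely explanations, "
        "and what would warrant seeing a doctor in person?"
    )

# Hypothetical usage:
#   questions = ask_llm(build_intake_prompt("headache for two days"))
#   ...answer those questions yourself...
#   assessment = ask_llm(build_assessment_prompt("headache for two days", answers))
```

The point of the structure is the one made in the conversation: the model's answer is only as good as the context you give it, so the first call exists purely to surface what context is missing.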
Starting point is 00:53:34 And so you can ask it to put on its MD hat and, yeah, have a better conversation. I totally agree with that. And you can also, I mean, I do, and I'm sure you do, you have more than one model, and then you use one to critically evaluate the other, whatever output you get. This field is so new. The things we're discussing at the moment, it's like, you know, we've got dial-up modems, right? Everyone's asking, this internet thing, how do you use it? Well, yeah, you dial up and it makes this noise and then you get on. That's kind of where we're at with this field. This will seem very rudimentary and absurd in a few years, but that's just the nature of the game. I've so
Starting point is 00:54:07 enjoyed this conversation, because I do think you're right on the bleeding edge of thinking about how healthcare is, I'd say, already transforming, and sort of what's coming in the next couple of years. And as we wrap, what are the two or three takeaways that you really wish people understood, maybe misconceptions or just underappreciated things? So there's one thing that we haven't discussed, actually, that I'd like to leave with, which is that looking at all of this stuff, longevity, optimization, in a vacuum is probably not the right approach. You know, obviously, through evolution, we are the social apes, right? And all of that stuff is incredibly valuable to us. And for most people, the thing that underpins getting the metrics
Starting point is 00:54:50 all correct is feeling like they're deserving of it. And everyone is exposed to some degree of trauma when they're growing up. The understanding of trauma has massively changed in the last decade: it's feeling overwhelmed early in someone's life, and then this internal syndrome of shame and lack of self-worth. Literally everyone, everyone listening to this, is going through this in some way. And unless that is explicitly addressed in life, then feeling like you're worthy of these long-term changes is actually really challenging. And so you have this very interesting world at the moment where society has got much, much richer, and we've spoken in a very techno-optimist way
Starting point is 00:55:29 about where technology is going. But many, most people are feeling worse. You have much higher rates of depression, much higher rates of anxiety, much more in kids, particularly in biological girls, and also young men feeling left behind, and much more loneliness, suicide, violence and all that stuff. There's something else beyond the metrics here in basically understanding one's own story and reframing it and, you know,
Starting point is 00:55:58 learning the counterfactuals and learning how to manage your emotions and manage your attention that I think really underpins the achievement of long-term health outcomes, over and above the physiological monitoring and tracking and training. I think that's come on leaps and bounds, you know, largely due to the work that you guys have done. And what we've discussed a lot is how that can join with medicine, especially AI-enabled medicine, which makes a lot of sense. But I think there's still this component, which for most people is the majority of what they actually value when push comes to shove, how they feel about things, what their relationships are,
Starting point is 00:56:34 that there's a different axis and a different set of things that are needed. And certainly, when I've been looking at this, you know, I look at this space a lot in terms of thinking about what I'd do next, I really feel that we need some kind of unlock there, in terms of people understanding their stories, reframing them, learning how to manage their emotions, the counterfactuals there. I like that you went there, because I think, you know,
Starting point is 00:56:53 we spent a lot of time talking about how the robots are stepping in, and then you sort of highlight the need for this deeply, deeply human gap to be filled. And I think we're very focused right now on what the robots are going to do, and we need to not lose sight of human needs. Yeah. And I think the good news on that is that there are also a lot of unlocks there, a lot of things that, again, are early, but as transformative as AI and paradigmatically different. I am really hopeful about the role that wearables and kind of the automation of remote monitoring can have in a lot of those things. I agree with you.
Starting point is 00:57:28 Fascinating papers, even things like the way that you use a computer keyboard can predict all kinds of mental health states. And as more and more data, with the help of AI, becomes health data, you know, things like changes in your gait, changes in speech patterns, changes in typing patterns, all of a sudden can speak to mental health. Totally. You know, what does it look like if these nine-year-old girls are getting screened a lot more regularly, so that they don't kind of stew in teenage moodiness, you know, undiagnosed for so long? But the field, if I'm correct, and maybe I'm not totally up to date, basically using things like
Starting point is 00:58:04 WHOOP in the context of interventions, like psychiatric interventions in the community, is extremely poorly understood, but I think is incredibly rich. So basically, you have someone with a condition such as depression and they are going through some therapeutic process. The wearable data probably has a lot of use in terms of figuring out how that process is going, right? Yeah, I think the field is pretty good at the diagnostic side right now and less good at the treatment side. I do think there's really interesting work, and we published a paper back in 2021 looking
Starting point is 00:58:41 at mental health resilience. And so when acutely traumatic things happen, what differentiates somebody who becomes depressed or develops clinical anxiety or suicidal ideation versus people who experience the same trauma, in our case it was the COVID-19 pandemic, and whose mental health sort of weathers the storm? And sleep patterns were highly predictive, which was really interesting. Which makes sense, right? Yeah, we partnered with the CDC and with BWH on that study.
Starting point is 00:59:10 And it was a lot of fun to look at all that data and sort of understand that. And so I do think what we will learn is that mental health and physical health are much less separable than we've liked to think. I mean, somebody a long time ago, for the purposes of health insurance or something, decided that mental health wasn't health, and I think that was a huge disservice. And we shouldn't think about those things as being so separate. It's very, yeah, I mean, it's very interesting. You know, we say in medicine that once psychiatry becomes understood, it becomes neurology, right? Yep. So once it's pathways. But the problem is, in the absence of pathways, you get a lot of this moral overlay. Yeah,
Starting point is 00:59:48 I think the monitoring, 10% of the US population have major depression, and one third of them will never respond to any of the treatments. So that's like three and a half percent, and it's much higher in younger populations, of people who have depression every single day. That's what's called major depression. And none of the treatments that we've got at the moment are working for them, which is a crazy number. So basically, what is the importance of physiological monitoring? Because, you know, HRV, all that stuff, it's the balance
Starting point is 01:00:18 between the sympathetic and the parasympathetic systems, right? And clearly, if psychological therapies or psychedelics or any of these emergent treatments are working, they're altering someone's physiological resting state, and we should be using this wearable data to see if the treatment has worked. Yeah, I think one of the things that I'm most excited about AI coming into, with things like that, is that likely the reason the therapies we have aren't working is that it's actually not the same condition, and we're lumping it all together.
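The idea here, checking whether a treatment shifts someone's physiological resting state, can be sketched with a toy calculation. The metric (nightly RMSSD, a common HRV measure, in milliseconds), the numbers, and the effect-size approach below are illustrative assumptions, not WHOOP's method or clinical guidance.

```python
# A minimal sketch: compare a person's baseline resting HRV with the weeks
# after a treatment starts, expressed relative to baseline variability.
# All values are made-up illustrative data.

from statistics import mean, stdev

def hrv_shift(baseline: list[float], treatment: list[float]) -> float:
    """Treatment-period change in baseline-standard-deviation units
    (a simple effect size: how far the on-treatment mean moved)."""
    return (mean(treatment) - mean(baseline)) / stdev(baseline)

baseline_rmssd = [42.0, 45.0, 40.0, 44.0, 41.0, 43.0, 39.0]   # pre-treatment nights
treatment_rmssd = [48.0, 51.0, 47.0, 50.0, 49.0, 52.0, 46.0]  # on-treatment nights

shift = hrv_shift(baseline_rmssd, treatment_rmssd)
print(f"HRV shift: {shift:.1f} SD")  # prints: HRV shift: 3.2 SD
```

A large, sustained positive shift like this would suggest the resting autonomic state has changed; in practice one would need far longer windows, within-person baselines, and clinical correlation before reading anything into it.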
Starting point is 01:00:47 I agree with that. Totally. Symptomatically, the presentation is similar. And so as we start to understand what caused this and what it actually looks like, not just what you report once a week when you sit in the doctor's office, we'll start to realize this is something else. And then, you know, that'll unlock a lot of the therapies. To loop back, we were going to wrap on the most important takeaways you'd like people to hear. So I really appreciated the beautiful callout around mental health.
Starting point is 01:01:13 Is there anything else you want to end on? Yeah. You know, the great film director Stanley Kubrick was asked, you know, what advice would you give to young directors, right? Buy a camera, make a film. And I think it's the same thing here. If you're a clinician or you're a patient and you want to learn about this field, basically just get started. I'd say get educated. There's a bunch of stuff online for free. If you look on YouTube under my name, there's a bunch of HSPH things, and there's a lot of other material as well. There are also really good resources out there on YouTube for learning about AI from first principles. I'd strongly
Starting point is 01:01:46 recommend that, but it's kind of like learning statistics. You know, there are some things you can rote-learn, and there are some things that are just conceptual, like algebra, where you need to play around with it a little bit. So I think it's in that territory. So firstly, get educated. And secondly, part of that is using AI to test your understanding and to get more educated. Build things yourself, and we've discussed some ways of doing that. There's never been a better time. And then I think it gets very different depending on whether you're a clinician or a patient. If you're a patient and you've come this far into listening to this podcast, you're probably okay.
Starting point is 01:02:22 You're probably on the right path anyway. You're probably using whoop. You're probably measuring these things. You've got some interest. The extraordinary benefit of physical activity is going to give you a lot of the gains. You know, if you follow what the hoop coach is kind of telling you to do and you don't smoke and don't drink too much, you're probably going to be okay as long as you do some regular. screening for cancer and that kind of stuff.
Starting point is 01:02:46 And then it's a question of how you optimize that. I think what I'm really interested in is everyone else. What about the people who haven't got as far as finding you guys yet? Or don't even kind of, yeah, they just don't know that this is like an area. And there I think there's like a lot of work to be done. But I think you can look at another way and say there's a lot of opportunity. I so appreciate the incredible work that you're doing and commend you so much for making so much of this educational material available for free.
Starting point is 01:03:13 Yeah, thanks, Emily. I will link that in the show notes, because I do think that this is the future, and it is really important that people understand it, because I think it can be scary. Yeah, yeah, for sure. And exciting and powerful. And I appreciate your techno-optimism. Thank you. Yeah. This is definitely, I mean, I guess the beauty of this format is we'll see, right?
Starting point is 01:03:32 We'll see how this plays out. We will. And I look forward to it. Thank you. Take care. If you enjoyed this episode of the WHOOP Podcast, please leave a rating or review. Check us out on social @whoop and @willahmed. If you have a question you'd like answered on the podcast, email us at podcast@whoop.com, or call us at 508-443-4952. If you're thinking about joining WHOOP, you can visit whoop.com and sign up for a free 30-day trial membership. New members can use the code WILL, W-I-L-L, to get a $60 credit on WHOOP accessories when you enter the code at checkout. That's a wrap, folks. Thank you all for listening. We'll catch you next week on the WHOOP Podcast. As always, stay healthy and stay in the green.
