a16z Podcast: A Guide to Making Data-Based Decisions in Health, Parenting... and Life

Episode Date: May 11, 2019

with Emily Oster (@ProfEmilyOster) and Hanne Tidnam (@omnivorousread) Are chia seeds actually that good for you? Will Vitamin E keep you healthy? Will breastfeeding babies make them smarter? There’s... maybe no other arena where understanding what the evidence truly tells us is harder than in health… and parenting. And yet we make decisions based on what we hear about in studies like the ones listed above every day. In this episode, Brown University economics professor Emily Oster, author of Expecting Better and the recently released book Cribsheet: A Data-Driven Guide to Better, More Relaxed Parenting, from Birth to Preschool, in conversation with Hanne Tidnam, dives into what lies beneath those studies... and how to make smarter decisions based on them (or not). Oster walks us through the science and the data behind the studies we hear about -- especially those hot-button parenting issues that are murkiest of all, from screen time to sleep training. How can we tell what’s real and what’s not? Oster shows us the research about how the guidelines and advice that we are "supposed" to follow get formalized and accepted inside and outside of healthcare settings -- from obstetrics practices to pediatrics to diet and lifestyle; how they can (or can’t) be changed; and finally, how the course of science itself can be influenced by how these studies are done.

Transcript
Starting point is 00:00:00 Hi and welcome to the A16Z podcast. I'm Hanne. Good data, bad data. There's maybe no other area where understanding what the evidence actually tells us is harder than in health and parenting. In this episode, economics professor Emily Oster, author of Expecting Better and the recently released Cribsheet, a data-driven guide to better, more relaxed parenting, digs into exactly that, looking at the science and the data behind the studies we hear about and make decisions based on in those worlds, from whether to breastfeed your child, to screen time, to sleep training. We talk about what it means to make data-based decisions in these settings, in diet and in health and in life, like whether chia seeds are actually good for you and how we can tell what's real and what's not. We also talk about how guidelines and advice like this get formalized and accepted, for better or for worse, and how they can or can't be changed. And finally, how the course of science itself can be changed by how these studies are done. You describe yourself as teasing out causality in health economics. Can you give us a little primer on what exactly that means and how you start going about doing that?
Starting point is 00:01:02 So there are a lot of settings in health. And in all of those settings, we have to figure out, what does the evidence say? And I think about some of them in this context of parenting. But you can think about even questions like, you know, is it a good idea to eat eggs, or is it a good idea to take vitamins, other kinds of health decisions? And you can sort of think about there being two types of data you could bring to that. One would be randomized data. So you could run a randomized trial in which half of the people got it and half of the people didn't, and you followed them for 50 years, and you saw which of them died. And that would be very compelling and convincing. And when we have data like that, it's really great. I mean, I kind of think of that as being the default, no? Is that not at all the standard? That is the gold standard. Yeah. But it is not the default. So many of the kinds of recommendations that I look at in parenting, but that you look at in general in health, are based on observational data, which is the other kind, where we compare people who do one thing to people who do another thing and we look at their outcomes. And one of the ways in which the people differ is on
Starting point is 00:02:01 the thing that you're studying. But of course, there are other ways that they may differ also. A million other ways. A million other ways. Yeah. And data like that is really subject to these kinds of biases: the kind of people who make one choice are different from the kind of people who make another choice. One of the things that's very frustrating in a lot of the health literature is that there isn't always that much effort to improve the conclusions that we draw from those kinds of data. And are we using that kind of approach because of the inability to have long longitudinal studies, or does it tend to be a shortcut? So I think it is both things. It is much easier and faster to write papers and produce research that way. And it can be really useful for developing hypotheses. So it's like a scratch pad almost. In the best-case scenario, it can be like a scratch pad: let's just look in the data and see what kinds of things are associated with good health or associated with good outcomes for kids. And then we could imagine a next step where you would analyze it with a more rigorous gold-standard method. And sometimes that happens. So there's one really nice example in the book where this happens exactly like you would hope, which is in studying the impact of peanut exposure on peanut allergies.
Starting point is 00:03:07 Right. So the first paper on that is written by a guy, and what he did was just compare Jewish kids in the U.K. to Jewish kids in Israel. And he saw that the kids in Israel were less likely to be allergic to peanuts. And he said that's because they eat this peanut snack when they're babies. Bamba, right? Yeah.
Starting point is 00:03:23 The Bamba. And so then that's the hypothesis generation. And then he went and did the thing you would really like, which is to say, okay, let's run a randomized trial, and let's randomly give some kids early peanuts and some kids not. And indeed, he found that he was right. So that's a great example of how you would hope that literature would evolve. But in many of the kinds of health settings we're interested in, you can't do that, or it is much harder to do that, because the outcomes would take a long time to realize, or it's expensive, or it's hard to manipulate what people are doing. And then we often
Starting point is 00:03:56 end up relying on these more biased sources of data to draw our conclusions, not just as a scratch pad. And I think that's where we encounter problems. That's where it gets murky and we never know whether we should eat eggs or not. Yeah. And that's exactly the area that you tend to focus on. Yeah, exactly. I try to first see, are there good pieces of data that we can use? And then if we're stuck with data that isn't good, I try to figure out which of the murky studies are better than others. And what would you mean by better? Well, roughly, how good is this study at controlling or adjusting for the differences across people? So you talk about breaking it down into the relationship
Starting point is 00:04:40 between data and preference. How do you factor that in in the healthcare system, where it's so diverse, where preference has such an incredible effect and puts you into so many different possibilities? I think this is why, in these spaces, decision making should be so personal. In health, and also in parenting, in all of these spaces, we often run up into a place where we're telling people there's a right thing. There's a right thing to do. And I think that can be problematic, because it doesn't recognize this difference in preferences across people. You have to basically accept the variety in the system and then give a space for preference in the decision making. Right. Yeah. But I think it's exactly
Starting point is 00:05:20 these preferences that, of course, make it hard to learn about these relationships in the data. Because once you recognize that a lot of the reason some people choose to eat eggs and some people choose to eat Cocoa Krispies is that some people really like Cocoa Krispies and some people really like eggs, how can you ever learn about the impact of eggs? Because we know there must be differences across people. And I think that becomes even more extreme when we think about really important decisions that people are making, like the kinds of choices they make in parenting or also in their diets. So can you walk us through one example like that, of where it was a really kind of murky gray area and how you pull out the causality?
Starting point is 00:05:53 The best example of this in the data in the parenting space is probably in breastfeeding. Let's say you want to know the impact of breastfeeding on obesity in kids. That's the thing which you hear a lot. Breastfeeding is a way to make your kid skinny and so on. And so the basic way you might analyze that is to compare kids who are breastfed to kids who are not and look at their obesity when they're, say, seven or eight. And indeed, if you do that, you will find that the kids who are breastfed are less likely to be obese than the kids who are not.
Starting point is 00:06:20 But you will also find that there's all kinds of relationships between obesity and income and mother's income and mother's education and other things about the family. And those things correlate with breastfeeding and they also correlate with obesity. So you can't really pull apart this web. So it's hard to pull apart the web. So I would say this is an example where like the data is suggestive. Like it would certainly be consistent with an effect of breastfeeding on obesity. But I think it doesn't prove an effect.
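A quick aside on the mechanics here: the confounding problem just described can be sketched in a few lines of Python. This is a toy simulation with entirely invented numbers, not Oster's data. A hidden family factor drives both the breastfeeding choice and the obesity outcome, so the naive comparison shows a gap even though breastfeeding has zero causal effect in the simulation.

```python
import random

random.seed(0)

def simulate_family():
    # Hypothetical confounder: more family "resources" makes
    # breastfeeding more likely AND obesity less likely. Breastfeeding
    # itself has NO causal effect on obesity in this simulation.
    resources = random.random()
    breastfed = random.random() < 0.2 + 0.6 * resources
    obese = random.random() < 0.4 - 0.3 * resources
    return breastfed, obese

samples = [simulate_family() for _ in range(100_000)]
rate_bf = sum(o for b, o in samples if b) / sum(1 for b, o in samples if b)
rate_ff = sum(o for b, o in samples if not b) / sum(1 for b, o in samples if not b)

print(f"obesity rate, breastfed:   {rate_bf:.3f}")
print(f"obesity rate, formula-fed: {rate_ff:.3f}")
# The breastfed group looks healthier even though the true causal
# effect is zero -- the gap is pure confounding.
```

Note that more data does not fix this: the gap persists at any sample size, because it comes from who chooses to breastfeed, not from breastfeeding itself.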
Starting point is 00:06:47 And then you can sort of take the next step and say, okay, well, do we have any data that's better? And in that example, you know, we do have one kind of randomized data. But again, we run up against the limits of all kinds of evidence. So the randomized data on this question is from a randomized trial that was run in Belarus in the 1990s. Oh. They randomly encouraged some moms to breastfeed and some moms not. And so there's a lot of good things that we can learn from that. But such a specific place in time.
Starting point is 00:07:14 Exactly. It's so specific. And you say, like, well, you know, how do I take that result to, you know, the Bay Area in 2019? Right. It's a challenge. Okay, well, within this space of, you know, non-randomized data, is there anything that's better?
Starting point is 00:07:28 And in that case, there is. You know, there are some studies that compare siblings. Oh, okay. Where you look at two kids born to the same mom, one of whom was breastfed and one of whom was not, and then look at their obesity rates. And when you do that, you find there's basically no impact. So then you're kind of holding constant who the mom is. So if your worry was that there are differences across parents in their choices to breastfeed,
Starting point is 00:07:51 well, now you're looking at the same parent. Right. You're normalizing. You're normalizing. And so you may think, oh, that's great. Perfect. I'm totally done. But of course, you're not, this isn't perfect because why did the mom choose to breastfeed one kid and not the other?
Starting point is 00:08:03 People are not choosing randomly. You had a C-section one time. You didn't another time. If that were the reason, that would be great. Right. If the reason were just like kind of worked one time, it didn't work the other time. You know, if there were something that was effectively a little bit random, then that would be exactly what, that would be exactly the kind of variation you'd want to use.
Starting point is 00:08:20 Okay. But the thing you worry about is, like, one kid is not doing well, is unhealthy, so the mom chooses not to breastfeed, or chooses to breastfeed to try to make them healthier. Those are the kinds of things where there's some other reason that they're choosing differences in breastfeeding, which has its own effect on the kid's outcomes. So some of what I try to do in the book is put all of these pieces together and look at them, and think about them all as a sort of totality of evidence, and just think, how compelling is this altogether? It sounds almost like sifting, like using a sifter. You take all this very murky data, very variable, from all sorts of different contexts, and put it through the sifter
Starting point is 00:09:02 of, like, this kind of data, this kind of data, and then match it all up and say, okay, what do we have left? And then hand that over and say, and now you make the decision based on this. Right. Here's kind of what we can be more or less sure about. Talk a little bit about the idea of constrained optimization as being very important. Can you explain what that means and how that plays out? In economics, we think about people optimizing their utility function. The idea is that you have a bunch of things that make you happy. That's your utility.
Starting point is 00:09:30 They produce your utility. And you want to make the choices that are going to optimize your utility, that are going to give you the most amount of happiness points. Utils, call them utils. It's really, it's a very warm and fuzzy discipline. Yeah, I'm going to go home and use it. I was like, absolutely. Like, you gave me some utils today.
Starting point is 00:09:49 But we also recognize that people have constraints. In the absence of constraints, like having money to buy things or time to do them, people would just have an infinite amount of stuff. That's the thing that would make them the most happy. But when you're actually making choices, you're constrained by either money or time. And in the book, I talk a lot about this in the context of time: as a parent, you're making your choices and you have some preferences and some things you would like to do, but you are also facing some constraints. But is information flow
Starting point is 00:10:22 and the data itself a constraint in that regard? Because it's so piecemeal, the information you get. That feels almost totally random. Like, some media story picks up on something, some tidbit you hear, you know, unless you're systematically studying it, like a graduate seminar on parenting, which none of us do, then it is random. Yeah. Yeah. And I think we wouldn't necessarily think of that as a constraint because, of course, in our models, people are fully informed about everything all the time. Right. Right.
Starting point is 00:10:51 That's one of the great things about the models. But in real life? But in real life, yeah, I think people face constraints associated with just not having all the information, and, you know, also the fact that this kind of information whipsaws over time. Yeah. You know, you get one piece, and then the next day there's a different piece of information, and we have a tendency to glom on to whatever is the most recent thing that we have seen about this, as opposed to, what does the whole literature over this whole period of time say? Right. You say you have a great quote where you say, in confronting the questions here,
Starting point is 00:11:26 we also have to confront the limits of the data and the limits of all data. There are no perfect studies, so there will always be some uncertainty about conclusions. Sometimes the only data we have will be problematic; there will be a single, not very good study, and all we can say is that this study doesn't support a relationship. So it feels kind of hopeless. I loved when you talked about the first three days of when you brought Penelope home, and it really brought that back for me: this dark room where you're kind of alone making these decisions. How do you even begin to use this data, you know, as a decision-making practice? Like, how does that translate? There are pieces where it's easier, where the data is better and it is clearer about
Starting point is 00:12:08 what you need to do or what the choices are. You will be making many choices without the benefit of evidence or data, or very good data. I think part of what makes some of this parenting so hard is that for those of us who like, you know, evidence and facts, it's hard to accept: I'm just going to have to make this decision basically based on what I think is a good idea. Based on my gut. Based on my gut. And, you know, maybe based on my mom.
Starting point is 00:12:34 Yeah. Yeah. Which is a sample size of one. Sample size of one. And, you know, maybe if you have like a mother-in-law, father-in-law, it's like a sample size of two, but that's kind of it. And I think that that's, that's really scary, especially when the choices seem so important. Yeah, I mean, but it feels like, you know, that's kind of at heart what you're trying to do, right, is like to translate and to give tools in this decision-making place.
Starting point is 00:12:58 So how would you begin to systematize that? I mean, is there a way to bridge that gap better in the system? I think that it would be helpful if more information was shared. There is a lot of information contained in people's experiences that we are not using in our evidence production. So in the book I talk about, like, the sleep schedule. Right. So you're sort of told as a parent, like, oh, you know, roughly around six weeks, your kid will start sleeping longer at night. But the information that's typically conveyed to people is not a range. It's just, like, around six weeks-ish, you know, that'll start to happen.
Starting point is 00:13:46 But the truth is, like, yeah, that's kind of right. But if you look at data on when that actually happens, it's a pretty wide range. And I think part of what is so stressful about these early parts of parenting is that it's very hard to understand whether what you are experiencing is normal. And I think if you could understand, like, yeah, most kids don't do this thing at this time, or most parents have this experience. Or the way the graph plots, kind of a little more broadly. Exactly. I think that would be super helpful. And that's a place where I can
Starting point is 00:14:20 imagine, you know, data collection helping, right? We have much more of an ability at this point to get information about what is happening with our kids, what's happening with our health. There is a sense in which that could be helpful in just setting some norms for the normal, the standard variation across people. So looking at the variation and providing that as, like, a piece of the information.
Starting point is 00:14:46 As a piece of the information. Here's also the variation on that. Yeah. Yeah. And I think that's, that is kind of part of like generating the uncertainty and sort of showing people like what are the limits of the data. Right? That how sure are you that this should happen at this time?
Starting point is 00:14:58 Not just how sure, but, like, what are some of the other ends of the curve? I mean, that's just information you just don't get. Yeah. Right. So let's zoom out a little bit, as somebody who lives deeply in the world of data in the health system. We're in a time of enormous shift, right, for data. Does the sea of data, like better data,
Starting point is 00:15:18 cleaner data, more granular data, all that, help this question at all? Yeah. And, you know, I think we are collecting so much data on people. Individual people are collecting a lot of data about themselves; health systems are collecting a lot of data about people. This data is underutilized, I think. We're amassing pools of it, but not in ways that are especially helpful. So, you know, when I go to conferences with people who work on health care, there's a tremendous amount of data being used on health claims.
Starting point is 00:15:50 Right. So if you sort of think about, like, what are some kinds of data that we have? We have health claims data: payments, everything where there's an individual payment for it, we'll see it. There's almost no work with medical records. Even though every hospital, everybody's using Epic, you would think that would make it straightforward to have that data in a usable form, but it's not. And at the same time, the potential for going beyond, here are all the tests that you ordered, into what actually happened with those tests, and then what happened to this person later: that data is not being mined in
Starting point is 00:16:26 the way that it could be, to try to look at some of the kinds of outcomes that result. The causality that you would pull out afterwards. Yeah, absolutely. You know, how can we improve our causality? More data is helpful. More information about people is helpful. Being able to look at, you know, the timing relationship between some treatment and some outcome. Those are all the kinds of things that having better data would help us do. Are there other areas where you are starting to see the data coalesce in a way where you're able to pull meaningful insights from it? So I think, yes. You know, when we have better data, we can use better tools even if we don't have randomization. A classic
Starting point is 00:17:07 example in health is looking at the impacts of really advanced neonatal care. Like, how cost-effective is it to have kids getting really extensive NICU care? Right. Like, how effective is that in terms of improving survival, and how much does it cost? Oh, such a basic question. Such a basic question. And super hard to imagine analyzing because, of course, you know, babies that are very small and
Starting point is 00:17:29 are sick cost more, but also have worse outcomes. Yeah. And so if you just looked at that, you would be like, well, actually, by spending more we're not getting anything, because those babies are more likely to die than the babies we're spending less on. We define very low birth weight babies as less than 1,500 grams, which means that the treatment you get if you're a baby at 1,503 grams is very different than the treatment you get as a baby at 1,497 grams, which is completely arbitrary. I mean, the choice of 1,500 grams has nothing to do with science. It's like this line in the sand. That's not a good way
Starting point is 00:18:01 to set policy. However, having set the policy like that, you can then say, okay, well, now we have babies that are almost exactly the same, but the babies that are a little bit lighter, that are like 1,497 grams, get all kinds of additional interventions relative to the babies that are 1,503 grams. And when people have done that, they see the babies at 1,497 grams actually do better. So the line is beneficial in that way, because you're defining these two groups very closely. Oh, interesting. So setting this line in this arbitrary way lets you get at some causality. Even though it's not good for the babies. So, having done it, it's good for information. Interesting. What are some of the other tools? Are there others in that list?
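The birth-weight cutoff just described is a textbook regression discontinuity, and it can be sketched in Python. All risk numbers here are invented for illustration: babies just under and just over an arbitrary 1,500-gram line are nearly identical, so a jump in outcomes at the line can be attributed to the extra care the lighter babies receive.

```python
import random

random.seed(1)

CUTOFF = 1500  # grams: the arbitrary "very low birth weight" line

def simulate_infant():
    # Mortality risk falls smoothly as weight rises, but babies just
    # under the cutoff get extra interventions that reduce their risk.
    weight = random.uniform(1400, 1600)
    risk = 0.30 - 0.001 * (weight - 1400)  # smooth in weight
    if weight < CUTOFF:
        risk -= 0.05                       # benefit of the extra care
    return weight, random.random() < risk  # (weight, died)

infants = [simulate_infant() for _ in range(200_000)]

# Compare narrow windows on either side of the line: these babies are
# nearly identical except for the treatment the cutoff assigns.
just_under = [died for w, died in infants if CUTOFF - 10 <= w < CUTOFF]
just_over = [died for w, died in infants if CUTOFF <= w < CUTOFF + 10]

rate_under = sum(just_under) / len(just_under)
rate_over = sum(just_over) / len(just_over)
print(f"mortality just under the line: {rate_under:.3f}")
print(f"mortality just over the line:  {rate_over:.3f}")
# The lighter babies do better; because the two groups are otherwise
# comparable, the jump at the line estimates the effect of the care.
```

The design choice that makes this work is the narrow window: the further you move from the cutoff, the less comparable the two groups become.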
Starting point is 00:18:39 So that's a classic. That's an example of regression discontinuity: there's some discontinuous change in policy on either side of a cutoff. And that has become part of a big toolkit of things people are using more. The other is to look at sharp changes in policies at, like, a moment in time. Oh, so the same thing. The same thing, yes. Then there's that. And then there's, you know, looking across when different policies change differently for different groups. So all of these things have become easier with more data and become more possible with more data. And I think that has improved our inference in some of these settings. I love that you talked a little bit about the experience of doing your own data collection, kind of in the wild, with this spreadsheet after Penelope was born, which made me laugh so much because it was so much like my spreadsheet. It was just so sad to think of all
Starting point is 00:19:41 these moms alone in their bedrooms at night. I know. I like Google Drive. I mean, I think there have been a lot more apps since then that help. Yeah. But still, I love that you said it gives the illusion of control, not control. And in these kinds of data vacuums, if we're not good at statistical analysis, or at pulling causality out of these murky areas, if we're not Emily Oster, basically, how do you, or even if you are, how do you stay on that line between the illusion of control and actual knowledge that impacts real decision making? No, I think it's super hard, because the thing is, the illusion of control is a very powerful illusion. Very. And both empowering and dangerous in health contexts. Exactly. Like, we like people to feel
Starting point is 00:20:27 they're in control. Some of the message of this book, I think, people have taken not quite right, to say, like, well, it doesn't really matter what choices you make. Like, all choices are good choices. Wow. That's not quite the message. I'm surprised that that's the message that people take from that. Occasionally.
Starting point is 00:20:43 There are a lot of different good choices that you could make about parenting. And so I think there is a piece that we maybe don't need to be so obsessive about any one of those choices. Which is your point about range. It's like, well, let's educate a little bit more about the spectrum of good choices. Yeah. Another area I feel like
Starting point is 00:21:07 where every other day there's a new study that says something different, and it feels like there's a plethora of studies, is screen time. I'm just going to put that out there right now. I'm sorry, everybody, we're going to touch that third rail. So can you walk us through, can you help guide us through some of that maze? So when I looked into screen time, I had always thought about it like, screen time is bad. Like, the question is, is it bad or not? Right. But actually there's a whole other side of this, which is, some people say screen time is the way to make your kid smart. Like, your baby can learn from them. Okay. So point number one is, what does screen time actually mean? Right. That's part of the problem with this: when you say screen time, what do you mean? Yeah. Do you mean, you know, educational apps? Yeah. Do you mean FaceTiming with grandma? Do you mean while you jump in the shower? Yeah. Exactly. And at that point, what is the other thing you're going to be doing with your time? I think this is where all of these recommendations seem to assume that the alternative use of your time, like, if your kid wasn't watching Sesame Street, you would be on the floor playing puzzles with them
Starting point is 00:22:16 and like super engaged with them. Which, like, maybe is true. Taking them to the zoo and having them touch different textures of animal skins or whatever, like, sensory development. Yeah, which is great stuff that you should definitely do with your kid. But some of the time when our kids are watching TV, it's because that maybe isn't the thing that you would otherwise be doing. Yeah. Well, you could be pureeing healthy vegetables to, like, feed them well. Yeah. Exactly. I'm sure that's what we're all doing. Definitely, for sure. Or maybe watching a little reality TV for five minutes while you fold laundry. A little bit of Call the Midwife, you know. Yeah. The problem with
Starting point is 00:22:53 screen time is that the evidence is very poor. Can you just break down why the evidence is so poor? Because this does seem like an area where there should have been time for that kind of gold standard randomized study to develop, no? What is the evidence problem? So I think the evidence problem is twofold. One, it's actually not a super easy thing to run a randomized trial on, because these are choices that people are thinking a lot about. And, you know, think about something like an iPad. Like, do you want to be involved in a
Starting point is 00:23:21 randomized trial with your kid? Oh, there's too much intention, too much at stake. Too much lifestyle. Exactly. Too much lifestyle stuff. Some people have been able to use the introduction of TV, which sort of had some random features, to look at the impacts of TV. And that evidence is sort of reassuring; it suggests TV is okay. But of course, it's very old. It's from the 50s. A whole different way of consuming. Exactly. Everything. Yeah. And I think the other thing is the other problem
Starting point is 00:23:44 with answering the current questions people want, like, what about iPads, what about apps, you know, is that they just haven't been around long enough. And so a lot of the kinds of outcomes you would want to know, even short-run things like test scores, aren't there yet. You know, I got the first iPad when my daughter was born. Like, that was the first one. And I remember getting it and being like, this is never going to catch on. This is why I'm not in tech. I was like, who would use this?
Starting point is 00:24:10 Meanwhile, your daughter's like, swiping. Meanwhile, you know, okay. But, you know, now she's in second grade. Like, that's kind of the earliest that you could imagine getting some kind of what we'd measure, test scores or something like that. But even then, you know, she didn't use the iPad anywhere near as facilely as my four-year-old, right? This is evolving so quickly that any kind of even slightly longer-term outcomes are really
Starting point is 00:24:35 hard to imagine measuring, let alone, you know, absent a randomized trial. And if you weren't able to randomize this, which I think you won't be able to, the amount of time kids spend on these screens is really wrapped up with other features of their household. Yeah. Okay. So you have the definitions. You have the time
Starting point is 00:24:59 and the speed at which things are changing. And then you have the willingness for people to actually engage in change, or doing things differently. So all of that leads to: what do the studies that we draw conclusions from in this space actually tend to look like? So actually, there's almost nothing about iPads or phones. That seems so contrary to what the media is saying every five minutes. Yeah. So there are tons of studies on TV, which compare kids who watch more and less TV.
Starting point is 00:25:24 But most of that, again, is sort of studies based on data from before people were watching TV on these screens. Yeah. Maybe TV is TV and, you know, you can imagine that that would be kind of similar. But for things like these, there are just, like, no studies. You know, or, I think there's one, like, abstract from a conference. This is not a paper.
Starting point is 00:25:46 There's, like, an abstract from a conference where it's just like, we have some kids, and we compare the babies who spend more time watching their parents' phones, and they, like, do worse. They're like, look. But it's like, that's not... I mean, this is pathetic. It's sad. This is a terrible piece of evidence. So is this an area in which you just go with your gut? I mean, I try to generate a fancy version of go with your gut, which is called Bayesian updating.
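A minimal sketch of the Bayesian updating Oster mentions here, with invented numbers rather than anything from the episode: start with a prior belief about a claim, and let Bayes' rule move it only as far as the strength of the evidence warrants.

```python
# Toy numbers, invented for illustration -- not from the episode or the book.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Prior: genuinely unsure whether an hour of screen time a day is harmful.
prior = 0.5
# A weak observational study is only slightly more likely to appear if the
# claim is true than if it is false, so the belief barely moves.
posterior = bayes_update(prior, p_evidence_if_true=0.55, p_evidence_if_false=0.45)
print(round(posterior, 2))  # 0.55
```

Weak evidence moves a 50/50 prior only to 0.55, which is the formalized version of going with your gut: logic and rough magnitudes do most of the work.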
Starting point is 00:26:10 And so I basically try to say, look, you know, we want to step back and think about what the places of uncertainty are. Logic would tell you, you know, your kid is awake for, what is it, like 13 hours a day, 12 hours a day. If your two-year-old is spending seven of those 12 hours playing on the iPad, then there are a lot of things that they are not doing. Right. That's probably not good. Right. On the other hand, you know, if your kid is spending 20 minutes every three days, it's very hard to imagine how that could be bad. So just thinking about it purely in terms of, like, time allotted to any one activity, basically. And then I think once you do that, then you're sort of like,
Starting point is 00:26:48 okay, but, you know, there are things that we're uncertain about. You know, what if my kid watches an hour of TV every day, or spends an hour on a screen every day? Like, is that too much? If we sort of accept that five minutes a day is fine and seven hours a day is too much, is the limit at an hour? Is the limit at two hours? You know, and I think the truth, what we will find if we end up doing any studies like this, is that it depends a lot on what other things they would be doing. Yeah. With their time. Wouldn't it also depend so much on the child? Some children, you know, learn in a kind of way that lends itself to this technology. Some children need other kinds of learning, you know. It's highly individual.
Starting point is 00:27:25 Yeah. I mean, I think this gets into the problem with studying older kids in general: there are just so many differences across kids. It's hard to even think about how you would structure a study to learn about them, never mind actually using evidence that exists. It's really interesting, because the last time we went to take my daughter for her annual checkup, or maybe it was my son, I can't even remember. It's so different from the first days, those early spreadsheet days, when you're like, um, did I even track it? Which one is that? Yeah, exactly. Anyway, the doctor said very concretely: two hours, two hours max in any day. But it was really interesting to me that it was such a specific line in the sand.
Starting point is 00:28:02 And now I'm thinking about how that information would even percolate down to that level of the system and get kind of fossilized into it, so that that recommendation is being passed on to parents. Like, how does that happen with these studies? How do they translate to that level of advice? Yeah. I mean, what happens is, like, organizations like the American Academy of Pediatrics bring people together to basically have the conversation we just had, which was like, okay, let's agree: we don't know that much about this; five minutes seems fine; seven hours is too much. These are smart people who see kids a lot, who presumably are using some knowledge that they have about kids to pick some number. But the answer is, you could pick a lot of different numbers. We sort of say this and then it becomes, like, this rule. Yeah. And people have some impression that it comes from some piece of evidence, some gold-standard study, as opposed to, you know, expert opinion or something. Yeah.
Starting point is 00:28:57 Which is really what it's from. You also work specifically on certain health recommendations, how they change over time and how we stick to them. You wrote a paper on behavioral feedback, and in it you talk about how those individual choices might in fact be changing the science itself. Can you talk about what that means and how that might be happening? I was thinking about exactly this issue of, like, okay, we just make some recommendation.
Starting point is 00:29:21 And sometimes those recommendations are kind of arbitrary. Yeah. But then they go out in the world and, like, people... Take on lives of their own. Exactly. And so, like, a good example: vitamin E supplements. In the early 90s, there were a couple of studies which suggested that, like, they are good for your health, that they prevent cancer.
Starting point is 00:29:38 Okay. And so then there was, like, a recommendation that people should take vitamin E. And, you know, then we have to ask the question: who takes vitamin E after that? And one of the concerns is that the kind of people who would adopt these new recommendations, who listen to their doctor, are people who are, you know, probably more educated, maybe richer, but above all, interested in their health, right? So they are taking vitamin E so they avoid cancer, but they're also exercising so they avoid cancer. And they're eating vegetables, so they avoid cancer.
Starting point is 00:30:11 We'd just call them selected. Okay. These people are, like, positively selected on other health things. And so, indeed, you can see in the data that the people who start taking vitamin E after this recommendation change are also exercising and not smoking and doing all kinds of other stuff. Well, why is that, like, you know, interesting or problematic? Well, later we're going to go back to the data, because that's, like, the way science
Starting point is 00:30:35 works. But now the people who take vitamin E are even more different than they were before. Right. So now these people are, like... You've added another layer. Added another layer. And in fact, you can see that in the data. Before these recommendations changed, there was sort of a small relationship between taking vitamin E and, like, subsequent mortality rates. But after the recommendations changed, you see a very large relationship between vitamin E and mortality rates. And so it basically
Starting point is 00:31:04 ends up looking like vitamin E is really great for you. Has this big impact. But of course, at least it seems like it must be, at least in part, because the people who adopt vitamin E are the people who are also doing these other things. So what does that mean then? It feels like such a loss. Like, how does one ever... Yes. How would one ever develop, like, a recommendation based on what we think we know? I know. And untangle it from, like... So this paper is, it's just very destructive in some sense. Other than saying it probably doesn't matter if you take vitamin E. So that's, like, news you can use. You can take that home with you. But I mean, I think it does, yeah, more or less just
Starting point is 00:31:50 highlight some of the inherent and very deep limitations in our ability to learn about some of these effects, particularly when they're small. Is this basically part of the sort of crisis of reproducibility? I think it's not unrelated. Yeah. So I often think about this idea of p-hacking, which refers to the idea that you keep running your studies until you get a significant result. There are a bunch of people interested in this process of how science evolves, and the ways in which the evolution of science influences the science itself, or the incentives for research influence how science works. And I think it's particularly
Starting point is 00:32:32 hard to draw conclusions in these spaces, like diet or these health behaviors, where the honest truth is that probably a lot of these effects are very small. So if you ask the question, like, what is the effect of, like, chia seeds on your health? Right. My dad is, like, really into chia seeds. Yeah. That was a thing. There was a moment.
Starting point is 00:32:51 Yeah, that's a thing. Well, he's still in that moment. He's still in there. And, you know, like, what is the effect of those on your health? The actual effect is probably about zero. Maybe it's not exactly zero, but it's almost certainly about zero. But are there sometimes secret sleepers, like, whoa, there actually might be? You know, the only way to find out... Is to do these things.
Starting point is 00:33:08 Yeah. Yeah. Yeah. And so maybe there are some secrets. Like, maybe kale really is magic. Maybe it is. Yeah. But it's probably not.
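A toy illustration of the p-hacking idea raised above, with made-up studies rather than real data: the true effect here is exactly zero, yet scanning enough small studies and keeping the most extreme result will often surface something that looks "significant."

```python
# Toy demonstration of p-hacking: the true effect is zero by construction,
# but cherry-picking the most extreme of many small studies inflates it.
import math
import random
import statistics

random.seed(1)

def one_study(n=30):
    """Two-sample comparison of a null effect; returns an approximate z statistic."""
    a = [random.gauss(0, 1) for _ in range(n)]  # control group
    b = [random.gauss(0, 1) for _ in range(n)]  # "treated" group, same distribution
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    return (statistics.mean(b) - statistics.mean(a)) / se

# Run 20 studies of the same zero effect and keep the most extreme one --
# the "keep running studies until something is significant" move.
zs = [one_study() for _ in range(20)]
best = max(zs, key=abs)
print(f"most extreme z across 20 null studies: {best:.2f}")
```

With 20 tries at a true effect of zero, there is roughly a 2-in-3 chance that at least one study crosses the usual 1.96 significance threshold purely by luck.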
Starting point is 00:33:14 Okay. You know, I've spent a lot of time with these diet data, and there are these sort of dietary patterns, like the Mediterranean diet, which do seem to have some sort of vague, you know, support in the evidence. But I would be extremely surprised if we ever turn up, like, one single thing, one magical food. So the point is it's the pattern. It's the pattern. And it's all the other things that you're doing.
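The "all the other things you're doing" point is the vitamin E selection story in miniature. In this toy simulation (hypothetical numbers, not the paper's data), vitamin E does nothing to mortality, but once adoption is tied to healthy behavior, the naive taker-versus-non-taker comparison makes it look protective.

```python
# Toy selection-bias simulation with invented parameters. Vitamin E has NO
# effect here; only healthy behavior moves mortality.
import random

random.seed(0)

def simulate(selection_strength):
    """Return the naive mortality gap (non-takers minus takers)."""
    takers, non_takers = [], []
    for _ in range(100_000):
        healthy = random.random() < 0.5  # exercises, eats vegetables, doesn't smoke
        # Health-conscious people become likelier to adopt the recommendation.
        p_take = 0.3 + selection_strength * (0.4 if healthy else 0.0)
        takes = random.random() < p_take
        died = random.random() < (0.05 if healthy else 0.15)  # behavior drives the outcome
        (takers if takes else non_takers).append(died)
    return sum(non_takers) / len(non_takers) - sum(takers) / len(takers)

before = simulate(selection_strength=0.0)  # adoption unrelated to health: gap ~ 0
after = simulate(selection_strength=1.0)   # health-conscious adopt: a spurious gap appears
print(f"gap before recommendation: {before:.3f}, after: {after:.3f}")
```

With no selection, the mortality gap is about zero; with selection, a gap of roughly four percentage points appears even though the supplement itself does nothing in this code.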
Starting point is 00:33:39 Yeah. If you smoke three packs a day and you never exercise, but you eat some kale, that's not going to help you. Yeah, yeah. The kale's not going to help. What about when you really do need to effect change? What are the ways in which these guidelines can shift over time with new sources of information or data and statistics? Like, what's the positive case? How does that actually play out in the right manner? Yeah. So I think there are times in which the change in evidence is so big and so compelling that we can get changes. Best practices
Starting point is 00:33:58 in obstetrics, like how you deliver a breech baby, as an example. Those changed over time because there was, like, one very big, well-recognized study that everybody agreed, like, this is now the state of the art. And it happens fast at that point. And then it
Starting point is 00:34:30 happens pretty fast. It doesn't happen immediately. Like, you might have thought that those kinds of changes could be immediately effected, and I think that they're not. But they do happen over time. Those examples really rely on there being a cohort of, sort of, experts who are all reading the guidelines and seeing that they changed, and then themselves are doing this all the time. I think part of what's hard in the broader health behavior space is that it's people who need to make the choices, not, you know, not physicians. Yes, when it's in the home, in those dark bedrooms. Right. It's, like, much harder to get people to change
Starting point is 00:35:09 their behavior in those spaces. So these pediatric guidelines, those aren't effective? Yeah, I do not think those are effective. Or, I think we don't see any evidence in the data that those are effective at moving these behaviors, at least in these kinds of spaces. So what's the answer? How do we positively effect change and gather these insights and have smart people making good recommendations? So, I mean, I think one answer is media attention. The few times when we see very large spikes and changes, they actually seem to correspond with some media coverage. On the flip side, media can often be very bad. Some of these big changes in these expert guidelines kind of resulted from media coverage, which was really, like, sensationalist
Starting point is 00:35:46 and, like, totally inappropriate. Yeah. And, you know, it wasn't, like, a very nice New York Times story about some study. It was, like, a sensationalist 20/20 piece. About what? This was about vacuum extraction, which is a way of pulling the baby out, and it has gone down a lot over time. And it was this sort of sensationalist, like, John Stossel 20/20 episode about how it could
Starting point is 00:36:09 hurt your baby, which caused, like, basically big reductions. Interesting. Yeah. But, you know, that, like... But the science was there. The science, yeah, was there. I mean, he overstated the science.
Starting point is 00:36:18 Yeah. But it was, it was probably there. So it's almost like a random confluence of when the science is there and the media hits it the right way, and then we see change. Yeah. Which you don't want to have to hope for. Yeah. That doesn't feel like we can plan so much for that.
Starting point is 00:36:34 No. You also study when we are resistant to change. You looked specifically at people who had been diagnosed with diabetes, and at whether or not their behavior changed, even given a certain amount of information. So what do you see there about our resistance to change, even with the right kinds of information? I mean, I think one of the big challenges in the health space at the moment is that so
Starting point is 00:36:56 many of the health problems that we have in the U.S. are problems associated with behavior, just the fundamental fact that people do not eat great, and we have a lot of morbidity and expense associated with that. And I think there is often a lot of emphasis on the idea that if we just get the information out, if people just understood vegetables were good for them... They would eat vegetables. Doesn't happen. That's not true.
Starting point is 00:37:21 Yeah, I think so. And so, you know, this paper is about looking at something where a pretty extreme thing happens to people: they are diagnosed with diabetes, and we can see what happens to their diet. And the answer is, you know, it improves a tiny amount. Even with a real come-to-Jesus moment. Exactly.
Starting point is 00:37:37 And a lot of new information. Right. Like, you know, and monitoring, follow-up, right? I mean, you're diagnosed with diabetes, like, you know, you have to take medicine every day. You've got to go to the doctor to get your, you know, your insulin tested, at least for some period of time. So this isn't something where you can just forget that it happened. And even then, the changes in diet, you know, they're there, but they're really small. They're, like, you know, one less soda a week or something. Like really, like really
Starting point is 00:38:01 small. And how are you noticing these? We're inferring information on diagnosis from people's purchases of testing products and then following their grocery purchases. So this is an example of using, you know, a different kind of data. Yeah. So not health data in this case. It's actually, like, Nielsen data on what people buy. But then, you know, using some machine learning techniques to try to figure out, from the kinds of things people buy, when they were diagnosed with diabetes, and then looking at their diets over time. So is the answer that there has to be some sensational story that talks about this? I mean, I'm not even sure that would help. I think part of the problem is people really like
Starting point is 00:38:40 the diets that they're comfortable with. Like, diet is such a habit-formation thing. And, you know, people are willing to make important health sacrifices to maintain the diet that they like. We get into some of these questions of preferences. Like, you know, if that is the choice that people want to make, should we be trying to intervene with policy? Like, let's say everybody had all the information. They knew that they shouldn't drink so much soda and that they should lose weight, but they still chose not to. Like, do we want to develop policies that affect that? I'm not sure. Yeah, or maybe that's just free will. Yeah, maybe that's just free will. Yeah. And it comes up in the parenting stuff too. Like, you know,
Starting point is 00:39:19 how much do we want to be externally controlling the choices people make with their kids, even if we don't think that they're the right choices? But I do think there's a segment of people who want to make the change because of the information, but the gravity of the habit is so much that it's hard to know how to go about it. I guess I would say: where do you see this data going? Like, if you had your fantasy for the kind of data and the way that we see this data evolving, and the way that you see that kind of percolating out to the public, I mean, in terms of being sort of a translator and providing people the tools, what do you want to see in terms of the way the system responds to or integrates
Starting point is 00:39:56 this data in the future? Yeah. I mean, I think the big message of the book is in some sense that you should use the data to make yourself confident and happy in your choices. I think so much of what is hard about parenting is that in the moment you are not often confident in your choices. And then when somebody asks you, like, why did you do that? Then you feel bad. Yeah. Right. And I think there's a sense in which sort of looking at the data, but then confronting, like, well, we don't know... You'd be like, okay, I made this choice. You know, I decided to let my kids watch an hour of TV every day. Yeah. Because, like, I thought about it and I thought there wasn't any data. And, like, that's the choice that I made. That sort of
Starting point is 00:40:35 that confidence is like important for for being happy. And if we could sort of like move in that direction, that would be that would be good. It reminds me a lot of what one of my good friends Brannie said to me when I was in the trenches of like babyhood and having a lot of anxieties around all these hot button issues, breastfeeding, sleep time, like all of it. She had been through it. Her kids were in college and she was like, let me give you a piece of advice. Be wrong, but be wrong with confidence. Yes. Just be wrong with confidence. That's all that matters. Yeah.
Starting point is 00:41:02 No, exactly. Be wrong with confidence. I love that. Yes, I am wrong with confidence so frequently. And it actually turns out to be right. The truth is there are a lot of good options. A lot of good options. Thank you so much for joining us on the A16Z podcast.
Starting point is 00:41:14 Thank you for having me.
