In Good Company with Nicolai Tangen - David Spiegelhalter: Statistics, Communicating in Crises, and Living with Uncertainty

Episode Date: September 3, 2025

Why do we fear uncertainty—and what can statistics teach us about living with it? Nicolai Tangen speaks with Sir David Spiegelhalter, one of the world's leading statisticians and an expert communicator of risk, to explore how we navigate an unpredictable world. From the psychology of uncertainty to lessons from COVID-19, climate change, and even weather forecasts, Spiegelhalter unpacks why numbers are never just "cold, hard facts" and how we can use data more wisely in a world full of unknowns. They explore why trust depends on admitting uncertainty, and what it means to build resilience—both as individuals and societies. Engaging, insightful, and sometimes deeply personal, this conversation blends statistics with human experience to explore how we can make sense of risk and uncertainty in an unpredictable world.

In Good Company is hosted by Nicolai Tangen, CEO of Norges Bank Investment Management. New full episodes every Wednesday, and don't miss our Highlight episodes every Friday. The production team for this episode includes Isabelle Karlsson and PLAN-B's Niklas Figenschau Johansen, Sebastian Langvik-Hansen and Pål Huuse. Background research was conducted by David Høysæter.

Watch the episode on YouTube: Norges Bank Investment Management - YouTube
Want to learn more about the fund? The fund | Norges Bank Investment Management (nbim.no)
Follow Nicolai Tangen on LinkedIn: Nicolai Tangen | LinkedIn
Follow NBIM on LinkedIn: Norges Bank Investment Management | LinkedIn
Follow NBIM on Instagram: Explore Norges Bank Investment Management on Instagram

Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 Hi everybody. I'm Nicolai Tangen, the CEO of the Norwegian sovereign wealth fund. And today I'm in really good company with Sir David Spiegelhalter. He is, well, I would say, the world's best statistician and for sure the best communicator of risk that I've ever seen. He has written lots of fantastic books and is particularly known for making statistics, which is difficult, really accessible for most people. So big thanks for joining us. No, great, great pleasure to be here. Having worked with risks, what's the main thing that you have learned about human psychology? Well, first of all, that I'm not a psychologist.
Starting point is 00:00:50 So, you know, I am a statistician, but I have done my best to learn from psychologists that I have worked with. And I suppose just from observing how people react to risk and uncertainty, I tend to think of the broader idea of uncertainty rather than just risk, everything to do with not knowing about what might happen in the future or even not knowing what's going on at the moment or what's happened in the past. All these things we are uncertain about. Some of them, you know, usually things do have an upside and a downside, and so they could be considered risks depending on how they occur.
Starting point is 00:01:23 But I think what I've learned is that people have to live with uncertainty. When you ask people, they say, oh, I don't like uncertainty. But some people like it, right? Well, some people like it. Some people are a bit more bold than others. But the point is that when you actually then go a bit further and say, well, do you want to know what you're going to get for Christmas? Do you want to know how a match is going to end, you know, if you've recorded it or something?
Starting point is 00:01:46 Do you want to know, do you just jump to the end of your TV series to see the last episode and see what happens? Also, the thing I ask is, do you want to know when you're going to die, if I could tell you? Mostly they say no, though some would quite like to know when they're going to die. Some are so, in a way, uncertainty averse that they really like to have everything planned. But most people have realized that you've got to live with uncertainty, and think of a life without uncertainty, without some risk. Who are the people who hate risks the most? Oh, there are some people, I think, who are very cautious, who would like to have everything planned out, who want to feel that they can control all contingencies and have mapped
Starting point is 00:02:29 out the possibilities. And of course, this is impossible. And this is, you know, when I talk to audiences, the first thing I say is that not only do we have to face uncertainty, but we have to face a deeper uncertainty: we can't even list the possibilities of what might happen to us in the future. We have to deal with that, you know, a cloud of unknowing. And that's part of human life. And I'm interested in strategies that people use personally to deal with that. How is it linked with the Big Five? My impression is that introverts like risks less, Americans like them more, old people perhaps less than young people. How does that work? Yeah, I haven't actually looked at that. I mean, people will have looked at that, because people have
Starting point is 00:03:16 studied risk proneness and risk aversion. What they found is that there's not a single risk characteristic in people's personalities, which I think is why it's not part of the Big Five. I've known people who were incredibly reckless physically, with what they did with their bodies, but extremely cautious with money. Other people may be very bold socially and take all sorts of risks in changing jobs, changing friends, and going into new environments, but again may be very cautious about their physical health. And so there is not a single risk scale where you can put everybody on it. It's much more multidimensional than that. Why do people fear the unknown so much?
Starting point is 00:04:03 Oh, I mean, I suppose if I was an evolutionary biologist, I might say. I mean, we fear the unknown risks more than the known ones, right? It's why COVID was so scary, because we hadn't seen it before. Exactly, exactly. And that's been known for ages, that aversion to unknown, unquantifiable risk, the Ellsberg paradox that people were investigating back in the 1950s: if you couldn't actually say how big the risk was, people were much more averse to being exposed to it. That idea of aversion
Starting point is 00:04:34 to uncertainty about the risk itself has been known for about 70 years; I think Daniel Ellsberg did his original study of that. And I think it's quite reasonable, because if you don't know what the possibilities are and roughly how likely they are, then all sorts of other, perhaps rather deeper attitudes to caution and precaution come in, and people might start hedging themselves, quite reasonably, against major losses. And we can see that going on in all parts of our life.
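The Ellsberg setup he refers to can be made concrete. A minimal sketch (illustrative, not from the episode): drawing red wins; a known 50/50 urn and an ambiguous urn give the same expected chance of winning under a uniform prior over compositions, yet most people pay a premium to avoid the ambiguous one.

```python
# Ellsberg-style comparison (illustrative sketch).
# Urn A: 50 red, 50 black. Urn B: 100 balls, unknown red/black mix.
# You win by drawing red.
from fractions import Fraction

p_known = Fraction(50, 100)  # Urn A: exactly 1/2

# Urn B: average chance of red over all 101 possible compositions,
# assuming each composition is equally likely (a uniform prior)
p_ambiguous = sum(Fraction(r, 100) for r in range(101)) / 101

# Both print 1/2: the aversion people show to Urn B is about not knowing
# the risk, not about a worse expected chance of winning.
print(p_known, p_ambiguous)
```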
Starting point is 00:05:16 What kind of habits can help us interpret risk in a more rational way? Well, first of all, I don't like the word rational. In my book, I hardly use the word rational at all, because this claim that, oh yes, we can look at risks rationally and deconstruct them, in real life, I think that's pretty nonsensical. And I've taught this stuff for decades. I've taught decision theory. I know how you should be doing it. And I know you can't do it in practice, because for the theory to work,
Starting point is 00:05:41 you have to be able to list all the possibilities. You have to list all the options. You have to look at the probabilities of the possible outcomes and their value to you, and then, according to economic principles, you should maximize your expected return, and things like that. Well, it just doesn't work. It fails at the first step,
Starting point is 00:05:58 in that you can't even list everything that's going to happen, let alone, except in really simple circumstances, put numbers on everything. So I think nobody can be rational. It just doesn't exist, I think. So I really don't like the idea that there is some objective, rational answer.
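The textbook recipe he is dismissing fits in a few lines, which is rather the point: the machinery is trivial once you have a complete list of outcomes, probabilities, and values, and real decisions almost never supply one. A toy sketch with invented numbers:

```python
# The decision-theory recipe in miniature (invented numbers):
# each option needs every outcome listed with a probability and a value.
options = {
    "take umbrella":  [(0.3, -1), (0.7, -1)],    # mild nuisance either way
    "leave umbrella": [(0.3, -10), (0.7, 0)],    # soaked if it rains
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

# "Maximize expected return": pick the option with the best average value.
best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # take umbrella: -1 beats 0.3 * -10 + 0.7 * 0 = -3
```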
Starting point is 00:06:17 No, I think one can try to be reasonable in a much broader sense. I suppose what I would say is, first of all, you have to use as much imagination as you can. Things might happen that you didn't think of, but you're really opening yourself up to problems if you haven't at least made a big effort to envisage the possible futures and to consider quite extreme scenarios. And that requires diversity of inputs. I'm hopeless at this. I have no imagination at all. Absolutely disastrous. But I know that if I were, God forbid, making important decisions for society, I'd want advisors with a real range of different inputs.
Starting point is 00:07:02 The example I like is Barack Obama, when he was faced with the decision about whether to send in the SEALs when it was suspected that Osama bin Laden was in a compound in Abbottabad. And he had a very diverse team of advisors who didn't speak to each other. And some may have been set up as kind of red teams. They were really pessimistic: a 30 to 40% chance he was going to be there. Others were really gung-ho: an 80 to 90% chance he's there. And he had to put all that information together. But I think that's what someone who actually has to take the rap, the real decision maker,
Starting point is 00:07:35 should be open to: a diversity of opinion when there is no clear, correct answer. So I think the first thing is we have to acknowledge there's no correct way of doing this. We have to have a diversity of opinion. And we have to work with a combination of an analytic approach, which I'm based in, you know, maths. I've done maths and stats. I love trying to deconstruct uncertainty, looking at the sources, analysing data, building statistical models. And it's great. But it's never enough. It never tells you what to do. You also have to use judgment. But increasingly, numbers are emotional. They're weaponized in debates and so on. How do you deal with that? Oh, it's well known about numbers: it's a complete myth that they're cold, hard facts. Even before they're weaponized, we know that someone has made a decision to collect that particular data. There are lots of judgments that go into every analysis, every sort of measurement. There's always judgment behind every statistical analysis. And it's good that's
Starting point is 00:08:33 made very explicit, so other judgments can be added to it. People may disagree about the fundamental tenets, the fundamental assumptions that always underlie any statistical analysis. So it's a mixture. It's not thinking fast, and it's not thinking slow. It has to be a combination of the two. Now, you started out in medical statistics. How did that come about? Oh, very naturally, just because there was a good job going. And curiously, my first real job after university, I taught in America for a year and then came back, and it was 1978, was working on artificial intelligence in medicine, in the late 1970s.
Starting point is 00:09:21 It was a booming idea. It was then largely called computer-aided diagnosis and computer-aided prognosis. And it was building statistical models to enable diagnosis, but it included computer interviewing of people with stomach complaints and things like that. It was really advanced. The tech was terrible, but the ideas were absolutely modern, and the problems, the issues about how to integrate this into medical practice, were all in there. So I was working on uncertainty in AI for much of the 1980s, and we thought we'd solved it. How wrong could we be?
Starting point is 00:09:52 Because it's still a massive topic. Obviously, the machine learning techniques that people use now have developed beyond all imagination, and they really are incredible. However, they are still really struggling with uncertainty. What were some of the strangest stats that you've seen in the medical sector? Oh, I don't know, the mind-boggling ones? I suppose it's the ones I've been involved in. I've actually been involved in four major public inquiries into health scandals in the UK. That's kind of where I developed a slightly higher public profile, I think. Ones where, you know, over 30 babies died
Starting point is 00:10:36 with heart surgery at a centre, more than you would expect to have died. And then, of course, the second one was Harold Shipman, the mass murderer, who murdered at least 250 and possibly 400 of his patients over a 20-year period. And we were brought in, he had been caught by then, but we were brought in to say, could he have been detected earlier or not? Could he? Yes. We concluded he could have been detected after a few years,
Starting point is 00:11:03 if somebody had been looking at the data, because he had so many excess deaths. But nobody was looking at the data. Are people now looking at data properly across hospitals across the world? It's got better, but it's still slow. I'm on a group in the NHS that's only now building a really rigorous statistical monitoring system for adverse events in maternity units. And there have been endless maternity scandals in the UK. And finally, we've got a system: it's essentially a statistical process control system,
Starting point is 00:11:34 like the one we applied to Shipman. People took what we'd done, which was adapting industrial quality control to medical outcomes, and applied it elsewhere: children's intensive care in the UK, for example, has a monitoring system for early detection of problems based almost precisely on the work we did for Shipman.
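The monitoring idea behind that work, industrial process control pointed at mortality data, can be illustrated with a simple CUSUM chart. This is a generic sketch with invented numbers, not the inquiry's actual (risk-adjusted) method:

```python
# Simplified CUSUM over observed-minus-expected deaths (invented data).
# Excess deaths accumulate; the chart signals when the total crosses a
# threshold chosen to trade off false alarms against late detection.
observed = [3, 2, 4, 3, 6, 7, 5, 8]                    # deaths per year
expected = [3.1, 3.0, 3.2, 3.1, 3.3, 3.2, 3.1, 3.3]    # case-mix adjusted
threshold = 5.0

cusum = 0.0
for year, (obs, exp) in enumerate(zip(observed, expected), start=1):
    cusum = max(0.0, cusum + (obs - exp))  # floor at zero: track excesses only
    if cusum > threshold:
        print(f"signal in year {year}: cumulative excess {cusum:.1f}")
        break
```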
Starting point is 00:12:08 Now, you are a leading public communicator of statistics. Why is it important that people have a grasp of this field? Oh. What kind of wrong decisions are people making? What are the big ones? So we only have to look at some of the, without mentioning any names. Well, do name a few. Okay. We only have to look at what's happening in America at the moment to see what happens when high-level public discourse is not based on evidence. It's not based on numbers.
Starting point is 00:12:31 It's just based on people saying what they feel like saying, regardless of the evidence behind it. Give some examples. What are the most horrifying ones in your mind? Oh, well, I think what RFK is saying about vaccines at the moment, you know, because he's got a built-in bias. I mean, vaccines are not perfect. They're not perfectly safe and they're not perfectly effective. So I'd be the first one to say the phrase "vaccines are safe and effective" is actually misleading. However, they are of enormous value, and he's got his particular, I think, biases there.
Starting point is 00:13:06 And, of course, I'm not going to talk about Trump and the way he was setting tariffs and things like that. How he originally did that seemed to be based on what a 20-year-old intern might do on a spreadsheet. And so it just upsets me when I see it, apart from, of course, the sacking of the head of the Bureau of Labor Statistics when he doesn't like the numbers. So all these things are deeply upsetting to a nerdy statistician like me. I don't want to tell anyone what to do. I don't want to tell anyone what the right policy is. All I want to do is say, please respect the evidence that we've got.
Starting point is 00:13:45 Just respect it. It doesn't tell you what to do or whatever, but just try to respect it. And it's not just politicians; it's social media, it's conspiracy theories everywhere. The lack of concern about, or the, I think, deliberate lack of understanding of, what good evidence is and how it should be used is deeply upsetting to a nerd. Why is it happening? Oh, God. This is beyond me. Look, I'm a statistician, not a sociologist.
Starting point is 00:14:18 Okay, but give us some more examples of where you think the world is going totally bananas and moving away from facts. Yeah, yeah. I think, obviously, there's enormous blame on social media, on the recommendation algorithms, which mean that something that looks impressive, and is almost certainly wrong, spreads. And so often these things are based on numbers. You know, people love numbers. And they kind of think, as we said before, that they're cold, hard facts. But no. And often the numbers are actually not completely wrong. It's just that they're grossly misinterpreted and exaggerated and one-sided. Cherry-picked. And, you know, a wrong number that's blown up, with people making some broad claim, is around the world in no time, triggered by the algorithms everywhere. And it looks good. It looks impressive. And trying to backtrack on that is really, really difficult. And that's why, you know, I'm on the board of the UK Statistics Authority, and one of the things I try to hammer all the time in the communication of official statistics, boring old official statistics, is to try to preempt the misunderstandings that people will make. Because once everything's
Starting point is 00:15:33 out there, it's really difficult to counter the misinformation, the misclaims, which might be made accidentally or deliberately. You can't stop people saying anything. You can't stop misclaims. But if you can preempt them, if you can understand, by knowing your audiences, by listening to people's concerns, even the people you don't like, what might be said, you can get in there and actually say: I think we can interpret this data to mean at least this, but it does not mean this. And that, I think, is becoming a more common trend in the communication of official statistics, to say what things don't mean. What was the most important thing we learned from COVID? Oh, the importance
Starting point is 00:16:18 of data. The importance of data and its clear communication. Which data in particular? Oh, everything. We wrote a whole book, you know, with every chapter on a different data source. There was so much. I mean, I was working around the clock, really, analyzing data and communicating about it. Because I didn't have an official role, which was great, which meant I could get out there with the media and try to explain things.
Starting point is 00:16:37 And again, I never said what should be done. I was only trying to explain the numbers. And you've got to have everything. You had the vaccines. You had the rollout. You had the testing. You had, of course, the disease, the infections. But what a fantastic time for a statistician. I mean, it must be paradise for you. Well, I don't know. Paradise isn't quite the
Starting point is 00:16:50 right word. But it was very exciting, very challenging and incredibly rewarding. And important. I mean, statisticians have hardly been that important before. Exactly. And I keep on, even now, getting people coming up and saying, well, thank you so much for the work you did during COVID. And the media had to learn. It wasn't just me or other statisticians who were out there talking about the numbers. And yet we were always, at the beginning at least,
Starting point is 00:17:20 asked, well, you know, who's to blame, and what's going to happen, or what should be done? And we'd have to say, no, no, it's not my job. You'll have to ask somebody else. All we're doing is explaining the numbers. And after a while, the media learned that this is what the audience actually wanted. They wanted an unbiased, agenda-free discussion of the numbers. And they loved it. And so, you know, I was really popular, to be honest. Well, in your book, you talk about the five Krebs rules: you tell people what you know, you tell people what you don't know. Tell us about these five things. Oh, yeah. Well, this is for communicating evidence in a crisis. This is all derived from John Krebs when he was head of the
Starting point is 00:17:59 Food Standards Agency in the UK, when he faced crisis after crisis. He had foot-and-mouth. He had mad cow disease. He had everything, one after the other, total disasters. And he developed a sort of playbook, which he wrote down afterwards, saying, this is what I did when I was talking. Five points. And everyone should have these tattooed on them, I think. First, you say what you know. So, you know, be really clear about what we know. And then you say what you don't know. You say where the areas of uncertainty are. You admit them straight away. Second, not first. But what is second? And then you say what we're doing about it. You know, we are learning more. We're doing experiments. We're finding out. We're
Starting point is 00:18:35 collecting data. We are learning. Then you tell people what they can do in the meantime: that you may want to be cautious, you may not want to eat beef, and so on. So you give people advice, self-efficacy, in the meantime. But the final one is the most important. You say: we will come back to you, and our advice will change as we learn more. So you emphasize the provisionality of what you're saying. Now, this is deeply trustworthy, because it's true. It's also, as far as I can see, absolutely impossible for politicians to do. It's just not in their vocabulary at all, this idea of provisionality. Why is it so difficult? Well, they think they have to be absolutely confident. They say, oh, if we're not certain
Starting point is 00:19:20 about everything, nobody will believe us. They'll just listen to somebody else. We have to be absolutely certain about everything. And I think they believe it. And our research with psychologists, and not just our work but other research too, where we've done randomized trials of different ways of messaging, strongly suggests this is a complete myth: that if you're in a position of authority and you do actually admit some uncertainty, that there are pros and cons, et cetera, you are trusted more. And what's more important, you're trusted more by the people who were initially skeptical. The very people you're trying to reach trust you more, because you're finally listening to their concerns. You're finally acknowledging
Starting point is 00:19:59 that there are issues out there. Perhaps vaccines aren't completely safe and effective. So what that means is that the common political way of communicating, which is, I think, put into practice by communication departments in government, which hammer through the message, bam, bam, is actively decreasing trust in the group they're trying to reach, those who are most skeptical. For the people who believe them already, there's no point. So they're making it worse by their attitude. And since we did these trials and actually saw data on thousands of people
Starting point is 00:20:29 showing that trust was improved in the most skeptical group if you gave a balanced, trustworthy message, including uncertainty, it totally changed my mind. It absolutely convinced me about this. I also believe it's correct from an ethical point of view, because it is correct, but purely from a practical point of view, it should also be more effective. No, I totally agree with it. I've done a podcast with Rachel Botsman, who is a specialist on trust, and indeed this is very important. But it's a bit, in the public sector,
Starting point is 00:20:57 generally, they never apologise either, right? It's kind of tied into the same thing, I think. But what did we not learn from COVID? Oh, a lot of it was not being flexible enough, getting tied in. You know, we were told to wash our hands and wipe surfaces and things like that. And within about a month, we knew this was pretty useless scientifically. And yet nobody ever said: actually, this is not the point. It's fresh air, ventilation, that's more important.
Starting point is 00:21:28 Nobody said that for a year. And so I think what we didn't learn was that you need to be agile and flexible and take people with you by acknowledging the uncertainties when you change course. So the guy we worked with who was most impressive, I thought, was Jonathan Van-Tam, the deputy chief medical officer. And we worked together when the UK did admit that the AstraZeneca vaccine was causing these very nasty blood clots, which I think was pretty well first detected in Norway, particularly in young people. And we worked on the communication of that, where we showed that the benefits of the vaccine went down massively
Starting point is 00:22:17 stratified by age. And he then said to the public, he explained all this, used our graphics, went through the numbers, treating the audience with respect, admitting the evidence had changed, and then said, we're changing policy. And everyone said, well, fine. And he said, oh, we're adjusting our course. It's not a U-turn. It's adjusting our course.
Starting point is 00:22:35 And there was no pushback from the media. There were no accusations. People really accepted it, because he actually showed the evidence to the public as he was explaining it to them. And he understood it. A politician would have been hopeless at it, because he wouldn't have understood what was going on. He wouldn't have been able to explain it. And so that, to me, showed that if you can get good scientists actually doing the
Starting point is 00:23:01 communication and they're good and they're reliable and trustworthy, this can have an enormous impact on the public and on, well, public trust. The world is totally overflowing with data. How do you distinguish kind of the signal from the noise, so to say? Well, sorry, that's my entire career. You're asking me to explain what being a statistician means. That's the statistician's job, you know, trying to split the signal from the noise. And, of course, you can't ever do it.
Starting point is 00:23:39 a black and white thing at all. But it's by trying to understand, and this is a standard statistical thing that would have been said for the last century, the sources of variation. Just like pre-war, when so much of statistics was developed at the Rothamsted experimental station for plants, and it was about understanding the sources of variation. What led to the variation between the crop yields? What factors led to it? And, of course, there is, in a way, unavoidable variation, which tends to be called noise or random error. And then there's, in a sense, predictable variation, which is due to factors that you might be able to control.
Starting point is 00:24:21 And that's what statistics has worked on for about the last hundred years. And it's not been bad. It's got some pretty good techniques for, you know, largely regression methods and so on. And so it's actually done quite well. But you notice that that is different really from a strict machine learning black box approach, which just throws the data in and tries to extract a prediction, you know, a classification, possibly with some uncertainty or a prediction of what's going to happen, where there's no, you know, real understanding there of where it came from.
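The "sources of variation" idea he describes is easy to show in miniature: fit a simple model and see how much of the spread a controllable factor explains versus what remains as noise. An invented example in the Rothamsted spirit:

```python
# Decomposing variation in crop yield (invented data): how much is
# explained by fertilizer dose, and how much is residual noise?
import statistics

dose   = [1, 1, 2, 2, 3, 3, 4, 4]
yields = [2.1, 2.4, 3.0, 2.8, 3.9, 4.2, 4.8, 5.1]

mx, my = statistics.mean(dose), statistics.mean(yields)
slope = sum((x - mx) * (y - my) for x, y in zip(dose, yields)) / \
        sum((x - mx) ** 2 for x in dose)
intercept = my - slope * mx
fitted = [intercept + slope * x for x in dose]

total    = sum((y - my) ** 2 for y in yields)                 # all variation
residual = sum((y - f) ** 2 for y, f in zip(yields, fitted))  # unexplained
print(f"share of variation explained: {1 - residual / total:.0%}")
```

Unlike a black-box predictor, everything here is inspectable: the slope says what the factor does, and the residual says how much is left unexplained.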
Starting point is 00:24:58 Again, people will have real difficulty explaining why a piece of AI came up with its conclusion. People are really working on that now, just as they are with uncertainty. But, you know, if you use rather more basic statistical methods, which is why I support them, you can then generate a much clearer explanation of why you came to that conclusion, and of what the important factors are. In which area are you the most impressed by the improved predictions? Oh, well, I mean, in terms of what you might call rather tightly controlled areas, AI has done brilliantly.
Starting point is 00:25:40 I mean, the people at Google DeepMind, who started on games like chess and Go and things like that, and then moved into, obviously, protein folding, and also medical diagnostics from images, in terms of breast cancer and eye problems. They worked with Moorfields Hospital. It's tremendously impressive in that way, in that they can take on quite a big area. But these are all tightly constrained problems. You know, there's a block of data.
Starting point is 00:26:07 You've got an image. You've got, you know, a set of data, and then you produce an outcome. And it's just brilliant at that. Where I'm much more skeptical is about, I think, very unfounded claims that, oh, well, we can just put your medical record into AI and it'll tell you X, Y, and Z. So people have made a jump from these quite tightly constrained problems, where it's just brilliant, into much more general problems. And it's hardly surprising they do that
Starting point is 00:26:38 when we look at large language models, how effective they are, at coming back with what is quite often a reasonable response to very generic issues because they've been able to mine vast amounts of stuff on the web. Now, how good that would be about... And of course, it can come up with a list of medical diagnoses and things like that.
Starting point is 00:26:57 Yeah, it's just fine. It can make suggestions, and it can be extremely effective at that. But to take it another way, I mean, you've been around for a while. Just during that period, the accuracy of weather forecasts, for instance, has become just mind-bogglingly different, right? Weather forecasts are fascinating, what's going on there, because there's a real competition going on, a fairly friendly competition, I think, between totally different philosophies of weather forecasting.
Starting point is 00:27:23 Because traditionally, it's been applied mathematicians and physicists building huge weather models based on the Navier-Stokes equations and differential equations. They build massive models of the atmosphere and then make a prediction, you know, six days, a week, ten days ahead. But they don't use any data apart from the initial conditions, really. And then they vary the initial conditions, and that produces an ensemble model, and they produce a probability of what's going to happen. The alternative approach, which people like DeepMind have taken, is to throw out all our knowledge of the physics of the atmosphere, everything that's been learned in the last 300 years, just throw it all out, and take
Starting point is 00:28:03 the massive amount of data and essentially do pattern recognition, and make a prediction with no ability to explain why it's come up with this conclusion: a pure black box. And they're doing really well. Which one is going to win eventually? Well, I'm really pleased that the UK Met Office has got both teams working and collaborating, because I think it's going to be complementary in the end. But because my background is in statistics rather than applied maths, I'm kind of secretly on the side of the black-box machine learning people, because I just love the thought of just putting the data in and out comes a prediction, without any ability to say why. It's just saying, well, in the past, when this pattern was
Starting point is 00:28:48 there, this is what happened. Well, actually, maybe that's as good as you can do. So I think it's a fascinating competition going on at the moment, which I'm watching with glee. Would you have liked to be a weather forecaster?
Starting point is 00:29:08 Kind of. I think if I had to study something, meteorology would have been it. I mean, I don't care about the meteorology; that's why I quite like the data-based approach. What I'm interested in is that the first main Nature paper from DeepMind on this didn't have uncertainties in it. And I think it's really important to have uncertainties. All the time, I look at probabilities of rain and things like that.
Starting point is 00:29:27 I use those uncertainties. If I didn't have them, I really would feel a bit lost. So when you look at the forecast, you don't look at whether it's going to rain or not? You look at what's the probability of rain? Yeah, I do. And what probability do you need to bring your umbrella? Well, exactly.
Starting point is 00:29:45 I don't know. It depends; that's very personal. It depends what I want to do, whether I want to have a picnic or not. So, no, I need the probabilities. And this goes back to the 1950s, with Glenn Brier developing a scoring rule for probabilistic precipitation forecasts. And it's terribly exciting, I think. And the skill of these probabilistic forecasts is growing all the time, and it will continue to grow.
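The rule he credits to Glenn Brier is simple enough to compute by hand: the mean squared gap between the stated probability and what actually happened. A sketch with invented forecasts:

```python
# Brier score for probability-of-rain forecasts (invented data).
# 0 is perfect; always hedging with 0.5 scores 0.25; confident wrong
# forecasts are punished hardest -- so honest uncertainty pays.
forecasts = [0.9, 0.8, 0.1, 0.3, 0.7]   # stated probabilities of rain
outcomes  = [1,   1,   0,   0,   1]     # 1 = rain, 0 = dry

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0.048 here: lower is better
```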
Starting point is 00:30:11 Do you buy a lot of lottery tickets? I bought one a few years ago, when the expected return was higher than the ticket price. There'd been so many rollovers. It's still almost completely impossible to win. But I thought, I've got to have a go at a gamble where the expected return is higher than the stake. In which cases do you look at probabilities where other people would look at yes or no? Oh, I got prostate cancer. And so I'm really interested in forecasting for people with cancer. And we've been involved in algorithms for breast cancer and
Starting point is 00:30:47 prostate cancer, building the software to demonstrate those to people. And they're all in terms of probabilities, you know, roughish but not bad, in terms of 10-year survival and how that will change depending on the different treatments you've got. And I think that's absolutely essential. When somebody says, oh, my doctor told me I had six months to live, I think, what? For a start, I never believe they said that anymore. Maybe there are some doctors who'd be so stupid as to say that. But the truth is, we don't know how long anyone's going to live. We can put some broad bracket on it, because it's a survival
Starting point is 00:31:22 curve. Now, you could talk... But first of all, I want to say I'm really sorry to hear that you have got prostate cancer. But given that you are a mathematician, what are your stats? Oh, mine. Oh, yeah. That's the problem. I've got locally advanced prostate cancer. So I've got some minor metastases, oligometastatic, it's called. I've got a few of them.
Starting point is 00:31:41 But the drugs now, the new hormone drugs, I'm on abiraterone, are just so effective. Now, my PSA is essentially non-measurable. But the problem is that the survival data come from clinical trials, and that's really depressing. When I go back to the trials for abiraterone, they're terrible. You get a median survival of 18 months or something.
Starting point is 00:32:10 I think, what? No, I'm going to live longer than that, I'm sure. Because, you know, in the trials, they were trying this on very sick people, and things have improved so much. And actually, I'm at an earlier stage than the people who were in those trials. So there's no good data, really, on what my survival prospects are. It's very difficult. It's a shame. And I wish there were a better database. I really wish I could just tap in and find out: out of 100 people who are most similar to me, what happened to them? But for a start, we don't know. It's only been given to people like me for a few years.
Starting point is 00:32:53 So we've got no long-term follow-up. We don't know. I mean, some people got it earlier; my oncologist said, oh, I've had someone on this for 16 years. So that's the problem with something that's fairly rapidly changing when you're asking for a long-term prediction. You can't say. Well, all I can say is fingers crossed. Yeah, yeah. I mean, it does seem a bit pathetic for a statistician just to say, well, I hope I'm lucky. But I hope I'm lucky. Why are we so bad at predicting elections?
Starting point is 00:33:29 Oh, elections. Oh, well, for a start, because when you go out and ask somebody, all the election polls just ask people: what would you vote if you had to vote now? I mean, that's the question. You're not even asking them what they're going to vote in the election. You're asking them, if you had to vote today, what would you vote? And that's a very biased measure of what you're trying to estimate, you know, what that person will vote in three weeks' time, two weeks' time, one week, three months' time.
Starting point is 00:33:56 People change. People are not necessarily honest. And people may vote, or they may not vote at all. So I think that the basic data source is always going to be biased. And so it's not like predicting weather. You know, weather changes, but in a sense it's not changing because of what people feel. So I think election forecasts are never going to be great, because they're trying to estimate something that you cannot observe. No. You've been bringing statistics into some new areas. So, for instance, anti-doping, you know, the World Anti-Doping Agency.
Starting point is 00:34:41 What kind of things were you doing there? Oh, that. I'm not sure what's happened since. They had the idea of an athlete passport, because they wanted, actually, a passport which kept a complete record of each athlete's drug-testing history. And it was to try to allow, to a certain extent,
Starting point is 00:34:57 for the fact that there is individual variation in how people respond to drugs. And so when you take a measurement from somebody, and they were particularly interested in people getting blood transfusions just before an athletic event, you'd be wanting to look at someone's red cell level or something like that. Was it remarkably high? People vary anyway.
Starting point is 00:35:24 So to know whether it had been pumped up high, you have to know something about their past history. So essentially you have to have a model for someone's variability and their natural baseline for their haemoglobin level. And so it was quite complex. It was fascinating statistically, and actually, I'd be interested to know what the current situation is with that. But it was, in a sense, trying to make the drug-testing regime a bit fairer, so it could adapt to the individual biology of the athletes.
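The passport logic he outlines, judging a new reading against the athlete's own baseline and variability rather than a population norm, reduces in its simplest form to a z-score. A toy sketch with invented haemoglobin values (the real system models much more than this):

```python
# Toy athlete-passport check (invented values): compare a new haemoglobin
# reading with this athlete's own history, not with a population average.
import statistics

history = [14.8, 15.1, 14.9, 15.0, 15.2, 14.7]   # past readings, g/dL
new_reading = 17.1

baseline = statistics.mean(history)
spread = statistics.stdev(history)                # individual variability
z = (new_reading - baseline) / spread

print(f"z = {z:.1f}")
if z > 3:
    # not proof of doping, just a statistical flag for targeted follow-up
    print("atypical for this athlete: flag for investigation")
```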
Starting point is 00:36:05 From drug testing to the financial industry: is there anything in the financial industry that puzzles you? No, I've kept well away from that. I mean, for a long time. And why are you keeping well away from that? Not interested. For a long time I was funded from the charitable arm of Winton Capital Management, David Harding's hedge fund, and he was great. He gave us money and let us get on with what we wanted to. And I think, I hope, he felt it was a good investment. But I had nothing to do with the hedge fund business. And I just never had any interest in money. It just puts me off. But why didn't he, you know, he's established a very successful firm,
Starting point is 00:36:43 why didn't he ask you, hey, David, come and take a look at things? No, no, no, that was not part of the deal at all. No, he had plenty of really good people. He's a mathematician, right? Yeah, yeah. So he had the whole place stuffed full of mathematicians and really competent people, much more competent mathematicians than me, for a start. So for a start, I couldn't have contributed anyway.
Starting point is 00:37:03 And it would have been really inappropriate, I think, for that to happen. And I just wasn't interested. I couldn't care less. Do you think people are worried enough about the really, really bad outcomes? So, for instance, bioterrorism, you know, the comeback of smallpox, that kind of thing. Nuclear war, you know, climate change, I would say. Well, nuclear war is relatively less dangerous than bioterrorism.
Starting point is 00:37:41 Yeah, well, it could be on a larger scale, but yeah, it depends on the scale. But a nuclear war would not be great. And so it's tricky, I think, because in the end, we're all going to die, you know, and within a very finite period. From a personal point of view, I can understand, because I think I do it myself, that people do not want to spend their time obsessing about all the terrible things that could happen in the world. It seems to me that this is not beneficial to your mental health, shall we say, to be really obsessed. I kind of hope there are people studying it more professionally who are trying to counter it with appropriate regulation, appropriate policing and so on.
Starting point is 00:38:32 But I personally do not want to spend my time waking up in the morning worried about smallpox and bioterrorism. So, in other words, I think I can understand, because I don't do it, why catastrophic existential risks are not at the top of the agenda in people's concerns. And for a start, I'm hopelessly optimistic. I actually think they tend to be overrated. But that's maybe because my
Starting point is 00:39:02 particular personality is far too optimistic. Was Norway lucky to find the oil? Oh, that's interesting. I don't know about finding it, but Norway was extremely sensible in how it dealt with it once it had found it, compared with the UK. Which is why you've got your job at the moment, to some extent. And so I think Norway dealt with it in the absolutely brilliant
Starting point is 00:39:41 way of seeing it as a national resource to be, in a sense, nurtured for the whole community, rather than, as in the UK, selling off the rights in order to raise some money in the short term. How do you look at climate risk? It's difficult, because I really... So you have people talking about it, like some places in America where you sit there in the middle of ash, rain, forest fires, you're sitting in the middle of it, and you're saying, you know, there is no problem with the climate. I know, I know. No, it's just happening, and that's it. You can just tell by the events,
Starting point is 00:40:16 which are just going to carry on. There'll be ups and downs, and there are going to be more extreme events. What it's brought to the fore is attribution studies, which are not so much about climate risk as about looking backwards and saying: to what extent was this caused by man-made climate change? That's a really big growing area in research, and it's going to be a big growing area financially when people start suing fossil fuel companies for events that happen. But it all requires models. You have to have a model of how the climate has developed with man-made interventions, and how we think it would have developed had we not been throwing all this muck into the air since the 1700s.
Starting point is 00:41:05 And you see how likely these events were under those two different scenarios. And the relative risk: our meteorological office has an attribution centre, and they will give you a relative risk. Technically, that can be converted into a probability of causation: the probability that this hot weather event, this tornado, was caused by man-made climate change was X percent. And officially, once that gets above 50 percent, by the balance of probabilities in a civil court case, you could say that man-made climate change was responsible. Now, trying to then attribute it to particular companies, I think, is rather more difficult.
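The conversion he sketches, from an attribution study's relative risk to a probability of causation, is one line: PC = 1 - 1/RR. A sketch with invented relative risks:

```python
# Probability of causation from a relative risk: PC = 1 - 1/RR.
# At RR = 2 the event was as likely as not due to climate change,
# which is the civil-court "balance of probabilities" line he mentions.
def probability_of_causation(relative_risk: float) -> float:
    return 1.0 - 1.0 / relative_risk

for rr in (1.5, 2.0, 4.0):   # invented relative risks for a heat event
    print(f"RR = {rr}: PC = {probability_of_causation(rr):.0%}")
# RR = 1.5 -> 33%, RR = 2.0 -> 50%, RR = 4.0 -> 75%
```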
Starting point is 00:41:49 But from a technical point of view, it's brought this fascinating idea of attribution. And of course, for that, you have to have climate models, and the climate models are used to make projections as to what's going to happen. There's a lot of uncertainty, and the modellers take a lot of care in dealing with that. They also have what I think is really good: independent teams coming up with different climate models, which go into the pool, and that makes things even more uncertain. It broadens the possibilities out. And, you know, we're never going to know exactly what's going to happen, and one should not state too confidently what's going to happen, but we know bad things are going to happen.
Starting point is 00:42:28 And what to do about it? Again, not my job. Not my job, I'm afraid. Talking about big things: in 2021, you said that AI poses an extreme risk, but that it is perhaps overrated. What do you think now? Oh, I think it poses an extreme risk,
Starting point is 00:42:48 and I think it's probably overrated. Could you explain? Yeah. I mean, it's amazing how strong it is. And it depends what you mean by extreme risk. It's certainly going to take some jobs, and we're all going to have to adapt our work to it. It's also incredibly useful and valuable. I use it, of course, every day. But in terms of risks, well, when people talk about this, they're talking about existential risks: say, a self-aware AI that's going to start
Starting point is 00:43:54 Well, there could be half a million deaths. And that still gets quoted. Oh, they said there were going to be half a million deaths. No, they said there would be half a million deaths if nobody did anything about it. If we all just sat there and let it wash over us, there would be half a million deaths, and there would have been. But they're now being accused and saying, oh, they said there would be half a million deaths and there weren't. So the point is that one could talk about there being risk, but I think it's limited the value of that if we're all going to assume we're just going to sit here and a lot. ourselves to be taken over by killer robots.
Starting point is 00:44:26 So I think that what it does do is, of course, call for far greater scrutiny of what's being done, in the openness about the guardrails put in, and so on. I mean, but are you seeing the necessary guardrails and so on being in place? I don't know enough about it. Again, it's not really my area. But when you look at AI, what are the type of things you look at in order to to gauge the risk? Oh, again, I don't know enough about these sort of existential risk and how those could
Starting point is 00:45:01 occur, the superintelligence and things like that. I really don't know enough about it, and I wouldn't want to claim to. And how do you use, you said you used it all the time? Oh, I use it myself. What kind of things do you use it? I use them writing my book. I use it for researching and I use it for coding and I use it for personal things like trying to work out where I'm going to go on holiday.
Starting point is 00:45:17 So I'll use it all that for all sorts of stuff. Where does it tell you to go on holiday? Oh, well, I don't just say where shall I go on holiday. Not quite that broad. But no, I use it all the time. And of course, we're almost forced to use it now because it comes up number one thing when we do a Google search. So I think it's incredibly valuable and it's unbelievable.
Starting point is 00:45:42 It's unbelievable what it can do. But the guardrails, of course, are already in there in many ways in terms of, you know, violent speech and racism and all sorts of stuff that it can't do that are built in. And I want those, the crucial thing I feel is that these should, of course, should be open and public and they should be a regulator to make sure they're being adhered to. So, but lots of people are saying this. I mean, there's a massive AI safety is such a massive thing because all the tech people are going on about it as well. So I'll leave them to it and hope that there's some decent people involved in it. But it is a crucial area, not my job.
Starting point is 00:46:22 Now, if you do look at your job and kind of your legacy, what is the one idea or principle that you hope you will be remembered for? Oh, I don't know. I've moved around rather a lot. Yeah. One thing? Oh, I think it's the stuff I've been doing lately. I think it's on trustworthy communication of evidence.
Starting point is 00:46:44 In a way, it's not my, I've done quite a lot of technical stuff. That's where I get all my citations from, et cetera, and I get loads of those, which is lovely. But in the end, what I'm just obsessed by is the need for trustworthy communication of evidence. As we said, not that it tells you what to do. But we're sort of doomed if we don't use evidence in an appropriate way and don't communicate it properly. We are just left with thinking fast, in Danny Kahneman's term. We're just left with gut reactions and emotional feelings for everything. And where is the next step of this trustworthy communication?
Starting point is 00:47:28 Where is that kind of part of the science going? Oh, yeah. Well, people are concerned about it. They're obviously concerned about the quality of what's published in the scientific literature, which is a massive problem, because there's so much junk out there from paper mills and so on. I can talk about the UK, actually, where this is now at a very high level.
Starting point is 00:47:54 There's a Code of Practice for Statistics in the UK, which is a pretty dull document, but it's incredibly important. There's a new version coming out, which is really putting down the way in which all official statistics, and even non-official statistics, should be communicated to the public, based on these ideas of preempting misunderstandings, of not being misleading, of being open about limitations and so on. And that's all down there, and people have to,
Starting point is 00:48:20 government departments have to, adhere to it; they are actually bound by it. And so it's setting, I think, quite a good example, because in the UK we've got an Office for Statistics
Starting point is 00:48:31 Regulation, which I think might be fairly unique around the world: an actual body that is there as an inspector, as a regulator, for statistics. And I'm a huge believer in that, obviously, and I would love to see
Starting point is 00:48:45 that model developed elsewhere, so that you have a body who can really tell people off when statistics are being misused. My God, they'd have their work cut out in the US, wouldn't they? Oh, that's for sure. What's the big unanswered question that you still want to tackle? Oh. Well, I'd quite like to understand consciousness, and whether there is such a thing as free will. But that's, again, going somewhat outside my professional expertise, shall
Starting point is 00:49:17 we say. But it is in my book. I do discuss it, because I think it becomes quite important when you start talking about to what extent the world is genuinely stochastic, random, or to what extent it is actually deterministic but staggeringly complex, in a way that renders it unpredictable. I think that's an interesting issue. Oh, interesting question. I suppose, broadly, it comes down a bit to what we were discussing before, about whether
Starting point is 00:50:04 COVID was actually quite encouraging, because in a crisis situation, good communication did rise to the surface. There were obviously people on each side arguing. I was right in the middle, getting attacked by everybody, so I thought, yeah, I'm doing the right thing.
Starting point is 00:50:48 And it wasn't just me, but there was an enhanced respect for the mainstream media. And largely, people were interested. Everybody was discussing the data; it was a very active community. And I think it was, as we discussed, a very exciting, positive time. Now, once a crisis has gone, everyone goes back to their normal stuff and loses interest in all of this. And I suppose my question, or interest, is whether that kind of interest and attention and wanting trustworthy information can be retained in societies that are becoming
Starting point is 00:51:23 increasingly populist, you know, following populist politicians who object to authority, who are distrustful of what they call elites and experts and so on. And as that happens, well, the big thing is: can that be countered by having trustworthy experts? You know, people who do know something. I think it's so interesting. According to Bill Gates, it would take roughly a billion dollars a year to make sure the world is really ready for the next pandemic, and the world is not spending that money.
Starting point is 00:51:49 Anyway in Sweden so I thought that's great So yeah I think we would be I think we would be more Swedes in the future And so but the crucial thing about that Is that what it says is that you know you said you can spend a billion But you can't change people how you do What are you going to do about people I mean how people react is
Starting point is 00:52:06 absolutely crucial. And you don't necessarily change the way they react by just spending money on things. So I think that particularly in pandemics, the role of human reaction to the situation is, which is the most important and the least predictable aspect. And actually quite difficult to research, very difficult, I think, to learn from the past pandemic about exactly what worked and what didn't, because things were so different across every bit of society within and between societies. So I'm not quite convinced that just throwing money at something is going to necessarily, you know, protect us against it. What do you read outside statistics? Oh, I read some crime novels, but I'm interested in I really like history and biography,
Starting point is 00:52:59 especially military history. I'm obsessed with the Second World War, so I spend my time visiting, you know, war sites around Europe and further and in India. So that's actually what I'm interested in. Why are you so interested in that? I've been asking myself that why am I so interested? I think it's growing up in this generation, you know, born in 1953 in the UK, the war was around us all the time in the films and the culture and the experience of the adults around us and things like that. We grew up with it as small kids, absolutely obsessed with it. And I never really have quite lost my interest. I think partly because every time I read anything about it,
Starting point is 00:53:38 I just sort of thank heavens for my, well, what's known as constitutional luck, the fact that I was born when I was born into a society I was born into, which was staggering constitutive luck. Now, if you were to apply or if you were to put on your professor in statistics hat and give advice to young people using some, numerical math or whatever, what is your advice for young people? How should they think about the life in statistical terms?
Starting point is 00:54:08 Yeah, well, I do. I mean, some of them even read my books, which is, I don't. That's a must. I had a 17-year-old come. I nearly burst in tears, but he came up and he said, oh, I really like your book. And I said, well, it's not actually aimed for you to you. I really, I was so moved that he liked it because I actually think there is some stuff in there about, you know, we have to face up to uncertainty. we don't know what's going to happen.
Starting point is 00:54:32 I didn't know when I was 18, so I had no idea what's going to happen in my life. As I said, I had an enormous constitutive luck where I was at that time, lots of opportunities, you know, a very secure situation, everything paid,
Starting point is 00:54:44 health care, university, everything all paid for. I was in a very, really privileged situation. And so, but there's still, of course, massive unpredictability, but I didn't mind. I felt that I'd got a good upbringing,
Starting point is 00:54:57 which gave me resilience. So the crucial thing, and because as relevant to corporations as it is to human beings, I think is resilience. It's the number one priority because that's how we deal with the deep uncertainty in all situations, the fact that we don't even know, we can't even list what might happen, particularly some way in the future. So what we have to do is cultivate resilience, which is an ability to deal with, if not, and learn from, benefit from, anything that can happen. whether it's good or bad. And for some reason, I think I developed quite a lot of it. I'm not quite sure.
Starting point is 00:55:36 Maybe it's a nice, secure upbringing I had and the good fortune I've had. But for some reason, I think I got it. I don't know. And I think, though, again, I would say this for young people that you have to go out and take risks. Don't be reckless. I say this, I do. I give talks in schools, and I say you've got to take risks. Now, don't be reckless.
Starting point is 00:55:57 You know, cover yourself from the major downsides, you know, just be careful, but take risks. So go out, you know, camping on your own in the middle of a moor, but don't be stupid. Let people know where you're going and make sure you've got the proper kit in Norway, because everyone knows about that stuff, but not in the UK necessarily. So go and have those adventures, but don't be stupid. And so it's through having, you know, those sorts of adventures and taking some risks and being in situations where you're not quite sure what's going on. that you develop resilience. So I'd just say to young people go out there and have adventures,
Starting point is 00:56:32 but don't be stupid. David, I had a 90% probability on this podcast being very good, but it's even better than I expected. I just absolutely love talking to you. So big thanks for everything you do to society and increasing the knowledge of stats and just for being such a wonderful communicator. Big thank you. Well, thanks so much. This was somewhat outside my comfort zone a lot of this. So anyway, so I suppose I had to take the risks of being. There's the biggest risk I've done for quite a long time as being on this podcast. I tell you that. Great. Big thanks. Thank you. Bye-bye.
