The Wellness Scoop - The Extra Scoop: EMFs, AI, Radiation & the Digital Health Myths We Need to Debunk

Episode Date: June 12, 2025

We’re more connected than ever — but is our tech making us healthier, or quietly harming us? In this episode of The Extra Scoop, we’re joined by physicist and cancer researcher Dr David Robert Grimes to unpack what the science really says about technology and our health — from EMFs and mobile phones to AirPods, microwaves, AI, and the platforms that shape our daily lives. Dr Grimes explores why viral misinformation spreads so easily, how social media is rewiring our emotional responses, and why he compares today’s tech giants to the tobacco industry. We also dig into the long-term risks of algorithm-driven platforms and ask whether fear around radiation and devices is rooted in real science — or modern health anxiety. This is a conversation that cuts through the noise, challenges fear-based narratives, and empowers you to think more critically in a digital-first world.

In This Episode, We Cover:
- Whether phones, AirPods and microwaves pose any physical health risks
- What the research says about EMFs and 5G
- The rise of misinformation — and why social media may be the new tobacco
- The long-term impact of AI and algorithm-driven tech on learning and mental health
- The real cost of constant outrage and viral content
- Simple ways to spot health misinformation and protect your wellbeing online
- How to stay informed without falling into fear

About the Guest: Dr David Robert Grimes is a physicist, cancer researcher and science writer with a PhD in medical physics. His work focuses on misinformation, public health and science communication. He writes for The Guardian, The Irish Times and BBC Future, and is the author of The Irrational Ape, which explores why we believe falsehoods — and how to think more clearly in a world of misinformation.

Transcript
Welcome to the Extra Scoop, our expert-led bonus episodes giving you the need to know on the topics everybody is talking about. That's right, from gut health to sleep, hormones to skin, we are cutting through the noise with top experts so there is no fluff, just facts. Exactly, it's all quick, practical, evidence-based advice to fit into your busy lives. So this is the Extra Scoop. Let's get started. As our listeners may know, our Extra Scoop episodes are all about taking the topics everyone's talking about and quickly breaking them down for you, so that you have expert-led insights that leave you informed and empowered
when it comes to your wellbeing. Since the beginning of the Wellness Scoop, we've wanted to speak about the impact of technology not only on our mental wellbeing but also on our physical health. So we're really excited to explore this today with Dr David Robert Grimes, a physicist, cancer researcher, and science writer widely respected for making complex scientific ideas accessible. And of course, for his fearless approach to debunking misinformation around topics like vaccines, climate change, and tech health scares. So it's an absolute delight today to welcome Dr. David Robert Grimes. Hello. Good morning or afternoon,
Starting point is 00:02:24 depending on what time you're listening to this, I guess. Thank you so much for joining us. Honestly, there is so much to talk about today, so many topics that it's going to be a challenge on our side to keep this concise for listeners. But I guess just to get right to it, you have recently compared social media to the tobacco industry.
It's obviously such a pressing conversation at the moment. We've seen shows like Adolescence come to the forefront. It really sparked this huge debate, I think, across the world on the impact it's having. I wondered if you could talk us through your idea behind that and what your solutions are. And is it as bad as everyone says it is?
Starting point is 00:03:01 It's worse, is the general answer. Oh, great. Let's jump straight into it. The piece you're referring to is a thing I authored for Scientific American back in April. I made the comparison that the closest historical analogy we have to social media giants is the tobacco industry. Because they know their product is harmful, their own internal research shows their product is harmful.
Yet, because that engagement that they rely on as a source of revenue is so important to them, they have just simply disregarded the harms it causes, whether that's to democracy or to public health or to the mental health of the people on their platforms. Again, recently Sarah Wynn-Williams' book, Careless People, came out as well; I just finished it. If you had a low opinion of social media before, you'll have an even lower one at the end of it. There's a few reasons for this. Social media is, you might argue, and the very tech-libertarian version of it is, just a platform for people to express ideas. That's kind of true, except it's deliberately disingenuous. I'm going to try
to unpack that to see where that comes from. The idea that we are neutral arbiters of information is completely nonsensical. It doesn't make sense. I wrote a popular science book about this called The Irrational Ape. The psychology of how we deal with information is that we're incredibly partisan. We tend to seek out information that confirms our biases. We are far more likely to emote first and reason later. For this reason, messages that are sent on social media tend to target us emotionally. They tend to aim for outrage and inflammation. The biggest single predictors of virality for online content are whether it makes you scared, angry, or outraged. So any negative
emotional feedback. We sit here and think we're very rational about how we process information. We are not. We are incredibly emotional. There's two major drivers. One is that we have cognitive problems: for example, we don't realize that a post from the New York Times is probably more reliable than our racist uncle's Facebook page. That's called source neglect.
We do that a lot. The other part of it is the social effect. That means that if someone we like and admire is spreading a certain message, we do a monkey-see, monkey-do. If the Kardashians are telling you to take an MRI scan you don't need, you want to be a bit like the Kardashians for reasons that are beyond me, but they have a lot of followers, so people do. But there's lots of this. I mean, as a joke recently, I pointed out that, you know, people selling supplements online, things like that, that's also the social effect. By the way, they're all the exact same. Just to make a point, I got my own supplements branded. It has my face on it, and it says on the packet that supplementation is
Starting point is 00:05:39 useless and there's no good evidence for it. I don't sell these. I give them as paperweights to friends as a reminder not to be sucked in by online scams. But the reason people go for that is because they trust the person putting it up. That's parasocial. Social media amplifies that massively. That's one of the major problems. The other problem is we are not very good at assessing the quality of information that we come across. This is why misinformation thrives. Apart from the fact that it is emotive, you never see a happy misinformation story. You see stuff that deliberately outrages you and makes you mad, right? Or makes you divided against something. That's what it typically tends to be. But because we don't parse that very well, we are very susceptible. The idea
that we are immune, that it's just a platform and it doesn't make a difference, falls apart when the algorithms are also pushing content towards you. If you click on a racist post, for example: a lot of my work means I have to look at anti-immigrant, racist invective, because it often comes from the hard-right conspiracy theories, and I get so much of that put in my feed now because I clicked on it for research purposes before. I obviously know it's nonsense. I write about it a lot, but that's what it does. And this is the Adolescence point you mentioned earlier on. If you start looking at Andrew Tate content, for example, you'll get nothing but that. I had to write a piece on Andrew Tate a few years ago.
Starting point is 00:06:50 And I mean, for about six months afterwards, all I was getting was whether I wanted to join Andrew Tate's academy. But the algorithm will push content it thinks you want because you've looked it up before. What does that do? It silos you into echo chambers. Now, people will say, well, you know, newspapers are partisan too. Kind of. Newspapers have journalistic standards. They have independent bodies that can assess them for factual accuracy.
Starting point is 00:07:12 They can be held accountable for what they write. Social media companies cannot. And that's where the difference is. They're making a lot more money. They're also ruining journalism because they're giving that content out online and now newspapers can't afford to pay their journalists. So we're in a very strange space. I feel that's the rant. I'll stop now for a second, take a breath. It's fantastic because you've summed up, I think, everything that obviously is a bit twisted with the world right now and it preys on our negative emotions. We know that bad news spreads a lot faster. It's more click-baity, lots of people that have very polarizing views or really extreme things to say.
They're the ones that get projected on this algorithm. So can I ask you in particular, let's talk about the physical impacts now. So we've talked, I'd say, a little bit about mental health, what we've just discussed, and the workplace and the economy and everything that's kind of twisted in this strange world of social media.
But can we go into items that are also being fear mongered left, right and center? Only the other day I saw on my feed something like, you know, the headphones, the new Apple AirPods, basically radiating inside my head like I'm wearing a microwave on my head. And there's this guy walking around with this silly little cardboard box that looked like a microwave, and he put it on top of his head.
So- He should have kept it on. So, you know, ideally that would be the end of that content. He just puts, I'm a silly person, here's a box on my head. That would be great, if social media had more boxes on heads. I mean, what he was saying though, David, was drastic.
Right. This amuses me immensely because, with my other hat on as a cancer researcher, and before then as a physicist working with ionizing and non-ionizing radiation, I want to say I find this hilarious. I don't. I find it really draining. I've written papers on this, a review for JAMA Oncology a few years ago about microwave radiation, or radio frequency radiation to give it its broader
name. To put things in perspective, your microwave operates on a really different principle of dielectric heating than mobile phones do. When people try to compare it to a microwave, they don't actually understand what's happening. That's the first thing, because to make a microwave work, you need to have it in a chamber that's tuned. You need a resonator. You need a magnetron. There's a reason you don't just randomly have microwaves in your house. You need to actually have them in boxes. But put the physics aside for a sec. When we use our mobile phones or Bluetooth headphones or whatever else, we are using radio frequency radiation, usually between 2.4 and 5.1 gigahertz.
People hear radiation, they go, oh, that's scary. Radiation simply means the transmission of energy through a medium. You know, we're surrounded by radiation right now. Do you see the light around us? Light is a form of radiation. It's on the electromagnetic spectrum. And to put it in context, I point out that the lowest energy light, which is red light, right, is between 10 and 15,000 times more powerful than the highest energy radio frequency radiation. And if you look at this as a spectrum, if you
were afraid of your phone, you should be afraid of light bulbs. Spoiler: please don't be afraid of light bulbs. But that is the level of what they're doing here. It's a real logical fallacy. I know I get really annoyed, because I pick them apart in the book an awful lot. One of the problems here is that some radiation, I should say specifically, can be dangerous. X-rays and gamma rays are higher energy photons. They are damaging because they have the ability.
They have enough energy intrinsic in those little quantum packets. I'll spare you the physics, but they're called photons. They're little packets of energy. They have enough intrinsic energy to damage DNA bonds, which is why, for example, X-ray exposure can cause cancers eventually. It can cause damage to DNA, which eventually becomes cancer. Visible light doesn't have that energy. Radio frequency doesn't have anywhere near that energy. Due to a nice quirk of quantum mechanics, it never can. You can't add it all up to get enough energy.
That's why visible light doesn't cause us any harm, and neither does radio frequency. So they take a grain of truth, that some radiation can be harmful, and they extrapolate it to all radiation being harmful, without even realizing they've skipped orders of magnitude on the radiation spectrum. But as you said, in that video he was quite extreme. This person is one of two things. They either don't realize how uninformed they are, right? Which is possible because, you know, particularly if it's a man, no offense to my gender, but we're famous for being confident without being informed. Or the second part of it, they may well know it's nonsense, because this is old nonsense. I mean, I've been writing about Bluetooth paranoia since what, 2012? I mean, I've been writing about
it a long time. I feel very old now. It's nonsense, but it's engagement. Because I guarantee, every time one of these videos goes viral, even though I've done loads of videos debunking this stuff, I will suddenly get a message going, I'm really concerned about this, should I be concerned? And I'm like, they've just reanimated the algorithm to get content. Because again, this shows the point. Fear spreads. It's contagious. Me doing a video going, that's probably fine, I would not worry about that at all, there's much more important things to worry about, and that's not going to cause you any
Starting point is 00:12:26 harm. That's not going to get the clicks. What's going to get the clicks is if I put a box on my head and say, you're wearing a microwave on your head all the time. Oh my God. Particularly if I'm American for some reason, I don't know why I've done that in my head. And that's something we have to train ourselves to not react to. These people make money off that. So I don't know this person in particular. Firstly, they're not creative because this has been done a lot. But what I will say is that's really old stuff. And whether they know it's wrong or they are too uninformed to realize it's wrong, I think the end consequences are the same. They're scaring the bejesus out of people over stuff they do not need to be scared over.
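To make the photon-energy argument above concrete, here is a rough back-of-the-envelope sketch in Python. The specific frequencies and the roughly 3.6 eV figure used for a typical covalent bond are illustrative assumptions rather than numbers quoted in the episode; the point is only the orders of magnitude.

# Rough sketch of the photon-energy argument (E = h * f).
# The frequencies and the ~3.6 eV bond-energy figure are illustrative
# assumptions, not values quoted in the episode.
PLANCK_EV = 4.135667696e-15  # Planck's constant in eV*s

sources = {
    "Wi-Fi / Bluetooth (2.4 GHz)": 2.4e9,
    "Higher Wi-Fi band (5 GHz)": 5.0e9,
    "Red light (~430 THz)": 4.3e14,
    "Dental X-ray (~30 keV photon)": 7.25e18,
}

BOND_ENERGY_EV = 3.6  # ballpark energy needed to break a covalent bond

for name, freq_hz in sources.items():
    photon_ev = PLANCK_EV * freq_hz
    print(f"{name}: {photon_ev:.2e} eV per photon "
          f"({photon_ev / BOND_ENERGY_EV:.1e} of the bond energy)")

Running this shows that only the X-ray photon carries more energy than the bond, while the radio-frequency photons fall short by many orders of magnitude; and, as Dr Grimes notes, those photon energies do not add up to break a bond no matter how many of them arrive.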
Starting point is 00:13:03 Well, I mean, the clarity is phenomenal because I think it's such a great analogy for what we see online now. And I know for Ria and I, it's such a point of frustration on a day-to-day basis in the sense of, obviously for us, it's so much about what you're eating and your diet, but telling people to eat a vegetable stir fry
and how to cook it in 15 minutes just doesn't have the cut-through. It's like, here are the five foods that will kill you tomorrow. And that sense of preying on people's fear is so, so powerful. And it is absolutely terrifying. And I think, you know, there's a trend going around at the moment that I've seen. It's like, the things I have in my house to protect my health, and things I've seen
are like EMF-stopping stickers, basically, on phones. Like, I mean, I'm not sure if you're near my age, but when we were teenagers, they used to sell these light-up things you could stick to the back of your old Nokias. They were like two euro. I can't believe they're now selling them, probably, yeah, bespoke versions of them. And they do nothing. They just light up. I saw one where a necklace that the mum had given the children was supposed to block the rays.
And we actually were moving house. We were looking at people's houses and we went to a house where they were like, oh yeah, we don't have Wi-Fi in the house now, we only have plug-ins, so there's no EMF rays in the house. Obviously people are scared of 5G phone masts. That's been a thing to talk about. And I just wanted, essentially, a tick list for people: microwaves, 5G phone masts, EMF blockers. It is essentially nonsense. What we don't want is too many x-rays in our life. Absolutely. Yeah. Yeah. I mean, and that's why medical professionals will tell you, yeah, we control x-ray exposure.
Exactly. We don't control your exposure to sun. Well, actually, sunlight is the caveat here, because sunlight has ultraviolet radiation in the spectrum as well as visible light. So people go, well, the sun is harmful, and I can hear people already in my head going, well, David, you said... Well, actually, yes. The ultraviolet component is why you wear sunscreen, because that is part of the solar spectrum.
But the visible light, if you shine a torch at yourself for hours, it's not going to give you a cancer no matter how bright you shine the torch at yourself, because it's not capable of breaking DNA bonds, which is a prerequisite for something being a carcinogen. What I found really funny, and this is a trivial thing, is that you can shield yourself from EMFs if you wanted to with a Faraday cage. This is a physics thing from years ago. They're basically chicken wire; it's just the way the field induces currents in the mesh. Whatever is inside that gets shielded. There was a bunch of anti-5G people that bought these Faraday cages on Amazon to shield their house from 5G and they put them around the
house and they all went onto Amazon and complained that their wifi signal in the house dropped. But it was hilarious to me because it showed they really didn't understand the fundamental physics of the problem they were talking about. They were very confident about it. They were very insistent. They didn't need this stuff, and they didn't realize what they were doing. And this is why a little knowledge can be a dangerous thing. I would always say, if you hear something scary, get three different sources to verify whether it's true or not and make sure they are trusted ones. And again, I keep going to the analogy in my head, you know, something that is in the New York Times, probably more reliable than, you know, your conspiracy theorist cousin's
Instagram page. And yet people treat these things like they're equivalent. I mean, one's more engaging. I mean, the article I wrote for JAMA Oncology on radio frequency radiation was just basically, yeah, we've looked at all the epidemiology, we've looked at all the biophysics, and it turns out nothing. It's not going to do anything. That's not exciting. It's exciting for me, but it's not exciting. I'm not going to get major clicks with, guys, it's fine, don't worry, chill.
Be cool, man. Maybe that's what we should have: a brand just called Be Chill, Dude. And see how far we can go with the Be Chill thing. Should we drink arsenic? Probably not chill. But can we use our phones? Yeah, that's fine. Yeah. David, I think what's really reassuring is the fact that the science obviously is concrete, and when you explain it in that way, it's really reassuring. You mentioned something like a little bit of knowledge is dangerous. And I completely agree, because I can almost relate to it. In my world of work, there's lots of health coaches or different people that aren't specifically qualified in nutrition, yet they go into that world. And of course, you can
Starting point is 00:17:09 do more harm than good sometimes. But unintentionally, a lot of people don't realize because they don't have the full grounding. It's such a difficult world to navigate. I think one of the questions before I move on to AI, because I really want to pick your brains on AI, is that you said microwaves should be in a confined space. So I can almost, you know, it's a little box and that's where everything happens and that's where the magic of the cooking and the process and everything begins.
But what about if you leave the door open? So I had so many questions from people that said... It'll make no difference. No difference. But here's the thing. A microwave works by dielectric heating, and it needs a standing wave chamber.
Starting point is 00:17:46 You can do this. One of the experiments we used to do with undergraduates is you get a really long bar of chocolate and you put it in the microwave for just a few seconds, and you actually notice a standing wave pattern on it. They're called resonance chambers. They're tuned to work and to amplify the rays in a certain way. As you'll notice, the microwave door is actually, there's a little grill on it. That's actually a Faraday cage right there. Even if you did get, for some reason, a defect
in it that would bounce it back. Also, I mean, I need to hasten to point this out. Your house is not a resonance chamber. It is not exactly tuned and it doesn't have a giant magnetron underneath it. That's the heavy part of a microwave, to make sure that all stays in one place. So worst case scenario, if you ran a microwave with the door off, you just get a really inefficient microwave. Oh, fantastic. Also, who was ever able to open their microwave door while it's running? Mine, since we've had them in the 80s, have turned off when you try to open the door. You hit the button, the door opens, the whole thing stops. That's a basic switch.
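The chocolate-bar demonstration Dr Grimes mentions follows from one line of physics: the melted spots sit at the antinodes of the standing wave, half a wavelength apart. The 2.45 GHz value below is the common domestic magnetron frequency, assumed here for illustration rather than taken from the episode.

# Sketch of the chocolate-bar standing-wave demo (assumes a 2.45 GHz magnetron).
SPEED_OF_LIGHT = 3.0e8   # m/s
MICROWAVE_FREQ = 2.45e9  # Hz, typical domestic oven frequency (assumption)

wavelength = SPEED_OF_LIGHT / MICROWAVE_FREQ  # about 0.122 m
hot_spot_spacing = wavelength / 2             # antinodes sit half a wavelength apart

print(f"Wavelength: {wavelength * 100:.1f} cm")
print(f"Expected spacing of melted spots: {hot_spot_spacing * 100:.1f} cm")

# Run it backwards: measure the spot spacing with a ruler (a hypothetical 6 cm here)
# and you can estimate the speed of light from your kitchen.
measured_spacing_m = 0.06
estimated_c = 2 * measured_spacing_m * MICROWAVE_FREQ
print(f"Estimated speed of light: {estimated_c:.2e} m/s")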
I think the fear of these things is so big now that it's almost like, even when the microwave's off but the door is open, it's going to be emitting something. Look, we're all scared of things like cancer, aren't we, ultimately? I mean, I say cancer is a consequence of living long enough in a lot of cases. The primary
cause of cancer is aging. It's like going to Vegas and rolling an awful lot. You're going to roll snake eyes eventually because, again, cancer is probabilistic for the most part. We're exposed every day to agents that might cause cancer, and our DNA is super good at repairing stuff, but one day it's going to mess up. You'll probably die of something else before it as well. Heart disease is the leading cause. So you know, you've got two ways to go. You're going to go heart disease or cancer. Take your pick. If you look at it long term, we all die; something is going to kill us. But the obsession with fixating on things that have no real risk factor, or nothing that's going to be cumulative,
and living your life based on that fear makes no sense to me. I think that's such an apt point. I have one quick question, a final one on microwaves, and then I want to pick up on what you just said. On microwaves, it was whether or not it nukes your food, you know, whether it damages all the vitamins and minerals in your food, which I think is the other concern. But on that point, I think health anxiety is such a huge topic at the moment. And it's so perpetuated by the online world, because to your point, David, fear and misinformation are so deeply connected.
Starting point is 00:20:02 And that's what really fundamentally runs the algorithms now. And I just wanted to pick up on the health anxiety point, but first of all, could you confirm that if you do cook food in a microwave, you're not destroying all nutritional content? God no. What microwaves are very, very good at doing is vibrating water, right? That's how they work, which is why if you put a bowl of cornflakes, dry cornflakes in the microwave, you'll be surprised to find it doesn't heat up because it's dehydrated. There's no liquid in it to vibrate. So all that microwaves do is shake water and they shake it enough to cause friction that warms it up. That's why you can heat things very efficiently with them as long as they are liquid-based, which is most of our food. Most of us don't just eat cornflakes all the time. But yeah, it's not nuking your food.
It doesn't destroy the nutrients. They're still there. I know. I came armed with the whole thing; I've got my science of plant-based nutrition right next to me, and all the different methods of cooking. And yeah, you definitely retain nutrition by using the microwave. That's for sure. It's just, don't overcook your food, guys. I think that's the same with most things. But can we move on from obviously health anxiety and microwaves and all sorts of things? We could go on forever because there's so much. But can we talk about this new technology, David? I've never actually had a chance to speak to you about this yet, so I'm really excited
to pick your brains on the subject of AI and chatbots. I just cannot get over the fact that they weren't part of our lives even during the pandemic. And now, do you think with this new technology there are long-term risks, perhaps for the younger generations, because they're growing up with this tech at their fingertips? Do you think it affects how they learn academically? My big concern with my kids is, of course, there's ChatGPT and all these different things.
I don't want them to lose the skill of writing. No, tools themselves, and this goes to a degree for social media too, are in and of themselves amoral. A knife, for example, is very good at slicing meat, but you could also use it to kill someone. It's how you deploy it that changes the ethics of it. Now with large language models, which is what we mean when we talk about AI, as you were saying, one of my frustrations, and I particularly hear policymakers talking about it, is they
don't seem to understand what it is. So my other hat that I wear is I'm a chartered statistician. All these models are is they search vast databases, AKA they steal stuff out of books, the internet, Wikipedia, Reddit, Stack Exchange, right? Massive amounts of data. And they use statistical associative models, which means they go, well, when people were searching for chicken and cook, those words went together. If you do this enough, and humans ask very similar questions, you can come up with relatively
good prediction models based on this. Now, we've known this since the 50s, by the way. The actual idea of a large language model dates from the 50s. But to do this, to make it convincing, you need massive computing power, because you need to be able to take huge amounts of data and shuffle it all together. But essentially it's a shuffling mechanism. When people go, like, why doesn't AI just... You know, sometimes AI will tell you something that's just absolutely wrong. And you'll go, well, why did AI not tell me that was false?
Because AI doesn't have a concept of true or false. AI has a concept of what people have associated with what. It's like a slightly cracked-out version of Google search. It's like, you're interested in this and that, and other people are interested in that. It doesn't intrinsically know if it's lying to you. It doesn't intrinsically know anything. Right? And the problem is people seem to think that AI models have some great wisdom. What you can extract from them sometimes is what associations they have made, and that can be useful. For example, I'm often coding. For what I do, I do a lot of coding of health interventions, and I have to actually write code in R or MATLAB, whatever else. What saves me a lot of time is I know how
to write that loop. I've written a million of them before, but I'm going to say, ChatGPT, just sort this out for me, and it'll give me a workable version I can use as a basis. The reason it does that really well is that it has stolen all the data on Reddit and Stack Exchange from programmers and statisticians over decades, so it has a great repository of expert knowledge, because that was experts. However, it also steals data from places that are absolute garbage, like 4chan. It's very easy, if you don't know your subject area well enough, for it to give you absolute nonsense. Also, often, even though it steals good data, for me, for example, half the time
Starting point is 00:24:18 I'll put the code in and it won't work because it's stolen the wrong kind of code and it doesn't know. It doesn't know it's lying to you. So while it's very good for writing a cover letter, for example, it has billions of cover letters on the internet. It's really good at them. So if you said, write me a cover letter, it'll write you a very passable cover letter. But if you wanted to give any depth or specifics, it's going to fail or it's going to possibly write something absolutely wrong.
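As a very rough caricature of the statistical associative model Dr Grimes describes, here is a toy next-word predictor in Python. Real large language models are enormous neural networks trained on vastly more data, so this is only a sketch of the underlying idea that words which tend to appear together get predicted together; the tiny corpus is invented for illustration.

# Toy caricature of "words that appear together get predicted together".
# Real LLMs are large neural networks, not lookup tables, but the
# objective is still prediction from observed associations.
from collections import Counter, defaultdict

corpus = ("how do i cook chicken . how do i cook chicken . "
          "how do i cook rice . how do i fix my code").split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("cook"))  # 'chicken', the most frequent association
print(predict_next("fix"))   # 'my'
# The model has no notion of whether its output is true or false;
# it only knows what tended to come next in the text it was fed.

That is also why, as the conversation goes on to note, such a system can confidently produce code or summaries that are simply wrong: it reproduces associations, not checked facts.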
BBC did a study a few weeks ago, well, I say study, but they fed their stories into ChatGPT and said, summarize this. And in almost all cases, it summarized at least something wrong. A friend of mine works, I won't mention the newspaper, but they're a journalist for a certain paper, and they got in-house AI software. And whenever they put in a suicide notice, it will come back and suggest that it might've been murder. Because online, that's always the, you know, it looked like a suicide, but it was actually murder. So much of the data it has trained on is like that, that they actually have to check obituary notices to make sure it doesn't start saying that people were murdered. So the problem is they're fine, but it's a
Starting point is 00:25:19 regurgitator of what already exists. It doesn't create anything original. And I think that's really important for people to realize. There's no originality. Intelligence is a misnomer. It's not thinking. It's doing large processing of statistics. And you have to take that with a huge pinch of salt. So David, listening to you there, it sounds like obviously social media over the last
Starting point is 00:25:38 decade in particular has kind of massively spread this new era of misinformation and challenge for consumers and people online to understand what's correct and what to follow, what not to. But it sounds like AI is going to really double down on that. To your point, if you are not an expert in something and you don't have that base knowledge, it's very difficult for you to decipher whether or not what you're being told is correct. Absolutely.
Imagine there was a version of Wikipedia that didn't actually have references. So for example, if I go to Wikipedia, I'm a huge fan of Wikipedia. I think it's one of the best things humanity has ever done on the internet. And there's not that many good things we've done on the internet, let's be fair. But the English language version in particular has an army of volunteers, an army of references. Like I always say to students, it's okay to start on Wikipedia, just don't finish there, because everything is hyperlinked. Go in there and just check the source and check that it's genuine. Usually it is.
Although my Wikipedia page is almost entirely wrong. Yeah, mine is. Mine is as well. And I won't touch it on principle. No, they won't let me change it. Every time I change it, they change it back. Like, I'm born in the wrong place. So what you need to do is upload a birth cert, basically, with you right there, and on the talk page you have to go, actually, I was born here. And of course it's not perfect, but I think it's one of the better things.
But imagine there were no references, no talk history, just a thing you went into that changed every day. That's what ChatGPT is. And people think it's a reliable source. I mean, I use it out of laziness in areas I know well. I tell it to write code for me when I'm too lazy to write the code myself, but it's code I could write, and I can check from a scan if it makes any sense. I've also occasionally got it to write, like, you know, an ethics statement for a grant. And then I read it and go, okay, 80% of that is usable and 20% of that is garbage. And luckily, I know which 20% is garbage.
Starting point is 00:27:27 If I was starting from scratch and I thought these are the best things ever, it would be a big mistake. Again, it's like a kitchen knife, perfect tool for saving you some time on mundane tasks. Really bad tool if you think it actually has any knowledge or depth, because it doesn't. I think what's been so helpful about this episode is just A, a reminder that if you don't really know what you're talking about, which I do not
when it comes to technology and AI, things can appear very scary. And when you hear someone like David break it down and say, look, he's not remotely worried about these topics, and he's explained the science, it's extremely reassuring. And AI is very new, everybody; we're all trying to navigate it.
You know, David just saying, I write code; I don't even know how to write code, you know. Let's just break down the fact that we've all got different strengths and weaknesses and we need to focus on what we can actually action. So on that part, our last question, David, for you is: what are three to five simple everyday steps that everyone can take to, you know, spot what is misinformation,
to feel more informed, and basically get rid of this anxiety that we all have when we're navigating being online. I'm so biased on this because not only do I research this, I've written a book on this, and this is what I say. Look, to distill it down, there's a lot of strategies we can use, but the very first strategy is to use information hygiene. Treat viral information like its pathogenic namesake. It could potentially infect you, and you could infect other people with it, to dire consequences. So when you come across a story that makes you scared, angry, emotive, stuff that makes you want to react, and social media encourages you to share that because that's
how they make their money, you need to be very zen about it and go, that's very scary sounding, I'm going to put that on the back burner and not take it very seriously until I've checked it. If you go to a third-party source that's reliable and it doesn't verify it, you have to abandon it and go, no. That's the idea of protecting yourself from getting infected, or not infecting other people. A common thing people do is they might share a video, kind of going, I'm not sure if it's
Starting point is 00:29:24 true, but I'm sharing it. And maybe it won't harm the person that shared it, but it might harm the person that views it. It's like if you get a cold and you're asymptomatic, but you infect someone else, they're infected. Treat information as viral, literally viral, and act accordingly. So before you spread, check it out. The other things I would say is if something feels like it's designed to make you angry or scared, realize that it probably is. Right? Realize that that is not how experts usually convey information. They're usually much more nuanced and balanced. If it's un-nuanced, be wary of it too. Always check your sources. Right? Again, I mean, I'm so sorry, I always tell my students,
if you saw it on TikTok, it's not necessarily the same as if you read it in one of our textbooks. That is the level. Social media makes us a bit lazy. It's like, well, I learned something. Did you? Also, I always say that about podcasts as well. I mean, podcasts are really important,
but they're entertainment. If you really wanna deep dive into a subject, there's no substitute for actually doing the old book learning and talking to field experts and feeling like an idiot a lot of the time. Because once you start learning a topic, you realize how little you know. And there's a whole thing where you suddenly have to slowly build up your knowledge. Social media gives you the illusion that you understand things. Never fall for the illusion.
And that's really important. Those are three things I'd say. I could say a lot more. I know I rant a lot more in the papers I've written and books I've written, where I do rant, but I'm gonna stop there, because I will keep you here for the rest of the week with strategies if I'm not careful.
Starting point is 00:30:52 Honestly, that was amazing. And I have to say to close on that, we touched on this a little while ago, but to your point on TikTok, 56% of Americans said they go to TikTok for health and wellness advice. And one in three cited TikTok as their main
source of health information. And so that kind of sums up why we're here. Ultimately, not to judge people for doing it, because it's become the norm, but just, to your point, to caution that it's just not the place for health advice. It's the place for videos of cats. Yeah, yeah. Videos of cats, yes. Health advice, no. And I think that's a good note to leave people with. Well, David, thank you so, so much for coming on The Extra Scoop today. Thank you so much and I hope to see you soon.
Thank you guys so much for listening to us on The Extra Scoop. We are a community-based podcast. We want this to be helpful for you. So any requests, we want to hear them. Absolutely. Let us know which experts you want on the Extra Scoop, and we will see you on Monday. Can't wait!
