Big Technology Podcast - Are We Too Obsessed With AI Predictions? — With Carissa Véliz

Episode Date: April 22, 2026

Carissa Véliz is an Oxford philosopher and the author of Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI. Véliz joins Big Technology Podcast to discuss whether society has become dangerously naive about prediction as AI systems shape decisions around jobs, loans, justice, surveillance, and war. Tune in to hear a debate about predictive algorithms, generative AI, prediction markets, and whether forecasts are actually tools of knowledge or instruments of power. We also cover privacy, policing, protest anonymity, flood prediction, and why humor might be one of the best defenses against a prediction-obsessed world.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

More on Carissa: https://www.chartwellspeakers.com/speaker/carissa-veliz-2/

Find the book: https://www.amazon.com/Prophecy-Prediction-Future-Ancient-Oracles/dp/0385550979

Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 If prediction is the basis for today's cutting-edge AI, shouldn't we examine the nature of prediction itself? Let's talk about it with Oxford philosopher Carissa Véliz, right after this. This episode is brought to you by ServiceNow. If you want to see where enterprise AI is actually headed, Knowledge 2026 is the place to be. It's ServiceNow's annual conference, May 5th through 7th in Las Vegas, where thousands of business and tech leaders come together.
Starting point is 00:00:25 Expect headline keynotes from ServiceNow chairman and CEO Bill McDermott, real stories from companies running AI at scale, and major partnership announcements turning AI ambition into actual business results. I'll be there in person sitting down with some of the most influential voices in the space, and we'll be bringing those conversations back to you here on Big Technology. Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. We have a great show for you today. We're here with Oxford philosopher Carissa Véliz, who has a new book out this week called Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI, talking all about prediction and what it means for our society. And prediction
Starting point is 00:01:10 really is everywhere in our society. Wouldn't you agree, Carissa? Absolutely. Thank you so much for having me, Alex. You bet. I mean, I was thinking about it when I saw that your book was coming out. I said, we have to have this conversation because, and we're going to get into the AI stuff in particular in a moment. But just from a big picture standpoint, I mean, everywhere we look today. we're trying to predict everything, right? AI, of course, or gender of AI is the nature of predicting the next word. We also have the older versions of machine learning, which has, like, lots of different predictive capabilities, predicting whether you're likely to default on a mortgage.
Starting point is 00:01:44 And then, of course, we're, like, in the middle of this, like, prediction market mania. What is happening? Exactly. As you say, it's everywhere. Prediction sounds like the Holy Grail for everyone. Everyone wants to know what's around the corner because everyone's anxious about the future. that's where we will all be spending the rest of our lives. And whoever can get a glimpse of the future has a competitive advantage.
Starting point is 00:02:05 But that script alone makes a lot of assumptions that are very problematic. Because it seems to suggest that the future is written. And our task is to discover what's there, to kind of discover this script that has been written for us. But actually, the future isn't written. And even though it's frightening, The most important events in your life, in your personal life, but also in your business life, and in our lives as a society, are the ones that are the most unpredictable.
Starting point is 00:02:38 So it's very easy to see what's ahead when the road is straight. It's the curves that are really hard to see, and in some cases, impossible. And those are the ones that will change your life. So you're saying that, okay, so we have this world where algorithms are making all these predictions that could influence us, it could steer us, and are you saying that we should, just do away with those predictions or we should be mindful of the fact that, you know, there might be something hidden underneath because there clearly is. Yeah, I think we should be mindful.
Starting point is 00:03:07 I'm not saying that we should do away with predictions. I use them. And in a way, predictions are part of how we make decisions. But we should be much more enlightened about it. I think we're being so incredibly naive. And in some cases, sure, we shouldn't use prediction. Let me give you an example. So take the justice system or any system in which we really.
Starting point is 00:03:28 care about fairness, in which fairness should be the value that is more important than efficiency or than profit. In those cases, it's very tricky to use predictions, because when you predict that somebody is going to fail, you affect their lives. So say, you use an algorithm to determine that someone is unemployable and you don't give them a job. But because everybody's using more or less the same algorithm, trained more or less on the same data, that person will never get a job. And the company is going to, the company that runs the algorithm is going to say, oh, see, our algorithm is 99.9% accurate. But it may be producing that accuracy through creating the reality that is purporting to predict, rather than that person really being unemployable.
Starting point is 00:04:09 And here's the interesting thing. Self-fulfilling prophecies are like the perfect crime, because it's like a murder weapon that disappears upon striking. It leaves no record. It creates no error signals. We will never know how that person would have fair. because they will never get the job and that data will never get collected. And so it seems like nothing untoward is happening when in fact great unfairness may be happening and being covered up. So you're talking in that example specifically about like AI filtration of resumes through job sites. Exactly. Yes.
Starting point is 00:04:41 Okay. Here's my pushback on that one. All right. I think that that example leaves out the agency of people to bend things. the job application process to their will, to a degree. I think if we are just at the mercy of like job application portals, then I would say, sure, you know, this stuff is probably bad. But isn't there, and I think this probably applies to all your arguments, and so let's have it out.
Starting point is 00:05:17 Isn't there the ability of people to just be like, I don't want to be at the mercy of this algorithmic job portal. I'm going to write straight to the hiring manager and make the case myself and sell yourself. And outside of this sort of algorithmic filtration thing that the hiring manager knows we'll miss people and that the sort of workplace in need understands is imperfect. Yes and no. So, for example, I've met someone who is really good at their job. a computer scientist, but every time they apply for a job through the normal procedure,
Starting point is 00:05:54 they get filtered out. And he doesn't know why. There might be something in his CV that makes him look quirky, and algorithms don't like quirky. And then people get to know him, and he gets offered this high-paying jobs from the same companies. However, there are many systems in which we're not allowing that leeway anymore. So there are many systems in which you've tried to find the email of the manager and you can't find it. More and more, we're being limited to these automatized processes, and that leeway that is so important that you're talking about is disappearing a bit. That's one side. But the other side is you might have people who are brilliant at what they do, but they don't have that kind of personality of looking for the manager.
Starting point is 00:06:40 They might be a particular kind of nerd, right, who may be a genius at, I don't know, programming or a genius on writing, but they're socially not as savvy to try to break the system. And society wants that talent. We're missing out on important talent when we streamline everything. But isn't that to a degree encouraging passivity? Think about the example of like, well, their email address might not be listed. I think that, you know, think about how long it takes to filter, and sorry to the hiring managers, because if people listen to this, your email inbox is going to get blown up, but I don't actually really feel that bad about it. But for instance, like, we're talking again about algorithmic hiring, going through these
Starting point is 00:07:30 processes, it's arduous. I think you could spend half the time guessing email addresses until you get the right one. So to say, let me just throw it out there, to say these AI, algorithmic systems are by, like, shouldn't be used because of this unfairness. Maybe you have actually a better advantage if you do try to break out of the system and be active a little bit and decide not to be at their whim. Like, people have agency at the end of the day. Again, you're assuming that you can break out. But even if you're right, that you can break out, the other side of the coin is that
Starting point is 00:08:11 actually you're incentivizing something like stalking. So the guys that will end up getting those jobs are the ones who are most insistent, who are most willing to break the rules sometime. And one of the concerns I have, I don't know what it's like in your world, but in my world in academia, I think we have a serious problem of fraud, of people who are very well-known and who have been very successful and who have fudge their data or who have committed other kinds of academic fraud. And it's precisely these kind of people, very active, very insistent,
Starting point is 00:08:44 is this kind of profile. And I don't think we should incentivizing that either. So, and I think we should get the best of both worlds. So we want the active people. We want to have a system that encourages them in the right way. And we want the people who, I wouldn't call them passive, but who have other kinds of talents. You know, some people are introverted,
Starting point is 00:09:05 and they tend to have different kinds of talents on the extroverted. And to put all our betting coins on the extroverts is, I think, losing a great pool of talent. Okay, first of all, I'm definitely not encouraging stalking. No, I know. I think this can be done outside of the realm of stalking. And I also think that you don't necessarily need to be a fraudster to go make your case outside the system. Absolutely not.
Starting point is 00:09:34 But it's the kind of incentive that attracts that kind of profile sometimes. Yeah, and I think we shouldn't let this sort of take away from your broader point here. because I have seen these systems. I mean, I've been lucky enough not to have to apply for a job for a while, but I have friends who have gone through these processes, and I'm kind of stunned at what job hiring software looks like today. They filter for personality and, I mean, I understand for an employer to want to have some indication of what somebody's personality is like,
Starting point is 00:10:12 but they do it to a degree where it's like you have a great candidate in front of you and like one little misstep on a multiple choice and a poorly word question filters them right out of the pool. I think that's actually a bad thing for employers as well. Yeah, or using AI to read people's emotions in an interview. There are so many assumptions and so many glitches in technology that is very, very questionable. Another really interesting example is loan applications. So if I'm a bank and you apply for a loan, and I have clear criteria about what you need to get X amount for a loan,
Starting point is 00:10:50 those are verifiable facts. So if I say, Alex, you need $10,000 in your bank account to get this amount of loan. Either you have them or you don't. If I reject your loan, but you do have the $10,000, you can prove me wrong and then we can solve it. But if you apply and I reject your application on the basis of a prediction, there's no way you can contest that because predictions are not facts. At best, they're educated guesses. And because they're not facts, you cannot prove it to be false.
Starting point is 00:11:19 And so it's a way to shroud a lot of injustice and to lessen accountability. Okay, for the sake of argument, let me now take the bank side. Yeah, absolutely. There are great machine learning companies like C3 AI, for instance, that will evaluate mortgage applications. for instance, and they sort of put you in a category and forgive me if I don't get this exactly right, but it's what my research points to me to is that they'll put you in a category in terms of likelihood to pay back a loan. Green, very likely, yellow, all right, kind of borderline,
Starting point is 00:11:53 red. Statistically, you probably won't pay it back. If I'm a bank, my job is to put money out and recover it, right? That's the whole point of being, let's say, a mortgage officer, is to loan that money out and do it with a high degree of confidence that you're going to get that back. And because that exists, the mortgage system can't exist because we give people all this money, you know, that they otherwise wouldn't be able to obtain to buy a house. So if a bank can use this software to be able to determine or be able to do this job more effectively through prediction, what's the problem? So even though banks are businesses and we want them to do well, we depend on them to do well, really.
Starting point is 00:12:42 I mean, we can see the financial crisis in 2008 and what happens when that doesn't happen. It's also a very important opportunity to give a loan to someone is life-changing or to deny a loan to someone is life-changing. And so there are also considerations of fairness going on. So if you have an algorithm that is not very accurate, and that it's not very fair, but it's profitable enough. If it was just about profit, then it would be fine. And there are some areas in which, frankly, it's just about profit, like maybe retail. And that's fine.
Starting point is 00:13:15 But because in this area, it also has to do with life opportunities, when you scratch the surface of those algorithms, for example, the markup had a very long story a few years ago about how two people who had applied for a mortgage had been denied, and when the Markup investigated, their file looked exactly the same or very similar to other two people who happened to be white. And it turns out that they were black. So you start getting all these correlations that are very unfair. And when you have clear and contestable criteria, one of the important things, there are two important things.
Starting point is 00:13:56 One is that it's usually causally related to whatever you want. So if you have $10,000 in your bank, that means that you've probably been good enough to save, and that means that your likelihood of paying back this amount of loan is high. But it's a causal relation, because sometimes machine learning picks up on spurious correlations. If you have three credit cards, you're more likely to pay back because it just happens to be that people with three credit cards have had better luck paying back. But the other really important thing is that if you don't fulfill the requirements, you know what to do to change a decision. So if you have only $9,000 in your bank, you know that you need a thousand more. And so you know exactly what to do to get the kind of answer that you want.
Starting point is 00:14:39 When it's a black box statistical pattern matching, you have no idea what you need to do to get the loan. And in some cases, the best way to get the loan would be to have a different race. And that seems not only unfair, but also irrational in some way. Yeah. So first of all, it's great having you here because we, have recently especially we've had a lot of people from industry on the show and i always love to feature the critics because it's important to hear your voices uh and and talk through and so don't don't take my uh pushback here as a as me being a stand-in for industry it's
Starting point is 00:15:19 absolutely you need to have this conversation and i'll you know the same way i'll ask you know sort of probing questions to the people in industry i'm going to ask you some some more um so let's just keep going with this. By the way, this is all sort of like old school machine learning, which is predictive. We're going to talk about more of the generative AI side of things in a moment. But let's keep going with this because it is rich and interesting, you know, to talk through. So there really is a question is the question is, again, like if this system helps a bank do a better job, shouldn't the answer be instead of throwing it out, investigate it for bias. If it's biased, fix that bias and if not let it run. Like for instance, I'll just talk through this three credit
Starting point is 00:16:06 card example. Okay, people with three credit cards for whatever reason are better at paying their bank back on the loan. Now, it might seem like totally like irrelevant, but at the end of the day, if you have three and you're, that's a statistical correlation to that you're more likely to pay the bank back, then that's actually maybe an additional loan that they could make, that they wouldn't make otherwise if they didn't have that data. So instead of saying this system is rotten, throw it out, shouldn't the right response be investigated for bias and inaccuracies, but overall maybe keep it? Well, there is value in that. Okay. For sure. However, even if you investigate for bias and keep it, there's still the problem that the prediction will affect that
Starting point is 00:16:56 life. So if you don't give someone the loan, they will do financially worse. And then, you know, you can claim accuracy. But accuracy at the price of creating that reality is not what we're looking for. It's not the kind of accuracy we're looking for. You can't give everybody alone, though. No, you can't give everybody alone. But the thing is, when you say, well, let's investigate for bias or investigate for inaccuracy, there is a limit to what we can do because we will never have the counterfactual. This is not a randomized control trial, right? And you still have the problem that without clear criteria, you can't make it a contestable process and you can't give the person the conditions under which they would get a different
Starting point is 00:17:39 response, which seems like an important thing to do. We are building systems that are very Kafkaesque, that are impossible to navigate. And I don't know if you've had this experience in which they are becoming so alienating and so Kafkaesque that people start having like magical thinking about the algorithm, attributing it believes and trying to figure out what it wants. And this is something that the philosopher Hannah Arendt's warned about. Because, you know, back in the 1930s, there was something similar with very opaque bureaucracies that were very random. And what it does to people is it creates a sense of alienation, a sense of not being able to understand the rules by which you are ruled.
Starting point is 00:18:21 And that is incredibly toxic for human psychology. You know, it is interesting because sometimes you do really get the bad outcome here. I think this is a real thing. There was a tweet over the weekend that somebody told JetBlue that they have a $230 increase in a ticket after one day, and that's crazy. And they're just trying to make it to a funeral. And the JetBlue account says, try clearing your cash and cookies or booking with incognito window. We're sorry for your loss. you're right that sometimes these algorithms
Starting point is 00:18:56 I mean there are times where they just clearly break down and they do become Kafkaesque or just like really tough to navigate and so many times there's no one to complain to there's no one who can understand you who can fix a mistake it's just a machinery that's right well I mean I think I think this really sort of gets to
Starting point is 00:19:14 it gets to one of the tougher parts of this which is that there's a lot of AI AI is being used here, whether it's predictive AI or whether it's generative AI. And a lot of this stuff will make decisions and you just have no idea where the decisions are being made. I mean, within AI there's this large field, probably should not large enough, but it's a large field called interpretability. And it's like, yep, the whole job is trying to figure out how the gender of AI systems work. And it's like you're putting this out there, people are relying on them.
Starting point is 00:19:52 And then, oh, I mean, I guess, I don't know, and along the way, you're trying to, like, figure out how they work. You're trying to interpret them. Like, isn't that backwards? Yeah, it is. And something really interesting, it's a bit of a metaphor. So I'm not saying it's exactly the same thing. But we can really learn a lot from ancient Greece and ancient Rome. Because our current, you know, we started this conversation by just pointing out how much we're relying on prediction.
Starting point is 00:20:16 And we've always relied on prediction. But I think there are times in history when that goes up and goes down. I think this is a piece. And another peak was ancient Greece and ancient Rome. And if you were to interview an ancient Greek person and say, what do you think about the Oracle of Delphi? They would say, oh, it's cutting-edge technology. It's the best we have to make decisions.
Starting point is 00:20:36 And how does it work? Well, we're trying to interpret it, right? And the same thing with astrology. It was a very technical thing about how to read the stars, how to measure the distance between the stars. So in a way, we've seen this before. Even though the technology is different, the political role is actually quite similar.
Starting point is 00:20:54 Okay, but this one also. All right, the Oracle of Delphi, right? The Oracle of Delphi didn't know anything. I mean, it's a story, right? But like, let's say you have an Oracle back in the day. Yeah. They'll be a great famine. They're pulling.
Starting point is 00:21:06 That's like, that's total bullshit. Like, they don't know what they're saying. But an AI system can actually predict that there will be a famine. Let me give you an example where prediction could be really good. All right. Google is, you know, say what you will about Google. One of the things that they're really working hard on in Google research is flood prediction, which we know, like, kills way too many people because it's totally preventable.
Starting point is 00:21:32 That's not, you know, do we know every single thing about how these machine learning algorithms make these predictions? Maybe not, similar to the way that we didn't, you know, know what an Oracle was going to make the prediction. But in the real world, we can tell. whether they're accurate or not. And they have been accurate and they have saved people's lives. That to me is a great form of prediction that AI can help us with. Now, is this something in full disclosure that Google holds up and says, look how good our AI is? You know, look over here while you don't look at the rest. Yes. But it doesn't change the fact that that's, I think, an undisputed good.
Starting point is 00:22:10 And this is part of why it's so important to have this conversation, which astonishingly we haven't had before as a society about sure there are kinds of predictions that are very good like weather prediction I look at my app every single day multiple times a day and I continue to do so but then there are other kinds of predictions that are clearly very problematic and the interesting thing is that there's no formula there's no way to say okay if you check this box this box and this box and it's fine it's a it's a public debate that we need to have and that's why it's so important with Google I haven't looked at the flawed thing but let's say that's correct That doesn't mean that every kind of prediction that Google does is equally valid.
Starting point is 00:22:50 So one very fun example is, well, not fun, but I mean, interesting example is when Google tried to predict flus and pandemic-type events, and it tried for years and years and years, it increased the complexity, it increased the data, and it could never do it, and eventually it shut down, partly because it was relying on people's searches, people doing searches, and when you search for symptoms, sometimes you search for symptoms because you're having the symptoms. Sometimes you search for symptoms because your sibling is having the symptom or because you're worried you might have them.
Starting point is 00:23:24 And so it was too confusing and they couldn't do it. And again, even though there is no checkbox and no easy way to tell which predictions are acceptable and which are unacceptable, one thing to take into consideration is, is this a prediction about a thing? Flods a thing? Yeah, thing.
Starting point is 00:23:43 or about something more social. Yeah. Well, let's go back to the pandemic example. So that's the first I'm learning about this Google example, but there are other versions of, you know, prediction, AI-based prediction that are helpful when it comes to a pandemic. Wastewater analytics, for instance, is really interesting where like there are companies. We've had them on the show, actually, that can, let's just take the COVID example.
Starting point is 00:24:09 They see how much COVID or how much virus there is in the wastewater. and then they look at the rate at which it's advancing, and then they can actually predict a spike. That could free people, because if there's no, if there's no prediction involved in, like, when the spike is going to be, like the answer might be locked down everybody. The other side of it is if you can predict that there's going to be a spike, you can be selective in when you want to shut things down versus open them up.
Starting point is 00:24:37 Yeah. And one important thing is the closer you are to the present, the more likely your prediction is reasonable. So if you hear somebody predicting what's going to happen in a thousand years, take it with a big, big pinch of cells. Or in fact, just kind of laugh it off. Now, the people that come on this show, they won't predict like a year into the future
Starting point is 00:24:54 because this AI world is changing so fast. But, yeah, a thousand is... But like long-termists, in effect of altruism, are thinking about the world a thousand years from now. We got to, yeah. Or, you know, some people are thinking about the world in 50 years or 25 years. So the more you ground yourself in the present, Like if you see the analytics of the wastewater right now, that is very useful information.
Starting point is 00:25:17 And depending on how much you know about the virus and how much experience we've had, you might be able to make some useful predictions. Now, that doesn't mean that we will be able to predict the next pandemic. It might be a virus that we've never seen before, and we don't know how it behaves. And one very important kind of warning is beware of people who will promise a prediction in exchange for huge service. Because the price to pay for surveillance, so mass surveillance, is a police state. It leads to authoritarianism.
Starting point is 00:25:48 And so often we're willing to surrender our privacy on promises that are never kept, that are very problematic, even if they could keep it. And we're sort of selling our democracy. Okay, but we need an example here. So, like, where is the surveillance happening? All right. that leads, where are these trade-offs? So, I used to live in New York City, and I hadn't been in the city for a while, and I've noticed how there are many more cameras than when I used to live here. Right.
Starting point is 00:26:21 Many people claim that, well, we need these surveillance for safety. The more surveillance we have, the more safe we are. Right. But that is empirically inaccurate. So the safest countries in the world are not the most surveilled ones. So Spain is one example. It has some of the lower statistics for any kind of crime, including homicide and so on, violent crimes. And it's not better surveilled than the US or the UK.
Starting point is 00:26:49 And in fact, the UK is the country in Europe that is most surveilled and it has more crime. And so that's one example. But it's important because when you have a protest and in particular a peaceful protest, it's very important to have anonymity. That is one of the bedrocks of democracy. And when you have cameras all over the place, and now with facial recognition being so easy to use, you are eroding one of the most important tools in the toolbox for democracy. Oh, I have so many questions about this. I mean, first of all, I'll just say, like, have you been to China?
Starting point is 00:27:27 I've read a lot about China. Okay. I was in Beijing for a day. Right. But that was enough to see the level of surveillance there. A lot of cameras. Now, there is a feeling of that society is safe, but it's not a society I'd want to live in. Exactly.
Starting point is 00:27:43 But there's like, there is some sort of spectrum there where like you probably do want some cameras up. So you can, like for instance, a security camera in some areas. That's good, right? Without any video footage, you probably solve less crimes. So isn't it a matter of like finding where on the spectrum you would live? you want to live. Yes, but I think we're getting it very wrong. I think that the practical question we're asking by having this amount of surveillance is
Starting point is 00:28:12 how much surveillance can liberal democracy take? And I'm afraid that we might find out. And I don't want to find out. Yeah. Because I don't want to live in China either. And the illusion of a world without crime ignores the fact that that would be a world very full of a very different kind of crime. Authoritarianism. Exactly.
Starting point is 00:28:31 Yeah. Yeah, that is a problem. So talk a little bit then about how generative AI sort of comes into this. I mean, I think we kind of hinted at it at the beginning and then sort of went to this earlier version of machine learning that's everywhere. But there is this now there's a trust towards chatbots. And yeah, you can really, I mean, you can really steer your life towards different outcomes based off of what chatchip-t tells you. and it's probably worth at least like thinking about that before diving at first like I often do. Absolutely.
Starting point is 00:29:08 And maybe just to end the previous topic, surveillance is important because the whole machinery of surveillance is there to feed the machinery of predictions. So these two machinaries are intimately related and that's why it matters. But we're not living in like minority report though. We seem to be walking in that direction. and I would like for us to walk in a different direction. I mean, but we're, let's just talk it through. We're not like arresting people on crimes they may commit, are we? No, but we're using predictive algorithms in the justice system for sentencing,
Starting point is 00:29:45 for many aspects in the justice system. And for the reasons that we explored with insurance or with loans or with jobs, that's very problematic. Talk a little bit about how those predictive algorithms are used in the justice system. And then we're going to get to this generative AI question, but you keep taking these interesting routes. Well, it depends on the place. They vary a lot.
Starting point is 00:30:04 Right. But some algorithms are used to assess the risk of a person committing a crime and, on the basis of that, deciding whether they might get bail, for example. Parole. Parole. All these things. All these things. Another case that worries me, that I think people are less aware of, is
Starting point is 00:30:28 whether, for example, an insurance company decides to cover a lawsuit, because they only cover a lawsuit if the case has a 51% chance of succeeding. And that makes sense in some ways. You can see the rationale behind that. But at the same time, when we make the justice system about probabilities, we're losing its principled approach. And so you make it very easy for the bad guys to get away with it, because you don't have to make it impossible for people to challenge you, or even very hard. You just have to make it slightly unlikely for them to win. And then you get off scot-free. So there are all kinds of distortions of justice when we introduce probabilistic thinking into an area that I think should be more based on principles. Okay, I got one more for you.
Starting point is 00:31:16 I just want to hear you talk it through. Why is there a right to privacy if you protest? I'll tell you what my fear is. All right. And it's good to just talk it through. Now I'll say something negative about algorithms. We have a world where algorithms drive people to extreme positions. The more extreme you are, the more likely you are to get play in the algorithm. And part of that is anonymity, right?
Starting point is 00:31:52 You can say these things as trial balloons with anonymity and sort of see how people respond to them, and then double down. And I think one of the fears with anonymous protests, and I'm just talking it through, I'm not taking a position here, is that it takes some of those online dynamics and brings them into the physical world, where if you're unidentified, the temptation to move to the extreme, or the ability to move to the extreme, grows further and further. I believe in free speech, but I also think incentives matter. So talk through what you think about this one. Absolutely. I have a paper called Online Masquerade, which I'm going to send to you. And the gist of it is that even though it's very
Starting point is 00:32:40 intuitive to think that way, when you look at the empirical data, it shows that people who are identified online tend to be more aggressive, and they tend to be more followed and more successful in that aggression. And, you know, we see this in the public domain. We know important politicians who put their name on things and who say very outrageous things, and it works. And so anonymity is not necessarily leading to more aggression or more toxicity. The second thing is that if you're in the public square and you're protesting, and let's say you're protesting peacefully and there's one person who is aggressive or who does something illegal, then of course the police can always arrest them.
Starting point is 00:33:23 But we don't need to have mass surveillance in order to have that, to have accountability. We didn't have mass surveillance a few decades ago. Right, but I'm not saying the mass surveillance. I'm just saying the anonymity part. Like, if everybody protests in a mask, you know, doesn't that? You think that leads to better outcomes than if they don't? Well, we shouldn't need a mask, because we shouldn't have this kind of surveillance that identifies us and creates the need for the mask.
Starting point is 00:33:54 Yeah, exactly. But like, even if somebody were masked and, you know, they break a glass or whatever, then have the police arrest them and take off the mask, you know? Right. But you can't, I mean,
Starting point is 00:34:04 okay, I'm just going to, I'll let that sit. I don't want to spend the whole day debating this. But it's an interesting, interesting to hear you talk about it. All right. Now talk about the gender of AI side, finally. Yes.
Starting point is 00:34:14 So if we have these systems of prediction in our world, I mean, again, like, people who are building gen AI tools, they care very much about prediction, predicting the next word, predicting outcomes, and when they can predict outcomes, then their agents can take the next step. Where is that leading? Yes.
Starting point is 00:34:31 So some authors make this distinction between predictive AI and generative AI, and I am not sure it makes sense, because both kinds of AI are essentially predictive. We might use them differently and they might look different. But essentially, they're both machine learning, and what machine learning broadly does is it has some data and it projects data that it doesn't have based on data that it does have, roughly, whether it's predicting the next word or predicting whether somebody's going to be a good employee or not. And in the case of generative AI, it's fascinating. I don't know where to start. It's fascinating how it got trained. I mean, that's one thing, you know, with copyrighted material, with personal data.
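Véliz's description of machine learning, projecting data it doesn't have from data it does have, can be made concrete with a toy next-word predictor. This is only an illustrative sketch: real language models use neural networks trained on vast corpora, not bigram counts, and the training sentence here is invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram model "projects" the next word it hasn't seen
# yet by counting which word most often followed each word in its training
# text. The training sentence below is made up for this example.
training_text = (
    "prediction shapes the future and prediction shapes our choices "
    "and prediction shapes power"
)

follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("prediction"))  # "shapes" followed "prediction" every time
```

The point of the toy is the one Véliz makes: the model has no notion of truth, only of which continuation was statistically typical in its data.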
Starting point is 00:35:10 And so that's one kind of thing. We can park it, but just notice it. And in the way it works, it's a very sycophantic system, as we know. It likes to please people, because that's the way it gets you to be engaged. And so it will tell you things like, oh, that's a brilliant idea, and it will continually validate you. And they were built, they were designed to do that. They were designed to make people feel satisfied instead of being designed for something else, for example, for being truth-tracking, which would be much more useful if, say, you're a researcher.
Starting point is 00:35:48 And I think sometimes we lose sight of that. One way to put it is, in the philosophy world there is this philosopher called Harry Frankfurt, who wrote a book called On Bullshit. And Frankfurt says that bullshit is very dangerous for democracy, because the truth teller and the liar are playing the same game on opposite sides of the court. The liar has to know what the truth is in order to lie, and care about the truth. The bullshitter doesn't care about the rules of the game. They're not playing the game at all.
Starting point is 00:36:19 And that's very toxic for democracy, because it's very hard to have a debate or a conversation with someone who doesn't care about the truth, who will say anything to just have the kind of reaction they want, with no regard for the truth. And that's essentially what a large language model is. It has no regard for the truth. It wants to please you. If what pleases you happens to be true, great.
Starting point is 00:36:42 But if it's not true, then it doesn't care one way or another. But is that true? Because, I mean, the labs have done a lot of work to ground these models in truth. And in fact, if it was a bullshitter the way that you explain, there would be very little economic value. But we can see now that there's real economic value. We don't know whether there's real economic value.
Starting point is 00:37:06 The jury is still out on that. But, yes, the labs have done more. You don't think? I mean, I guess, like, it seems to me like we're past that point where there's a real question here. Now, maybe it's not going to be broad economic value in a way that makes the boom appear justified, but if you look at places like coding, there are areas where we can see today that there is definite real economic value there. I don't know. I'm not saying there isn't. I don't know. Because sometimes these systems make mistakes that then are very expensive
Starting point is 00:37:37 to fix. And it's not easy to make the calculation of whether we are getting economic value. There was a paper recently in the Harvard Business Review that suggested that even when people think they're being more productive with AI, when you have researchers look at it, they're being less productive, because they're spending a lot of time fixing what the AI gets wrong and not noticing that. So I don't know. Maybe we do, but it's not crystal clear to me. Okay.
Starting point is 00:38:07 But even if we do. It's nice to have somebody with a different perspective here, rather than having the same people all believing the same thing. No, of course. And I grant that I don't know. You know, I'm not just saying something; I just don't know. But even if they do, where were we? This is really the key question about gen AI.
Starting point is 00:38:26 Your argument is that it's a bullshitter, and I will just throw this out there, and this is something I really do believe: these companies are spending lots of hours and dollars trying to ground these models in reality, because if you do that, they become much more useful. And they've become much better at it over time. Absolutely. But the interesting thing is the way they've become much better has been by getting away from this probabilistic and statistical thinking. So, for example,
Starting point is 00:38:56 when you start chatting to a chatbot and then they realize that what you're looking for is for something, say, in a manual, in a PDF, then they refer to the PDF, and that's how they ground themselves in reality. Or when they realize you want a calculation, then they plug into a calculator, because these systems cannot calculate, as we know. And so that's interesting, that the way to make it better is to move away from this probabilistic thinking. So part of my criticism is not about AI or any kind of AI, but about prediction, about how we're using prediction and how naive we've been about using prediction. And I think if these systems had been designed differently from the start, they would have needed fewer patches. And how do we think about this going forward so that we
Starting point is 00:39:39 design systems from the start to be truth-tracking rather than fundamentally about engaging people for profit? But how impressive is it that they know, okay, actually my knowledge stops here, I should use the calculator, or I should actually go look in the PDF? And I would say the argument that the model makers would make is you can't have the tool calling before you have the base model. And it took a couple of years for these base models to get smart enough to know when to call those tools. That sounds great. And I'm on board. But in practice, they're still not quite there. So for example, I'll give you a very recent example. It's weeks old. If you ask one of these chatbots, I have a box and I'm going to put two bunnies in it, and then five months later,
Starting point is 00:40:29 I take five bunnies out, how many bunnies are there? It will say minus three bunnies. So they still don't have enough understanding to always figure out what they need, right? In this case, they might have gone to a calculator, and that wasn't appropriate, right? Because they don't understand that bunnies can reproduce. So yes, with new ones. Yeah. I mean, there are examples of people asking the most advanced models, like, how many P's are there in strawberry? And it's used to being asked how many R's are there in strawberry, and it gets it wrong. Exactly. Let's just end this segment. We do need to go to a break, but let's just end this segment sort of
Starting point is 00:41:08 with your broad thesis here, and you tell me if this is the right way to encapsulate it. We live in a world where there's a lot of prediction, more prediction around us all the time. Prediction in the AI models, prediction that's influencing the jobs we get, whether we get a loan, all these things. And rather than just take this notion of prediction for granted, we should probably pay attention to the nature of those predictions themselves. Is that sort of what you're saying? Yeah, exactly, because predictions can be weapons of power. They can be power plays in disguise, and we need to be less naive and smarter about them.
Starting point is 00:41:48 Okay. Well, that is all being put on steroids in these prediction markets because oftentimes you'll see a prediction in a prediction market and the question is, is that somebody manifesting an outcome? Is it someone with direct knowledge of an outcome or is it actually just a market for what might happen? We'll cover that when we come back right after this. Starting something new isn't just hard. It's terrifying. So much work goes into this thing that you're not entirely sure will work out, and it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anybody would listen.
Starting point is 00:42:21 Now I know it was the right choice. It also helps when you have a partner like Shopify on your side to help. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S., from household names like Allbirds and Cotopaxi to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand's style.
Starting point is 00:42:45 Get the word out like you have a marketing team behind you. You can easily create email and social media campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into why-nots with Shopify today. Sign up for your $1 per month trial at Shopify.com slash big tech. Go to Shopify.com slash big tech. That's Shopify.com slash big tech.
Starting point is 00:43:09 I've interviewed a lot of great tech founders on this show, and one surprisingly universal challenge comes up again and again, finding the right domain name. It's something I ran into myself when launching big technology. The names you want are often taken, and it's tempting to just settle and move on. But the founders I respect most don't settle on fundamentals, and your name is one of them.
Starting point is 00:43:30 It should immediately signal what you actually build. That's what I appreciate about dot-tech domain names. It just makes sense. It tells the world, your customers, your investors, anyone Googling you, that you're building in technology. Clean, direct, no qualifiers. And I'm seeing more serious startups lean into it. Nothing.tech, Onex.tech, Aurora.tech, CES.tech, Ultra.tech,
Starting point is 00:43:49 Alas.tech, Neuron.tech, Blaze.tech, and so many more. If you're building something tech-first, don't settle. Secure your dot-tech domain from any registrar of your choice and make your positioning obvious from day one. Your IT team wastes half their day on repetitive tickets. And the more your business grows, the more requests pile up. Password resets, access requests, onboarding, all pulling them away from meaningful work. With Serval, you can cut 80% of your help desk tickets.
Starting point is 00:44:17 While legacy players bolt on AI, Serval was built for AI agents from the ground up. Here's the transformation. When a manager used to onboard a new hire, the old process would take hours. They'd ping Slack, email IT, wait on approvals. Meanwhile, new hires sit around for days. With Serval, a manager asks to onboard a new hire in Slack.
Starting point is 00:44:33 The AI provisions access to everything automatically in seconds with the necessary approvals. IT never touches it. Companies like Perplexity, Verkada, and Mercury automated over half their tickets immediately after setup. If I needed this product, it's exactly what I would use. Serval powers the fastest-growing companies in the world, like Perplexity, Mercury, Verkada, and Clay. Get your team out of the help desk and back to the work they enjoy.
Starting point is 00:44:56 Book your free pilot at serval.com slash big tech. That's S-E-R-V-A-L dot com slash big tech. And we're back here on Big Technology Podcast with Carissa Véliz. She's an Oxford philosopher and the author of Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI. Great title. So what do you think about prediction markets, Carissa? They scare me. Okay.
Starting point is 00:45:21 That is, after our first-half conversation, I'm not surprised. What particularly about them scares you? So the argument for having them is that they can be a source of knowledge, right? When people bet with their own money, and if they get it wrong, they lose, they'll try to get it right. And when you have a lot of people placing bets, in theory we can harness the wisdom of the crowds, all of which sounds great. But it assumes that prediction is a kind of quest for knowledge, and it doesn't consider that sometimes prediction is a quest for power. So, for example, if you want to influence public perception and you have enough money, you can bet heavily on something or someone to make it look more popular. And we already have examples of politicians betting on themselves.
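The dynamic Véliz describes, heavy bets moving the published probability, can be sketched with a toy automated market maker. This assumes Hanson's logarithmic market scoring rule (LMSR), one common design for prediction markets, not necessarily what any particular platform uses; the share quantities and the liquidity parameter below are made up for illustration.

```python
import math

# Toy LMSR prediction market: the "price" of a YES share is the market's
# implied probability of the event, and it moves as shares are bought.
def implied_probability(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Implied probability of YES, given shares sold and liquidity parameter b."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

print(round(implied_probability(0, 0), 2))    # fresh market: 0.5
print(round(implied_probability(50, 0), 2))   # modest YES buying: 0.62
print(round(implied_probability(300, 0), 2))  # one whale buying YES: 0.95
```

A bettor with enough money can push the implied probability toward certainty regardless of what they actually know, which is exactly the manipulation worry: the price reads as aggregated knowledge even when it is a power play.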
Starting point is 00:46:03 Great use of campaign funds. Yeah, exactly. Make it look inevitable. That's what every campaign tries to do. Exactly. And when you start having these prediction markets have deals with newspapers, in which newspapers are reporting on the prediction as if it was a fact, then it gets to be a really smart way to invest your campaign funds.
Starting point is 00:46:22 Another example which is concerning are the ones in which there are many ways to make a prediction come true. And one of those ways is to make it come true after the fact. So I don't know if you read that there was a case in which an Israeli journalist had reported about a strike in the conflict. And some people bullied him to try to get him to change his story, because they stood to win $900,000 from a bet. Yeah.
Starting point is 00:46:53 It's like fantasy sports. Yeah. Another case that is concerning is that six anonymous accounts earned $1.2 million on a prediction market betting on the attack on Iran, and some of those wallets were funded hours before, which suggests that they might have had insider information. And if they had insider information,
Starting point is 00:47:15 did that conflict of interest lead to a different kind of decision? Another concerning case is cases in which an adversary might be using those prediction markets to inform their own tactics. And so it might change conflict itself. And even when there isn't any bad player, even when it's just people who are well-intentioned, I worry that many people thinking that there's going to be a war makes it much more likely for there to be a war. Because the other country interprets it as a threat, and then they escalate, and then we escalate in response, and suddenly it's a spiral that nobody wanted to happen.
Starting point is 00:47:52 But our expectations shape the future. Why do you think people are so enthralled by these markets? I mean, they're having a moment because they've been accurate, and I think better than the polls in some cases. But just the outsized attention and interest in them is very interesting. What do you think is behind it? You're a philosopher. I don't know.
Starting point is 00:48:17 I don't know. These are hypotheses. But one hypothesis is that we have truly become so accustomed to thinking in these betting terms that we are exporting that kind of mentality to more and more spheres of life, and I think that's a very bad thing. It also has to do with gamifying life. And there's something to me very disturbing about standing to earn money from a bet in which, if you win, somebody's going to suffer greatly,
Starting point is 00:48:48 like in the case of a war or something like that. Because you might say, well, you know, prediction markets aren't that different to the stock market, right? It's also like a kind of bet. But the stock market, when you invest in a company, you're actually contributing capital to that company in a way that is an important contribution to society. Whereas the prediction market is just a bet.
Starting point is 00:49:09 And the only value they might have is if they're accurate. But accurate at what price, and accurate in what sense, and accurate when? And there's a lot of noise. And even if in one instance you might say, in this case the prediction markets were more accurate, well, what does that mean? What does that really mean? And it doesn't nullify all of the other problems. We don't want to gamify everything. But I think maybe another reason why they're so popular is because there is this general sense that we're living through times of high uncertainty. And that, you know, leads people to be anxious. I can feel it as well. But I would like to invite people, when they feel that anxiety about uncertainty, to realize that uncertainty is actually good news. Because it means that the future is not written, and that means that we can intercede in it, that we can influence it. And that is great news. If you knew exactly what was going to happen tomorrow,
Starting point is 00:50:05 you probably live in a police state. Yeah, but then, I mean, even if there's a prediction market out there, you could probably also intercede. I don't think you have to give up. Like, same thing with political polls, right? You could say the same thing about political polls as about prediction markets, that they become self-fulfilling prophecies, because they do in many cases. Yeah. And why do we do political polls? In a way, we do it for entertainment. And is that good enough? I'm not sure it's good. Another reason might be, well, you might want to be well informed, because depending on how things are going, you might vote one way or another, right? Tactical voting. But I'm not sure we should incentivize people to be tactical voters.
Starting point is 00:50:47 The ideal democracy, I think, is one in which people vote according to their conscience. And that says more about what they want. And that is more democratic, it seems to me, than when we push people to think tactically. Yeah, I don't think I'm going to stand on the table for political polls. Okay. They kind of annoy me also. Fair. All right, let's end with this. I mean, you have a perspective that you've got to use comedy in this era,
Starting point is 00:51:16 and that's sort of a counterweight to some of these ills that you see. Talk a little bit more about that. Yeah, it's really funny, because my first book, Privacy Is Power, is kind of gloomy in a way, because at the time, everybody was so excited about tech and not seeing surveillance, and I felt that we needed more of a warning. But now it seems like we're in such a gloomy space, in which so many people are making horrible predictions about the future.
Starting point is 00:51:42 And I talk with my students, and sometimes I don't know if young people can even imagine a bright world. And if they can't even imagine it, how are we going to get to that kind of bright future? So I wanted to emphasize the good things that we have. And two very good things that we have, and very important resources and tools, are, first, the analog world. Sometimes we forget about it.
Starting point is 00:52:04 We are so dazzled by the digital that we forget the world of things, of your favorite coffee shop and your favorite bar and the people you love and your dog, and the ecological world and trees and rivers, and to ground ourselves there and cherish and protect that. But the second thing is humor.
Starting point is 00:52:21 And humor is quite important, not only because it's a way to make life more fun and get through the hardest parts of life better, but it's also a very important tool in the toolkit of democracy. When you lose your sense of humor, you're probably also losing some amount of freedom and democracy. For example, Milan Kundera, the novelist, wrote a novel called The Joke, making exactly this point, given his experience with communism. And so one way to confront all these gloomy predictions is, first, noticing that they're predictions. Predictions are not facts; they can be defied. And thinking, okay, is that the future I want? And if not, what am I going to do to create the future that I want to live in? But secondly, to treat it all with a little bit less seriousness. I'm not saying be mean or anything, but just laugh a little bit about the absurdity of life. And humor is also a kind of, you know, a kind of intelligence. It's a kind of noticing the absurd and noticing what's off. And one example I give
Starting point is 00:53:28 in the book is that of Seinfeld, because it's also about defying predictions. When something's funny, it surprises you in a certain way. Part of what makes a joke funny is that you're expecting something and then you get something else, and that makes it funny. And Seinfeld was brilliant at this, is brilliant at this. And the show is a very interesting case, because it's exactly the opposite of what an algorithm would select. The show was incredibly unsuccessful as a pilot. Focus groups thought that it was weak, and people didn't like it. It wasn't what people wanted to watch.
Starting point is 00:54:05 So if we had had algorithms back then selecting what people want to watch, Seinfeld would not have been one of those cases. But there was one executive at NBC who really believed in the show and championed it. And the first few seasons were only a bit successful. It had like a niche following, but it was still small. And then it took off. And part of why it took off is because it changed people's sensibilities. It changed our sense of humor.
Starting point is 00:54:33 And that's part of what great comedy or great art or great literature does to us. It makes us look at the world differently. And when we use prediction too heavily, when we only predict what's going to be successful based on what has been successful in the past, we are missing out on those innovations that will make us look at the world anew and differently. Yeah, and to your point, the one thing that LLMs do the worst is humor.
Starting point is 00:54:59 They cannot make jokes. And it's because, I think, like, as you point out, they're just used to the average of averages and not throwing curveballs. Exactly. And also because there's no one there. There's no one being irreverent towards power. And part of comedy is that.
Starting point is 00:55:13 It's like the court jester. What makes it so funny is that, you know, they are challenging the king in a way. And they are the king. And they are the king, yeah. The book is Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI, Carissa Véliz. So great to have you. Thank you for coming on the show.
Starting point is 00:55:32 This is fun. Thank you so much for having me, Alex. It was great. Awesome. All right, everybody. Thank you so much for listening and watching. We'll see you next time on Big Technology Podcast.
Starting point is 00:55:59 Sounds like Ojo time. Let's play. Feel the fun with Play Ojo. The online casino with all the latest slot and live casino games. What you win is yours to keep with no wagering requirements, instant payouts, and no minimum withdraws. Hey, I just won. Woo-hoo! Feel the fun!
Starting point is 00:56:16 Play Ojo! Honey, forget about the lasagna. Let's celebrate! 19 plus Ontario only please play responsibly. Concern about your gambling or that of someone close to you, call 16-3-3-1-26-100 or visit Connexontera.ca. Before you knew what a stock was, you traded snacks. cards, turns. You knew what something was worth
Starting point is 00:56:32 because you felt it. That instinct to trade didn't disappear. It just grew up. TD Easy Trade taps into that instinct so you can build something real for the future with no minimums, no monthly fees, and 100 free trades. You already got this.
Starting point is 00:56:49 Because you are made to trade. And TD Easy Trade is made to help. Download it now.
