StarTalk Radio - Winning Money with AI – with Matt Ginsberg

Episode Date: March 4, 2022

Can artificial intelligence predict the future? On this episode, Neil deGrasse Tyson and co-hosts Chuck Nice & Gary O’Reilly explore algorithms, computing, and how to win Warren Buffett’s March Madness money and beyond with AI expert, Matt Ginsberg.

NOTE: StarTalk+ Patrons can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/show/winning-that-money-with-ai-with-matt-ginsberg/

Thanks to our Patrons Robert Bork, Nick Fugal, James Trager, Brian S, Nightfall, Chris Hernandez, Mithat Sezgin, Luke Fertal, Rhett Hogan, and Patrick Creech for supporting us this week.

Photo Credit: Tracy O, CC BY-SA 2.0, via Wikimedia Commons

Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.

Transcript
Starting point is 00:00:00 Welcome to StarTalk. Your place in the universe where science and pop culture collide. StarTalk begins right now. This is StarTalk Sports Edition. And of course, I've got Gary O'Reilly. Gary. Hi Neil. Gary is the only street cred we have for this programming, having been a professional athlete himself
Starting point is 00:00:28 and a professional sportscaster over in the UK in the soccer realm. Yes. And Chuck Nice. Always good to have you here, man. Always a pleasure. Yeah, the one person who has done no organized sports in his entire life. I mean, actually, you could have just said has done nothing. Nothing.
Starting point is 00:00:46 And just left it there. And you would have been spot on. Spot on. So, Gary, March is looming. Yes. And so what do you have in store for us today? All right. Perfect.
Starting point is 00:01:02 A little bit of March madness. The NCAA's March madness, college basketball is approaching. And if you work, and I'll say this in an English way, and then again in American, if you work for Berkshire Hathaway, or Berkshire Hathaway, right, you nail a bracket, because it's all about brackets, brackets, brackets. You get some of Warren's money. Really? Yes. Who? He said this? Yeah. So you have to be a company employee. Sometimes they open it up to the public, but generally it's a closed shop for Berkshire Hathaway employees only. You can get up, you can get a hundred thousand if you do a decent job. Or if you nail a perfect bracket, you get a million dollars a year for life.
Starting point is 00:01:45 Right. In other words, play the lottery, people. Just go with your state lottery. Chuck, I think he knows nobody's going to win that. So he's letting you do it, and he ends up with more money at the end. Yeah, that's my point. It's like you got a better chance of winning your state lottery than you do of picking a perfect bracket.
Starting point is 00:02:10 So that's just it. How do you pick a perfect bracket? Now, if it's beyond our human brains to achieve that, and we know Warren's a very, very wealthy man because he doesn't give his money away. So he's holding on to his money. But is there a way? Is there a way to break the system, break the bank and get Warren's dough? Possibly. But for that, we need an AI expert. I know of only one person for that. And he's a friend of StarTalk, has appeared before. And that's one and only Dr. Phil.
Starting point is 00:02:43 Only Dr. Phil. You'll have to explain that in a moment. Matt Ginsberg. Matt, welcome back to StarTalk, dude. Thank you. It's always a blast to be here. Yeah, so we call you Dr. Phil because those in the know know that you programmed a computer to solve crossword puzzles, and you named that program Dr. Phil.
Starting point is 00:03:05 That was just Phil, F-I-L-L. Brilliant. How's that working out for you? I see you making crossword puzzles. So Dr. Phil beat all humans and it and I retired from Crosswind. Ooh. Yeah, well, once you've beaten every single human,
Starting point is 00:03:30 it's time to go on to something else, right? Well, it's – So you've got to get a pedigree out here. Let me finish the man's pedigree, please. Okay. He's got a PhD in mathematics from none other than Oxford University. Yeah. And I don't think that's Oxford, Mississippi.
Starting point is 00:03:45 I think it's Oxford. Oh, no. UK. A scientist, entrepreneur, author, and you're a specialist in AI, especially when it comes to solving hard problems rather than just sort of taking over the world. And now gainfully employed at Google.
Starting point is 00:04:04 So what do they have you doing at Google? By the way, of course you're working at Google. Of course. Right. We could have guessed that, I guess. Well, you know, I'm building Chromebooks. That's a terrible misallocation of resources. That's all I can tell you.
Starting point is 00:04:20 They bought me in. I'm an AI specialist. And they were like, all right, we need you to build some Chromebooks. Yeah, because everyone at Google is brilliant. He's at the low end. So I actually, I repair Chromebooks. I don't get to build Chromebooks. I'm aspiring so that a few years from now, if I get a job repairing Chromebooks, they'll move me up to actually building Chromebooks. And in my spare time at Google, I work for Google X, which is a moonshot factory. We basically try and solve incredibly hard problems, any one of which is likely to fail. But between them all, we expect to deliver a reasonable amount of useful technology to the real world.
Starting point is 00:04:59 So that's a name inspired presumably by the XPRIZE, which is money given to impossible tasks solved by brilliant people. They haven't told me where it came from. I think it's been around. It may have been around for a long time. I do want to say. No, trust me on that one. I think I got that one. I think you probably did.
Starting point is 00:05:20 And I just want to say that all of this disclaiming, Chuck saying I have never played a sport. He sounds like he's never even watched a sport. When I was at Oxford, I was the captain of the bridge team. And we actually tried to get bridge recognized as a real sport so that we could get Oxford Blues and put them on our shirts and be official athletes, they were having none of it. So Chuck, you and I are in the same boat here as far as... Okay. Well, then I'm in very good company. I feel better. All right. So tell me then about game theory broadly, AI specifically invoked, to predict brackets in athletic events. And the most famous of all brackets, as Gary said, is the March Madness with college basketball.
Starting point is 00:06:17 You know, when I think of game theory, I think of elements of chance and how you can sort of maximize those to your favor. But how much do we think of athletic contests as exercises and chance? I think, so I was thinking as Gary introduced this topic and he said, you know, can we use AI to build a perfect bracket? And I can say, I used to say a lot of time here. No. That's it. No. And. Okay, that's the end of the show.
Starting point is 00:06:48 Thank you very much. That's great. This has been a real blast. So I think what you have to understand, when you're betting, whether you're betting against Berkshire Hathaway or you're betting against Vegas or you're betting against your buddy who lives down the street, you have an opponent. And
Starting point is 00:07:09 AI can make you better. But when you try and make a perfect bracket, your opponent at some level is like God. You're actually trying to predict the future. Not better than the next guy, but perfectly. AI can't make you better than God or as good as God or anything.
Starting point is 00:07:28 But what it can do is it can make you potentially better than the guy who lives down the street so that you rate to make money. It can make you maybe better than the guys in Vegas who are quoting me up. So you go to Vegas and you rate to make money. And I think that's where we can potentially add value. Okay. Very important point to distinguish there. Just being better than your opponent versus having all knowledge of all of creation. You just want to make some money tomorrow. That's all. So could you explain just in as few words as possible, what do brackets mean?
Starting point is 00:08:09 Just, I don't know why someone who would be listening to this program would not know about brackets, but just, I'm an educator. I just want to make sure we're all on the same page. Describe the concept of brackets. So the March Madness Tournament. They're, roughly speaking, they're 64 teams. And it's a single elimination. So every game, one team is eliminated. If you want to get down to a winner, you got to play 63 games to eliminate 63 of the 64 teams. So they divide it, and they try and set it up so that better teams are playing worse teams, at least initially. And that way, the better teams
Starting point is 00:08:44 sort of have an easier path, and they get to the end, and the better teams are playing worse teams, at least initially. And that way the better teams sort of have an easier path and they get to the end and the better teams keep playing. And if you can predict who's going to win each of these 63 games, you've nailed the bracket in the garage. If you get one wrong, a team
Starting point is 00:09:00 that you thought was going to be eliminated plays on and you didn't get the bracket right. So if every game were a coin toss, they're asking you to predict 63 coin tosses in a row correctly. And your chances of doing that are, you know, two coin tosses is one in four. Three is one in eight. Sixty-three is one in some giant number that I'm
Starting point is 00:09:25 not even going to try and compute. You can say it. It's astronomical. Say it. It's astronomical, Neil. It's just astronomical. Thank you. So, if they have 64 teams, you know, normally we
Starting point is 00:09:43 see it like as a diagram. There's like 32 teams on one half, 32 on the other half, and then they whittled down and then you have 16 and 16, 8 and 8, 4 and 4, 2 and 2, 1 and 1. So these are, this is the bracketing. Is that how we can think about it? It is. And people, people think of it as the first,
Starting point is 00:10:02 you're going to play 63 games. The first 32 games are the first round. They sort of happen simultaneously. After those 32 games, 32 teams have been eliminated. So you've gone from 64 teams to 32. The second round, you can play
Starting point is 00:10:17 16 games simultaneously. That brings you down to 16 teams. The Sweet 16, then the Elite 8, and then the Final 4, and then the Final Four, and then the Finals, and then... But if you know that one team is going to win, you don't care about who else loses, right? So why do you have to predict all 64 games correctly? Because that's the only way, that's the only way Warren will give you his money, is if you get them all right so picking who's going to win that's not so bad i mean at worst that's one in 64 it's random you have a one in 64 game but picking who's going to win every
Starting point is 00:10:52 game that's 63 coin tosses which is what i think it's called astronomical okay I just calculated the odds if all each were a coin flip, what it would take to predict 64 outcomes. So I get basically 2 times 10 to the 19th power. So that's— A lot. A lot. The answer is that's a lot. That's— Sorry. So that's 20.
Starting point is 00:11:27 Let me get it. 20 quintillion is what that is. One in 20 quintillion. Yes, that sounds right. So I'm on the right page. I'm on a good page right now. Chuck, Gary, you take it over. Yes. All right.
Starting point is 00:11:41 So if we say that 20 quintillion is just too much for us to really grapple with, what will we need, Matt, to understand and comprehend to build a system to do really well at brackets? Are there some fundamentals that you have to bake into an algorithm to be able to bring forward something that would have some success? So my question is, what does really well mean? And this is why your opponent matters. If really well means getting it all right and getting Buffett's money, the answer is pick another project. It's just too hard. If doing really well means getting enough of them right that you win money in Vegas, now we can talk because now there's all sorts of stuff I can tell you about. So in terms of why it's so hard, imagine you've got two reasonably evenly matched teams playing.
Starting point is 00:12:42 And you can really predict it. You can look at the individual matchups and you can say, oh, this is actually going to go really well for these guys. You look at who's injured. But what you don't know is that halfway through the second quarter, somebody is going to slip on a little bit of moisture on the floor and twist their ankle and be out. You can't predict that. But the game can easily hinge on it.
Starting point is 00:13:09 So there is just stuff that makes this more like a coin flip that you just can't figure out in advance. Now, the good news is both teams are probably equally likely to slip on that motion. So if you conclude team one is 80% to win the game, then team one probably is about 80% to win the game. But if you have to get 63 right in a row, 80% isn't good enough.
Starting point is 00:13:35 Because you just need to be sure. Is there no way, Matt, to factor in? I mean, in a minute, I'm going to ask you what an algorithm is. Because I think we all, me, if I look at it from my own personal point of view, I can say algorithm. I know they're used, they're in my smartphone, they can dictate trends on a YouTube channel, all those things. But what is an algorithm? I talk about it, but I don't know exactly what one is, what you would bake into an algorithm. Do you have to have the outcome set in your mind in advance and then build it backwards from there?
Starting point is 00:14:10 So if we dismantled it like a car engine, what do we get with an algorithm? So algorithms are simple, right? They're just telling an idiot, a computer, but fundamentally an idiot, telling an idiot a sequence of steps to take to get the answer to some problem. A really competent idiot, yes. Well, I don't know the competent. They're not really actually that competent.
Starting point is 00:14:40 You can't ask them anything sort of coherent. But they're really fast. But they really have going for them. My favorite line for computers from decades ago was, a computer doesn't do what you want it to do. It does what you tell it to do. Yeah, what you program it to do. And that's not always what you want it to do.
Starting point is 00:14:57 Yeah. So John McCarthy was one of the people who invented AI. And he said AI was the attempt to get machines to do badly what people do bad. That's brilliant. Guys, we've got to take a quick break, but when we come back, we'll pick up that conversation on algorithms.
Starting point is 00:15:17 But we also want to know, in what detailed way is AI operating to make these decisions when StarTalk Sports Edition returns. We're back. StarTalk Sports Edition. We're talking about bracketology, if there is even such a word. We have friend of StarTalk
Starting point is 00:15:46 Matt Ginsberg, who's all about AI and betting and winning bets, not against God, apparently. This was first disclosed in this episode, but against your opponents. So, if you beat God, that's a whole other kind of... What else you got cooking in your basement there?
Starting point is 00:16:02 So, let's pick up the algorithm question. Gary, where were you? Where did we leave off? Well, from my own point of view, I mean, I look at it, I say I'm familiar with algorithms, but I don't actually know what one does, how it works, why it works.
Starting point is 00:16:17 So I just wondered if we had that, seeing as how it's difficult to play God as an algorithm and produce perfect brackets. Maybe we could get Matt to just break down a little bit of what an algorithm can do and why it does it and how it does it. And how can it be predictive? It's one thing to have the machine do what you tell it to do, but how do you tell it to predict something that maybe has not happened? So the thing to remember here is that, like I said, algorithms are incredibly simple. They're just doing what you tell them to do.
Starting point is 00:16:56 I could say, I want you to predict the weather tomorrow. And if I lived in the Sahara Desert, I could write an algorithm. It would say sunny and hot. Every day, just say sunny and hot. You'll be fine. But if I live in Pittsburgh, where it's notoriously difficult, you need much more sophistication. And somebody else decides, I want to use this algorithm to predict the weather.
Starting point is 00:17:30 The algorithm is just a series of steps. The fact that it happens to predict the weather is sort of magical. It's what makes one algorithm better than the other. If you decide, I want to predict who's going to win the Super Bowl, you need an algorithm. You're going to put in all these steps. And at the end of the day, it's going to generate an answer. If the algorithm is good, it'll tend to predict the Super Bowl. If the algorithm is bad, it'll predict randomly. If it's incredibly bad... There are two elements to this. One of them is, are you doing the right calculations with the available data? Another one is, do you
Starting point is 00:18:12 have all the data you need in order to predict with accuracy? So do you know at any given time, if you have all the proper information, you're just not smart enough to stitch it together to make the prediction? You always know that you don't have all of the information. So if you go back to my example. That's very humble of us all. You don't know that there's moisture on the floor. You'd love to know that there's moisture on the floor. You'd love to know where it is, but you don't know that.
Starting point is 00:18:40 Because depending upon where it is on the court, one team is going to be significantly more likely to get hurt than the other. I would love to know that. I would love to know that so-and-so was out partying last night and didn't tell the coach. I'm not going to know that. So there's always a ton of information that is inaccessible. And the question is, how good a job can you do with the information that you do have? can you do with the information that you do have? And as AI has gotten more sophisticated, it has become the algorithms that the AI scientists produce have become more capable
Starting point is 00:19:14 at drawing effective conclusions from the data to which they had access. But they're still just doing a bunch of specific steps. They have no idea what's actually going on. They're just add register 5 to memory location 37 and put the result in memory location 84. And they have a huge... And they have an astronomical number of steps like this that they're executing sequentially. And at the end of the day, one of the steps says, report the answer.
Starting point is 00:19:52 So they report the answer. And then the next step probably says, wait until something happens. But they're just running steps. Now, one of the interesting questions is whether we're actually doing the same. Can you take the view that our brains are actually just neurons firing and the neurons don't know what's going on? So it's very curious. You know, we could just be running. Okay, but Matt, I'm distinguishing what you just said for an algorithm duly answering Gary's question from what I think of, at least in my romanticized view of AI,
Starting point is 00:20:27 where the AI writes its own damn algorithm and doesn't even need you. Uh-oh. Talking about sentience now. So... This is when they take over. So a division of Google called DeepMod has just produced code that will write code. And on the internet,
Starting point is 00:20:44 there are these programming competitions where people go and they write some code to solve a problem that's described in English. And these guys at DeepMind have written a program that actually reads the English description and produces code. And it's about average in terms of how good it is. So the people entering these programming competitions,
Starting point is 00:21:05 it's about average. I don't, again, it doesn't know what it's about average in terms of how good it is. For the people entering these programming competitions, it's about average. I don't, again, it doesn't know what it's doing. Does that matter? Maybe. Well, then it's not intelligence. It's just a fast human brain doing calculations. So if code can write code now, and we call them machine learning, right? At some point, it will evolve to write better code,
Starting point is 00:21:30 just like we've evolved to do better things with ourselves. Let's go back to Warren Buffett's money. If we're not getting our hands on Warren, this is all about the dough, all right? If we don't get our hands on Warren's money, what if we take a trip to Vegas with our nicely polished algorithm? Are we going to fare any better? Rather than try and predict the 63 coin tosses, why don't we just go in there and have a little bit of a smart move on that part of the world? Because they'll break your knees. You know what, Chuck? Here's my thought. My algorithm, right? I've invented my algorithm
Starting point is 00:22:08 in the garage, and I'm really proud of it, and I'm going to take it to Vegas, and I'm going to be really clever. Do you think the guys in Vegas don't have their own garages and their own algorithms? And so here we go, Matt. If I started to use an algorithm to win against the bookies, do they have their own sentinels, their own AI sentinels waiting for guys like me to turn up? They certainly, Chuck is sort of right. If you start winning money regularly and in large quantities, yes, they will notice. They absolutely will notice
Starting point is 00:22:48 and they will be very concerned. I'm not saying they'll break anybody's knees or that they won't, but they certainly will notice and they'll fret about it. I'm going to chuck on this one. They're going to break your knees. Yeah. I'm pretty sure my knees are going to be in a bad place. Not even right now, because now they're using auto shufflers to make it almost impossible for you to count cards because they shuffle after every hand with eight decks in the shoe. But before that, even card counters, they knew you were counting cards
Starting point is 00:23:24 because the eye in the sky was like, that guy right there keeps winning. All right, now let's go and focus on him even tighter and let's give him more scrutiny. And so you would have to be kind of a manipulative genius to win enough and then lose enough and then win enough and then over a period of time, you walk out with the money. And who can really do that? So there's a distinction here that I think is incredibly important that we're not making.
Starting point is 00:23:54 And it's the difference between beating blackjack, where you are taking money from the house, and going in and playing poker, where the house is just taking their raking. You know, they're taking a small rake off every hand. And if you win money, you're winning it not from the house, but from the other player. And sports is OPM, other people's money.
Starting point is 00:24:16 And sports is in this weird limbo because let's say that for some reason, everybody, and I literally mean everybody wants to bet on the Bengals in the Super Bowl, the odds are going to change because the house doesn't want to, they want to just, they want to make their rake. They don't want to be taking a strong position on who's going to win the Super Bowl. So if you, at some level, to win in sports betting, you don't have to be so much smarter in the house. You have to be smarter than the other people who are betting on sports betting. They're never going to find out you're winning. They're not going to come and break your knees.
Starting point is 00:24:52 So in sports, there is an element of both. Vegas does set the line. And if they get the line wrong and you come running in and you're constantly beating them because they set the line and they're making a mistake, they'll notice. But they're also setting the line based on everybody else's bet. And at some point, you're betting against the other players as well, and it becomes a little safer. Wow. Okay. But in March Madness and in football as well, but any of these sports betting,
Starting point is 00:25:19 the common ones that you find in the office place, it is a redistribution of wealth, right? I bet and I lose, but that goes to Chuck, right? And Chuck and I bet and we both lose and that goes to Gary. So I guess there's no bad will there because everyone is participating with this understanding. But if I learned that you used AI and the rest of us are just sort of reading the paper, yeah, we're going to break your knees. Right? So Matt,
Starting point is 00:25:49 what parameters are people putting in other than hunches to beat their opponent in bracketology? What do you put, what are you programming into your computer? So you obviously put in, well, fundamentally what you put in is all of the play records from all of the previous games. And that's basically all the information that's available. And maybe you can put in that somebody got hurt, but mostly it's just all of the play records from all the previous games. So then you actually can find out if these two people are on the floor together, surprising things happen. Well, then you know that going forward, you can find out this guy,
Starting point is 00:26:31 he hasn't gotten any rest for the last two weeks. And historically, if he doesn't get any rest, he plays terribly. So now you know that. So you're looking mostly for patterns in data that you sort of, the community has agreed is sort of the starting point, which is all of the play records from all the teams. And that would include what teams, not just simply whether they won, but what teams they beat. No, it includes everything. It includes that this guy made a three-point shot
Starting point is 00:26:59 at two minutes and 22 seconds left in the second quarter. It includes absolutely everything. And these are the other players who were on the floor at the time. And it includes every, you know, this guy committed a foul at this point. This is who else was on the court. This guy was absolutely everything. And this data is available. I was absolutely everything.
Starting point is 00:27:23 And this data is available. So you can actually get everything about every play and so forth and so on. Now, one of the things that typically people work with play records, these days there's also video. So you can potentially look at the video and see where does somebody shoot from. So you, so there's a ton of information out there. There's a difference. There's a difference between data and knowledge. So what you do with that information might be different from what someone else does.
Starting point is 00:27:55 I guess that's the magic of who, whose algorithm. That's the magic. So the algorithms process the data to make predictions. Roughly speaking, we're all working with the same data, but you'll have some people with better algorithms and some people with worse algorithms. If people are using vastly different data, that's when people start getting upset
Starting point is 00:28:14 because they feel like they've been cheated because I didn't know so-and-so twisted his ankle, and you did. That's not fair. But when you work with the same data, it somehow sort of seems fair, and now it's really just a matter of who's smart enough to create the best algorithms, who has the computational horsepower to run those algorithms, because they're becoming very computationally intensive, and thereby making better predictions.
Starting point is 00:28:39 So the raw material for this, Neil, to my mind, is the quality of data. Because if you've got access to some of the random things that can happen, like, for instance, someone was partying the night before when they shouldn't have been, someone is carrying an injury that you don't know about, that that hasn't come out into the public domain, that quality of knowledge is what will enable an algorithm and a builder to make a better algorithm. Am I right there, Matt? I think there are two ways you can win. One is you can have better data.
Starting point is 00:29:15 That sort of feels a little unethical. The other is have a better algorithm that does a better job of processing the shared data. have a better algorithm that does a better job of processing the shared data. A better algorithm means you're, now maybe I'm a scientist, so maybe I just like it when smart people win. A better algorithm means you were smarter than the guy with the worse algorithm. Better data means you followed the basketball players the night before to see what they were actually up to.
Starting point is 00:29:45 And that somehow just doesn't seem right. It seems like that's an unlevel playing field. But I like it when better algorithms work because I like making good output. We're going to take a quick break. When we come back for our third and final segment, we'll finish up on that topic, but then we're going to, you know, chew the fat and just get into some of the more philosophical dimensions of this topic, applying AI to predicting sports brackets on StarTalk Sports Edition. We're back, StarTalk Sports Edition We have a friend of StarTalk AI expert
Starting point is 00:30:27 Mathematics All-around math guy Matt Ginsberg, Matt, great to have you back on StarTalk How do we find out, do you hang out anywhere On social media? I have a website, mattginsberg.com And that is practically it I have a Twitter account that I never use
Starting point is 00:30:44 I don't belong to Facebook. I'm really as quiet as possible. Wow, that means you probably read and do things with your spare time. I do. I do. What a concept. Okay. Yeah.
Starting point is 00:31:01 That must be torturous. Transitioning out of the last segment, let me just ask Matt, before we get into our more philosophical questions, isn't there this force operating on who wins that doesn't show up in the data, such as how badly do they want to win? I wanted it more. Exactly. And for equal talent, that could elevate someone's performance.
Starting point is 00:31:30 Why is it that you can do this best in track and field, that in big events, many people have their personal best? Whether or not it's a world record, they just perform better in that moment under those circumstances than they ever have in their entire life. And so something was going on within them that didn't show up in their previous data. The last NBA Finals, the Milwaukee Bucks, Giannis Ataka, whatever his name is, because I can't say it. It's widely held that he just rose above his game and above his team's performance
Starting point is 00:32:07 to carry them almost single-handedly to a championship. And you look at Bob Beeman's world record-setting long jump in the 1968 Mexico Olympics, beating the previous record by over a foot, and no one came near that for decades. So I don't see how that shows up in your AI algorithm. It is totally in the game. Oh, this is fascinating.
Starting point is 00:32:37 Ooh, throw down, throw down. Here you go, go on. You can look at the previous performance of athletes as a function of whether or not it was a clutch situation. And if there are athletes who reliably perform better when the pressure is on, it's totally in. And if there are athletes who reliably perform worse when the pressure is on because they choke, that's in. athletes who reliably perform worse when the pressure's on because they choke, that's in the game. And if there are people who just perform the same independent of pressure, that's in the game. So now it is the case that being in the Olympics is special. Being in the Super Bowl is special. But you can also look at where somebody, for a generic player who performs 5% better in a clutch situation,
Starting point is 00:33:27 how much better does he perform in his first Super Bowl? That's in the data. Not this guy who's about to play in his first Super Bowl, but I know how this guy responds to pressure, and I have data telling me how much pressure the Super Bowl really is. Put it all together, and I'll know how this guy's likely to play in the Super Bowl. Some people are going to have good days. Some people are going to have bad days. But all the stuff you've been talking about, it's all in the data. So how about the fact that some coaches give better pep talks at halftime than others do?
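A minimal sketch of the clutch adjustment Matt describes here: a player's baseline performance, scaled by their personal response to pressure and by how much pressure the event applies. The function, the multiplier, and every number below are illustrative assumptions for this transcript, not his actual model.

```python
# Hypothetical sketch: combine a player's historical "clutch multiplier"
# with an estimate of how much pressure an event applies.

def expected_performance(baseline: float,
                         clutch_multiplier: float,
                         event_pressure: float) -> float:
    """Scale a player's baseline stat by their personal response to pressure.

    baseline          -- average performance in neutral situations
    clutch_multiplier -- response at full pressure (>1.0 rises, <1.0 chokes)
    event_pressure    -- how clutch the event is, 0.0 (nothing at stake)
                         to 1.0 (Super Bowl / Olympic final)
    """
    # Interpolate between the neutral multiplier (1.0) and the
    # player's full-pressure multiplier.
    effective_multiplier = 1.0 + (clutch_multiplier - 1.0) * event_pressure
    return baseline * effective_multiplier

# The generic player Matt mentions, who performs 5% better in the clutch:
print(round(expected_performance(100.0, 1.05, 1.0), 2))   # 105.0
# The same player in a mildly important regular-season game:
print(round(expected_performance(100.0, 1.05, 0.3), 2))   # 101.5
```

The point of the sketch is only that both ingredients — the player's pressure response and the event's pressure level — can, as Matt says, be estimated from historical data rather than guessed.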
Starting point is 00:34:00 That's in the data. All right, guys. Now, listen. We're down 212 to nothing. But I'm telling you right now, you've got to reach inside yourself and find something. One point at a time. This is the value of the coach. But the value of the coach is in the data.
Starting point is 00:34:16 That's in your data, too, I guess. Because you know who coached the team? It's in the data. So what's not in the data? There's a wet spot on the floor, or this bit of AstroTurf was replaced badly. That's not in the data. But all right, Chuck, you had an interesting philosophical question back a couple of segments ago. What was that? I'm saying, like, if you look at the question of whether or not an algorithm or computer is doing pretty much the same thing we do, I'm trying to figure out how it's not when, if you look at semantic memory, episodic memory, aren't we just taking the collection of all of our knowledge, making associations, coming to a conclusion, then making a decision?
Starting point is 00:35:10 That's exactly what Matt's been saying. So, Matt, why are you better at this? Is it because you can handle more variables? But so are we, right? I can think of, you know, what is Tom Brady thinking when he's hiked the ball? He's got the whole field in his head, the history. He's got it all. And no one is touting.
Starting point is 00:35:29 There's no adding machine that you can point to. It's almost at an intuitive level going on in his head. And the same way. He's looking for patterns. He knows how to read defenses. He's seeing that certain guys going in motion or being out of position means certain things. Or moving left or moving right.
Starting point is 00:35:47 So why is that any different, Matt? And why should we trust you and not Chuck's intuition? I don't think it is different. And I think, you know, one of the things you said, Neil, earlier on, you said, well, then why is it intelligent? And I don't know that intelligence has meaning other than sort of a performance. If you can do smart stuff, you're intelligent. And I don't care how you got there. If you can always throw to the right place, you're a good quarterback. And I don't care what the processing that happened inside your brain to make it happen is. Now, there are some people, I think probably the best known is Roger Penrose, who actually believe that there is stuff going on inside of us that is not computable,
Starting point is 00:36:38 that it depends on quantum mechanics, that it depends on this magical relationship between our minds and the world of mathematics that a machine can never duplicate. He could be right. But just a quick bio on Roger Penrose. He's a physicist at the University of Oxford who, in fact, just got the Nobel Prize a couple of years ago for his early work showing that Einstein's general theory of relativity has to give you black holes. Like it's a necessary prediction from the foundational physics that's there. And he's long respected in physics and in astrophysics. Then he stepped into the world of consciousness
Starting point is 00:37:17 and into a world where we really don't have good explanations. Those are the kind of places that attract smart people. And he was just imagining that, you know, we can think of the brain as electrochemical, but let's go a step deeper. Is it also quantum mechanical? And if that's the case,
Starting point is 00:37:35 then all bets are off, because the quantum physics creates an unpredictable randomness to it. That may be the source of all of our wisdom. Is that a fair way to think about his contributions to consciousness? It is. I mean, he was my thesis advisor, and he and I have argued about this. And you have described his position fairly. And what's interesting is, I mean, when we argued about this, it didn't take long for us to realize that we had this fundamental disagreement.
Starting point is 00:38:06 Is what's going on in your head quantum mechanics or is it sort of classical mechanics that a machine can duplicate? And we realized we were on different sides. We realized neither would convince the other. I believe, and Chuck appears to believe, that what's going on in your head you could capture as a process. If somebody eventually makes a real AI, an actually intelligent artifact, then Roger will be wrong. If we keep trying and never get there, maybe he's going to turn out to be right. But right now, nobody knows.
Starting point is 00:38:40 They know which side I'm on. Just to be clear for Chuck, Matt mentioned classical physics. So what he means there is if everything that's happening in your head is classical, then it means you can point to all causes of all effects and trace it back and how they interact with each other. If there's a quantum domain going on,
Starting point is 00:39:00 then the cause and effect gets disrupted and then you can't then classically describe it. But so Matt, then, what about, I mean, you're at Google. What about the future? I'm pie in the sky here. What about quantum computing? Will that be able to mimic what might be quantum influence in our ability to predict the future?
Starting point is 00:39:20 So I don't know. And the right person to ask is Roger, because I actually think we don't need quantum computing because I think we are classical artifacts. What's going on in our head is classical. I don't know where he is on that question. I know Google does quantum computing. Everybody's doing quantum computing these days.
Starting point is 00:39:42 It's exciting. It can break encryption. It's an amazing process. Maybe they're afraid to put you on that topic because then you might take over the world. Or at least take over Google. They've got you in the Chromebook assembly line just to keep you out of trouble.
Starting point is 00:40:02 The Chromebook repair line. I aspire to this. If it's data-driven, this whole thing about, and if we don't know it, we don't know it, do we then not point AI in the opposite direction and say, go and find what we don't know? Bring it back, and then we'll build even better. Am I seeing this in the wrong way completely?
Starting point is 00:40:24 I think that's a great question. And I think that... And by the way, let me interject here. When we send robots to other planets, they often have a serendipity mode, where the robot looks around, and if there's something it doesn't recognize among the things we've programmed into it, then that is targeted for extra curiosity. But you have to be able to notice something that you don't recognize. That's an important step. And not otherwise just look right past it. So let's just pick up on that.
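Neil's "serendipity mode" aside is, at heart, novelty detection: flag anything that isn't close to a known pattern. A toy sketch, where "recognition" is just distance to hypothetical prototypes and the threshold is arbitrary — nothing here reflects an actual rover system:

```python
# Toy novelty detection: an observation is "novel" (a target for extra
# curiosity) when it is far from every prototype we were taught to expect.
# Prototypes, observations, and the threshold are all invented examples.

def is_novel(observation, known_prototypes, threshold=5.0):
    """Return True if the observation is far from every known prototype."""
    def distance(a, b):
        # Plain Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return all(distance(observation, p) > threshold for p in known_prototypes)

known = [(0.0, 0.0), (10.0, 10.0)]       # things we programmed it to recognize
print(is_novel((1.0, 1.0), known))       # False: close to a known pattern
print(is_novel((30.0, -4.0), known))     # True: nothing like it, look closer
```

The hard part Neil flags — noticing that you don't recognize something, instead of looking right past it — is exactly the `all(... > threshold)` test: novelty is defined by failing to match everything known.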
Starting point is 00:40:57 I just want to put that out there. I think that the consensus these days is that there is a huge amount of scope for improving our ability to draw conclusions from the data we have. So when you look at all the stuff about Big Brother and tracking terrorists with data, and Amazon predicting which book you're going to want to buy next if you read books, Netflix predicting which TV show you're going to want to watch next, it's amazing how well these systems perform.
Starting point is 00:41:28 It was a huge surprise. All they're doing is they're doing a better job of exploiting the data we already have. And the consensus is we should be focusing on improving our ability to use the data we've got rather than looking for more data when we're not even using our existing data as well as we could. By and large, acquiring more data is expensive, it's hard, it's intrusive. And I think it's the consensus of the AI community that there's still so much scope for doing more with what we know, with what we have, that that's where we should be focused. Interesting. That's very interesting.
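Matt's point about exploiting the data we already have can be illustrated with the simplest possible recommender: score the items a user hasn't touched by their similarity to the items they have, using nothing but the existing user-item matrix. The matrix, item names, and similarity choice below are invented for illustration; real systems like Netflix's are vastly more elaborate.

```python
# Toy item-item recommender over a made-up user-item matrix.
import math

# rows = users, columns = items (1 = watched/bought, 0 = not)
matrix = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
]
items = ["Item A", "Item B", "Item C", "Item D"]

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def column(j):
    return [row[j] for row in matrix]

def recommend(user_row):
    """Score each unseen item by its similarity to the items the user has."""
    seen = [j for j, x in enumerate(user_row) if x]
    scores = {}
    for j, x in enumerate(user_row):
        if x:
            continue  # only score items the user hasn't interacted with
        scores[items[j]] = sum(cosine(column(j), column(k)) for k in seen)
    return max(scores, key=scores.get)

print(recommend([1, 1, 0, 0]))   # "Item C": it co-occurs with A and B
```

No new data is collected anywhere in this sketch — the recommendation falls out of correlations already sitting in the matrix, which is the sense in which these systems "do a better job of exploiting the data we already have."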
Starting point is 00:42:08 And so when you say is it just the amount of work that it takes to get more data? Is that why it's just... No, it just gets more invasive. You have to go deeper and deeper. What was your dentist appointment like?
Starting point is 00:42:23 It's like the way Netflix always keeps trying to get me to say, please rate this so that we can better understand how you, and I'm like, screw you. I'm not giving you any more information about me than you already have because you have everything about me. Chuck, do you have an Alexa? I do. I have an Alexa.
Starting point is 00:42:42 I have a Google Home, and I have an Amazon or whatever the other thing is. I don't use any of them because the first time I used it— So you think. Oh, no, they're unplugged. So you think. Well played, sir. Well played. They're tapping into the electromagnetic energies of the universe.
Starting point is 00:43:07 Yeah, but the first time I used one, it scared the hell out of me because it immediately started to, as they say, learn. And that was frightening to me. So that is Amazon or Google or whomever. Do you use your cell phone, Chuck? Unfortunately, yes. And I see it every time I use it. I see the results.
Starting point is 00:43:29 I see it come up with the ads that I never looked for. And then all of a sudden, I'm like, hmm, I should buy diapers. I don't even have a baby. But then a diaper ad shows up. Chuck, you should move your bowels about now. You have done so in the last three days at this time. And by the way, here's a great product to help you do that.
Starting point is 00:43:52 This is the length that these companies, and I now work for them, will go to get more data. Gotcha. That's the more data part. That's the more data part. So Matt, for March Madness, are you entering that contest? Because somebody's going to have, out of 20 quintillion possibilities, somebody will come in ahead of everybody else because that's how it works. But they're not going to get all 20 quintillion.
Starting point is 00:44:20 They'll at least get the higher grant. But there's going to be someone who does better than everybody else. And then they want to say, well, how did they do it? And then you get a peek at their AI. Will everyone focus around them? Or are we going to say it's random dumb luck? Hard to know, right? If they win by a lot,
Starting point is 00:44:35 they're like way better than whoever's second, people are going to pay attention. If they just edge out the guy who was right behind them, then it was going to have enough of a luck element that there's probably not something to know. But I think it is the case that if somebody fails to get Warren's money
Starting point is 00:44:52 because they only predict 62 of the 63 games correctly. Right. If it's marginal, then it's just like Emily down in shipping who wins the pool because she was like, I like the teams with the bright colors. And then it's like, and she beat everybody.
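For reference, the arithmetic behind the quintillions-of-brackets line: a 64-team single-elimination field has 63 games, so there are 2^63 possible brackets. The per-game accuracies in the sketch below are purely illustrative, and the independence assumption is a simplification.

```python
# Bracket counting: 63 games, each with 2 outcomes.
total_brackets = 2 ** 63
print(f"{total_brackets:,}")   # 9,223,372,036,854,775,808 (over 9 quintillion)

def perfect_bracket_chance(p: float, games: int = 63) -> float:
    """Probability of calling every game right, treating picks as independent."""
    return p ** games

print(perfect_bracket_chance(0.5))    # coin flips: about 1.1e-19
print(perfect_bracket_chance(0.75))   # a strong per-game model: about 1.3e-8
```

Even a model that calls three games out of four correctly is a long shot for a perfect sheet — which is why, as the segment says, edging out the field by one game looks a lot more like luck than skill.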
Starting point is 00:45:09 You're getting an email from Emily. You know that. Emily's going to be all on your case. So I think this came up in a previous episode, Matt, with you as a guest, where if everything you say, and if you're as good as you say you are, why isn't this podcast filming you at your villa? You know what Matt's going to say right now? Because this is an actual screensaver that I have.
Starting point is 00:45:42 Why aren't we on your yacht? Why are we in some corner? I have made my life about solving hard problems, and that is a choice. A choice that I absolutely regret immensely. No, it's a choice that makes my wife extremely upset with me. I have had a better life and I have healthier children because of it. Yeah. Well, no. That's probably true. Definitely.
Starting point is 00:46:16 Yeah. That's pretty special. Yeah. You know what, though? I will say, as a point of admiration, Matt, that that is philosophically a much higher position to take. Nobler. It's nobler. Nobler is what I'm saying.
Starting point is 00:46:36 That is a way higher level. Yeah. You like to solve the unsolvable, you know, the hard problems. Are we pointing enough AI at the problems this planet has right now? Good question, Gary. To find solutions. Why are we finding brackets to basketball anyway? We should be finding solutions to cancer.
Starting point is 00:46:53 Let's go make some money, or let's go solve some really big, gigantic problems. So the first project I worked on at X was to use AI to help decarbonize the electric grid. Generate electricity, distribute electricity, without burning fossil fuels. That's as important as it gets. That's really important. And I think that... And you failed at that, and so then you went to basketball. Okay.
Starting point is 00:47:22 I think that there is a significant amount of work spent on these incredibly important problems. It is also the case that if you look at getting a computer to play chess, that doesn't save anybody's life. But the work on computer chess moved AI forward an enormous amount and allowed people to take what they learned from chess and apply it to problems that actually matter. So a lot of the magic, I think is probably the right word, of being a scientist is about finding
Starting point is 00:48:06 problems that are hard, but still just barely tractable, and have the property that when you solve them, you will learn something that you can take elsewhere to make the world a better place. Yeah. Yeah, I mean, you look at the medical industry, which is now... He just composed a beautiful ending sentence, and now you've got to keep talking. Oh, okay. All right. I was just agreeing with Matt, but that's okay.
Starting point is 00:48:35 His sentence was like, you could... Belongs on a statue somewhere. Okay, Chuck, go on. No, no, I was just... I was just agreeing with him. Like, you know, the fact that it is, it's becoming increasingly expansive and it's making life better for so many people.
Starting point is 00:48:51 It is. And many other branches beyond. And this is what mathematicians have known forever, that they're an engine of so much advance in culture and civilization in ways that people don't really appreciate because the advance is somebody else's invention based on what they did.
Starting point is 00:49:10 And the other person gets the credit, not Matt. But we love you, Matt. Thank you. We got your credit. I'm delighted. We're going to start a GoFundMe, Matt. So something you said about AI playing chess, right? And that then inspired people to think in a certain way.
Starting point is 00:49:28 Is this the legacy that we're finding AI is leaving us? It's making us think better, think differently. And is it actually teaching us? There's so many things I could say. I mean, one of the things that people have often joked about in AI is that as soon as anything actually becomes successful, it leaves AI and becomes part of autonomous driving or some other discipline. AI is teaching us about ourselves. Certainly, we are learning about what we can do and what we can't do. My belief has always been that computers solve problems
Starting point is 00:50:02 differently than we do, and they always will. And that's going to mean that we can solve more problems with machines at our side than we can solve by ourselves or they can solve by themselves. And it's tremendously positive. It's tremendously optimistic. They're tools. They are tools. And we will be able to do more with them than we could ever have done without them. And we should be able to do more with them than we could ever have done without them. And we should all be ecstatic about that. It's not going to be like the Terminator.
Starting point is 00:50:30 It's going to be like figuring out how to get carbon out of the electricity. And making the world better. On that note, we've got to land this plane. Oh, I just wanted one more thing to know. Who's going to win the Super Bowl? Please. Okay, Chuck.
Starting point is 00:50:49 Chuck, go. Make it fast. Really fast. Back to, it's in the data. Is it possible with all the data of every war ever fought on this planet to be able to predict when we might be going to war and if that war will lead to our ultimate destruction?
Starting point is 00:51:06 Probably. Okay, there we go. And with that, we say good evening to you all. Okay. Thank you, Matt, for that succinct reply to Chuck's doomsday question. We got to call it quits there. Matt, always good to have you.
Starting point is 00:51:23 Thank you, Chuck. Chuck, Gary. Pleasure. Making this work. Thank you, Matt. Matt, always good to have you. Thank you. Chuck, Gary. Pleasure. Making this work. Thank you, Matt. This has been StarTalk Sports Edition. What do we call this? We call it Bracketology, if there's even such a word.
Starting point is 00:51:32 There is now. You've got it. I've been Neil deGrasse Tyson, your personal astrophysicist. Keep looking up.
