StarTalk Radio - Artificial Intelligence, Real Competition with Liv Boeree and Matt Ginsberg

Episode Date: June 4, 2021

Can AI have a poker face? On this episode, Neil deGrasse Tyson and co-hosts Gary O’Reilly and Chuck Nice discuss poker and playing against...the machines with former poker player Liv Boeree and artificial intelligence expert Matt Ginsberg. NOTE: StarTalk+ Patrons can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/banner/artificial-intelligence-real-competition-with-liv-boeree-and-matt-ginsberg/ Thanks to our Patrons Ricardo Torres, Mason Dickson, Alireza Sefat, Henk Van der Merwe, Derek Eilertson, Erdem Memisyazici, Sriram Govindan, Christian Murmann, Derrick Thurman, and Cayman Freeman for supporting us this week. Photo Credit: Santeri Viinamäki, CC BY-SA 4.0, via Wikimedia Commons. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.

Transcript
Starting point is 00:00:00 Welcome to StarTalk, your place in the universe where science and pop culture collide. StarTalk begins right now. This is StarTalk Sports Edition. And today, we're titling this one, The House of Cards. I got with me, of course, Chuck Nice. Chuck. Hey, Neil. Yeah, yeah.
Starting point is 00:00:26 So though you are a stand-up comedian, you have tremendous sports fluency. So that's a good thing. Just want to put it out there. Give you some props for that. It's probably the only fluency I might have. Okay. Too much information.
Starting point is 00:00:41 Or because you're still working on that comedy thing, right? Yeah, exactly. And I got Gary O'Reilly. Gary, former soccer pro from the UK. Always good to have you here. Good to be here. Yeah, and I'm your host, Neil deGrasse Tyson, your personal astrophysicist. And let me just give a brief overview of where we're going to go on this show. I'd like to say the stakes are high, because the theme is poker. And who's the famous poker... I guess James Bond would like his chips stirred, not... whatever. No, he played... shaken, not stirred. Oh, baccarat, that's right. He played baccarat, that's right. And of course craps, and maybe sometimes roulette. But, yeah, less at the poker table, I think. Yeah. Yeah. And so we'll also find out what state-space complexity is. I don't know what that is.
Starting point is 00:01:36 We're going to find out. And whether Nash's theory of equilibrium, this is the famous mathematician from Princeton for which he got a Nobel Prize, whether it's a key component in a winning hand. And we're also wondering whether probabilities and thinking about them can be an asset in life. Or are they just confusing and you should ignore them?
Starting point is 00:01:55 And so all this is going to happen in this program. And we've got several experts. One of them in particular is Liv Boree. Liv, welcome to StarTalk. Thanks for having me. Yeah, yeah, you're our Boree. Liv, welcome to StarTalk. Thanks for having me. Yeah, yeah. You're our first guest today, and you're a TV presenter. I get your resume here. A writer, science communicator. Love science communicators. We need more of them. Let's get more of them, and then I can go to the Bahamas, okay, and leave you all. That's what
Starting point is 00:02:19 I want here. You're a former championship professional poker player and an advocate of effective altruism. That's good. That's good. And also, of course, when you have that much stuff going for you, they get you on TED, TED Talk here. So you had a first-class honors degree in physics and astrophysics. Love it. There it is.
Starting point is 00:02:40 My people. My people going out to the world at the University of Manchester. And what else did I hear? World Series of Poker and European Poker Tour Champion. Have I seen you playing poker on TV? I must have. You might have done. I don't know.
Starting point is 00:02:53 If you watched a bit of it, then you might have seen me, yeah. I have to see if you have shifty eyes here, just so I'll know. Definitely. Do you have a poker face, Liv? Is there an actual poker face that you don while you're at the table? I mean, you're looking at it. Everyone always thinks that a poker face is about being completely stoic and robotic, which, to be fair, if you're starting out, that's a good way to conduct yourself at the table
Starting point is 00:03:22 because it's just easier to maintain. But really what a good poker face is, is just being relaxed and natural and showing that you're comfortable even when you're not. Why can't a poker face be the opposite of what kind of hands you got? If you've got a two, four, six, and seven, right? All different suits. You look at it, and then you're like, that'll just freak everybody out, right? You just look around the table and be like,
Starting point is 00:03:52 you're going to lose. You're going to lose. Sure, you can do that. But the thing is that then, if I'll very quickly catch on, it'll be quite obvious to your opponents that you're doing that when you have bad hands and you're doing a scared face when you have good hands.
Starting point is 00:04:04 So we'll just adapt accordingly you just have you just have a portfolio of faces you draw from well that's if you can if you can randomize well uh between you know okay this in this situation i'm going to be 50 i'm going to be 50 50 doing or 70 of the time doing my scared face and 30 of the time doing my happy face um right and you can perfectly randomize that. Then, yes, you're being unpredictable. But in reality, it's very hard for humans to be random. To randomize.
Starting point is 00:04:31 What will happen is you'll get baseball coaches, given the probabilities of what you're doing per hand, and then they'll talk about, you know, because they have people who shift places on the field. If you always pull the ball, they're all on the left side of the field, right? And so I keep thinking, why don't you just punch it the other way and you get a base hit every time, but then they would learn that and then compensate. So there you have it. There's probably a Nash equilibrium for that. Yeah, we'll get to that in a minute. But tell me first, what is state-space complexity?
Starting point is 00:04:59 Oh, state-space complexity. I don't know what it is. State-based complexity is one of many different measures of the complexity of a situation. Specifically, it's used to describe the complexity of games. It's one measure of that. So what it technically means is the number of possible states that a game can legally be in from start to finish. So you're playing tic-tac-toe.
Starting point is 00:05:27 It's a grid of three by three. That corresponds to around, I think there's somewhere around 700 or so possible moves. So that's its state space complexity is 700. And then a game like chess, with its, you know, it's eight by eight board, that has a state space complexity of about 10 to the 40. So as you can imagine, because of combinatorics, the numbers scale up really fast,
Starting point is 00:05:50 depending on sort of the number of possible positions and the different types of pieces and so on. So that's why four-year-olds can play tic-tac-toe and not chess. Right. Yeah, it generally corresponds to sort of, and not always to the difficulty of the game, but certainly for the difficulty of the game when it comes to building an AI to play it. And if you sort of look at the history of AI over time where it's been pitted against humans in different types of games, the more simple the game, the earlier it was either solved
Starting point is 00:06:24 or sort of, you know or the best humans were defeated. So like Tic-Tac-Toe was technically solved by a computer back in 1952, I think. Kinect 4 was somewhere in the 90s. And then Chess, Deep Blue beat Garry Kasparov in 1997. And then there was a big gap. And then you might have seen the AlphaGo documentary. But that one was a big deal because
Starting point is 00:06:48 Go, its state-space complexity is 10 to the 170. So just astronomical. That is crazy high. I have to put in. Wait, wait, wait. I have to put in. The name Google, when spelled correctly,
Starting point is 00:07:04 is the number 10 to the hundredth power. Yes. So if anybody's going to solve something that's got at least 10 to the hundredth things going on in it, it's going to be the company Google. That's a good point, yeah. Google G-O-O-G-O-L. I was going to say, but don't they spell it wrong? They can't even spell it.
Starting point is 00:07:22 That was early when they didn't know how to spell it. Now that is how you spell it. There you go. Oh, my God. That's how you do that. That is so true. That's how you do that. That is how you spell it now.
Starting point is 00:07:33 Oh, my God. That's hilarious. So tell me about the Nash theory of equilibrium. When I was at Princeton, I spent some years there. So I just see him walking, not talking to anybody, just kind of with his head down, sort of bobbing in contemplation. Solving the mysteries of the universe. Yeah, yeah, yeah.
Starting point is 00:07:49 So tell me, how does Nash Equilibrium... It's one thing to know the state complexity of a game, but so what? Okay, AI needs to know about it, fine. But now you want to strategize given that fact. So tell us about the Nash Theory of Equilibrium. Yeah, so a Nash Equilibrium is basically the way you would describe,
Starting point is 00:08:13 say you and I are playing poker against one another just in a one-on-one game. In theory, there's... I would lose. Clearly, I would lose. Just put that out there. Maybe over the long term.
Starting point is 00:08:25 But you could win in the short term. Because there's a bit of luck in there, which we'll discuss later. But so if we're playing, technically there's a strategy, a sort of strategy that I could employ where it's so sort of perfect that your only option is to adopt a similar strategy against me. And when we're both doing that, we're basically breaking even against one another over the long run. And it means, and the reason why it's an equilibrium is because it is not, there is no other strategy that either one of us could sort of try and do in order to improve our situation.
Starting point is 00:09:07 So it's basically a sort of stalemate, effectively. And what it means is basically we're unable to exploit one another any further. And we're going to be breaking even. Does Nash equilibrium apply only to people interacting in games and other sort of people-based systems? Because then just call it the Nash stalemate, right? But equilibrium has a lot of use. That word means a lot in physics, you know? Of course.
Starting point is 00:09:31 And so to hear that a physics term get used at a poker table, I find a little disturbing. Well, it doesn't have to be between people. It can be between, technically it's between agents. So an agent could be an AI. It's anything that is. Right. It could be two computers playing one another.
Starting point is 00:09:46 Okay. And if they're programmed the same way, then they've reached that stalemate position, right? Right, exactly. And so what it is, it's just basically a mathematical solution. I don't know if it's... Okay, okay.
Starting point is 00:09:57 It's technically like a local minima, I guess, and possibly a maxima. But the point is that all it means is that neither one of us could try and do something differently to what we're doing and expect to make more money. In fact, if we did that, the other one would now be able to start exploiting us
Starting point is 00:10:14 because they're technically playing this perfect strategy. I've been hogging you this whole time. Chuck and Gary, take it over. No, I was just intrigued by that conversation you guys were having because with respect to poker, there is always an X factor. So every hand has a built-in probability because there's only a certain number of cards, right?
Starting point is 00:10:36 So let's look at the equilibrium. You and I are exactly matched. The X factor is how I'm able to manipulate you psychologically. How do you account for that? No, wait, Chuck, did you just invent the X factor here in this conversation or is this a known? No, I'm just inventing it. I'm making this up right now. The X factor. Okay, Chuck. By definition, it's already encompassing any possible X factor you could think of. Otherwise, if I hadn't encompassed it, if there was something that you could do, some X factor you could suddenly pull out to exploit me, then I'm not playing what's called game theory optimal. I forgot to mention that before.
Starting point is 00:11:21 So it's called a game theory optimal strategy where you're unexploitable by your opponent. And the only thing you can do- So this works in a laboratory because in real life, I could never know you so well as a human being that I could account for every single thing that you might do. However, for the most part,
Starting point is 00:11:43 you can get really, really close. But could I, are you saying that human beings, just like machines, can be predictable right down to the thought that you are going to have? Well, it's not so much about, it's not, if you're playing game theory optimal or an AI is playing game theory optimal poker,
Starting point is 00:12:04 it's not saying that it can perfectly predict what their opponent's going to do. It's not making a statement about that. All it's saying is that it's playing a strategy without getting too much into the weeds. It's perfectly randomized between bluffing and not bluffing, depending on each little minutiae decision point, whereby the opponent cannot take advantage of that. So it's not that it's perfectly... And it works.
Starting point is 00:12:37 It's actually independent of the type of player you're playing against. But the interesting thing is, just because you're playing game theory optimal, so therefore your opponents aren't able to exploit you, what it also means is that you could be missing out on certain things that they are doing. Say we're playing rock, paper, scissors, and you don't know anything about me. Your best option is to just perfectly randomize between the three because that way I can't predict anything that you're going to do.
Starting point is 00:13:08 You just use a random number generator and 33% each different thing. But if we're doing that for a while, and then you notice that actually I throw rock every time, well, now you'd be stupid to carry on randomizing because you're missing out on this opportunity to throw paper every time and exploit my dumb play. So that's sort of this difference um and that's a situation where you'd want to deviate from your past sort of game theory optimal strategy in order to sort of capture all this this value that i'm losing by being an idiot so that's but it refers to two perfect players then i mean
Starting point is 00:13:40 that's the thing that's what i was saying like to be in a pure equilibrium then yes that's what you need yeah yeah so that's the framework of your strategy the real world is on the clock so how long does it take you to put this into effect successfully well i mean for me i mean i'm retired now so i am far if i went to sat down at a poker table i'm i'd be far behind the curve of what the, you know, the latest, you know, the people who are getting the closest to playing a game theory optimal strategy. But even interestingly, no human can actually perfectly play it. In fact, no computer can even perfectly play it. You can only sort of asymptotically approach it kind of like, you know, the speed of light. And so in terms of like, how long would it take me to get to the level these days of a world,
Starting point is 00:14:28 you know, a true world-class player who's still sort of playing and studying all the, uh, game theory optimal solutions. I mean, I don't even know if I could, to be honest, these days I've been out the game too long, but I mean, it would take, it would take probably a few years of intense, intense studying. And I think the interesting thing about poker is that since we discovered that there are these game theory optimal solutions, these sort of mathematical solutions, the whole sort of style of the game has very much changed. And when I got into it back in 2005, it was still very much a sort of a black box. No one really understood what the mechanics of the game really were. And it was just very much more a sort of people reading intuitive game, where people with sort of
Starting point is 00:15:10 the most sort of street smarts and human experience were often the best, just because they could pick up on weird quirks of human behavior better than their opponents. But then since the lid has been lifted, and we've seen the mathematical workings of the game, largely due to sort of improvements in computation and software, now the best players are the ones who just mimic computers, basically, and play in this very mathematical, semi-robotic style. And there is less of this intuitive people reading.
Starting point is 00:15:42 Although there is still some. That's a weird fact that I think requires a pause and even perhaps a moment of silence. Okay? We play poker. We invent a computer that's better at poker than we are. And then we imitate the computer. Just let us give a moment of silence for the human mind there. Because we are no longer the best thinkers on this.
Starting point is 00:16:09 What we invented as a thinker is the best thinker. And what Neil is really saying is, let us have a moment of silence for the death of the human race. Because it's over for you. It's over. That's the beginning of the overlord. There you go. We're moving into the Nova scene, as James calls it.
Starting point is 00:16:32 So if you're playing, everybody seems to play poker online. So therefore, you don't need a poker face, right? Because you're not looking at people, or are you? And then basically, you've got no chance of physical tells. There's no, oh, I have a twitch, they scratch their nose, whatever. Is that why it's all disappearing? Because poker's played online so much now? Yeah, great question. To an extent, yes. As you correctly said, you can't physically see your opponent. So there is none of that type of information. And so that
Starting point is 00:17:03 forces you to rely on the more mechanical information, such as like the amount that they bet, the types of cards that they bet on. The only sort of read you can make in terms of sort of something physical is how long they take to click the button. You know, you can see how long they think, and that can be information like, oh, they really thought for a long time on this one, whereas they normally act really quickly. That can be informative to an extent could you give us we're about to bring in a an ai expert on in game theory uh what what has been your experience with ai before we make that transition to our next guest um my my experience with ai is just one of a silent obsession um i've just been fascinated with the concept
Starting point is 00:17:45 of particularly super intelligence sort of for the last 10 years, really. Following, I mean, I was introduced to it through the effective altruism community who are- Tell me about that. How does that relate to you? This is a not-for-profit that you started. Is that correct? Do I understand that correctly? This is a not-for-profit that you started, is that correct?
Starting point is 00:18:05 Do I understand that correctly? Yeah, so I mean, a few poker players and I started a not-for-profit that sort of operates under the effective altruism principles. So those principles basically are, you know, there's limited resources, both time and money, that both any individual or any society or group could ever donate to philanthropic causes. And because these resources are limited, it's crucially important that we sort of take a really, we take a step back and sort of take stock to figure out what are not only the biggest problems, but the most urgent and the most neglected, the most comparatively neglected, to ensure that the money is donated to actually the best place where, you know, there's either the strongest evidence or
Starting point is 00:18:51 the highest probability of having, you know, the optimum impact. Because unfortunately, most philanthropists, by far the majority of philanthropy has at least historically been very sort of emotion driven and reactionary and hasn't had. And a little bit vanity driven, right? It's a person's pet project and they want to solve it and they're not thinking about math when they do this. It hasn't, it deeply needed a scientific approach. And fortunately in the last 10 years, a sort of combination of scientists
Starting point is 00:19:26 and business people, actually it was a couple of hedge fund managers really who got it going. Because they got the money, yeah. Well, sure, but they've got the money, but they've also got the, they're statisticians at heart and they care about data
Starting point is 00:19:38 and they care about evidence and no one bats an eyelid at a business trying to ensure that it gets the biggest bang for its buck. So why is it sort of why do people find it strange to think that charities should also try and get the biggest impact for buck? Particularly when, you know, when we give to one place, that means we're not giving to somewhere else. And if that other place you would have given to was actually going to save 10 times as many lives for the same donation. It's kind of a tragedy, if you ask me me if we don't give to the right place. So based on this sort of general concept, there are a number of charities that have been identified as being
Starting point is 00:20:16 by far the most cost effective in terms of like they will say, you know, your money will just go so much further if your goal is to improve people's lives. This makes so much sense. I mean, it's embarrassing. We all should be embarrassed that it wasn't around long before this. Yeah. And this is, everyone out there, study your math, okay? Amen. Don't ever say, I'll never need this again.
Starting point is 00:20:39 Well, maybe you will, not that you'll need it, but you'll want it to do something innovative like Live Worry has done. And the other thing you could do Liv is just figure out a way to Robin Hood Jeff Bezos. Just steal from him on an annual basis an amount of money
Starting point is 00:20:57 that he will not realize is missing and give it to people who need it. But specifically if he with his philanthropy not only thing yes it's a few billion that's all give it to people who need it i mean a few billion but specifically if if he with his philanthropy um not only you know it's one thing for him to do more but even more crucially is it for him to make sure that it goes to the most uh cost effective places that is the most right that and that's the key thing and a lot of times because um you know so much pressure is put on like no do something
Starting point is 00:21:25 now give away this now and it's like well it would surely be better for someone to take to take a year to figure out okay where do i get 100x improvements as opposed to give it all now just because of like social pressure so you know it's not as simple as give it all away now mr bezos um okay so chuck this is how i would rather use the X factor. She said 100X improvement. That's an X factor. That's called it. In fact, you can rename it the X factors. Quantified X factors, yes.
Starting point is 00:21:52 Yes. Quantified X factors. There you go. Guys, we've got to take a short break, and so we have to say goodbye to Olivia. Oh, we don't want to say goodbye. Not to Olivia. Okay.
Starting point is 00:22:02 Stay with us because AI expert and friend of StarTalk, Matt Ginsberg, will join us on the other side. And we'll be talking about competitive AI when StarTalk Sports Edition continues. We're back. StarTalk Sports Edition. House of Cards is the title. With Chuck Nice and Gary Riley. Guys. Hey. Chuck, you're tweeting at Chuck Nice.
Starting point is 00:22:44 Thank you. Yes. That is correct, sirReilly, guys. Hey. Chuck, you're tweeting at ChuckNiceCon. Thank you, yes. That is correct, sir. Yes, yes. And Gary, my three left feet, we're still sticking with that, no matter what. We are, no matter what. All right.
Starting point is 00:22:56 In this segment, we're picking up AI and trying to see what role that plays in gaming, in probability, in making decisions. And we're going to juxtapose that with all that we learned from Liv Borey's commentary about playing poker. And I wonder if she and Matt Ginsburg are like the nemesis of each other, right? Because they both come at the same problems, but they could be competitors because gaming is a big part of this. Let's bring in Matt Ginsburg. Matt, welcome back to StarTalk. Hey, Neil.
Starting point is 00:23:38 Great to be back. Yeah, and you've got this resume where you're a young PhD at Oxford. That's kind of cool. Although I think we have an over fascination with young precocious kids. When you're older no one says, boy that was amazing. You did
Starting point is 00:23:55 that before you were 40. I mean why we have such a fascination I'll never know. But it's there. And we had it with you at age 24 getting a phd from oxford in mathematics um very cool and you studied computer science at oxford it's of course in the uk and you're a scientist i studied i was a i was a mathematician and a physicist mathematical physicist i am so old that when i was a student you couldn't study computer science. It was not yet a discipline.
Starting point is 00:24:25 Man, that's old. That is really old. Older than dirt, as my kids tell me. So the abacus, you would grease the abacus. So you had to invent computer science so other people could study it. The first computer and I
Starting point is 00:24:42 date to about the same. Really? Yeah, the first commercial. I mean, the first computer and I date to about the same. Really? Ooh. Yeah, the first commercial. I mean, your birth. I'm just too old. I'm just too old, yeah. Okay.
Starting point is 00:24:52 Actually, my wife's PhD is in mathematical physics. Cool. So I get my dose of that when she wants to think about the world with that as a lens. That works real good. think about the world with that as a lens. That works real good. So you've provided statistical support for sports teams, something we covered in a earlier visit that you've granted us.
Starting point is 00:25:19 And we loved your name of your crossword puzzle algorithm. Please tell us the name. Dr. Phil. That is the best name ever. Ever. That was my second choice. And my first choice, I asked a bunch of crossword constructors what I should call this program. And my first choice was actually Deep Clue. Oh, good.
Starting point is 00:25:39 So what happened to that? IBM wouldn't let me do it. I was going to say. Because of Deep Blue? Because of Deep Blue. Because of Deep Blue? No, but that would create resonance. They're idiots there. Anyway, so Dr. Phil was a very close second,
Starting point is 00:25:54 and that's what we wound up calling him. Okay. All right. So you're a specialist in AI? I am. And general solver of hard problems. And you're now gainfully employed at Google. So tell me about how you think about the Nash equilibrium
Starting point is 00:26:11 when you either program AI to do what you need to do or when you're thinking about gaming in general. For me, the Nash equilibrium stuff, that's just a tool. So when you look at the work I do on statistical support for selecting a play in the NFL, you're really solving a fairly simple game with a relatively simple payoff matrix. You know, if I do a passing play and they call a blitz, what do I expect will happen? So you have this little payoff matrix and you're just, and you know.
Starting point is 00:26:43 Wow. So it's much simpler than games. It's much simpler than full up games that have, you know, 10 to the 100 possible states of existence. Right? I think it's fair to say that if you're trying to invert a matrix 10 to the 100 on a side, you're done. So you have to, whatever problem you're solving, you have to reduce it to something that's computationally tractable. All right, so when we think of poker, there's a, you know, bluffing is a fundamental part of it.
Starting point is 00:27:16 So if AI plays poker, does AI bluff? Absolutely. Or does it have to? Or does it have to? Of course it has to. If you don't bluff a poker, you suck. Part of how I bluff is I try to read my opponent. I'm not playing the cards, I'm playing my opponent.
Starting point is 00:27:35 Can AI do that? Or is it just calculating things? Potentially it can. There's been a lot of work on poker and AI. A lot of the foundational work involved translating the poker problem into this linear optimization problem with, I believe, billions of variables and then just solving it because you can. It's not 10 to the 100th.
Starting point is 00:27:58 10 to the 9 is big enough. So you solve this giant optimization problem and it says okay if you're in this situation in terms of betting and you hold these cards this is how you should bet and it doesn't know but it's a bluff but it'll say even though you've got unsuited 2-4 you should still bet a lot with this fraction of the time so there's your bluff And it's coming out of the fact that you're looking for an optimal strategy
Starting point is 00:28:32 according to the Nash definition. So an optimal strategy could include a bluff is the point. It has to, right? If you never bluff, then your opponent who, I mean, one of the nice things about the Nash optimum is that if you tell your opponent, I am playing by the Nash optimum, he can't exploit you. So he can play against you for days and see,
Starting point is 00:28:53 Oh, he's just playing the Nash optimal. I can't exploit that. But if you're not playing the optimal strategy, he will play against you. And wow, it's like this guy never bluffs. And that's the,
Starting point is 00:29:03 there's the point of exploitation, right? Correct. Interesting. Interesting. Cool. There's the point of exploitation. And then you're done. Right? Correct. Interesting. Interesting. Wow. That is... Interesting.
Starting point is 00:29:09 I don't want to hog Matt's questions. Chuck, Gary, what do you have for him? Okay, so we build an AI program to play poker. Simple enough, are we cheating? Wait, wait. So we're asking here, if you were assisted by AI in a poker competition, does that count as cheating? And more broadly,
Starting point is 00:29:26 how does AI play out in the rules of games about whether people will declare someone's cheating or not? What's the arc of that line of thinking? So, if you're using an AI to help you, and you don't disclose it, yes, you're cheating.
Starting point is 00:29:42 Okay. Because you're not supposed to. It's like going into a high jump with rockets on your shoes. You can't do that. With hidden rockets. With hidden rockets on your shoes. I think the interesting question is whether the organizers should allow it. So you go in and you do have a computer assistant. You say, hey, I'm being assisted.
Starting point is 00:30:02 Or conceivably, I just am a computer. What should the organizers do? Should they let you play or should they not let you play? Yeah, but put one interlocutor in there and you have coaches attached to computers on the sideline, giving instructions to active players
Starting point is 00:30:17 who themselves are not using the AI, but the coaches are. Well, we do that in football. Isn't that the same thing? That's my point. Isn't that the same thing as you're describing, Matt? That's using the computer and not telling anybody. Just because you have some guy in the middle. I mean, it's like there's this scene in Galaxy Quest where Sigourney Weaver, her job is to, when the computer says something, she repeats it to the crew.
Starting point is 00:30:41 Right. And she says, it's a stupid job, but somebody's got to do it. And if you're doing, you're still, they're still using the computer. And the fact that Sigourney Weaver's in the middle doesn't, that doesn't matter. So if you have a computer that's telling you something and you've got a guy with a walkie talkie passing the information in, you're still using a computer. Yeah, but that's allowed apparently. I don't think anyone's prevented that from the beginning. They suppose they could have.
Starting point is 00:31:04 No computer help. No one ever said that. They just, and by the way, even without the computer, they'd have somebody doing analytics. Baseball is rich with that as a history. So what if a computer now does it? And yeah, it's AI, but so what? Well, there's a difference between doing something offline and doing something in the game. So in a chess game, for example, you adjourn a chess game,
Starting point is 00:31:26 and now everybody's going to run to their computer and do an analysis. Is that allowed? Is that allowed? It's allowed. But what you can't do is you can't have an earbud, and somebody's telling you what a computer thinks you should do. That's not allowed. That seems like an artificial rule boundary on this.
Starting point is 00:31:44 If I can, on a break, I say I got to go pee, and I go, no, that would be cheating. But if there's an official break, because I saw this in the Queen's Gambit, right? They all went and analyzed the games. That's before computers, but you get other experts there, and that's the same thing. That's the same thing.
Starting point is 00:31:59 And what happens? The boxer between rings sits down, and the coach tells him, hit him here, not there. And so it's an AI, just not a fundamentally different thing? It's just a matter of degree? I think the answer is sort of. But I think the distinction between as the game is going on and during a break, I think is an important one.
Starting point is 00:32:23 Because the bottom line is you can't police the breaks. You can't tell somebody we're going to adjourn this chess game, come back tomorrow and don't use a computer. How do you enforce that? How do you know? Whereas it's relatively easy to say, we don't want you using your computer while you are playing. That's much easier to, then you can say, you know, now take that thing out of your ear. We don't like it. But you can't say, you know, don't go to bed tonight. Don't go home. We'll just have a camera on you. It's too much. So I think that's a rational way to do it.
Starting point is 00:32:55 I'm just thinking, do the casinos have their own AI programs? So if I played poker online with AI that they can work out, I'm cheating. Actually, Gary, I'd love that, but I want to pick that up in the next segment because we just ran out of time. With pleasure. So when we come back, we're going to talk about whether the house can use AI so that people like Matt don't walk in there and exploit their ignorance because of his smarts when StarTalk continues. Patreon patrons Ricardo Torres, Mason Dixon, and Alireza Safat.
Starting point is 00:33:46 Without you, what would we do? Well, we probably wouldn't make as good a show. So thank you. And for those of you listening who would like your very own Patreon shout out, please go to patreon.com slash startalkradio and support us. We're back. StarTalk. We've got Chuck Nice. I've got Gary O'Reilly and Matt Ginsburg.
Starting point is 00:34:16 Matt Ginsburg is our resident. He's not in residence, but he's in arm's reach of StarTalk. He's our AI expert with his new gig at Google. But we've got him on here to just sort of unpack gaming and what role AI might play. He's in arm's reach of StarTalk. He's our AI expert with his new gig at Google. But we've got him on here to just sort of unpack gaming and what role AI might play in the present and future of gaming. And Gary, I cut off one of your questions. Why don't you put that back on the map? So do the big casinos, when you play online,
Starting point is 00:34:37 have their own AI program to patrol to make sure guys like me don't use my own AI program to win. So it's the AI police for the AI. Yeah. What's to prevent them from using AI, right? To make sure you never win. And how long before we're in the weeds? So let me tell you what I know, and I don't know the answer here.
Starting point is 00:34:58 I do know that they worry about this. So there was a guy who was just mopping up on one of the online poker sites, making money at a phenomenal clip, and they went crazy trying to figure out what he was doing. And there was an assumption that he might be a computer or using a computer. And it turned out it was just a guy who was really good at poker and he won for a while and then didn't go so well. But they're very concerned about that. On the chess sites, it is a substantial problem where somebody comes along and they're using a computer and you can't tell. They just have a computer running in the background and they're playing chess online and they're creaming
Starting point is 00:35:33 everybody. And there is a lot of interest in protecting the other players from that. So computers have a particular style of play. And you look for that. One of the things computers do in chess, for example, is they almost never play really fast. Even if they have just one move, they somehow, for some reason, the software is arranged, so it's going to take them a little while to get out of check the only way they can. And if you see somebody who just never plays fast, that's sort of a tip-off that he may be using a computer. So people try and figure out, I mean, they're aware it's a problem, and they try and figure out what they can do about it. It's a problem in computer bridge, in online bridge. There,
Starting point is 00:36:19 you're not looking at computers, you're looking at people just cheating by calling each other up. Hey, I have the queen of spades. And you got to figure out if that makes sense. So in all of these cases, whether you're using an AI and not exposing it, whether you're talking to your partner, you're cheating. And finding cheaters has always been hard and I think always will be. But people want to do it. So let me push back a little bit with you on the chess setups. Because as I understand it, when you play a game, you are presumably honest about what your chess rating is, and then you play other people that have approximately that rating. And if you wipe the floor with everyone because you're assisted by a computer,
Starting point is 00:37:00 your rating goes up. You can no longer play those people. And it keeps going up until they will force you into a place where other people will have an equal chance of beating you because you're playing people with an equal rating because the rating tracks your success. So isn't that a self-limiting fact that prevents AI from running rampant in chess sites? Well, computers are way better than we are at chess. So if I wanted to do this, I would go to chess.com and I'd start an account and it would rate me at $1,300. And let's say I can't play chess at all. And I would, on the side, I'd have a computer.
Starting point is 00:37:32 And I'd say to the computer, hey, play at $1,500. And then I would do what the computer says and my rating would go up to $1,500. And then I'd tell the computer, hey, play at $1,700. And I would always be a little bit better than my opponents. And there you are eating lunch while all this is happening. Yeah. As you ascend. Okay, yeah.
Starting point is 00:37:52 And I would just march up the ranks. Okay, so Matt, you've thought way too much about cheating. I'm a little worried about you. I worry. Well, we'll come visit you in prison, okay? Okay. That would be good because I'm going to be bored. Yeah.
Starting point is 00:38:01 We'll come visit you in prison, okay? Okay. That would be good, because I'm going to be bored. Yeah. He was like, and by prison, I hope you mean Fiji, because that's where I'll be. Kind of like an AI version of Andy Dufresne. Sitting there just plotting and doing the governor's books. Although, just to be clear, Andy Dufresne from Shawshank Redemption.
Starting point is 00:38:27 The novel written by Stephen King. And the movie, which most people saw. I just want to be equally clear that I have no desire to be in a prison somewhere where the warden is a horrible sadist and all I can do is think about
Starting point is 00:38:42 artificial intelligence while I wait for the guard to beat me up. That is not a future that I really look forward to. But one way the guard doesn't beat you up, that one will be okay. It's better. That was a list of seven offenses there. All right, so Gary, did we come through for you? Yeah, absolutely. I mean, if AI, I mean, AI will be better at poker than humans. I mean, I kind of think we understand that now. So when we watch poker tournaments, are people going to care if AI is playing? So the World Chess Championship remains incredibly popular. People care how good Magnus Carlsen is.
Starting point is 00:39:23 Right. Yeah. Because they're not giving a ticker tape parade to an AI algorithm. But I will tell you, I don't get it. So my son just started playing chess and he's having a great time playing chess and he's playing chess all the time online.
Starting point is 00:39:36 And I'm like, but computers are so much better. And he says, Dad, I don't want to be the best. I just like playing chess. It's fun. And I like watching other people play chess. Right. I think, you know, watching Usain Bolt run was amazing.
Starting point is 00:39:55 Not, and it was just amazing. And there are, there are things that move faster than Usain Bolt, but watching him was just, it was like almost a privilege just to watch. You know, Matt, I have to deeply agree with you there. Not that this is the first time I have agreed with you. Because when you think about the vicarious participation in something, you want another human being to do the act
Starting point is 00:40:23 because you're a human. And my analogy here is the space program, right? It matters that a human being takes a step on the moon. Did you know that we had landers on the moon before that? But did anyone celebrate that? We knew it in the science community and the space community, but we were watching for the humans because they can come back and tell a story. You could touch them and you could, like I said, you could put them in a parade.
Starting point is 00:40:47 So I got to go with you on that, that there's real value to knowing it's one of your own, your own species in a high performance. But what about the fact that now we have computers that can experience and describe and tell that same story in a very human fashion the same way we do. I'm still not giving it a ticker tape parade, damn it. I think the thing to realize is how different computers are than we are. So when a computer plays chess, it's not playing like we do. When a computer plays poker, solves a crossword, plays bridge, they're just not solving things like we do. And I think Neil is right. That makes it harder for us to relate to them because they're aliens. And I think there's also a wider message, which I think is much more
Starting point is 00:41:46 important. And that is, there are things we're good at, and there are things machines are good at, and they're going to be different things. And we're going to be able to solve problems working together that we can't solve by ourselves. I think that's incredibly important going forward. Games are interesting. We are the world. We are the children. Let's solve the problem together. Well, I'm sorry to sound like that, but I think that's how it is. You totally sound like that. I agree, but I think that's how it is.
Starting point is 00:42:15 So what do you think about this then, in light of what you just said? The grafting of that same technology into human beings so that our intuition, our method of thinking is enhanced and augmented by the sheer computational power of an artificial intelligence. We'll just make our mistakes that much faster.
Starting point is 00:42:43 Oh my God. That is horrifying. What you just said is... our mistakes that much faster. Oh my god! That is horrifying! What you just said is... Oh my god! That is so scary what you just said! So this is sort of interesting and I think it's certainly worth trying
Starting point is 00:42:57 but if somebody said to me I have a horse and it has all these amazing properties and I have a horse and it has all these amazing properties. And I have a dolphin and it has all these other amazing properties. So let's make a horse dolphin. That might not be the right thing to do. It might be that for the horse stuff,
Starting point is 00:43:20 you want to use a horse and for the dolphin stuff, you want to use a dolphin. Well, they did that with the camel, the camel. That's a hybrid. It's the dolphin stuff, you want to use a dolphin. Well, they did that with the camel. The camel, that's a hybrid. It's an idea hybrid, right? Sometimes, yes. But sometimes you really want to, you know, we're good at what we're good at.
Starting point is 00:43:36 And I have no idea. You know, Elon Musk wants to put microprocessors in our heads. And I think I'm going to be very interested to see what happens when you put a microprocessor in someone else's head, but I don't want to be early here. You're not a first adopter. No, I'm not down for that.
Starting point is 00:43:54 Okay, Matt, your program, Dr. Phil, crossword, solver, but language is organic. It's always developing, right? So how do you program something to develop with something that incorporates slang and just goes in random different directions? But just to be clear, not everywhere does language develop. I mean, France has an official board of language that restricts the entry of some words
Starting point is 00:44:22 and protects the presence of others. So you're talking about a place where communication channels are free to utter any syllables you want to mean whatever anyone else thinks it means, right? So AI would not do this at all. So there, that's why we're better than AI, because we can make stuff up. I'm trying to find out how we can still hold on to our dignity here. Well, just think, Neil.
Starting point is 00:44:47 Words that were slang words, street words from the 50s, 60s, weren't in the lexicon maybe a few years afterwards. But now it's almost like a common part of a sentence. So our language evolves from street slang, and it becomes incorporated into the language. I see what you're saying. It elevates its way up. So in a crossword puzzle, if I write a puzzle and I have a clever word that's recently invented,
Starting point is 00:45:12 but not yet in all the formal dictionaries, but everyone conversationally knows what I'm talking about, your Dr. Phil is going to miss it. Isn't it? That is correct. And it'll be a little bit behind. There was a puzzle recently that included the field GPSs. I think it was in the Crossword Tournament this year. And GPSs wasn't GPS got to be a P, it's got to be an S, it's got to be an E and an S. So I don't know what GPS is, but I'm sticking it in because it just has to be right. Everything else works.
Starting point is 00:45:54 Everything else works. So it's also the case that Dr. Phil's, you know, one of the things that happened this year with Dr. Phil is I started working with the natural language group at Berkeley. and this year with Dr. Phil is I started working with the natural language group at Berkeley. And they brought in this, it's a machine learning natural language processing tool to do sort of question answering. That is constantly getting trained on new crosswords, new data, new books, new everything. So you do see AI evolving and you see tools like this evolving.
Starting point is 00:46:26 So it's not a static dictionary. And it's not, nothing is static. And by the way, the same way that we become exposed to colloquialisms, the AI would be in that same position too. It might be a little behind, but at some point it's going to have an exposure as well. It might be a little behind, but at some point, it's going to have an exposure as well. Yeah, and if you look at the work that Apple does with Siri, when you ask Siri a question, that question goes to Apple. And it's part of their database of uttered language. So when you use slang, off it goes.
Starting point is 00:47:05 And the first time somebody asks Siri a question that has some weird word in it, Siri's going to have no idea. But then it just keeps getting, keeps showing up. This word keeps showing up. So Siri eventually figures out what it means. And is it true that, like, well, all the search engines who use the natural language, that when they ask you back, did you mean, when you put something in, it's because they're actually learning if that's how people ask for stuff? A little bit. Mostly when they say, did you mean,
Starting point is 00:47:30 it's because they don't know. Okay. And their natural language module said, oh, he might mean this and he might mean that and I don't know which one. Okay. And I'm just going to ask him. And then they're just trying to do it.
Starting point is 00:47:42 Okay, Matt, let me ask you, because I think I heard something, what you just said in an answer. AI doesn't have the capacity to guess. It has to be accurate. So I think one of the Berkeley guys actually put this incredibly well. Machine learning systems don't know what they don't know.
Starting point is 00:48:04 So I remember, so in the Crossword Tournament, the humans solve the puzzles, and then they get passed into a room where they get scored. And I typically do some of the scoring as well as running Dr. Phil. And I remember once I got a puzzle back, and it was done incredibly quickly because you'd grade it on speed. And I got this puzzle to grade, and the guy had solved it in like a minute and a half. And then it had one corner that was just all random and wrong. And I remember grading it and thinking, this guy's an idiot. He took a minute and a half to solve the puzzle, and he surely knew that this corner was totally wrong.
Starting point is 00:48:45 And I looked at it, and it was Dr. Phil's puzzle. It was my puzzle. Oh, my gosh. I'm the idiot. But then the problem was that Dr. Phil had solved this corner completely incorrectly and had no idea it was wrong. It doesn't know it's guessing. It just says, well, this is the best answer I could come up with, so I'm going to go with it. And I don't know if it's right, but I think it's right.
Starting point is 00:49:06 It's probably right. One of my favorite clues that I've ever seen, it was four letters, and the clue was to come in second. Lose. Yeah, to lose. Lose. But that wasn't where my first urges were. Come and say, is it win, place, and show?
Starting point is 00:49:22 Is it this, this? And we don't think of the person coming in second as losing because I guess it's the we want everyone to get a trophy or a medal or something. I think in ancient Greece there was no second or third. You either won or you didn't win in the original Olympics, as I have read. You're not competitive. Second place is the first loser. I mean, that's really, that's all there is to it.
Starting point is 00:49:42 And that's how I was growing up. Tough crap, Tough crap. Tough crap. That's the, Jerry Seinfeld has a whole bit on that. You know, of all the losers, you came in first. So Matt, tell me about random numbers and their role in this. Do we have a perfect random number generator yet, or are they just good enough? We certainly have perfect random number generators.
Starting point is 00:50:08 No, I thought that was not possible. Well, cosmic ray detectors are perfect. Oh, fine, fine, okay, fine. But not in a computer. We will never have a perfect synthetic random number generator. That's what I was asking, okay. But you can come very close. So, for example, you can have a clock that measures time in billionths of a second.
Starting point is 00:50:27 And you can look at the last three digits of that clock to start your random number generator. And that's almost random. It's hard to imagine it being meaningfully correlated with anything else. Right, right. And so this matters when you're calculating probabilities in your gaming software, isn't that correct? Yes, because there are times when you need a random number generator. Going back to the example we started with, Nash equilibrium. What you want is you want to be, say, you want to run a passing play 30% of the time and a rushing play in 70% of the time,
Starting point is 00:51:02 you need to flip a coin. And you need to decide, do I go with the 30 or do I go with the 70? And you have to generate a random number. So anytime you're, you know, in a poker player, obviously you can't bluff all the time. You can't bluff none of the time. You need a random number generator. But I think we have random number generators that appear to be good enough.
Starting point is 00:51:21 Okay. So that's an update for me on the state of that effort. But guys, I think we're like actually ran out of time on this subject. Speaking of random numbers. Which is tragic. Which is tragic.
Starting point is 00:51:33 Ran out of random numbers. We just put in the last random number ever. Anyway, Matt, are you active on social media? Can you remind us? I don't understand social media. I mean, I wrote a book, Factor Man, that I try and active on social media? Can you remind us? I don't understand social media. I mean, I wrote a book, Factor Man, that I try and promote on social media.
Starting point is 00:51:49 I will certainly tweet about the fact that we did this today. And that's about as clever as I get. I don't have a Facebook. And how do we find you? Is it just Matt Ginsberg? It's mattginsberg.com. And your Twitter handle? What do you have?
Starting point is 00:52:05 It's Matt L. Ginsberg. Matt L. Ginsberg.com. And your Twitter handle? What do you have? It's MattLGinsberg. MattLGinsberg, okay. And MattGinsberg was taken. And that's all I know. All right. And so good luck at Google. Sometimes you need a little bit of that too. Thank you.
Starting point is 00:52:18 Yes, much appreciated. All right, guys. Gary, always good to have you, Chuck. And Matt, thanks for being a good sport with us. It's always fun. Literally and figuratively. And you know we'll come back to you because this is a hot topic. Cool.
Starting point is 00:52:30 All right. This has been StarTalk Sports Edition. Neil deGrasse Tyson. Keep looking. We'll see you next time. Bye. Bye. Bye.
