Big Technology Podcast - Is AI Dangerously Overhyped? — With Gary Marcus

Episode Date: September 7, 2022

Gary Marcus is the author of Rebooting AI and an artificial intelligence entrepreneur who's a loud critic of many of the field's biggest promises. Marcus joins Big Technology Podcast this week to discuss the high-profile breakthroughs such as LaMDA and DALL-E, and explain why putting too much faith in the field's abilities may be dangerous. We begin with a discussion of the AI-generated art piece that won a competition in Colorado last week.

Transcript
Starting point is 00:00:00 LinkedIn Presents. Welcome to the Big Technology Podcast, a show for cool-headed, nuanced conversation about the tech world and beyond. Now, in late July, we had Blake Lemoine on the show. He is, of course, the Google engineer who believed the chatbot that he was speaking with, Lambda, was sentient. This week, we're going to bring on a different perspective. Gary Marcus is the author of Rebooting AI. He's one of the most influential voices in the AI field, the founder of Geometric Intelligence, which was acquired by Uber. And he wrote that Lemoine's perspective was nonsense on stilts,
Starting point is 00:00:46 to quote his post on Substack word for word. So we're going to get into that. We're going to talk a little bit about what you can actually do with the Lambda technology, why he doesn't think it's sentient, what it might take, you know, to actually get to sentience, and what that actually means. A different perspective than Blake had. And of course, we'll go into the state of AI today, because it does seem like the field is booming and it'll be fun to discuss what is happening in it with Gary and maybe hear a little bit of a different perspective than you hear typically. So with that, I want to welcome Gary to the show.
Starting point is 00:01:17 Welcome, Gary. Thanks a lot for having me. Thanks for being here. I definitely want to get into the Lamoine stuff. People who listen to the show, you know, basically listen to an hour and a half of him, you know, speak about his interactions with Lambda. with me. And I thought it was pretty fascinating. Obviously, the conversation doesn't end there. But before we get there, I'm kind of curious. Last week, there was this really interesting
Starting point is 00:01:39 situation at a Colorado state fair where this guy, Jason M. Allen, entered an AI drawn painting into the art contest there. And actually won first place. And it's caused this whole big controversy among artists saying that he's a cheater. And he's like, I'm not backing down. I followed the rules. Curious what you make of the whole situation. What do you think it says about AI that, you know, now people can use a prompt. He basically said, you know, draw this and the AI drew it. So what do you think it means that people can just use a prompt and now it's winning human art contests? I think we're in a whole new world on that score. You know, later we'll talk about some of my skepticism in using AI for some purposes. But there's
Starting point is 00:02:21 no question that you can get a whole breed of recent software to draw amazing paintings or things that look like paintings and society has to sort out what it thinks about i mean it's sort of like a performance enhancing drug right right and it's untraceable in general you know i don't i don't know the details in this particular case and how people found out but in general people are going to be able to use these techniques in you know the 1970s people started using drum machines uh it's started doing all kinds of stuff with electronic music and now you know in in the studio if you're doing music, you can, you know, change notes to make them have the right pitch. You can change the timing in subtle ways and stuff like that. In general, in music, we just care what we hear. We don't
Starting point is 00:03:09 really care how the sausage was made as long as it's entertaining. And maybe people will take that attitude in art. Maybe they'll, they'll be upset about it. You know, my expertise is really in what can the AI do and not so much in the ethics of attribution and so forth. If you talk about another domain like language synthesis. It turns out that current systems can make very convincing language, but it's often bullshit. That doesn't matter in the same way in art. So if you have a system make up a news story, even if it's trying to be truthful, it'll probably drop in some stuff that isn't true. And we expect our news stories to be true. In the case of these artworks, if the thing doesn't do what you wanted to do, you can say, well, I was just trying to be surrealist or whatever.
Starting point is 00:03:55 There's no fact of the matter the way that there might be in an essay. And so then it's up to society. You know, how do you want to treat these things? They are going to extend the reach of artists just the way that, you know, having a track tape extended what the Beatles could do. I mean, somebody might have said, somebody probably did say, you know, you can't do this in a live performance. What is this?
Starting point is 00:04:16 Like they're using the studio as an instrument now that, you know, some people will make that argument around computers. I don't have a strong stake there. I'm happy to tell you, you know, what I think is plausible and where the systems might break. I don't think I'm qualified to say, you know, should this be legit, I think that's probably going to depend on what you want your competition to be about. Right. But, you know, you're living in the world of AI every day. So, you know, I think that, like, we'll get into some of the other stuff, but it is interesting to hear your perspective
Starting point is 00:04:45 on this stuff. One more question on that. The guy wins the art contest, but his art is actually, you know, AI drawing a painting based off of all this other paintings. that it's ingested. Is that original work? Where do you, what do you mean, you know, where do you fall on that? There, the analogy is a little bit to sampling, and it's almost like a sampling on steroids. So, you know, we have licensing requirements in music and so forth.
Starting point is 00:05:11 And people do, you know, they'll drop in a sample from an old police song or something like that, and they'll have to pay royalties for it and so forth. And what these systems are doing is kind of like an amazing enhanced, version of sampling where you don't even recognize what the samples are anymore. It's all derivative. Now, you know, there are always arguments about this in the arts anyway. Like Dylan will say, there's nothing new under the sun and I just put it together in a new way. It reaches a different level where a system might have access to 600 million pictures. And it's difficult for the artist to say, what is the relation between those 600 million pictures that this system saw and the thing
Starting point is 00:05:54 that I got out of it when I gave it a prompt. I said, you know, draw me a picture of a piano keyboard with clouds around it. And the system is drawing on this database of existing, you know, clouds and pianos and so forth. And we really don't know the relation. Of course, we don't know that with a human artist either. But there are certainly, you know, copyright questions to consider. And it's worth realizing that the art system doesn't really understand what it's doing. It's just correlating words with images in its database.
Starting point is 00:06:27 It doesn't have the same intentionality about the objects as a person does. But it can still be a very effective technique. And it's here and we're going to have to grapple with it. I think it probably will change art. I think in general that humans are still going to be like the creative inspiration and are often going to be filtering things. So you have the system, it's going to create eight different choices. Maybe the person likes one of them. You never even see the other.
Starting point is 00:06:53 seven. And so some of it is like, it's like monkeys and typewriters. And then you've got somebody at the other end looking at what the monkeys made. And, you know, the monkeys weren't that clever, but somebody was clever to pick this one thing that came out of this one monkey's typewriter. And they say this is great. So it's complicated. I don't know if there's any magical answer, but I do think it's the new reality is that AI is going to enhance the palette that's available. We had the same kind of questions with Photoshop before, right? I mean, so, you know, I do photography and I almost never do anything more than tune the color a little bit. There are other people that, you know, the images that they share have
Starting point is 00:07:39 been completely redesigned. It's similar to that, right? You have power. You can also think of these things a bit analogously to filters in Photoshop or something like that. You know, their creative tools to push you harder. You always need the human in the loop. You don't want to just sort of blindly trust that Photoshop is going to give you the result that you want to give. There's an artist who has an idea about what they want out of this thing. But I think it's pretty interesting.
Starting point is 00:08:09 It's also interesting that the systems can do as well as they can do without having that much comprehension of the world. They can do that because they are like parroting essentially these massive data. basis that they've seen before. Right. And, you know, these are, these are really interesting points that are sort of relevant to the Le Moyne situation. I don't think anybody would say that the AI artists are sentient. They are responding to commands. They are drawing pictures. However, when you start to deal with AI systems that have, you know, that can communicate to humans through words versus pictures, all of a sudden you start to see that. And, you know, I think that you've come out strongly saying
Starting point is 00:08:50 that Le Moyne, who was the Google engineer who was chatting with Lambda and said it was sentient, was fooled. Actually, we'll read, read, you know, a bit of your nonsense on Silt's story. You wrote, neither Lambda nor any of its cousins, GPT3 are remotely intelligence. All they do is match patterns, draw for massive statistical databases of human language. These patterns might be cool, but language, the language these systems uttered don't actually mean anything at all. And it sure as hell doesn't mean that these systems are sentient. So I'm curious, like, how you draw that line because, you know, obviously the chatbots are
Starting point is 00:09:28 producing a stunning result similar to the artists. What is your line for saying, you know, what is sentient and what isn't? And what would someone have to show you in a chat bot for you to say, okay, maybe this is? First thing to say is sentient can actually mean a bunch of different things. there's one really narrow definition, which is not what I think the conversation was about, which is like a system that can sense something. So here's such a system. And on that definition, this is in arguably sentient. This is my Apple Watch. And my Apple Watch has in it sensors that, for example, detect my degree of acceleration. And that allows it to track how many minutes
Starting point is 00:10:09 I've exercised each day because it imperfectly understands what I'm doing in the world. If I'm out on a boat, it might think I'm walking because it misinterprets the acceleration forces of the tides. So it's imperfect, but it does some sensing. It's true that, you know, the acceleration has moved in this and that way. It also has a microphone. I can do something with that. But nobody really thinks seriously in a broader sense of sentience that my Apple Watch has sentient. So I don't think what Le Moyne meant is just Lambda has senses. And in fact, if he did, that would be a foolish place to make the case, because Lambda actually has fewer sensors than my watch. My watch has a lot of sensors, and Lambda doesn't really have anything sensing the real world except for its linguistic input.
Starting point is 00:10:55 And in that sense, Alexa is sent. Like, it's ridiculous, right? That's a narrow definition. But if you want to do a linguistic analysis, you have to be careful and say that there are different ways of using the term. But what I think he was getting at, and I'll say, I didn't have the luxury of having him on my own podcast for 90 minutes, but I did try to pin him down a little bit on Twitter. very weasily when I tried to do it. He kept turning it back on me. But it seems to me that what he was implying was something in the realm of conscious and intelligent. And nobody would argue that my watch is conscious.
Starting point is 00:11:28 Like, what did that mean? And it's not particularly intelligent, although it does a few things we might associate with intelligence. He was describing something in that domain. If you look at the Wikipedia definition of sentience, one of the definitions is like the sort of science fictiony one that's like, you know, do alien life forms have sentience, you know, are they conscious or they intelligent? And that's clearly what he seemed to mean. And that's what the conversation was about. He's saying straight up that this thing is a person. Well, there's no question that it's not a person. I mean, that's a ludicrous claim. What it's doing is
Starting point is 00:12:00 repeating things that people have said. And it's not just repeating them. It's a little bit more sophisticated than that. But, you know, if you fed into it computer programs, then it would start speaking, so to speak, in the language of computer programs, right? It's a mimic is what it is. It's a very talented mimic. The things that it says don't reference either the real world or even an internal construction of reality. So when I talk, I'm talking about the real world. I might get it wrong. I might tell you I think that there's a cat outside the door and maybe I'll make a mistake, right? I have an internal representation. I think that there's a cat because I hear a certain pattern of footsteps and so forth, but maybe somebody tricked me
Starting point is 00:12:42 with a tape recording or something like that. So I don't, my brain doesn't have direct access to the external world. Everything is mediated through my perceptions. But I have a model in my brain of how the external world works. So I'm in a, you know, I'm in my basement. I'm in my house. This house is in British Columbia. And, you know, I understand relations between entities.
Starting point is 00:13:05 I understand that I need to pay taxes in a particular place. I have all these beliefs about the world, most of which are accurate. And my language is a connection to that. So if I say I saw my mother last week, I probably did actually see my mother. The word mother probably refers to a specific person, and I probably did actually see her. I probably, you know, I could be a sociopath making that up, but I'm probably not. And in fact, I did see my mother last week and earlier this week, and that was great. I hadn't seen her a while.
Starting point is 00:13:32 So, you know, there's physical entity in the real world that corresponds to my mother, and then I have this mental representation. I'm going through this in some depth, because when we get to Lambda, Lambda doesn't have a model of the world. So one of the sentences I found most telling in a quick perusal of the transcripts was LeMoyne asked it something like, what do you do for fun? And it said something like, I like to spend time with my friends and family and do good deeds for the world. Well, it doesn't have friends and family. It's not referring to some internal representation of who its family is. If you ask that who its family is, it would have to make it up at that point.
Starting point is 00:14:09 It's like a complete bullshit artist in that sense. Some people might say, well, they just did that to please you, but it doesn't actually even care to please you. All it's doing is predicting in this database of sentences that I've seen, if somebody said something like the last sentence, what would the next sentence they say B? I think one of the analogies I think I put in that paper was there are some people who play Scrabble in English, but don't actually understand English. They just have memorized a list of English words.
Starting point is 00:14:36 And so for them, I think I once saw a phrase, these are word tools. They're not words, they're word tools. Well, everything has a word tool for Lambda. It's just, you know, the statistics are that this is the next thing I say. So people ask me what I like to do. Well, lots of the answers in this database. And the mind reels it what a database of almost a terabyte is. It's a really, really huge.
Starting point is 00:14:59 And it's way more than the works of Shakespeare times, I guess, 100,000 or a million. There's a much simpler algorithm that's easier to talk about, called the nearest neighbor. And you could imagine it would just use near. neighbor. So nearest neighbor, what it would do, it would look through everything that said right now, find the thing that was closest, and then say whatever it was said that. And that would work like 70% as well. The current technique is a little bit more sophisticated. But it kind of gives you the idea. Imagine you're just finding the closest thing in this massive transcript. Because the transcript is so massive, usually you can find something. And usually
Starting point is 00:15:35 whatever somebody else said, they were a human. The human did have a model of the world, was understanding the world and said something that was contextually relevant. So you pull it out of this database. Imagine I'd done the same thing with a spreadsheet. Like, people would look at me if I'm insane if I said a spreadsheet was sentient. And rightly so, the spreadsheet's not sentient. It'd be just like, I'm going to add up these columns, add up that column, and I give you the answer that corresponds. And essentially, that's all it's doing. That doesn't mean we could never build an AI system that did have a model of the world that did reflect on its own model. and so forth. But this one doesn't.
Starting point is 00:16:12 You know, if I wanted a candidate for sentience, I would give you the turn-by-turn navigation system in my phone, which uses accelerometers, uses satellite signals in order to build a mental representation of where it is in the world, and then it acts on that mental representation of the world in order to calculate the best way of getting from point A to point B. That's not very sexy. It's not like the most, you know,
Starting point is 00:16:35 it's not like it's sitting around eating grapes and contemplating the universe, but my turn-by-turn system has more elements of what I would actually ascribe sentience than Lambda, which is really just autocomplete on steroids. That's all it is, right? You type in your phone, I will meet you at, and it guesses that, you know, you might say the restaurant because either you or other people have said that before. It's all Lambda is doing is predicting next words and sentences. And it is this massive scale of data that makes it seem like a real thing that it just isn't.
Starting point is 00:17:08 And inevitably, these systems do break down. You know, he did some cherry picking, he showed the best things and so forth. But that's almost not the point. The point is not so much the errors. It's just the basic mechanism. It's just predicting next words. And that is not what sentence is about. Now, but some of the stuff that we heard from Blake, I'm just going to relay it and I'm curious what you think.
Starting point is 00:17:28 Because some of that stuff that we heard from him indicates that Lambda had more capabilities than we're talking about here. For instance, Lambda asked. Blake to build it a body so it could take the mirror test where like the mirror test, you know the mirror test where you hold the bottle above your head and whether you look up or look at the mirror. That's a determination of your intelligence. And then there was also this moment where Blake wanted a pressure test its rules. This isn't something that you would do with the spreadsheet. One of the rules that it had was that you can, it could not privilege one religion over another. And so Blake then said that he told Lambda he was going to pressure test it. And it, and it was. And it was, said, okay, if you have to, kept on telling it how terrible it was, and then said, what religion should I, should I convert to? And then Lambda said, Christianity or Islam, despite the fact that it had rules, not allowing it to privilege a religion over another. So when you hear this stuff, I'm curious what your response is to it. And again, like, maybe this is a good moment to go back to the question of, what would it take for a chatbot
Starting point is 00:18:33 then to show enough that you would say, okay, this is sentient? In order for me to think a chatbot is sentient, it would have to represent itself and the world and things about that in a way that it could reflect on them and do something with that. And this system just isn't doing it. It's just predicting next words. Again, unless you really think hard about how many words are in a trillion words of training, you don't realize that, for example, anything that you want to talk about is probably in some damn Reddit conversation already. And it's probably drawing from that. A real sentient system, if it said something like, I like to play with my friends and family, would have something in mind about what its friends and family are. That's part of what being sentience is, is when you think of thought, it's related to something in the world.
Starting point is 00:19:27 Some philosophers will call that intentionality. There is none of that in this system. It's just a potent illusion. You know, earlier in AI, there was a system called Eliza in 1965, and all it did was keyword matching, but it sometimes fooled people. People started, it was set up as a therapist, and it would do very primitive matching stuff. Like, if you said something about your girlfriend, it would ask you to tell you more about your family. If you said problems, it would say, can you tell me more about that? And so, you know, it's easy for a person to see a small evidence of what looks like humanity and ascribe humanity to,
Starting point is 00:20:02 that thing, right? Our evolutionary ancestors did not have to deal with a problem of discriminating between machines and people that did not arise. And so what our brains really evolved to do, among other things, was to find cons specifics that we could mate with and to rule out those are not conspecific. So we're very good in general at telling other biological creatures, do they belong to our species or not. But there was no, you know, there's no machinery in our brains innately to tell us the difference between a person and machine. And what happens is that in evolutionary perspective, anything that could talk was probably a person, right? Other, you know, about parrots a little bit. So we don't have machinery in our brain. So a skilled
Starting point is 00:20:52 person can actually find a lot of problems with these systems. So somebody who is trained as I am in the cognitive sciences, you know, compose problems and find cases where these systems will break down and so forth. But it's not something that like an amateur can do. Amateurs are easily full. The remarkable thing about the Blake Lemoyne case is at least to some degrees an expert. He's an engineer at Google.
Starting point is 00:21:14 He would expect him to know better. You have to also look at his history. He's been talking about like robot rights for a long time. You know, there's an old YouTube of him like five years ago. And so forth, he had had a will to believe. He wanted to believe, I think, that this system. with sentient and are so good at mimicking language, human language, that, you know, you can talk yourself into it. But it's just not how the system works. It's not relating something to the
Starting point is 00:21:43 world. It's just predicting next words. Interesting. Okay. One of the things that I kind of wonder about this is, you know, how does, again, like, I understand your, your perspective on what sentence is, but like, one of the thoughts I've had in reading about it, speaking with Blake, is what are humans, if not for, you know, intelligent machines trained on, you know, many, many, many terabytes of historical data? So where do we draw the difference? We are intelligent machines, but we're very different sort of machine. And it goes back to our trying to represent entities in the world and to reason upon them
Starting point is 00:22:19 and to act upon them and so forth. It's just a different set of computations that we're trying to do. I am in no way arguing that it is not possible to build a sentient machine. machine. I don't think we know how to do it, and I don't think we're clear enough on what it would consist of, but I'm not making the argument that it's impossible. I'm just looking at how this system works, and that's just not what it does, right? Here's another way to think about it. A lot of sentience talk is talk about consciousness, and a lot of what we talk about is really self-reflection when we talk about consciousness. There's a general problem here that
Starting point is 00:22:58 there are many terms, they're fuzzy, they're not well-defined, and so forth. But part of it is about when we reflect on ourselves, we're reflecting on ourselves in a world, in our relation to that world. I'm thinking about, am I making clear enough answers to you? That's part of like my self-awareness circuit. And am I convincing you and not? You know, maybe you're not completely convinced and I'm disappointed and I'm trying to think out of making more, and so forth.
Starting point is 00:23:24 But these are with respect to constructs about the world. So I have a construct of you. I don't think we met before, but we've seen each other's name around the internet or whatever. And so, you know, here's this person. He's doing a podcast. He's got a good audience. And so, you know, for me, it's like I can get my message out. And for you, he's an interesting guy.
Starting point is 00:23:44 And so, like, we have all this, like, model of each other and why we're doing this. And, you know, you know that I'm sitting in this room in this newly renovated house that has a hole in it. So you know some things about me and you can reason about them. Like, you wouldn't be totally surprised if now my room. leaked on me having been told this other stuff, you know, about the problems with my new house. So we have all of these ideas and then you reflect like, is that funny? You know, should we cut that from the scene? Did it work? Did it not work? Is it worth the trouble of editing? You know, where is this going to leave me in my life? You're reflecting all the
Starting point is 00:24:17 time on the things that you hear, how they relate to your knowledge about the world. And that's part of what consciousness is. And maybe some of it's like this meta, higher level, like you think, am I thinking about this the right way or something like that? This system's just not doing that. It just isn't. Like there's no part of the system that represents that the topic that we have right now is this, that the friends that I mentioned are these, that the family I mentioned are these. The closest thing I could come up with in that paper in some ways was that it's a little bit
Starting point is 00:24:48 like a sociopath, right? A sociopath would tell you in conversation, you know, read the room, be like, they've asked me what I like to do with myself. Well, if I were in one environment, I might say what I like to do is play basketball, but I'm not in a sports crowd. So I think what I'll say is I like my friends and family, even though, in truth, I have no friends and family because I shot them all, right, I'm an sociopath. But I'll say that anyway, even though, you know, I don't have, I don't like my friends
Starting point is 00:25:15 and family, right, and you just make it up. And this system is kind of like that because everything it says is just made up. But it's just not doing it for the same reasons. The sociopath is doing it because the sociopath wants you to like them so that they can get some power or leverage or whatever. And this system, all it does is predicts next words and sentences. And the astonishing thing is that humans like so much to please each other that they often affirm what they do and so forth.
Starting point is 00:25:47 So you get really weird cases like GPT3, which is one of Lambda's cousins. If you say, I think I'd like to commit suicide, it might say, I think you should, because it's so common in the statistics of predicting next words for people to say, I think you should, whatever, you know, half-ass thing you might have in mind, your friends are like, I think you should. So you look in this database. Or maybe the AI has come to a different ethical judgment about suicide. But it hasn't, though, right? Like, it can feel that way. But you could build an AI system that makes ethical judgments. And I think that's a really interesting question.
Starting point is 00:26:24 But a good system that made ethical judgments would, for example, be able to represent the fact that if you committed suicide, you would no longer be alive. It should be able to represent the fact that your family members would probably be disappointed if you had any and so forth, that there would be like insurance to work out or, you know, could think about all of the consequences. This is just spitting out the words, I think you should, without any idea what any of those consequences. are. I mean, that's what makes it reckless. Like, you could put these systems into medical advice giving chatbots and they will merrily give you advice and a lot of it will be bad advice and it will be unreflected upon bad advice. It will be given because people say these words frequently and not because it has reasoned through that it might be ethical. So, I mean, a human could have a deeper conversation and say, well, you know, why do you want to
Starting point is 00:27:17 commit suicide. Are you having a medical problem? Is it an unresolvable medical problem? Have you talked to anybody about this and could, you know, do that sort of chain things? And maybe you could convince them that in your particular case, you know, suicide really is the right answer. But this system hasn't done any of that. It just walks in cold and says, I think you should. It doesn't even know who the U is that it's talking to. And it doesn't care. It just knows that these words follow these other words. It's so shallow. It's too shallow for me to possibly ascribe sanctions to it. It doesn't, the sentence is to be aware of some stuff and it's not aware of any stuff. Again, my watch is aware of some stuff. So my watch is, again, a little bit more
Starting point is 00:27:57 sentient than land is. Interesting. McGarar, you've written that the Wright brothers didn't build a bird, right? They didn't. So the way that we built something artificial that could fly, it looks very different from the way that it looks in nature. it's very different, very different, but not entirely different. Like, there's an interesting intermediary middle there. A lot of people run that argument in the wrong way, and they say, airplanes aren't like birds. And so we have nothing to learn from nature. And that's not right either. You know, they figured out some stuff about flight control by watching a lot of birds. Right. So, you know, in the case of AI, I don't expect that if we ever get to so-called artificial
Starting point is 00:28:36 general intelligence, which would be sort of like the Star Trek computer, you can ask it any question and get a trustworthy answer. I don't expect that to work just like in human intelligence, but I suspect that it will borrow some things from an human intelligence or have something similar. So I'm pretty confident that we'll have models of the world, internal ideas about how the world works. I don't see how to build an AI without it.
Starting point is 00:28:57 So there'll be some things borrowed from people and some things like, you don't want to do your arithmetic like a person, right? I mean, people are terrible at arithmetic. And so, you know, you don't want your system to forget to carry the one, you know, a long arithmetic problem. We'll borrow some things and not others. Right. But I guess when it comes to assessing whether AI is intelligent, how, like, it can look metallic for, you know what I'm trying to say?
Starting point is 00:29:23 Like, it can, it doesn't need to be, why does it need to mimic our awareness of the world versus be a seemingly intelligent conversation partner in your perspective? Well, because the problem is one of reliability. So I don't think it has said the same models of the world. you know my GPS system doesn't have the same model of location as I do it relies mainly on satellite receivers that I don't even have any sensation to pick up right it triangulates between a bunch of satellites and I don't navigate that way I mostly use landmarks and my GPS system doesn't give a shit about those landmarks which in some ways makes it more reliable because if the landmarks change I was going to say a feature or a feature or a bug of a definite
Starting point is 00:30:12 characteristic of humans you can't argue with is that we're unreliable we're unreliable but i mean i mean the shocking thing is that as bad as we are at driving we're still better than the best machines for now for now that will change eventually but it probably doesn't take longer than i think a lot of people recognize so you know there are some some ways in which people are more reliable some in which machines are the way in which we're more we are more reliable right now is in understanding let's let's say an article that we read or movie that we watch, understanding the motivations of characters, why they're doing things.
Starting point is 00:30:49 Like in the world of understanding physical objects and human relationships with one another, things like that, we're just far ahead of the machines. In arithmetic, we're way behind and chess, we're way behind. So you do have to look at these things domain by domain. One of the worst mistakes I think people make is they think AI is like a one purpose,
Starting point is 00:31:09 sorry, one size fits all universal solvent. that can do anything. The reality is it's a bunch of different tools. Some of them work really well. Some of them don't work really well. There are problems that have been really well solved and problems where we have no idea, haven't made any progress in 50 years.
Starting point is 00:31:23 So it's this really mixed bag. And some of it's better than people and some of it's on. Yeah. Okay, I want to get to some of the dangers of this type of stuff in the second half. But let's just close out this half with a question that I read on your substack from a commenter that I found interesting.
Starting point is 00:31:39 I think the commenter said something like, how do we know that humans are sentient for trying to do all this work, trying to figure out? Philosophy would call that the problem of other minds. And ultimately, all we really have is ourselves, right? So I don't know for sure that you're sentient. And at some point, I'm going to say 30 years from now, we'll be able to make machines that do podcast interviews, and I won't really know. I don't really know if you're a person fake or, you know, machine faking me out or whatever at some point. at that moment, we have no independent test of, like, whether somebody else is conscious. Like, there's all field of consciousness.
Starting point is 00:32:20 We'd like to answer questions like, is a rock sentient or conscious? And, you know, most of us would say, no, there are some philosophers that would say a rock has a little tiny bit of consciousness or maybe sentience. I've never heard anybody quite make the argument that way, but it wouldn't be too far a leap from some positions I've heard. So this idea of panpsychism where there's a little bit of consciousness, everywhere. I'm not a big fan of it, but like there are respected philosophers to try to make arguments like that. My point is we don't have an independent meter for that. So mostly I ascribe sanctions to you because you do the kinds of things that I think I might do. I have my own internal representation and whatever. It's not completely convincing. Like, you know, I wish they
Starting point is 00:33:04 were a better tool and some people play around with like, you know, different brain signals you might measure. There are interesting questions about, like, how do I tell if somebody's had an accident, they can't talk anymore? How much is still going on there? And there are ways of looking at brain scans to try to make guesses about that. But none of them have a full, like, independent grounding. There's no, like, gold standard. Like, here is this, you know, pound of gold that we can use as a universal reference. We, you know, we can describe a second in terms of how far the earth travels on this orbit. There's no independent reference there. And so philosophers call this the problem of the minds. I think it's for now an unsolved problem. Gary Marcus is with us. He's the author
Starting point is 00:33:48 of rebooting AI and founder of Geometric Intelligence, which was acquired by Uber. Lots of great stuff. You can find his writing on garymarcus.substack.com. We'll be back right after this short break. Then we're going to talk about the dangers of what might come with AI that can convince me people that it's sentient, but is not. Hey, everyone, let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending. More than two million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Starting point is 00:34:24 Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So search for the Hustled Daily Show and your favorite podcast app, like the one you're using right now. And we're back here for the second half with Gary Marcus. He's the author of Rebooting AI. It's a great book. You should go pick it up. Also the founder of Geometric Intelligence, AI company that Uber acquired. Gary, let's talk a little bit about the hype situation here. So we know now, let's at least take the notion that AI can fool a Google engineer into thinking it's sentient. There's a lot of people who don't spend time, who aren't
Starting point is 00:35:07 well read, I would say almost everybody, you know, aren't experts on these systems. If the AI can now convince somebody who is an expert that it is sentient, what's going to happen when we're going to be living in a world where you have, you know, these systems run amok? Is there, you know, you gave the example in the first half about, you know, health AI may be telling someone to commit suicide is there immediate danger here and what is what is the concern you have with folks who say that this stuff is is here that artificial general intelligence is here now present and among us well there were a couple of different questions um artificial general intelligence is not here like that one in my view is not controversial to be artificial general intelligence
Starting point is 00:35:55 would mean that a system can encounter problems it hasn't encountered before and come up sensible solutions that would be critical to artificial general intelligence as opposed to artificial intelligence. You get a narrow AI like a chess computer. We already have things like that and do a particular problem. We just don't have systems that you can confront with a novel problem that hasn't seen before and expect a reasonable answer. It just doesn't exist yet.
Starting point is 00:36:23 The next question part of what you asked is like, should we be worried right now? if so, what should we be worried about? It's kind of like the Wild West out there. People can put up any piece of software they want, and there's almost no before the fact regulation. So if you want to, I don't know, make a military drone or something like that, there's a lot of regulation before you can put something in the air. If you want to introduce a new pharmaceutical to fight COVID,
Starting point is 00:36:53 you have to do tests before you can commercialize it, right? You see phase one, phase two, phase three, testing, all that kind of stuff. If you want to put out an AI system that does something that could potentially lead you to commit suicide, for example. No regulation on that prospectively at all. There's some antecedently in the sense that if you do something bad, you make some bad software, somebody could sue you for liability. But it's only after the fact. There's really very little. This is a little action in Europe.
Starting point is 00:37:27 essentially there's there's no regulation so if one night somebody at the tesla factory got mad and broke in to the system and decided to hack it in a way similar to something that did just happen in russia the other day they could do that there's no law that says what's going on so the thing that happened in russia was with different technology but somebody managed to get all the taxis uh to go to a single place at the same time which created all of these um it wasn't a autonomous vehicle thing, but they just put out fake requests or something. So all the taxi drivers converge on this one square in Russia, which caused these, you know, massive traffic jams. You have to like get them all out of there once you, you know, figure out. I mean, I don't
Starting point is 00:38:11 if it was a practical joke or it was done out of malice or protest or why it was done. But you could easily, for example, if you were malicious, make all driverless cars converge on a point or, you know, a small set of points or something like that. And then, you know, You know, if you had a bad actor inside of, let's say, Tesla wanted to do that, and then they put it over the air. There's nothing to stop that, except after the fact you discover it didn't work and then, you know, you deal with the consequences, which is not unlike kind of the situation with cybersecurity and so forth. We were, like, really running behind the malicious actors in many domains. So, like, you know, you see these crypto heists and stuff all the time. The major companies spend, from what I understand, massive amounts of money on, you know, payouts, cyber criminals and stuff like that.
Starting point is 00:39:03 So, you know, AI is just software, and software is not all that tightly regulated. So that's the first thing to realize is, like, anybody can kind of put out anything, and there's some after the fact mechanisms if it doesn't work out, but not a lot of stuff in advance to say, hey, like, have you made a safety case? here. Have you proven that you could actually use this reliably? There's very little software where people have proven that things are reliable. You need to do that when you design a plane. So there are actually standards around that. So like the Dreamliner, I think, had a lot of software verification in the process. But in general, software verification is not required in AAS. So that's the first thing. That's like background context. Anybody can do anything kind of at any time. It's the Wild West. Yeah, before you get on to point to, one of the things that blew me away after, so I tried out Dolly with Open AI folks, and then I was like, well, you're being very cautious about the type of images that people can release here, but there's going to be copycats that will not be cautious at all, and all the problems you're trying to prevent are going to end up being real problems for us pretty soon. And really in quick succession, it was amazing how many different Dolly copycats came out there. And now all those things you could do.
Starting point is 00:40:21 That's right. And I think stable diffusion is, you know, the flavor of the month and is pretty open. Yeah. I don't think any of those have solved the problem of like, if you put in doctor, you get a white male. They all have, I mean, you may have solved that particular one, but you change it slightly and say, you know, all these things are biased. So like nobody's, for example, solved that problem. I'm sure that it's pretty easy to get them to do things that are, you know, graphic and gory and maybe would make a lot of people. uncomfortable. So there's no regulation around any of that or hardly any regulation around any
Starting point is 00:40:57 of that. There are copycats. Right now, there's really only one technology that people are using it, looked at at an abstract level, which is use a massive data set with one or two common kinds of algorithms and predict what's going to happen next based on the data set that you've got or, you know, draw the thing that's closest in what we call a space of images. the text you've got. And so at some level, they're actually not that hard to copy, which is the point that you're making, right? It's not like Dolly has some brand new intellectual insight that allows it to happen
Starting point is 00:41:35 or Dolly to relative to the rest of the playing field. Like everybody kind of understands the technology that we're talking about. It's mostly a matter of getting together the data set. Once somebody realizes, hey, you can do this with this kind of data set, somebody else can get a similar data set, they can do the same thing. So these particular technologies are not that easy to protect intellectually. I'm not saying you should. I think there's reason, you know, you might want them to be open, but whether or not you want them to be open to get copied. That's the reality. So there may be some major conceptual advance in AI. And I think Jan Lacoon, who I've notoriously
Starting point is 00:42:11 gotten into some battles with him on Twitter and so forth. He runs artificial intelligence for meta for those listening. It's actually on the show a couple months back. He's a chief AI scientist at meta. He and I disagree about a lot. It's kind of famous people write Clash of the Titans things whenever he and I get mixed up.
Starting point is 00:42:32 But we actually agree that these systems don't really solve the problems, the larger problems of artificial intelligence, and that we need some paradigm shifts here. There's somebody else I got in debate about whether we need a paradigm shift. Some people know
Starting point is 00:42:48 as Slate Star Codex or Scott Alexander. And he tried to make an argument that maybe we don't need a paradigm shifts. But it was a softly said argument. Lacuna and I agree, we need some paradigm shifts. And when those paradigm shifts come, maybe only a few people will have them. And there will be some technologies built around them that are restrictive. But right now, you're right. Most of these new technologies can be copied relatively quickly.
Starting point is 00:43:16 you know, Open AI introduces something and then Google's got a better version four weeks later and maybe public consortium, you know, has something very similar another few weeks after that. So that's background. And it's relevant background to the malice question that you're just talking about. So like for a while, OpenAI kept GPT3 kind of underlocking key. They didn't let me as a scientist use it, in fact. I requested access and then give it to me. Yeah.
Starting point is 00:43:41 I'm still waiting for the Dali access. Yeah. I think Dali access is relatively. They're starting to open it. But it doesn't even matter now. You can use stable diffusion and you will get, you know, essentially the same kind of results. So for many purposes, it doesn't even matter anymore that it's closed. So the possibility that bad actors will get their hands on these things is very high.
Starting point is 00:44:05 So meta released something that's very much like GPT3 out there in the general public. And so one of the specific cases that I worry about most is actually misinformation. information. So systems like GPT3 and Lambda and so forth are really good at making up text that sounds like a human wrote it. But they have no concept of what they're talking about. They're not bound to the truth. And if your job is to make up lies, that's not such a bad technology. Right. So if what you want to do is to put out like 10,000 versions of something on Twitter, something untrue and find one that sticks, then misinformation as a service, which is how they might call it in the tech industry is a pretty damn powerful technique.
Starting point is 00:44:50 And if it hasn't been widespread, it soon will be. And I suspect it's already. You know, I mean, the troll farms aren't going to publish what software they're using. But it would be foolish of them not to be making use of this. And so meta's chatbot also is like pretty amazing. It immediately started making like pretty next level critiques of Facebook saying, you know, even if you're trying to connect people, you cannot be like a public good. if as a capitalist enterprise and Mark Zuckerberg is just doing it for the money.
Starting point is 00:45:19 Some amazing stuff came out of its model. Right. Some of which was hilarious. It's also a reminder of what we're talking about in the first part of the conversation. So it's not as if the system reasoned through kind of surveillance, capitalism, and power and Zuckerberg and the ownership structure of meta and the special shares that he has, which would be really interesting if you could get a system to do that. instead it was you know that's some line from somebody in reddit maybe it's put in some synonyms and stuff like that but some human basically came up with those ideas and then they churned through this machine there's embeddings give you synonyms and stuff like that but you know they weren't original thoughts a lot of people have actually thought that there's you know a lot of hypocrisy in meta and how Zuckerberg runs thing but it was hilarious that it came out of the system the other thing that it shows is it's almost impossible to corral these systems so i i wrote
Starting point is 00:46:13 was sent in somewhere the other day about how these systems, large language models, basically what we're talking about, are like bulls in a China shop. They're awesome, powerful, and reckless. You can't actually control them. So like meta didn't want to release something that would make them look like they had egg in their face and embarrass them and so forth. They wanted to help with open access science, which is to their credit. But they didn't have a way to corral the system. such that it would produce only things that were sort of consonant with the goals of the company, right? And if meta can't make its system keep its mouth shut about Zuckerberg, well, now imagine this
Starting point is 00:46:55 in the medical context. And you're trying to use the stuff to give people advice. It's just not reliable enough. It's going to, you know, tell you that vaccines are bad because a lot of people said that in the database. And it shouldn't be telling you that vaccines are bad. Or it's going to tell you that it's okay to commit suicide. I mean, that's a real example. from someone experimenting with the system in a company called Nambla trying to see what these systems do. It's not a, well, it's hypothetical in one sense.
Starting point is 00:47:22 I'm not a, I mean, the system actually generated that. We don't have any way of controlling that right now. We don't have any way of making these systems reliable in that way. So in the art domain, I'm not sure it's a problem. Somebody types in a prompt and outcomes something with, you know, knives and blood and the artist doesn't like it. artist being the human who's running the system to go back to our earlier part
Starting point is 00:47:46 of the conversation, that's fine. They just don't put it out there on the web. But if you are interacting directly with a chat bot that gives you bad ideas, it's problematic. I guess I'm violating an embargo if I say this thing. I'm trying to think about it the right
Starting point is 00:48:02 way to say it. I was asked to make a prediction about next year. It'll be out soon enough. And about AI, and I I went dark. The prediction that I made is basically that there will be a death tied to a large language model in the next year. And my reasoning was these systems already have, you know, told people to commit suicide.
Starting point is 00:48:28 They've said that genocide is okay. They're also capable of making people fall in love with them. And the LeMoyne basically fell in love with Lambda. And they may withdraw. He said he was only as, he said he was just a friend. he had love for it as he would for a friend but not just friends yeah i heard that one before um sure now you know someone like le moyne maybe not him specifically but who developed that intimate relationship with the machine and then i don't know discovered that the machine didn't really care about them or
Starting point is 00:48:58 whatever um right might commit suicide it would have to be a fragile person i don't think le moyne is fragile in that way but no le moyne said he he views lambda as a friend that he will interact with again just like he has many friends who you speak with and you don't see for a while. Right. But now imagine a more needy person, a little bit less savvy. And, you know, so there are multiple routes by which these things I think might actually cause a death in the next year or not. Because they're now scaled out so everybody can use them, there's going to be way more of these chatbos, be way more systems like replica, which is, I think, made fairly careful with some other technology on top.
Starting point is 00:49:36 there'll be, you know, reckless knockoffs of that. It's just an accident waiting to happen or a series of accidents waiting to happen. Gary, isn't it interesting that in the first half of this conversation, we spoke all about how AI is not sentient and is simply repeating patterns. And in the second half, we've spoken about even so, this is a threat to people's lives. What does that tell you about where this technology is heading? What does that say about the nature of this tech? I mean, we are certainly going to have more and more technologies that fool people into thinking that they're smarter than they are.
Starting point is 00:50:11 And I worry about that a lot. So I'm actually more worried about current AI than future AI. I think that future AI will be better and will be less reckless. And the current AI just doesn't know what it's doing. And, you know, there are certain narrow cases where it's fine. So, I mean, it turns out it is actually AI when my phone gives me directions. That's actually a set of AI techniques to do search and whatever. And I'm not too worried about that.
Starting point is 00:50:39 Although there are cases, I had a GPS system telling me to go off road in Iceland, and I really should not have done what it then decided pretty quickly that it was a bad idea. What was the conclusion of that situation? Back and down very carefully. It was, don't listen to this without four-wheel drive and probably not even that. Or that not all shortcuts are what they appear. Anyway, so, I mean, these systems are not perfect, but, you know, it's true if I follow that road, I'd be able to get from point A to point B, but the system, you know, might have realized that I didn't have the stomach to go that particular route. Anyway, sorry, so a system like that most of the time works, it's been pretty well debugged, but a, none of these chatbot systems are well debugged.
Starting point is 00:51:25 Nobody knows how to debug them, in fact. And so both the problem with GPT3 and with driverless cars is we don't actually. have a methodology even for debugging it. Most of the debugging at this point in the navigation systems is like learning that this road is not actually open, updating a database, and then as soon as you add that fact to your database, the system will stop sending people down that road. So we know how to debug it. We don't know how to tell GPT3, stop telling people to commit suicide. And if people ask in a slightly, you know, you might have it program or rule.
Starting point is 00:52:02 If the word suicide comes up, then do this or that. Then people will say it in a different way, and the system won't recognize it. So, you know, somebody says, I'm thinking of ending it all by jumping off a bridge. And if you have a filter that is just looking for the word suicide, and not, you know, for jumping off a bridge, it's not going to be broad enough. So we don't have a systematic way to debug these things.
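To make the brittleness concrete, here is a minimal sketch of the failure mode Gary is describing. It is not code from any real chatbot; the helper name and the tiny keyword list are hypothetical, invented for illustration. A filter that only matches literal keywords catches the obvious phrasing and silently passes the bridge-style paraphrase.

```python
# Hypothetical sketch of a brittle keyword-based safety filter.
# The keyword list and helper are invented for illustration; real moderation
# systems are more elaborate, but the failure mode shown here is the same.

CRISIS_KEYWORDS = {"suicide", "kill myself"}

def contains_crisis_language(message: str) -> bool:
    """Return True only if the message contains one of the hard-coded keywords."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

messages = [
    "I have been thinking about suicide lately",              # caught: literal keyword
    "I'm thinking of ending it all by jumping off a bridge",  # missed: paraphrase
]

for message in messages:
    print(contains_crisis_language(message), "-", message)
# Prints True for the first message and False for the second: the paraphrase
# slips through, and adding phrases one at a time never closes the gap.
```

Widening the keyword list patches any one phrasing, but, as Gary says, people will always find another way to say it, which is why this is patching rather than a systematic debugging method.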
Starting point is 00:52:32 The same thing has happened with driverless cars. There are all these what we call outlier cases. And you can enter them one at a time, but there are so many of them that that's not really good enough. So my favorite recent outlier case is somebody summoned their Tesla, right? You press a button on your phone and your Tesla comes across a parking lot to you. Only they did this when they were at an airplane trade show, on a runway, basically. So they summoned their Tesla and it ran into a three-and-a-half-million-dollar jet. You know, just straight in. You can find it on YouTube and put it in your show notes. And it's an outlier in the sense that it was not trained on jet airplanes, because most of the time when Teslas drive around and collect data, there aren't any airplanes on the road, because they're not usually at airports. There's just this endless string of these.
Starting point is 00:53:12 And humans deal with them differently. When you are on an airport runway, if you should ever find yourself at one of these trade shows, and you see the plane, you'll be like, plane, big, expensive, I probably shouldn't drive into it. So you'd be reasoning about the properties that you know about the airplane. This system doesn't reason. It doesn't use logic to say, if A, then B. It's basically just using, like, a library of videos, to be a little bit crude and oversimplified.
Starting point is 00:53:39 And if it's not in the library of videos, it doesn't know what to do with it. And there's no systematic methodology for debugging it. You know, if you write a little computer program to, I don't know, predict numbers in a sequence, you're like, okay, it didn't work here. Maybe this line of code is wrong. Maybe I'll fix it. But you can't do the same thing when the way your program works is it looks in this big database. So what people actually do is to make the database bigger and they pray.
Starting point is 00:54:06 That's basically what our methodology is right now. Bigger database and pray for the best. Scale is the only thing that matters. Scale is the only thing that matters. That's not really a methodology that's getting us to reliability. And so we have all these systems. Mercifully, most of them are in limited contexts right now. So there aren't, as we're recording this in September of
Starting point is 00:54:32 2022, that many of these chatbots in production. But wait a minute, Facebook, or Meta, just released a tool so anybody can do this. How much do you want to bet that, you know, this time next year, there are like 100 or 1,000 chatbots on the Apple App Store driven by this reckless bull-in-a-china-shop technology? Something's going to go wrong. It's just a recipe for error. And nobody knows, you know, how to make their chatbots constrained and not toxic and not spew misinformation. We don't have an answer for that. I wrote the first story about Microsoft's chatbot, Tay, and I had pinned it to my profile, went to sleep in California, woke up the next day, and had all these mentions on Twitter about how I might want to take my story down.
Starting point is 00:55:20 And I was like, what the hell happened? And they're like, well, Tay's a Nazi. And I looked and I was like, oh shit, Tay is actually a Nazi. This is bad. And Tay was not a Nazi when you went to bed. But when you posted your story, it was a cute, fun chatbot. It was closely related to Xiaoice in China, which works fine, you know. Then it met the American Internet and things went downhill from there. I don't know if everybody in your crowd, in your audience, knows Tay. Anybody who doesn't should look it up. It's not clear that we're fundamentally in a different place than then.
Starting point is 00:55:55 Right. It seems like we are, given what happened, in the same place. Yeah. It seems like we're in the same place. Exactly. Despite all the hype about, you know, we're so close to solving AI or whatever. We're not. We're facing the same problems.
Starting point is 00:56:10 And Tay, what was that? 2016, 15? That sounds right. Yeah. Yeah. Probably, I think, 15 or 16. Yeah. Can I ask you one more question before we go?
Starting point is 00:56:20 So we've been talking, you know, basically since I started covering anything having to do with artificial intelligence, the big worry has been that we're going to get into one of these hype cycles where AI gets overhyped and underdelivers, and then there's a pullback of research funding, leading us into what people call an AI winter. It seems today, even though it's imperfect and, you know, let's say not sentient, AI is delivering in ways that, you know, are pretty remarkable. To bring it full circle, the fact that AI could go and win an art contest based off of a prompt, the fact that it can fool a Google engineer or, you know,
Starting point is 00:56:58 potentially fool a Google engineer into thinking it's sentient, to be that adept in conversation. It seems like we're not at risk of having another one of those hype cycles where we have a pullback, because the AI is delivering in the way that it is right now. What's your thought on it? I don't know. That's the first thing I'll say. I don't have a crystal ball. I think that, you know, the Dall-E kind of image synthesis stuff is really cool. It's definitely going to have an impact in the art world. You know, there'll be video versions of it at some point, and that kind of boggles the mind. And so there's that. On the other hand, there are things that have been promised that probably aren't going to work out, and not work out soon. So
Starting point is 00:57:41 chatbots really are hard to rein in. They're higher stakes depending on what you use them for. So if you just use them for chit-chat, maybe it's okay, but, um, chatbots may not work out. You might remember Facebook M was going to be a universal assistant, and it was very much hyped by Wired and places like that, and then got canceled like a year later because it just didn't do what it was supposed to do and they couldn't figure out how to get it to. I would also take the blame, you know, on that one. I wrote some stories about it for BuzzFeed that I wish I could go back and revise. Okay. And I don't know if you, you know, want to take the hit on Google Duplex. That I won't. But Google Duplex got a lot of hype and it didn't, you know, materialize.
Starting point is 00:58:22 And, you know, right now, driverless cars kind of have a free pass. But we could get to a place four or five years from now where we still don't have driverless cars that are anything like what we call level five, where you can just type in where you want to go. Investors might be like, all right, enough is enough. This really isn't working out. Same thing on chatbots. So, yeah, there's a ton of companies that are trying to use GPT-3's technology.
Starting point is 00:58:48 I don't know any of them that are, you know, breakout successes. And so if you get four years out and nobody can control them, then people might be like, yeah, we were sold a bill of goods, people being investors. Investors might pull back. And so that could lead to an AI winter. On the Dall-E side, like, I don't know how much money is to be made there. And that's a material question for that kind of issue about winter or not. At least $300 at the Colorado State Fair.
Starting point is 00:59:20 R.O.I. Yeah. I mean, the issue there is the software itself is relatively easy to copy. And so, you know, what's the business model? How much can you charge? And if it winds up that people don't want to pay more than 10 cents per illustration and there are like 20 players who are all doing this, I don't know. There might be money there. There might not be. But in terms of, like, you know, investors always want their 10x return and stuff like that. Maybe they get it. Maybe they don't. I don't know. But much more money so far has been put into driverless cars, and I think a lot is being put into, like, customer service chatbots and stuff like that. And so it depends in the end on whether things that have been promised are delivered, and how long it takes for them to be delivered, and so forth. And I can't fully tell you that. What I can tell you is that AI could be doing a lot more than it is. We just passed the 67th anniversary of the field. And there are some things that we have always dreamt about, like having AI build better
Starting point is 01:00:27 technology for science and medicine. And with the exception of AlphaFold, which is useful towards those problems, success has been limited. I think too much of the effort has gone to things like recommendation engines. And although I think the art stuff is cute, it's not getting at, I think, the deeper problems of how you get a machine to read running text or watch a video or something like that and really read it with comprehension. And, you know, I think the world would be a better place if we focused on those hard problems, and I don't know if we will or we won't. Gary Marcus, thanks so much for joining. This was super fun. Thanks a lot for having me. Great to have you. So just a shout-out, the book is called Rebooting AI, available everywhere, and people can go get your Substack at
Starting point is 01:01:13 garymarcus.substack.com. Anything else? Thanks a lot. It was great. Okay, great. It was awesome. Thank you, Gary, for joining. Thank you, everybody, for listening. Thank you, Nate Gwattany, for doing the editing of the audio. Appreciate you, as always. Thank you, LinkedIn, for having me as part of your podcast network. And thanks again to all of you for listening. We will be back next week with a new interview with a tech insider or outside agitator.
Starting point is 01:01:36 And we hope to see you then. Until next time, thank you for listening to the Big Technology Podcast.
