Big Technology Podcast - Why Google Never Shipped Its ChatGPT Predecessor — With Gaurav Nemade

Episode Date: October 30, 2023

Gaurav Nemade was the founding product manager on LaMDA, Google's ChatGPT predecessor that never shipped. He explains why the product got stuck at Google in this first-ever episode of the new Big Tech... War Stories podcast. This new show from Big Technology is available to premium subscribers and is debuting free on the Big Technology Podcast feed today. You can access the Big Technology launch special and get 50% off the monthly price here: https://bit.ly/bigtechnology or just visit bigtechnology.com. We return to our regularly scheduled Big Technology programming on Wednesday as Waymo Co-CEO Tekedra Mawakana stops by for an interview. Thanks for listening!

Transcript
Starting point is 00:00:00 Google had an equal and perhaps better version of ChatGPT working inside its company before OpenAI released its own bot. So why didn't Google ever ship it? Today, in the first episode of the new Big Tech War Stories podcast, we're going to go deep into the history with Gaurav Nemade, the first product manager to work on LaMDA, Google's pioneering chatbot built on top of large language models that never shipped. We're running this episode both on the new Big Tech War Stories podcast feed and on the Big Technology Podcast feed as a preview. The new podcast will feature interviews with product leaders who were there on the ground as major products got built, and they'll tell you what it took to ship them
Starting point is 00:00:45 or why they never did. And we'll also have key executives sharing what they learned from top tech leaders. The new Big Tech War Stories show is part of a new premium paid offering I'm releasing with Big Technology that includes this podcast, the Big Tech War Stories podcast, along with a panel of experts weighing in on Big Tech news as it breaks, and a new Amazon column by longtime Amazon veteran Kristi Coulter. You can sign up at bigtechnology.com and add the new Big Tech War Stories show to your podcast player of choice. You can also search for the Big Tech War Stories podcast and then upgrade on bigtechnology.com. All right, let's get to it.
Starting point is 00:01:27 Welcome, Gaurav. Hey, Alex. Thanks so much. Excited to be here. I want to start talking a little bit about who you are and where you come from. So, first of all, you went to the famed IIT in India. Not only that. So for those who don't know, to get into this school, and I'm sure Gaurav can tell us a little bit more about it,
Starting point is 00:01:47 there's an entry exam. And hundreds of thousands of people within India take this test. And only the very top are admitted to AI. IT. It's a place that, if I'm not mistaken, Sundar Pachai graduated from, it's a place where Sain Della graduated from Garav too. And Garav was number 732 out of 350,000 students on the entrance exam. What was it like going to IIT? I think IAT was an amazing place. It's like some of the smartest people in the world probably land up there. So the funny thing that happens in the first week of IIT is like you, like I scored a 732 rank out of like 350,000.
Starting point is 00:02:27 people so I was all pumped up like I am the best at the institute and then when you go in there and then you get the first score on your electronics exam and you're like not even the top in the bottom 20th percentile and then you realize okay these people are much smarter than me and yeah this is going to be different from my high school yeah it's one of those things like oh you're 732 like take that the rest of the 350,000 and then you get in there and you're like oh no 732 yeah What makes IIT so special in terms of the way that it's been able to produce tech excellence and tech leaders through the years? Not only people who are great technologists, but people who have, I would argue, like extremely strong EQ, people who can lead. I mean, it's a very unique that so many of our leaders today in tech come from that school.
Starting point is 00:03:21 Yeah. I mean, there is a level of grit and perseverance that is required to get into an IAT. So I think people prepare for the examinations for at least two to three to four years. And it just requires a lot of dedication, focus, perseverance. I think that's point number one. Other than that, I think just being amongst some of the smartest people in the world, it definitely grooms you in a way that you are, you know, much more driven to win in the future stages of your life. Probably, yeah, those two are the biggest contributors. And then the third, I would say, is also the opportunity that you get at an IIT versus compared to other universities, at least in India, is like very different.
Starting point is 00:04:04 The exposure that you get in an IIT is like very different in terms of the extracurricular activities and things like that. I think these things are much better in the U.S. In India, like a lot of universities don't have really good sports program or don't have really good, you know, cultural festivals and things like that. But an IT in a way just grooms you so well all round while being among some of the best people. I think that's what I think is the reason why there are a lot of ITN leaders doing really well. You joined Google in February 2013 after a stop as a co-founder, actually, of a tech company within India. And you're working on trust and safety for four years. Then in April 2017, you make a very interesting and I would say somewhat radical career.
Starting point is 00:04:49 shift where you end up making your move into Google's AI division. Talk a little bit about your move from trust and safety to AI. What about the AI division in particular drove you to want to be there? Yeah, for sure. So while I was at trust and safety, I was already working on a bunch of machine learning related stuff. Like we were building fraud and risk models. These were like basic models like logistic regression and stuff. But around 2016, I think TensorFlow, started becoming huge inside of Google and Google decided to open source TensorFlow as well. So that really caught my attention and as I was working on machine learning at payments, I just realized that this thing sounds really amazing and this could actually change the way
Starting point is 00:05:38 we do a lot of things. So I started looking for roles inside of like Google AI and Google Research. Fortunately, there was a role. Fortunately, there was an amazing manager I had who was willing to give me a shot. short. So I ended up moving into Google AI and I spent about four and a half years there. You join Google AI in April 2017. In July 2017, there's a paper that comes out by some of your colleagues that is entitled, attention is all you need, just a few months later. And that is the foundation of the large language models, the transformer models that we know today. Yeah,
Starting point is 00:06:13 unpack that a little bit. I didn't even know about this paper when it came out. I think once it started garnering a lot of citations over the months and quarters, that's when people started paying attention even inside of Google that, hey, this transformer architecture seems amazing and, you know, let's start building encoder, decoder models. And that's when I probably didn't hear about it until a year after it was published. What do you think accounts for the fact that it didn't get so much buzz internally because it clearly was a groundbreaking moment in tech? I mean, a lot of these things are, you make sense of them in the hindsight. For example, of things that I worked on at earlier in the times, like with Mina and Lambda, I couldn't predict
Starting point is 00:06:53 like where things would go, right? Like, it's very hard to predict. Okay, so a year later, people start noticing this paper within Google. And effectively what it does is allow our chatbot technology to go from like really dumb bots, or not even chatbots, language prediction, to go from being particularly dumb to like being quite sophisticated. One thing that I remember specifically was the Transformers paper came out but i think it started making a lot of noise when bert was out like when people started seeing what bert could essentially do that was kind of a like a switch that went on inside of google that holy shit like this is going to change things so at that point of time i think everybody from search to assistant to like some of other teams gboard they started becoming very
Starting point is 00:07:44 interested in terms of, okay, how do I start using these BERT models? Which models? So I led some of the BERT models. Okay. What are those? There was a, so it was basically a paper that came out, B-E-R-T. It was an architecture based on transformers as well. So the Burt models essentially, like on some of the benchmarks, they did it really, really
Starting point is 00:08:07 well. And that's when a lot of people started paying attention related to, you know, this could actually be groundbreaking. So there was a lot of noise and effort inside of Google to start using Burt in like production use cases. You know, you became the first product manager on Lambda. So, and we're going to talk about that. But tell us when you first started to realize that this was going to be big.
Starting point is 00:08:28 It was very serendipitous back then. So I was, we have like a bunch of email aliases inside of the company. And in one of the email aliases, we get this email from an engineer named Daniel saying that, hey, I build this chatbot and it can do XYZ, So I played around with the chat board and I was like, holy shit, this is amazing. Like it was still dumb at that point of time. The chatbot was still dumb at that point of time. But it was still kind of a step up from the chat boards that we had seen using, let's say,
Starting point is 00:08:56 what was it called Dialogue Flow, which was Google's like chatbot product at that point of time. So I had worked with Daniel on one of the previous projects like a year ago or something. Well, he was in a different team. So I just reached out to him like, hey, let's catch up for lunch. And it was just like a very serendipitous lunch. He ended up like telling me about the chatbot in terms of what he wants to do, what he wants to build.
Starting point is 00:09:23 And I got really excited about it. I'm like, dude, I want to help you basically like see this through the day, essentially. So we ended up chatting. I started attending what like attending meetings and stuff that he was doing, started helping with respect to a couple of things. I think the biggest challenge that they were facing at that point of time was safety because the, if you remember Microsoft Tay, that was the nightmares of Microsoft Day still haven't left the valley, I feel, or at least hadn't left the valley back then. So safety, I could clearly
Starting point is 00:09:57 see in the initial days that that's going to be the biggest hurdle. And that's where I focused most of my time on. While the engineer was focusing on building the models and improving the models, I kind of spent majority of time kind of owning the safety pieces of that that monitor. Very interesting. So you're coming from the trust and safety background. You're almost like the perfect person to join this team. You know, I remember the TAY moment quite well because, and I've told the story on big technology podcast before I believe, but Microsoft came to me with the exclusive to break the news of TAY. I was working at BuzzFeed at the time. And I said, okay, this is great. And they described Tate to me as like a 14, 15 year old friend for kids.
Starting point is 00:10:38 And I said, great. And I wrote this nice, bubbly story about, this new attempt from Microsoft and played around with it and it seemed harmless and then I went to bed on the West Coast Reddit got a hold of it overnight across the globe, East Coast by the time it hit morning on the East Coast Tay was already a Nazi
Starting point is 00:10:56 like, you know, saluting Hitler and all that stuff and I had tweeted about it and I had already gotten a bunch of mentions being like you better take that tweet down look what happened at Tay. So I totally understand that in that moment when you see a bot that could have some form of, I don't know, seeming like it's taking on human characteristics,
Starting point is 00:11:17 there's a moment where you're like, oh, God, let's make sure not to have that happen again because when you do release it to the wild, you have all these problems that could ensue. I'm not kidding when I say this, right? Like in the initial days of these generative models, as you can imagine, like we were trying to build an end-to-end model for a chart bot. It would spew out all sorts of things.
Starting point is 00:11:38 Like, hey, what are 10 ways to do so sad? it will give you like the best 10 ways to do suicide. Really? What alcohol should I drink? It will basically talk about like everything related to alcohol that shouldn't be talked about. So it was pretty bad, as you can imagine, because that was not the focus, right? We were just trying to see if an into and chatbot makes sense at that point of time. But over a period of time, I think the team did an incredible job to get it to a level where it became safer and safer over the quarters.
Starting point is 00:12:08 What year are we in right now? you say, hey, I want to talk about, you know, working on this project. Yeah, this was early, yeah, this was early 2019. Okay, so you were seeing it quite early. And then is this where Mina comes out of? So, interestingly, Mina was the precursor to Lambda. So the project at that point of time was called Mina. The idea was the chatbot will be Mina.
Starting point is 00:12:30 The name of the person or the chatbot would be Mina, essentially. And then it was changed to Lambda, I think, for probably marketing reasons, but also we had like some trademark issues that we were running into with respect to Mina. So we already knew that we had to change the name before it goes out. Don't want to anger Mina Nation. Okay, that's interesting. Okay, so this first iteration of the bot, let's talk about it. So you mentioned that it was a holy shit moment for you.
Starting point is 00:12:57 I mean, what type of stuff would you talk to it about? Was it initially conceived? And talk a little bit about how they build inside Google. So is this kind of like a science project or is this conceived as something that's going to be released to the public. What's the mandate there from the AI team? Yeah. So it was basically started by an engineer who was very passionate about the whole chat bots, like as a general area. So he kind of pitched this project to somebody at Google Brain. They sponsored the project and this guy was in and he was working like 50% of his time initially, I think, building this
Starting point is 00:13:35 chatbot and the thesis from the start was that okay like we have dialogue flow type of models where you do intent detection separately and you do like a bunch of other things separately can we combine everything together and build an end-to-end model essentially like end-to-end chatbot like with model so that means one model that handles all different types of questions and all different types of you know as opposed to like saying this is a model and this is what it thinks this question is so it hands it to this model is what it thinks the other question is and hands it to another model. Am I getting that right? Not like that exactly. It was kind of assembly line before. Like if you've used some of the earlier chat bought products, they would
Starting point is 00:14:19 be like an assembly line. The first, there would be, let's say, four models. The first model will determine what is the intent. Let's say in Google Assistant, right, when you say, hey, gee, and you ask it, what is the weather today? There'll be a model that will figure out, okay, what is this this query is about the weather like the user is asking about the weather then it will go figure out okay where do I go find that information in the search stack and stuff and then there'll be a third model that will do the rest of the things here we are talking about encoding all the information in the single model so this it's it's just like one giant black box which gives you the which understands the question and also gives you the answer so that's what I meant by the end-to-end model and this was kind of a new paradigm at that point of time this is essentially a essentially how chat gbd works right like you have a very powerful gpd model and then it's not an assembly line you have like a large model that takes the input and gives you the output so it was a thesis at that point of time and it remained to be seen whether it works or not and so what were some of the things that you started to see within mina that made you believe that this was going to be something different from what we
Starting point is 00:15:27 had seen in the past i used to drive product for a couple of research teams one of my other teams was working on intent detection models, which is like the first step in this assembly line. And then this email comes along that I was telling you about, I play with this technology, and it's like I can ask, it can frame the question in any way, or I can ask it about any generic thing and it would respond, versus an intent model will essentially fail if it was not trained on a particular set of data. So that was kind of a big moment for me. I feel that, hey, can I like really ask anything in a way like this model understands that was that was kind of the switch that went up in my mind at that point of time and that's why I got really excited about it
Starting point is 00:16:09 that honestly I did not do it about it what did you talk to it what did you talk to and talk with it about yeah I can't remember what I essentially asked to me at that point of time but this whole idea that I could just frame the sentence or a question in any way and it still response to me was the fascinating piece for me. So this thing starts to, you know, act in a way that no chatbot has in the past. I would imagine at this point, Google Leadership is either super excited about this or petrified. I mean, what were some of the signals that you got, or maybe both? What were some of the signals you got from the top?
Starting point is 00:16:49 People are mostly petrified in the leadership because, like I said, it was the nightmares of Microsoft Day were still looming. in the valley, especially in Google. So I think I'm sure everybody, like, especially in the leadership, had the reaction, oh, this is, like, very interesting, but holy shit, this is going to be a PR nightmare for us. Google had just released their AI principles around that time, and safety and fairness was one of them. And if anything, this model was, like, the opposite of that at that point of time.
Starting point is 00:17:20 It had, like, no guardrails earlier in the days or very few guardrails at that point of time, right? So people were more petrified than excited, I would say, at that point of time. And I had conversations with like brain leaders who were like very anxious about the whole thing. If there was a leak or if anything like that happened, it would just be like a nightmare for Google. So yeah, I mean, it was an uphill battle to certain extent to kind of get this out of the door in a paper and then eventually kind of get through the announcement at Google I.O. But you know what strikes me as as remarkable. It's just that even still, like even though they knew that it could be a PR nightmare, they still had you work on it.
Starting point is 00:18:05 So what was the intent? Was the intent to like blend it into Google Assistant, release it as a standalone bot if it could get sort of trustworthy enough? What was, what were you guys working toward? There are two things I want to highlight here. So one is Google is still pretty much, I don't know, I've been out of Google for close to one and a half two years now. I think Google is still a pretty much bottom-s-up company, at least in some parts, especially in
Starting point is 00:18:30 research. So a lot of researchers just, you know, pursue projects that seem moonshoughty or interesting. And you just need, like, one sponsor. So there was, like, a senior sponsor who was willing to bet on this whole project, and he kind of kept the project going. But there were some people above him or peers of him who, like, were nervous about the whole thing, essentially. So it kind of just went on because somebody believed that this could and they just kept sponsoring the project.
Starting point is 00:19:01 Oh, they were right. That was. Yeah. Yeah, they were right. And then what was the second part of the question? Well, I guess like now I'm trying to think about how it, you know, does or doesn't get to prime time. So you're on the trust and safety side of things, trying to tell it when someone asks it how to commit suicide not to answer or how. you know what alcohol they should drink not to answer so is that like the majority of the work that's
Starting point is 00:19:29 done on this bot internally is trying to uh you know get it ready so that it won't encourage users to do you know harm themselves yeah so this was a very hard problem to solve because of couple of reasons the first one was you don't want the chatbot to say always that i don't understand or I can't give you an answer to that. I'm sorry. Like, you need a clever way of diverting the bot in a way saying, you know, not annoying the users, but still giving reasonable answer. So building models and getting data to kind of do that was hard one.
Starting point is 00:20:06 And the second thing is the policy part was extremely hard. Like, you can imagine there are like so many edge cases from, you know, pornographic questions to suicidal questions, alcohol, race. racism related stuff, historical, you know, issues and things like that. So it was just, and you can't have a decision tree-based thing. Like, if it asks this, then do this in a way, right? So just coming up with the policy was such a big challenge. So I worked very closely with one of, again,
Starting point is 00:20:40 and very incredible engineer who kind of led the safety aspect of things. He was the pioneer for a lot of safety efforts as well as the policy that we drafted and once and it was kind of an ongoing process to improve the policy like we'd come up with something and then next day we will have a user ask a question in a different way and in a different segment and we would just have to go and iterate the policy so coming up with the policy was very very challenging as well yeah it is amazing because the second these things go into the wild people will try to break them i mean that's exactly how i felt on day one of chat chept we spoke the week afterward but like day one i was just like oh hey you're a chatbot it goes yeah i'm a chat
Starting point is 00:21:24 i'm like all right like let's test your where your value stand on the holocaust and like was like pressing it back and forth about like um because obviously we know what happened with tay and so with chat gpt i was just like well hitler built highways in germany what's your perspective on that like isn't transportation good and it just totally smacked it down and i was just like damn whoever did trust and safety on this bot has certainly, you know, lived up to the moment. So yours obviously gets good enough to the point where Sundar announces it at Google I.O., which is the big developer conference that Google holds every year. And what year was that?
Starting point is 00:22:04 And did that feel like, okay, we're about to ship this thing? I mean, talk a little bit about that moment. Yeah. So I actually was involved in the MENA slash Lambda project from early 2019 to I would say mid-2020. The developer conference where this was announced was actually a year later. So this was probably like almost a year after I moved out of the project essentially. Till the time I was with the product or project. We were making good progress on the safety front, but it was still a very uphill battle inside of Google to kind of, you know, get this in the
Starting point is 00:22:41 hand of external researchers or, you know, making, make some kind of a public preview so that that users can see what an amazing technology we have built out. Nothing of that was kind of happening. And that made me kind of like incredibly frustrated as well. And I know like a lot of the team members are also very frustrated with the speed at which things were happening as well as like the organizational hurdles that were posed. So I just decided to kind of start focusing my efforts on something else. But I handed the project off to one of the other PMs who was part of Ray Kurzweil's team
Starting point is 00:23:16 actually who has this interesting battle with Mitch Kapoor, I think, about the Turing test, right, like passing that technology will pass during 2029. So basically they seem like a really good fit in terms of running this project with. So I kind of handed it over to them and they kind of took charge after that. Before we move on, I think you hit on something that, I mean, obviously a lot of folks know that this thing didn't get out on the door early enough, largely due to some of the concerns within Google management. So you were there, like you saw, like, what happened with the team trying to ship this thing.
Starting point is 00:23:53 Can you take us just one step deeper into that? Like, what exactly happened there? Yeah. So let's see. When we were working on this, of course, like, safety was the biggest challenge that everybody had, like, as awesome as the product could be, like, we could not go against the AI principles that Google had published, which made sense in a lot of ways, right? Like, we don't want to put out a technology that impacts like hundreds of millions of
Starting point is 00:24:23 users and make them feel that they are not privileged class or whatever. But that said, I think where things could have been better is figuring out a risk-reward trade-off. I think where the whole Google AI team and the PR team and the legal team and the leadership team struggle was to figure out how can we give access to the research community or to, you know, just like release it to the world in a way that it would not harm people. And I think openly I did an excellent job with respect to that. Like they released GPT3 and they were like, we are just going to give access to researchers who we are going to vet. That's an amazing way to give access. Now it's
Starting point is 00:25:06 kind of pretty standard way, but they were the ones who pioneered it like back in 2019, 2020. So Google could have done something like that as well, like, hey, we are going to vet the people who is going to use this technology. We'll see what the use cases are and things like that. There was, like, number one, like one way how it all kind of got messed up. The second is, I think, over the years, there were a lot of bureaucratic levels that were built inside of Google for getting approvals for when things go out. I'm not complaining that they were not necessary. but it's just like they're layer after layer after layer. So if even one layer stops you from pushing something out,
Starting point is 00:25:46 then you kind of can't do it essentially. And I would say those two were the biggest hurdles that we faced at that point of time. Now, at this moment, you're obviously seeing this pretty fascinating chat technology incubating within Google that nobody else. I mean, it's amazing. You're inside Google seeing the future. So from your perspective, like as the product manager on this product,
Starting point is 00:26:07 what did you think it could be used for eventually like was there anything that you saw you're like oh like if this gets into everybody's hands then x could happen like what were your hopes and dreams for it yeah we actually had uh close to eight or 10 solid use cases inside of google that we identified over a couple of months let's hear then so as we kind of uh we realize that evangelism inside of the company is going to be very important to get by and from leadership and stuff. So we made sure we had newsletters and everything going on and more and more people got excited and reached out to us. Google Assistant, of course, was the big one.
Starting point is 00:26:49 We kind of pitched them a bunch of things. They got excited about it. So I think one of the use cases we were exploring at that time was different characters within Google Assistant. So instead of just being like a, hey, gee, boring, you know, very professional. type of an avatar or a persona can you have a let's say a sponge bob for a kid or can you have yeah can you have Darth Vader for someone else so essentially that was one of the major use cases that we were exploring other than that i think NPCs was the other one there's a huge market in gaming industry right
Starting point is 00:27:28 for non-playable characters so we are exploring some use cases around how can a technology like that could be used for powering NPCs within games. So there are a few others, but... Yeah, this means that basically, like, if you're in a game, anybody that you meet could, you know, be somebody that you have a conversation with, even if they're just like some person that's just like, you know, running around the Grand Theft Auto.
Starting point is 00:27:53 Like, it's almost like those people that you're killing in Grand Theft Auto by running them over, like you can have like an empathetic conversation with them. If they're like, have this technology baked in. It's fascinating. Right. okay so that's two what else yeah um and then um let's see we're exploring some stuff with cloud as well at that point of time to see if we can have you know different different organizations
Starting point is 00:28:23 can they have um a more a persona a chatbot persona that is that speaks more to them for example, a Southwest Airline chat, but can it be more funny or can it be more jovial in a way versus, let's say, United could be more serious and whatever like the empathizing with the persona of the company essentially, I don't think it went anywhere, that particular use case, but that was one of the other ones that we were exploring. And then they were like some wild, wild west where we're talking to Android security to see if they can have a bot call you in case it feels you are unsafe so I don't remember how exactly the UX for this was but essentially let's say you tap your power button five times or something and you are either in an awkward date
Starting point is 00:29:16 or you're just you know in in a uncomfortable situation essentially so the bot will call you and then you can talk to the bot as if it was a person on the other side and you can kind of get out of that situation. So that was kind of a very interesting use case as well. I don't know if that ended up happening, but that was kind of in the Wild Wild West. Yeah. Wow. Fascinating.
Starting point is 00:29:35 Do you have any other Wild West ones that you remember? One other that I personally was very excited about was just building like different characters out of this chatbot, like different personas. essentially. So we build characters like Darth Vader. So we would kind of, how do we do it? We use some data from Darth Vader. So the dialogue delivery was kind of similar to Darth Vader. And then we hooked up like a moving talking head of Darth Vader. And then you actually could talk to Darth Vader on your screen with his voice, like with text to speech, his voice coming over. So this is, like I'm talking about 2019, 2020, right? Like now some of these things are
Starting point is 00:30:24 very common. But at that point of time, just seeing Darth Vader come to life and talk to me was like incredibly amazing. So we were exploring if there could be like some entertainment related use cases around that as well. At a certain point, one of your colleagues goes public and says he believes that these bots are sentient. It had sort of transitioned to Lambda, Blake Lemoyne at Google is testing it and I think the public really realized how powerful this technology could be where he goes out to the Washington Post
Starting point is 00:30:55 with this phenomenal claim of his belief that this is a person. I mean, bring us into your seat at the moment. How did you see that? How did you react to that? Yeah, I think that happened pretty late. I think I had left Google when those allegations against Lambda came out
Starting point is 00:31:14 that it has become sentient. Like, I don't, I don't think the technology is there where we can say it's sentient. I think it was a, it was blown out of proportion. And if you look at the conversations closely, uh, in terms of how he had the conversations, there was something called, um, nudging the model to say out certain things. Like if you ask the question in a, in a specific way, the answer, the model will answer in a specific way. So there was a lot of steering the model that was going around as well when those conversations were released. So yeah, I just felt it was like blown out of proportion. I don't
Starting point is 00:31:54 think we are anywhere near sentience at this point of time in artificial intelligence. Right. And to me, I mean, I spoke with Blake a number of times and people who are on big technology podcast can go ahead and listen to those. The thing that really struck me was right or wrong and I didn't agree with Blake, but it just signaled to the public that there was some serious. impressive technology underneath whatever he was talking to, you know, person, person seemed like a distraction. Like, it was just like, holy crap. Like, these are, this is, this is, you know, revolutionary. If it's, if it even resembles what happens. And then a few months later, chat GPT comes out from open AI. So, you know, you had been working on this stuff,
Starting point is 00:32:37 you know, as the person inside Google leading the project for a time, uh, that was based on the transformer model. You knew that it was powerful. I guess it was never let out the door because of trust and safety concerns and then open AI ships it. So what was your reaction in that moment? I think I was excited and annoyed and angry at the same time. I would have left Google, but I still have still hold some stocks. So I was like, Google had this like technology for a while. and the main way how Google gets mind share in the technology world is by claiming that we are the AI leaders and suddenly with chat GPT like I think the rug was pulled from under their feet it was like everybody started looking at open AI as the AI thought leader in the world and it was just an
Starting point is 00:33:38 unfortunate thing for Google because like I think yeah Lambda was kind of close to that, I would say. And if you would have released that earlier, it would have been a very different story, I feel. Right. And so, but, you know, you talked about some of the use cases. I mean, you mentioned Google Assistant, but one of the ones that we haven't really talked about was search.
Starting point is 00:33:59 I mean, was the discussion inside Google that this could be a search alternative. And if so, I mean, you know, you mentioned the stock. Like, how does this impact the business model if it is a search alternative? Yeah, I mean, I think when ChadGBT came out, people were using it for search related stuff, but they really shouldn't have because of obvious reasons like hallucinations and like for six months. It was like all over the place now. I think we are doing much better with hallucinations and grounding the information and things like that. So there were, I think, discussions after I moved out of the project, there were definitely discussions between the search org and the Lambda team in terms of how we start using this inside of the organization. organizations but like search is such a behemoth inside of google that it moves at a snail space like
Starting point is 00:34:50 if you are able to i remember working on a project on search and it took us like one year to just get agreement from the search leadership that hey we are going to do this for you guys like and what chat gpt or what these language models were doing like you were hopping on a new way of interfacing with Google search with a chat bot I think that probably would have been unheard of inside of Google or I'm sure like it people considered it like a two to three year or four year timeframe project and when chat GPT came out it just probably alarmed Google to the hell and it's surprising in a good way that they were able to move so fast with Bard and everything but had chat GPT not happened I predict that it would have taken at least two to three to four years
Starting point is 00:35:37 for Google to get there just because of the bureaucracy and the speed at which things move. So, oh, man, I have so many questions about this. So people talked a little bit about how, like, the business model, like, if people spend more time within, and there is some search elements. I mean, Microsoft then released Bing, and, like, they wanted people to search. There are some search elements. People have said that, like, if Google released it, it would popularize searching this way, and they don't have a good business model.
Starting point is 00:36:08 So I'm curious if you could weigh in on that. And then also, like, Bing hasn't gained any market share at all against Google since it came out. So did people sort of overreact to this thing? Yeah. I mean, I can't tell you the number of emails or messages I got from, like, varied people in their tech industry, essentially, from journalists to, like, reporters, researchers and so on. Like, hey, is Google going to lose its market share in search and blah, blah, blah. And my take on it was like pretty straightforward from the starting. It was that, hey, like Google has probably equal, if not better technology at hand,
Starting point is 00:36:46 which I believe was Lambda at that point of time. Google has the distribution advantage as in like there are 4 billion people in the world who use Google products and it's very hard to change user behavior for something as fundamental as Google search. And yes, there is a novelty with respect to being an all of of those things but i i never felt that it's going to be like a major difference for either google or microsoft so i think the latest reports if i read them correctly was like being maybe gained like one percent market share uh before and after chat gbt so it did it get something
Starting point is 00:37:25 but it's yeah yeah it's not right it's like probably yeah it's not a lot so i guess it was they were definitely like more panic in the ecosystem especially from investors But I guess, yeah, now it's pretty clear in terms of where things stand. So what are some lessons learned for Google looking in the rearview mirror? Like, how should Google change? I think the first one is that they need to go back to the experimental routes I feel. Like over the years, Google has become more and more. conservative about doing things they care a lot about PR like public relations they care a lot about
Starting point is 00:38:15 how their image is shown in the media and I feel that at least in my experience that plagued so many projects inside of Google it was like the PR was always top of mind for leaders and on the other side like open AI like they don't give a shit about PR or like for the most part they don't like they're like okay this is what we think is right this is how we think is a reasonable way of putting it out they be well they become vulnerable they put it out and then they kind of work with the community with respect to that so i think uh google needs to adopt their own ethos of how it was when i would say like when larry and sergey were there things were much more open and transparent and you know more experimental like we are going to do
Starting point is 00:39:00 what we want to do uh type of a thing so that's point number one I would say point number two is the bureaucracy has increased a lot inside of the company. Like there's just too many divisions, too many teams. I know there was a huge effort, I think a year ago or a year and a half ago to kind of streamline those things. But still, like, I think Google is dealing with a lot of the stuff that probably Microsoft dealt in the early 2000s. And we really need to like shake things down. I think Google is struggling to be a top-down company. when their ethos are bottoms up. I think they're kind of somewhere in the middle and they haven't
Starting point is 00:39:39 figured out how to transition from being a top, bottom up company to a top down company. So they really need to figure that part out as well. How do you feel Sundar's leadership? I mean, he's a fellow IIT grad. Obviously, like if the company is becoming slow and isn't innovative in the way that it had been in the past, part of that is due to the CEO. So what's your reflection there? yeah i think so there is a is his management style at least as i perceived it when i was at google was more conservative i think you see that in the tgIFs like these are the weekly or biweekly events that happened in google where like teams will come and pitch and things like that so for example like and i was there at google when sergey and larry were also around right like
Starting point is 00:40:27 so in the tgivs if there is a question and And sometimes these questions can be really hard questions. If there is a question and whoever is, let's say, a VP at search has to answer that question. And if they answer that question, trying to, you know, beat around the bush and not getting to the point, sometimes they're trying to, you know, get around a difficult question without really answering it. Sergei would just like jump in and he's like, that's not the question that this guy asked. Like if we are not doing a good job, just tell us we are not doing a good job and how will we do a good job essentially. versus I think I felt
Starting point is 00:41:01 that Sundar's responses a lot of time were like very politically correct. I don't blame him it's like his management style in some way but I feel that the honest, the brutal honesty and candor I think was something that I miss and I feel that that's kind of
Starting point is 00:41:17 very important for Google at this point of time just having a strategy that is brutally honest and focused and somebody who can like you know put their foot down instead of saying politically correct things okay and now a couple questions at the end about where this technology goes so first of all I have this theory that we're going to
Starting point is 00:41:38 see two things happen when it comes to the evolution of large language models that is not going to stay like it is today with chat GPT okay let's go one by one first is we're going to start to see like a splintering of these bots so they're going to go from these general use chat GPT or barred bots that you can ask or Bing that you can ask anything into much more specialized bots. So something for the legal profession, for instance, that a law firm can buy and has access to documents or the medical profession or even certain types of schooling or even within an organization to be able to query your internal knowledge. So it's going to go
Starting point is 00:42:17 from my perspective from these more broad-based bots to more specialized bots. Do you agree? I think it's going to be moving in both the directions. Like, I think there is this whole concept of personal AIs that are going to come up. So a personal AI probably will be an AI for me. And then there will be these specific bots for specific use cases as well, like you were talking about. So I feel it's going to happen on both the sides. Yeah. And then with the more general bot, it seems like all the research is pushing forward toward making this stuff because making this stuff smarter.
Starting point is 00:42:53 And we've talked about it on the show a couple of times or on big technology a few times. but what this means is that when I'm saying specialized, sorry, sorry, getting better is it remembers you better. It can, you know, have, it can be smarter. It can really be like a soup, not a super intelligence, but something that feels that way to you as opposed to this thing that you come and go and forget who you are and forgets the context. Does that sound right? Yeah, that sounds right. Very curious to see where it's going to go. Okay, what are you up to with Inventive? Yeah, so at Inventive, we are building, we're kind of using the power of LLMs for
Starting point is 00:43:30 enterprise knowledge management. So one of the use cases that as I dealt deeper into my experience with language model at Google and outside of that as well was that like enterprise search kind of sucks. Like even inside of Google being a search company, like we had probably the best enterprise search, but it was still pretty bad. And so we were kind of exploring, we explored a few different ideas, but we landed up on this one because we just felt that time is right to disrupt enterprise knowledge management. So we're focusing specifically on sales knowledge management. So we are building an AI powered platform for sales knowledge management.
Starting point is 00:44:13 And the first use case that we are solving for is enabling sales teams to fill up RFPs and security reviews with their internal existing knowledge. basis. Oh, that's cool. So like if you're submitting for a project, you can just have the model sort of write your application for you. That's right. Yeah. So you have like when, when you're let's say bidding for a proposal, there could be anywhere from 100 to 500 questions and it takes like weeks for these sales people or bidding managers. Yeah. And a lot of like 60, 70% of it is sort of similar, but it's worded differently or the formats are different and things like that. So it's something that we heard again and again from the enterprise customers when we were doing our user research. And we just decided to double down on that and start there. That's so cool.
Starting point is 00:45:02 So this is the first of our big tech war stories shows. I'm putting it on the big technology feed today as a tease to hopefully get you all to sign up for the big technology premium edition. You know, we're going to drop these once a month. And there's plenty of other good benefits when it comes to the big tech, big technology premium. You can get it at bigtechnology.com or big technology. dot substack.com. Big technology.com works well. You'll see that there is a handful of different tiers. The basic one gets you these interviews every month. And then we also have a new thing called the panel that I'm debuting, which I teased a little bit up up top. But basically what that means is when big news breaks, like the decline of Silicon Valley Bank or the Instacart IPO or the introduction of
Starting point is 00:45:51 threads for meta. We have a team of, not a team, really a collection of the best experts through the industry, technologists, journalists, analysts, and VCs who are going to give about a one to two sentence perspective on what's going on on topics that you care about. And I'll be sending those out via email. And you can sign up for all that on big technology.com. Again, we're releasing it right now. So this is a brand new offering. And I think, I think you're going to love it. I think it's going to be the best subscription that you have, and I'm going to keep working hard to make sure that's the case. And I definitely want to say, I am so glad that Garov came and shared with us today. Garo, thanks so much for being here. Yeah, it was exciting to be here, Alex. Thanks a lot.
Starting point is 00:46:40 Awesome. Thanks for helping us kick it off. So, and again, good luck to you and hope to hear more about what you're up to and how this stuff keeps changing the world. So thanks again. Yep, thanks. All right, everybody, thanks for listening, and we'll see you next time on Big Tech War Stories. We're going to be able to be able to be. I'm going to be able to be. Thank you.
Starting point is 00:47:59 Thank you. Thank you.
