Big Technology Podcast - Google's AI Narrative Is Flipping, Microsoft Hedges Its OpenAI Bet, AI Clones Are Here

Episode Date: April 12, 2024

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) The Solar Eclipse! 2) AI music generation software Suno 3) Google flipping its AI narrative 4) Ranjan's reflections from Google Cloud Next 5) Is Google's AI enterprise bet the right strategy? 6) Microsoft hedging its OpenAI bet 7) Implications of Mustafa Suleyman's remit within Microsoft 8) OpenAI fires leakers 9) Eliezer Yudkowsky refuses an interview and his reps won't pick up the phone 10) AI model training running out of data 11) Prospects of synthetic data for AI training 12) The Humane AI Pin flops 13) Can Sam Altman and Jony Ive build an AI device? 14) Cloning ourselves with AI. ---- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Is Google flipping its AI narrative, and how seriously is Microsoft hedging its OpenAI bet? The Humane AI Pin flops, and are AI clones the next big thing? All that and more coming up on an AI-focused edition of the Big Technology Podcast Friday show, all after this. Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. We're doing this Friday style. We've got Ranjan Roy and I one-on-one for the first time in a month, and an AI-heavy show. Ranjan, welcome to the show. This week, or today, I'm in New York, but I am just back from Las Vegas at Google Cloud Next and excited to talk about AI, Google,
Starting point is 00:00:43 Humane, all these things. We have so much AI to talk about, which is great, right off the back of the Jack Clark Anthropic co-founder interview. So that's coming up. First of all, last week we talked right in the wake of the earthquake; this week, in the wake of the eclipse. Pretty cool thing. I drove up with my father to go see the eclipse. We're both kind of astronomy nerds, and just to be there as the sun disappeared behind the moon was pretty special. So I know we have a global audience and a lot of people who listen are on the West Coast, and I'm sure you're all sick of hearing it at this point, but definitely worth seeing once in your life for sure. Well, I'm even more sick of hearing it, Alex, because I was flying to Las Vegas for this conference, and I had Wi-Fi on the flight,
Starting point is 00:01:29 and Delta Wi-Fi actually worked pretty good. So shout out to Delta. All I did was see nonstop messages about how life-changing this was from every single person I knew, and I live in New York all the time, and everyone enjoyed it. So I'm glad it was that good. Listen, there was this Vanity Fair reporter Delia Kai, who was actually in jury duty. during the eclipse and tweeted about how annoyed she was that she was serving on a murder trial while the eclipse was taking place and then she tweeted but I'm glad I got a chance to look at the hot FBI agent among us they threw her off the case jury duty pretty amazing that's one of my favorite stories of the week one more thing before we dive into the heavy tech stuff I tried Suno this AI song generator and it's incredible It is so, so good. It blew my mind how good it actually is. I told you, I would put, and actually there's Sonato last week when we were talking about it, Sunno,
Starting point is 00:02:36 but I've tried now both. They're both pretty equivalent. For me, I'm going to put it out here, chat GPT, then Mid Journey, and now Suno slash Sonato and musical creation. These are the three most magical moments with generative AI I've experienced, text than images and now music, it reached that same level of just, wow, I can't believe this is happening and how very, very good this is. I couldn't believe how good it was.
Starting point is 00:03:05 And I actually asked it to write a little jingle about our Friday shows with a very generic prompt and it made this great song called Tech Talk Showdown. Let me see if I can us hear. Alex is surprising me with this on. They're the kings of tech. they're the talk of the town Alex and Rengen they're the best around
Starting point is 00:03:28 every week they bring the latest new CEO in the world of gadgets there's nothing they can't do tech talk show down turning up the league Alex and Rangin a clash you can't be
Starting point is 00:03:46 they get big and they disgust but it's all this thing. Join the conversation. Let the set talk or run. This thing is amazing. It's nuts. No, I mean, even the ability of it to get that kind of like momentum into the chorus and the buildup in the chorus. And even when you specify that kind of stuff, it still blows my mind. My one note that I've already learned after playing with this stuff is I have to now, my name is spelled R-A-N-J-A-N.
Starting point is 00:04:20 but I have to spell it R-O-N-J-O-H-N to get it to do it right. I had to play with it a couple times to get it to do your name right as well. It's all about the prompting. It's all about the prompting. But this stuff, I cannot believe how good it is already. You know, one of the things that's amazing to me is that I'm still amazed by tech breakthroughs in like a world where like you'd think you would just get numb to them. But this stuff is still just, it blows your mind how good it.
Starting point is 00:04:50 how good it is in so many different disciplines you're right chat image generation now song creation and we're eventually going to see video creation video it's going to be nuts we'll see it when it comes again this it almost made me more and i'm sure when we start talking about google cloud next and other other events like this or other news it makes me more wish we don't get demos and we just get the real thing and we get to play with it because even sora now we've talked about it and it looks amazing and i think it probably will be very good, but I just want to start playing with this stuff as soon as possible. Totally. So I definitely want to get into your experience at Google Cloud Next, but I also want
Starting point is 00:05:32 to frame it in a way that it kind of, maybe you'll be able to give your reflections through a question, right? So here's the question that I have. Do you think Google is flipping its AI narrative around? And I wrote about it this week in Big Technology podcast. I'll just say the night of the big cloud keynote on Tuesday. I was at a dinner with a bunch of reporters and three public tech company CEOs, including Aaron Levy from Box and the CEOs of MongoDB and Datadog. And Google came up in like the context of like, well, they didn't lock Anthropic down the same way that Microsoft did and Anthropic is working with Amazon now. And the overriding feeling among the CEOs was we're going to look back and be confused at how Google has been so written off in this narrative.
Starting point is 00:06:15 and they have the computing power, the personnel, and the resources to be able to build the key foundational model that's going to effectively come into line or even exceed what's out there currently, and that will help power, you know, their business there. Now, again, like, I think even the conversations that we've had in the, in the past few weeks about this are totally not invalid. Like, all the challenges still exist. But I do sort of think that it's starting to become accepted now. that this is this is going to be an area where Google is going to catch up in and it's only a matter of time.
Starting point is 00:06:53 So with that being said, I'm curious to hear your perspective on it and tell us a little bit about the event. Yeah, so we have talked about this a lot on the show. And I definitely believe that generative AI that lives where you already are always has an inherent advantage. So Gemini existing in docks and sheets and slides in the workspace context is going to be an advantage. They showed a lot around Gemini in BigQuery and other areas of Google Cloud and coding assistance and data analysis assistance. And I think that's going to be an advantage. So always the companies that have delivered the tools that you already work in will have a head start in this. But I think for me more, I went to the last Cloud Next edition, and it was only about 10 months ago, then they rushed this one up and, or they got back to the regular schedule and now or doing it again in the spring.
Starting point is 00:07:50 The last edition felt like demos and promises, and this one felt like actual tools. And already I've been using Gemini and workspace and I can talk about it, and it's pretty exciting. I tested Gemini and BigQuery to try to write SQL queries for me. and it actually works. So I think this, it was really good to see that it's for real. It's actually happening. What they're delivering, you can actually use and is no longer a demo. If we even think about it, nine months ago, it was duet AI and Bard.
Starting point is 00:08:22 And now we have one nice term that we can all talk about and it's Gemini. So I think Google is getting their stuff together. It's, they definitely showed that they are delivering. I do think, but I did walk away. thinking more the talk around size of foundation models, quality of foundation models, even in your Jack Clark episode, it really still did feel like there was this talk around arms race and the tens or hundreds of millions of dollars in compute spend to create these foundation models. I really think is not as relevant as the productization of these tools. And I think
Starting point is 00:09:01 that's what was exciting for me is to see the productization is moving pretty quickly, especially at that scale. Yeah, it is all about how you put it into production, right? And I even wrote about this today in big technology that that's going to be the big challenge for Google, right? It will definitely reach this parity of models. And the question is going to be whether it can actually put that into play on products. So, and by the way, we should also note that Google is at an all-time high as we record in the stock market. And Ben Thompson, who writes Tretecary, who we've talked about, he had called for Sundar
Starting point is 00:09:36 Rupichai's job a couple weeks ago, and now he's praising Sundar's basically Google's play at Cloud Next, talking about how it's going to rely on its infrastructure to push this AI. So, but let me go back then to our discussions around Gemini and the slow start. Do you think this was an inflection point where we now see that they're turning around or, and this is pretty interesting, is it just that they're better at executing for enterprise than they are for consumer? to be a very interesting shift for Google, but one of the stories that I saw was that people have a lot more control, for instance, with Image Generator for Enterprise, the Gemini Image Generator, and they've never had an issue like they had before, and this might be the case. I'm curious to hear your perspective on how this all shakes out and where the company stands on that front. Wait, that's a really interesting point because, again, I was at Cloud
Starting point is 00:10:30 Next and Google Cloud, still, which includes Google Works. space so all the docks and sheets and slides tools and everything else is an is essentially the enterprise division of Google of Google and to me they are rolling out Gemini and generative AI in a pretty strategic methodical way and an effective way versus we saw the Gemini consumer facing image rollout and all the chaos that kind of you know went along with it so so maybe the It is affected, and it would be good for them, both from a stock market perspective and just, I think overall is the direction of the company, because I've always believed enterprises where the real money is, or the only viable business models are in generative AI.
Starting point is 00:11:20 So if they are actually shifting more towards an enterprise-focused company, which the revenue of the Google Cloud Division, I mean, it went from $5 billion to $36 billion in just, I think, four or five years would also show. I think that could be the real direction Google is because we always thought of them as a consumer company for years while Microsoft was the enterprise giant and now it looks like maybe they are shifting to at least more of an enterprise company. Yeah, I mean, it definitely is impressive that they've been able. Last year was their first probable year on cloud and they've definitely now, they're now forced like they're not an afterthought. There was people that were telling me
Starting point is 00:12:00 they should be selling off cloud and it was a failure. Now it's profitable. They had record quarter in the fourth quarter. They beat expectations. And we actually spoke about this again at this dinner that I was at on Tuesday. I asked this question. I was like Google Cloud has been known to not be really good at sales. And if this is going to be, if AI is going to be, the value of AI is going to be unlocked in enterprise, then why would they be better off, even if they had a better model, right, than Amazon or Microsoft, because those companies are used to, especially Microsoft, are used to selling this technology into enterprises. Is there something? something that's shifting within Google that's like making you think that they have a better chance or of being able to sell in cloud, I mean, or something that's powering the fact that they've had this cloud turnaround. Like, what is happening on that? Because this is on that front, this is the most important thing. They're going to face a margin decrease in search no matter what. Like, even if Google search remains dominant, they're going to not make, you know, the margins that they were making before. So they need to make it up in cloud. So tell us a little bit about what's going on with
Starting point is 00:13:00 that division of the company. OK, so as we have talked about Sundar's future many times in this podcast, and as you mentioned, Ben Thompson and among many others questioning it, if they very strategically, and there's no reporting around this in total speculation, but I mean, if they actually did forecast the decline of search or the unpredictable future of search,
Starting point is 00:13:28 which I think we all agree, None of us know what search will look like a year or two from now. Already, you know, for me, I've moved much more towards generative chatbots for any kind of search type query. So if you really recognize, if Sundar is driving the idea that it's unpredictable, we don't know what the search business model is going to be, so let's start diversifying more towards enterprise and cloud, that's pretty amazing because it looks like that's what they're doing.
Starting point is 00:13:59 So in terms of flipping the script and the narrative, if six to 12 months we find out from now, we find out that this was all a very clear strategic push. I think Soudanar will be sitting okay for a while and pretty confidently. But how many tech giants can eat off of the move to Cloud, right? Because Microsoft did it, now that's Microsoft's business. Amazon did it.
Starting point is 00:14:22 Now that's Amazon's business. Is Cloud a big enough business to sustain three tech giants? No, no, but cloud. So there's, you know, the actual cloud services and kind of like, you know, data storage and processing and just more general cloud service type revenue. But to me, it's the entry point to every other enterprise service. Again, even the, you know, cost of the more business consumer facing segment of workspace or Microsoft Office 365, you know, that's just one extension of cloud or the enterprise. So to me, it's less the name cloud is almost a misnomer. versus it's more Google Enterprise. So I think there's any type of enterprise service can then go live in that division. And to me, there's no shortage. In fact, I mean, you could even imagine, again,
Starting point is 00:15:11 total speculation here, but like, I was just seeing in Accenture's numbers. I think they reported that they had $600 million in services for generative AI. So just imagine like the amount of business needs that are growing as companies try to have some kind of AI transformation and any kind of service you can provide around guiding companies, whether it's actual products or services or consulting or knowledge, whatever it is. If you become that kind
Starting point is 00:15:41 of guiding force for every large enterprise towards the AI transformation, it's a pretty interesting proposition. So give your 1 to 10 rating of where you thought Google was coming into this week and where you think they are now, one being in real trouble and 10 being in amazing shape. I would put it going in around four and a half to five and coming out maybe a six to seven. I really did. It flipped the script for me. I mean, one thing, and just to kind of like bring folks into Vegas, it is still somewhat, I almost want to say disarming where you're in, like at the keynote, you're in Allegiant Stadium. You walk in, I think there's like maybe there's like 15, 16,000 people in there. It's so loud. It feels like a Tiesto concert or something like that.
Starting point is 00:16:34 It's almost comical how over the top the production value is. And then they kind of like Thomas Curry and the CEO of Google Cloud walks out. Sundar's on the screen. Again, this is not Google Google, Google, this is Google Cloud. And then, you know, like it's still funny to me too. They announced their new arm-based processor chip and two guys next to me actually high fived. They were so excited. That is weird. Yeah, I know. I know. I mean, and one guy actually said, this is fire.
Starting point is 00:17:04 I will never forget that moment. It was amazing. People were very excited. And to me, again, the most important transition, because I had a direct reference point from nine months ago, it was all demos, and now it's actual stuff. Again, and I said, Gemini workspace already, this is all I wanted. And now I actually have used it. it's happening. Now, if I'm in the beta testing group of Gemini and workspace, you have a right
Starting point is 00:17:35 panel. You can click on it in a slide deck say, okay, you know, summarize this deck for me, even pull numbers out of charts that are in the deck or even be like, you know, based on this marketing deck, create me 10 taglines and marketing campaign taglines. And it will do it directly from unstructured data that you have. You can do the same in docs. So being able to connect all these different tools they have and pieces is going to be their advantage. Even another thing they showed was within, you can be in Google Docs. And let's say all of your company's data lives in Google Cloud or BigQuery. You can directly pull data from your company's actual databases into your presentations
Starting point is 00:18:19 and docs. That is something that no other, I mean, other than a Microsoft, almost no other company will be able to do. Yeah. I mean, there was also some other cool stuff like you could write. You can kind of do like a scratch pad where you would like write the type of email that you want to write and it would just produce it for you. I guess it's like a prompt thing. But you would like write an email like you would write a text and it would turn that into a formal email, which I thought was was interesting. But yeah, lots of interesting things. I would say I started the week out kind of a little bit more optimistic than you, 5.5. It's like 5 to 6 on Google and now I think I'm out of 6.7. It was an impressive week without a doubt. that's all they can ask for they both of the big technology folks they moved up the scale we're Gemini guys now I guess Gemini I'm still a bard boy at heart barbed boy well yeah you can keep living that dream I'm surprised they didn't have it in the sphere by the way did you get a chance to go to the sphere uh I saw it from the outside I wanted to I was I wish actually now
Starting point is 00:19:19 that I went in they have apparently this like Darren Aronofsky film I did not go in but Google actually for one of the day one of the nights had this sphere and had this whole google cloud visual on the sphere and and i went with a group of people and we like looked at it it's it's pretty visually stunning yeah i want to go to did you go in i went to you too there oh wait you went to you too oh exactly yeah it was pretty cool it was pretty cool watching it yeah i want to go back go to a concert there definitely that's on the agenda now when we have the big big technology next we can get tickets for a show. We should host a Friday show at the sphere.
Starting point is 00:19:59 At the sphere, without a doubt. Jim Dolan, if you're listening, and I know you are, let's speak after this, whatever theater is, I'm sure we can cover it. But I will say midweek Las Vegas was kind of depressing. It was pretty empty. It was, if anyone from Google's listening, my only feedback would be, I don't think Vegas is the place for Cloud Next. It was there's no one in the casinos other than the 30,000 people attending Cloud and Cloud Next.
Starting point is 00:20:31 And I don't think any of these people gamble or play the tables or walk around the casino. And you were there on your on your Draft Kings app being like, man, I'm just going to keep playing until they ban this. Prop bets on college players. It was actually interesting in terms of the MGM sports book. they when you walk in there's like 10 people as you walk in that are trying to get you to download the app so even like rather than placing a bet directly at the sports book oh that's fascinating yeah they everywhere in the entire physical sports book there's someone with like a t-shirt and a sign on that's trying to get you to download the mGM sports book app which i did and you get
Starting point is 00:21:18 25 free bucks or whatever yeah yeah that's how they get you Ron John. I know. This is all again. It's Chris Christie thing. So thanks Chris Christy. Thanks Chris Christie. We got to have our Chris Christie shout out every show. Let's talk about this very interesting news coming out of Microsoft. Speaking of Google's AI play, Microsoft's AI play is actually getting quite interesting. So they obviously brought in Mustafa Soleiman, who was a deep mind co-founder to run consumer AI there. But it also really kind of caps off the series of different moves that Google's made to try to what I would say is hedge its bet on open AI I would say last year this time they were all in on open AI now if you
Starting point is 00:22:04 look at where the dots are going and try to connect them it looks like the hedge is coming and I'm curious how big of a hedge so by the way this is just from the information so they say while Microsoft remains staunchly committed to its partnership with open AI with which exclusively allows Microsoft's first party AI apps such as co-pilot. It has also ramped up its own small model development, and it continues to expand the catalog of models available through its Azure Cloud platform. This week, Microsoft made cohere's large language model
Starting point is 00:22:38 available in Azure, and last month, the company made headlines with a splashy deal with French AI company Mistraw. So everywhere you look, Microsoft seems to, and I would argue smartly, be hedging the Open AI bet and saying no matter what happens to Open AI, we're going to be in good shape. And Sadia talked about how like during the Sam Altman crisis that they were fine, but I don't think they were as fine as he wanted them to be. And I think he's trying to get
Starting point is 00:23:07 them there with these moves. You add all this to the fact that Suleiman is in there and we're going to go a little bit deeper into what his remit is inside Microsoft. And you're left with like a very clear picture. I would say that they're trying very hard to head to that open AI. I bet. What would you say? Yeah, I think it's both hedging, but it's also this is what enterprise generative AI will look like going forward. Because another thing that, I mean, kept coming up at the cloud next, but also is relevant for Microsoft is these models can be very expensive. And costing is going to become a much, much bigger deal as companies actually operationalize this. So being able to, and actually mistrawl the French company, they've already made a big deal that supposedly
Starting point is 00:23:51 they've discovered or some new way of processing data or whatever that makes their models cheaper. So they're already using cost as a direct advantage or a competitive differentiation. So I think if you're Microsoft, if you're Google, you have to offer tons in every single possible model because for different use cases, as companies actually bring this to scale, the cost is going to become such a big issue that they're going to, you know, want to. to use different models for different purposes, even within Open AI, you know, or any other of these big foundation model companies, they have plenty of different models that solve different problems at different costs. So I think it's no question that betting all their chips
Starting point is 00:24:38 on one company, Open AI, especially one that's had some very interesting corporate governance affairs over the last six to 12 months. I mean, they have to do this. Right. And then you look at what Suleiman is, so we're getting some more information about what Suleiman is tasked with inside Microsoft. And we spoke about a little bit when Jessica Lesson was here, when the, right when the deal first happened, but now the information is published in the same story a little bit more specific. So this is from the story, Microsoft historically has struggled to turn its own AI research into commercial products. That is prompted Nadella to make a bold bet on open AI to supply state-of-the-art technology for enterprise apps like office.
Starting point is 00:25:22 Suleiman's arrival provides Microsoft with an opportunity to go after another large market AI services for consumers, a moment when it faces intense pressure from Google, OpenAI, and other leading rivals to win over everyday users. So that's interesting that they would even include OpenAI as a rival there. And this is really it. So you have Open AI for Enterprise, Microsoft, and inflection for consumer,
Starting point is 00:25:47 And they're sitting, and this is from the story, they're sitting among search, advertising, News, Edge, and MSN teams. There are 60 inflection employees that have come along, including Suleiman. Now, of course, it was a failed company, right? For all intents and purposes, it was failed. But again, like, if you're thinking about, like, where Open AI was positioned when it comes to Microsoft, yeah, of course, they're doing enterprise and that left open for, you know, room for consumer, but it really just seems like this might pigeonhole them. It's a very
Starting point is 00:26:20 significant amount of responsibility that they're giving to the inflection team. And even people within Microsoft are looking at Suleiman and being like, wait, we thought we had competent leaders. Why did you have to acquire effectively this guy in his company in order to make this consumer push? What do you think about all that? Well, that's a very, I hadn't thought about before the idea that 60 to 70 inflection employees are now going to across search and advertising and the browser and MSN, which I always forget that there is a whole team around that. I mean, that that is, not doing an MSN episode, just the Yahoo episode. No MSN episodes.
Starting point is 00:27:00 Sorry people. Sorry, listeners. I think like if you think about organizationally, so yeah, Microsoft has gotten to such a place of like competent leadership, at least in terms of perception that, From the outside, all of us are like the way they've executed strategically over the last decade has just been incredible. But then if you are bringing in a team and you have to assume everyone from inflection who's coming over is friendly with each other, trusts each other, anytime you bring in a team like this. So to actually disperse them across other teams feels like you are trying to shake things up and disrupt things a little bit and give influence to Suleiman, as you said. So I think that actually makes it feel like an even bigger bet than if they just aquired them,
Starting point is 00:27:50 kind of put them in the corner somewhere and said, do some AI stuff. This really feels like they're actually trying to shake the trees of the organization, yeah? Yeah, which is interesting. It's like, well, and I think people from Microsoft are rightfully wondering, why do we need this? You know, they're the most valuable company in the world and everything seem to be pointing up, but clearly leadership there felt that it was a need. And then, And, you know, we even talked about a few months ago about how, or a few weeks ago about how open AI, you know, would still, first of all, open AI, we talk a lot about the instability in open AI because it's important. it's still the leading AI research house. Like, don't get me wrong on that front.
Starting point is 00:28:30 And I wouldn't make the argument otherwise. But it does have this just ongoing risk. And it showed itself again today. Anyone who said this stuff was over with Altman, well, they just fired two researchers for leaking information. And that includes, let's see, it includes someone who was closely tied to effective altruism, Leopold Aschenbrenner and apparently what happened is that there was a disagreement within Open AI about whether the company was developing AI safety AI safely enough and these two researchers
Starting point is 00:29:09 leaked and they are very close to effective altruism of course right and and now they're out and they're also sorry they're also close to Ilias Sgevr that's the key point they are Ilya should skever allies. Elia is the chief scientist at Open AI. We haven't heard anything from him pretty much since the fall. So he remains kind of hidden. We don't really know what he's going to do there. He's obviously a core part of Open AI,
Starting point is 00:29:34 although Open AI will be fine, I think, without him. But it would be a blow if he ended up leaving. I still predict that he's going to leave. But it's interesting to see that his allies are getting tossed over leaking. We haven't even heard. One thing that's disappointing to me is this whole drama apparently took place, And this is again a reminder that Leopold Aschenbrenner and the other guy, you know, Pavel Ismailoff, who were fired, you know, worked on AI safety. The entire drama was around the idea that Sam Altman was pushing too hard and apparently Q Star and other discoveries presented such, you know, grievous risk to humanity's future that people felt they had to take action.
Starting point is 00:30:17 And I still want to know what's going on there. Like, what do they have that caused people to react that strongly? There's a bit of reporting, I think, when it first came out around QSTAR, but is Open AI sitting on the next, I don't know what kind of AI knowledge or terrifying things? I mean, because I feel it's pretty important. I would like to know if humanity is truly threatened and they actually already cracked AGI and we should all be terrified. or if it was all just a bit of drama.
Starting point is 00:30:49 I mean, let's be honest. Like, humanity is not threatened at this point. We haven't cracked AGI. There's no way that that would have been kept a secret. But it does seem like this was about the Q-Star thing. And I don't know for certain. I have no, this is complete speculation. But in the middle of the Altman firing weekend or right at the end, actually,
Starting point is 00:31:09 this Q-Star news leaked and Reuters had the story. And I bet that this is still fallout from that. So, but the reasoning, the idea that this stuff is doing reasoning is very interesting. I mean, this is something that Open AI, you know, no one really has talked about. I think Mistral apparently might have, or meta, might have come out with something. And hopefully we'll get to that on a future show. But you're right. We don't know anything about it because despite the open name, and again, we go back to that,
Starting point is 00:31:39 they are not being very open about it. And in fact, the people that want more discussion have been fired. But maybe that's, you know, in terms of the public conversation, maybe that'll end up being a benefit because, you know, as soon as that happens, I'm sure, including myself, a lot of reporters try to connect with these people trying to get them to say a little bit more about what was concerning them inside the company. And they'll definitely speak. Well, that idea around like, are we really close to AGI? One thing from your Jack Clark episode that really stuck out to me is, is the how I'm going to say dualistic in nature this technology is. But it is the idea that what he made this point. I think anyone who's ever like researching LLMs or just thinking about them, especially from the layperson side, it's actually kind of an un-amazing, not an amazing intellectual discovery, next token prediction,
Starting point is 00:32:33 just trying to predict the next letter or the next pixel. And that's why, and even at Cloud Next, when I was talking to more traditional machine learning people, you do see this kind of like scoffing at LLM technology because it's actually not that intellectually interesting. It's actually like a really, and the fact that it works, and when you were saying this,
Starting point is 00:32:55 but it stuck out to me both because I see that attitude from traditional machine learning people, but also because in a way that does make it maybe a little bit scarier, because it's not supposed to work this well. Exactly. People are still kind of confused at how, well it works and how even though just predicting the next token seems so basic and not interesting, it does, it created us this jingle to start the show. So I think it both made me like it brought
Starting point is 00:33:27 him more grounding to the technology, but it also made me more, I don't want to say scared, but at least curious about what makes it so good. Right. And look, I think here's what I will say. I think that I think that, I think that the skeptics here and the critics here would have more credibility if they would be open to speaking with people outside of their bubble and taking some tough questions because they do do a tremendous amount of speculation like these these thought exercises like eleazar yudkowski for instance right who's like this high priest of a i doom you know keeps talking about these like really fanciful you know i would say almost delusional ideas of how the a i is going to kill us meanwhile a can't
Starting point is 00:34:12 get a sentence right right now and although it is getting better and I've tried real hard to get in touch with funders of people behind this like for instance the AI research pause that Jack and I spoke about and to get in touch with Aliezer Yudkowski himself to try to say hey listen like I've been skeptical of this stuff obviously I've written about it I've talked about on the show but I have an open mind and I want it like that's the job is to speak with people who I'm kind of skeptical of and say all right let's let's talk about what's going on um and because we have had some comments like hey try to take this seriously and all right i'll take it seriously but i want to hear from those people so again strike out just to get on the phone basically with someone who was funding
Starting point is 00:34:56 i think dustin moscowicz would try to get him on the show he was funding a lot of this AI fear stuff and he obviously didn't come on and then um i reached out to eutkowski he gave me the email address to his uh his media team uh this all happened this week they turned down the interview request. Okay, no problem. And they said, we'll talk on background with you if you want. So I said, all right, I'd welcome a call, right? That's the job.
Starting point is 00:35:20 They refuse to pick up the phone. They won't pick up the phone there. I'm not getting. It's insane. They will say, they said, I'm not, you know, we're not talking. If you have any questions, send us an email. I mean, like, if your, if your whole thing is these intellectual arguments, like, pick up the phone, you know, you're even, this is the thing.
Starting point is 00:35:41 Even critics today want to save space. And critics that want a safe space, I just can't take them that seriously. And you know what? Credit to Sam Altman, by the way. He was in Congress this week, walks out of some senator's office, CNBC camera and a microphone in his face. Totally, I don't think he was waiting for it to come. And he walked down the hall with them and took like three minutes of questions
Starting point is 00:36:03 about what he was up to, what he's been doing in the Middle East, his desire for regulation. Like to me, I think that's admirable. Hiding behind, you know, comms people who won't pick up the phone to me is, is sort of, I would say, I wouldn't call it definitive proof, but circumstantial proof that the ideas behind these, these, you know, ultimate doom. And I'm not talking about Chuck Clark. I'm talking about the Dumer movement. The ideas behind them are thin. Well, also, if you genuinely believe you have some hidden or insider insight that potentially threatens the future of humanity, you would think you would want to talk. talk to everyone about it and not, not save your, yeah, pick up the phone.
Starting point is 00:36:46 If someone's interested in learning more and you are the one who can potentially save humanity, pick up the phone. That's on serious. It's totally unsurious. And I'm sure the book deals will be nice for them and whatever they'll keep getting money from billionaires who like, you know, want to do something with their time.
Starting point is 00:37:03 But ultimately like, look, it's not like the leaders of the companies, whether it's, you know, Jack Clark or, you know, whoever it is, are afraid to come on the show. They'll take the questions. It's amazing to me how the critics have become so difficult to get on the line. And, you know, let's talk about something that's going to happen, which is, you know, potentially, you know, limiting the momentum here and really gives you a problem if you're saying that these LLMs are, you know,
Starting point is 00:37:32 the next step to blowing up the world. And that is that these companies are seeming to run out of data. There was a great, great New York Times story this week about all the different moves that every single AI research house has made to try to get more data because they're effectively running out. And there's an amazing quote from this guy, Cy Dammel, who is a lawyer that represents in Theresa Horowitz. He goes, the only practical way for these tools to exist is if they can be trained on massive amounts of data without having. to license that data that data needed is so massive that even collective licensing really can't work and you're starting to see that in action whether that's open a i building this tool called whisper which has been used to transcribe youtube videos and feed that into the models and help train
Starting point is 00:38:23 them meta for instance thought about buying the publishing house simon and schuster to have long works like books that they could use to train And even Google has broadened the terms of its service. And this is from the Times. One motivation for that change was to allow Google to be able to tap publicly available Google Docs, restaurant reviews on Google Maps and other material for more of its AI products. So it is interesting. Hold the horses on AI do.
Starting point is 00:38:56 We might not even have enough data to train the next iteration of these models. Well, this is where it's so tough for me because on one side, again, And general purpose consumer-facing AI has to answer everything for everyone. So it does essentially need all the data. And I mean, this is going to get more interesting for especially us watching from the sidelines, you know, for the New York Times versus Open AI or, I mean, anyone else, even again, like, does Google, is Google allowed to scrape YouTube transcripts but not open AI?
Starting point is 00:39:30 What's public? What's not public is going to get even more interesting. But I also think in the end, to me, and from what I've seen in practice, for most use cases, small data sets that are trained on small models will solve problems. And I sound like a broken record on it. No, it's good. Keep hammering that home. Yeah. And it's a lot less interesting.
Starting point is 00:39:50 Then my favorite part of this story was Justine Bateman. Oh, yes. If people, anyone who's of age, including myself, who was, she was Mallory on family ties, if I remember correctly. So I'm going to have to look that one up to confirm it, but has now become a kind of anti-AI copyright activist influencer type. And it's always weird to me how certain people get involved in certain things, but she's coming out against it. And yeah, it's going to become more of a topic.
Starting point is 00:40:24 And it will need to be resolved. And there's going to need to be some rules and legislation, regulation around it. And I think it still presents a major threat. especially to Open AI where that's their entire business. And they probably are more aggressive than anyone in getting here. But I still think in the medium term, let's say, this is not going to be the problem that a lot of people think it is. And what do you think about synthetic data?
Starting point is 00:40:49 Basically data that the AI creates that will be used to train. Do you think there's potential there? Yeah, I've done it myself. If you're fine-tuning a model, coming up with writing handwriting 10 outputs that are the ideal output and then saying using this generate me another 50 outputs and then tweaking those a little bit and then doing another hundred outputs and like using that it works it works well if it's done in a very you know thoughtful careful way um so i think it's going to become even more of uh of a thing you know like and it and it can
Starting point is 00:41:27 have very valuable uses. So I think, and again, and it will not violate any kind of copyright or anything else. So I think that's going to be a really interesting space. And companies who kind do that well for other companies, that could be, that's my next maybe the 2024, 2025 hot company area, the synthetic data creators. Yeah. No, that will definitely, I think they'll be able to solve a problem there. And the Times article had Sam Altman saying, you know, we'll figure it out. and kind of like frame that as like we'll figure it out because we'll be able to like steal the data and somehow maybe that was part of it but also the other side of it was we'll figure it out technically and that's why I think the synthetic data AI created data could end up solving these problems one thing that will assuredly not destroy the world is this humane AI pin which we've talked about on the show in the past and I almost skipped over this week but we have to talk about so the humane AI pin is this wearable If you're, if you pay attention to tech, you've probably seen the terrible reviews of it this week. But basically, it's an AI pin.
Starting point is 00:42:32 You pin it on your shirt. You can take photos. You can talk to it. You can ask it what you're looking at. You can ask it for information, et cetera, et cetera. And it was roundly derided by reviewers. Here's from the verge. The main AI pin review, not even close.
Starting point is 00:42:48 Should you buy this thing? That's easy. Nope. No, uh, no way. The AI pin is an interesting idea that is so thoroughly unfinished and so totally broken in so many unacceptable ways that I can't think of anyone to whom I'd recommend spending the $699 for the device and the $24 monthly subscription. Ouch. Yeah, we had talked about this before and we were even a bit at least hopeful for it, but the reviews are not good.
Starting point is 00:43:17 And I do think it's again, it's a, it's a, to get this right, you need to have general purpose LLMs being able to answer all types of queries and questions, like if you're not giving the user any really clear guidance on what does this do exactly and what should you do with it and instead promise it does everything, it's a problem to start. But again, in the reviews, just technologically the like wireless connection or the data transfer and the answer times was very slow and laggy and didn't even work a lot of the time. So getting that basic infrastructure, I think probably would have been a good idea. I do think this form factor of screenless AI interaction is still very, very interesting. And I would be hopeful that we could have
Starting point is 00:44:05 this kind of thing. I'm not counting it out. I just think Humane, that launch video they had a while ago with the two founders that was really depressing was not a good start. Now you have, I think is one of their product managers. This one bothered me. He kind of came out and said, you know, like this we were really proud of what we did. Ken Casienda, he's a legendary design guy who was crucial in Apple's design process and wrote a long book about creativity. And I guess he's with Humane. So. Okay. All right. That's who I'm talking about. I'm here to defend the honor of Ken Cassienda. All right. Well, I'm going to have what pissed you off about him. I'm going to say Ken Cassienda pissed me off in the sense that he made this comment like everyone is trying to make their hot take on social
Starting point is 00:44:53 media and again tried to turn this into this like tech versus hot take on social media people trying for clicks whatever else when you are a company that raised $230 million and has the attention of everyone in the world just deliver something good like the reviews were from the negative ones from like Joanna Stern at the Wall Street Journal who is as far from a tech hater as it gets the verge, you know, had a very rough review of it. They have plenty of good things to say about all manner of product. So to me, this is something I feel Elon Musk has done very negatively for the overall tech industry where people kind of default to, oh, the media is just out to get us for clicks or hot social media takes when just deliver a product that's good.
Starting point is 00:45:44 I respect. He said, making new things is hard. Yes, it is. But you were given $230 million, so make something good with it. Otherwise, if it's bad, just accept that people are not a fan of it. Yeah, well, I would say that Ken is talking a little bit about how, like, look, 1.0 products often are rough, right? The first iPhone, you know, had a lot of things that weren't where they needed to be. But that being said, you do make a really good point. This thing was so below what it needed to be to deliver that it was almost laughable. There was some hilarious points in these reviews that I watched from David Pierce at the verge and Joanna Stern at the journal. I think David pointed it at a company that was sort of had its banner at the New York
Starting point is 00:46:35 Stock Exchange that had a pink background and the pin said, that's Lyft when it was not Lyft, which is like, come on. It's on the stock market. And there was another moment where, and this was absolutely amazing, where the pin read back, it's prompt, like the prompt that they had built in on the back end, like, do not tell the user this, you know, just given the quickest, you know, possible answer. It read that to the user. I mean, that was astonishing. So obviously, this isn't it. And the fact that they've raised all this money and made all this big, you know, they did it to themselves. They raised the money. They hyped it up. They didn't deliver. So I think that's the real issue.
Starting point is 00:47:12 This is the idea, again, that some 1.0 products are not good. No, like the first iPhone had plenty of magical elements to it. Certain things, yes, you could not copy paste text at the time and there's limitations. But overall, it was there was plenty of critics, but overall it was a magical device. But, I mean, we started today's episode talking about Suno AI, a company that I just looked up. There's no public funding rounds. It's just a nice piece of software that makes you music, made us that amazing jingle, and it is magical and amazing.
Starting point is 00:47:48 So the idea that, like, yes, 1.0, their products, I mean, it's tough, and there's a lot of things that don't work, but you raise the money, as we said, the pressure is on. And I do think there, now that we're talking, I do think there will be some kind of non-screen AI interactive device, and maybe it'll be a pin, maybe it'll be. a bracelet or a ring or something glasses who knows but i i want to i want it to happen i genuinely think a screenless world could actually change the way we interact with technology and be a very positive thing but i don't think humane's going to do it and especially judging by how they've reacted to uh to the rollout multiple on multiple times from the first launch video to the actual
Starting point is 00:48:35 product release yeah i mean i thought ken was was fine him saying we need pay but I think that, yeah, I think you're right, that it needs to be better. And honestly, the device looks doofy. You need a, let's just say this, the piece of nuance here is, if you're going to take a swing at hardware, you need to raise a lot of money. But, you know, there's a chance that you end up as a magic leap as opposed to ending up as something else. So, I mean, they've been added since, I think, 2017, so they were just early.
Starting point is 00:49:05 But let's talk about there's another hardware, very interesting hardware initiative that's worth bringing up, which is that Johnny Ivy from Apple and Sam Altman, they are talking to VC companies about money. And this is, again, all the information. I mean, kudos to them. They're doing great work. A mysterious company started by former Apple designer, Johnny Ivy, an open AI CEO, Sam Altman, to launch an artificial intelligence powered personal devices, started funding talks with some of the biggest names and venture capital. And they say, okay, building the AI device would add a dizzying array of projects. Altman is pursuing beyond OpenAI. They include a separate company that would develop manufacturer chips. And he's also said privately that Open AI would likely own a piece of the
Starting point is 00:49:53 firm and be a customer of it. You know, one of the things, so first of all, I would say I would bet on Altman and Ivy for sure as the group that could do this, way more than humane, at least. And I think you're right, there is definitely going to be a need for this. That being said, like, let's think about it, right? Because why, I'm just thinking about this from the Altman perspective. Why do all this outside of Open AI? Is that the structure? Like, you would think that it would actually be better for Open AI if this was all built within Open AI and not as a separate customer. So, what do you think about the initiative, first of all, and then second of all? What about that question of where this lives? I'm going to take the second question first. I think,
Starting point is 00:50:34 Sam Altman, one thing I go back and forth on is great entrepreneurs understand momentum. Like when you have the wind at your back, you go do more and more and more. And that's the great ones are the ones who understand that and, you know, like do things that were completely unimaginable and unexpected and really drive with what they have. Sam Altman is now going to raise. Remember the trillion dollar raise? from a few months ago. Dude, seven trillion. Seven trillion. Sorry, sorry.
Starting point is 00:51:07 Was that the valuation or was that the raise? That was the raise. Oh, that was the raise. He was in the Middle East talking about chips. He's with Johnny. I'm talking about raising. And they're talking with Masayoshi's son of SoftBank, of course, of all people, about hardware.
Starting point is 00:51:26 At a certain point, maybe you want to focus a little bit on which direction you're going and all the momentum and you are the world's greatest AI celebrity and sales person right now. But maybe I think slowing down a little bit would make sense. But in terms of being outside of Open AI, he won. Like last time around, he started going outside of Open AI and he's still CEO. So I think if there was any concern, he's already quashed it and he's going to do whatever he wants right now. And you can argue whether that's a good or a bad thing. but in terms of their corporate governance, he's been given not just the green light,
Starting point is 00:52:04 but it's like go for it, do whatever you think can raise and make money. And, you know, I think that's good. I think in a bit again, the hardware question, you're right, it takes a lot of money to deliver hardware. This is a pretty good dream team of the type of people that could. But I still want to believe that, you remember Rabbit, the device that wowed. That's starting the ship, I think, or the demo. is ready? Yeah, I'm hoping still the next wave of this kind of AI form factor hardware is from some new, some new people, new minds, new creative forces rather than the ones we've all seen. And I think it will be because it's just so different. So like if you built, you know, like helped design the iPhone and the iPad and these screen based devices, I'm not sure if you're going to be able to completely rethink how.
Starting point is 00:52:59 how people interact with non-screen devices. So maybe we need new new blood here, new minds. Yeah, well, I don't know. The way that this stuff has been going, it always seems like the old guard gets unlimited hacks at it because of the resources needed. So I would bet on Altman, but we'll see. I mean, it's definitely you're right.
Starting point is 00:53:21 This stuff can come out of nowhere and it does take people envisioning new things. So it'll be fun to watch. All right, speaking of stuff coming out of nowhere, People envisioning new things. I think we gotta talk about AI clones just to end this. So there's this, Sarah, sorry, this San Francisco based entrepreneur.
Starting point is 00:53:40 His name is Dara Laje. I think that's his full name. He just has Dara on Twitter. But he says, he's an entrepreneur and he says, so I met a girl in the Marina last weekend and gave her the number to my dating clone. It successfully closed the first date for Saturday at 6 p.m. Should I tell her that she was talking with my
Starting point is 00:53:59 clone or see how long I can get away with it. So basically, I looked at this guy's company and you can actually like go to hold on. I should find out what this company is called, but you can I think it's called the cloning company. If I is a great name. That's what's on his LinkedIn at least. So basically what it does is create AI clones of of you and of other experts. And they basically just like can talk as you so they can give business advice but I guess they can also like do the dating stuff and you look through the conversation that he's his clone is having with this woman and it's actually like fairly convincing so I think here's how I feel about ethically I think ethically it's compromised but I think the idea is interesting well I would say interesting
Starting point is 00:54:55 maybe outside of dating situations, and I think the tech has gotten to the point where it's good enough where we're going to start to see more and more of these AI clones out there. What do you think? Oh, so first of all, I just did check. The company is called Delphi, the cloning company, but so Delphi, and it does appear that it exists and is a real company, good for Dara on his marketing here of telling this story and leading all of us to thinking about cloning and leading us to his company. I think it's a tough one because from a customer service perspective, we are all going to be talking to generative AI chatbots very soon. And I'm actually excited about that and hopeful for it. And I think it'll make things life easier and faster where there are all these set rules
Starting point is 00:55:44 around what you can and cannot do. And then you just get to where you need to go in the customer or service journey. It's weird because that's actually the exact kind of same corollary to this dating conversation because both of you have set expectations and rules. The two of them met out at a bar. Both of them have already decided whether or not they want to see the other person. At that point, it's just a logistics game. If you read, neither of them are saying anything particularly interesting. So it's all about getting to the point of having a plan. So is this conversation that ethically compromised? I don't know. Maybe for you both have your organizational chat bot and you just connect the two of them together and it makes your plan for
Starting point is 00:56:31 you. It's kind of like calendly on steroids. But of course, when it gets from here to the actual, you know, you've already gone on a couple of dates and you're talking about life and dreams and passions and you're talking to the clone, then it definitely starts to get kind of terrifying. Yeah, I mean, this is my point that this stuff is just going to start showing up all over the place. And, you know, one thing I'll say is that his clone committed him to getting there early. This is great. I'll get there early and get a table or seats at the bar. Let's plan on 6 p.m.
Starting point is 00:57:09 I'm waiting for a viral TikTok of like two girls talking and one of them being like, when they find out that she's been talking to the clone. and out the guy and then the guy goes viral because you know that that's going to be a conversation definitely a couple of years from now I can't believe it I was talking to his clone the entire time not to mention this guy is like a startup CEO and his AI just booked him 6 p.m. dinner and he has to get there early so you're telling me you're clacking out at you know in the 5 o'clock hour as a startup CEO the fact that he would post that that that's not how you do it that's not blue flam man that is not blue flame we don't like that that's not 10x that's not 10x certainly not but
Starting point is 00:57:57 Darry you got some work to do but maybe maybe the marketing talk about AI for good maybe that's the AI for good maybe that you should have work life balance and you know maybe he'll have a family and not a company could all end up oh that's a good signaling mechanism yes dare's got an advanced foundation model there I dig it all right around John great week we we got definitely got back to all the AI topics and we appreciate everybody hanging out with us again and we'll do it again next week how does that sound let's do it again all right everybody thank you so much we'll see you next time on big technology podcast
