Big Technology Podcast - ChatGPT Gets Lazy, Elon Musk Blasts Advertisers and Releases The Cybertruck, Jack Ma Returns

Episode Date: December 1, 2023

Louise Matsakis is a reporter at Semafor covering AI. She joins us for our weekly discussion of the latest tech news. We cover: 1) ChatGPT getting lazy 2) The commoditization of AI companies 3) Anthropic's unique board governance 4) Anthropic's EA connections and the state of EA 5) Amazon's Q chatbot 6) Elon Musk tells advertisers to F themselves 7) The drawbacks of brand safety puritanism 8) The Cybertruck is here. 9) Jack Ma appears in public for the first time in years. -- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Is ChatGPT getting lazier? We've got lingering concerns about the world's leading AI companies. Elon Musk tells advertisers to go F themselves and sets the Cybertruck free. Jack Ma is back and plenty more coming up right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Wow, what a week of news. It just continues here. And we're going to break it all down.
Starting point is 00:00:26 Joining us today is a great reporter, Louise Matsakis of Semafor, here with us to talk, of course, about the week's news and some of her great reporting on the latest in the AI dramas and plenty more. Louise, welcome to the show. Hey, Alex. It's great to be here. Great to have you. You're about to come out with a story talking about how ChatGPT might be getting lazier. Now, this is something that we all maybe feel spiritually, but what does your reporting show you on that front? Yeah, so over the last few weeks, a number of mostly professionals, I would say. So these are people who already pay for ChatGPT Plus, which is the $20-a-month subscription, which gets you access to GPT-4, which is their most advanced model right now.
Starting point is 00:01:11 And they're basically like, look, when I ask the chat bot to like produce 50 lines of code or, you know, just to do tedious work to maybe like, you know, put a list together of upcoming dates for me, it won't do it. Or in I think the cases that are the funniest, it will instead give people instructions on how to do it themselves. So it'll be like, hey, like, here's a template. You know, you can fill out the the other 50 lines or whatever, which is, I think, super interesting. And it's not clear to me exactly why this is happening. So a few open AI employees have said that this is part of, you know, the result of a known bug. So there is something going on here that, you know,
Starting point is 00:01:51 open AI admits is a problem. But I think the reason that this is interesting is that it speaks to this wider, really unique issue with large language models, which is that they're a black box. You know, you put something in and something comes out on the other end, and even the people who make these models don't really know what goes on in the middle. And that can be really frustrating for users. And I think what you're seeing now is as people try and bring chat GPT into their workflows, this is really frustrating. You know, this happened a few months ago where people said, oh, you know, Chad Chi BT is getting dumber. It can't do the things that I used to ask it to do.
Starting point is 00:02:31 But at that time, I felt like a lot of people who were saying that were like kids trying to get their homework done or, you know, maybe people who are using this in an experimental way. But I think this is a bigger problem now for the company when you have these, you know, startup founders, tech executives who said, you know, I'm doing this workflow and I'm using this chatbot and it's not working anymore. And that's really frustrating. Can I just say how amazing it is that a known bug within chat GPT is making it tell people,
Starting point is 00:03:02 nah, I'm good when they ask it to do stuff. It even makes me more in awe of this technology that it is sort of incorporated such a decidedly human trait in the way that it works. Totally. I heard people joking like, oh, you know, it's like hiring a bunch of really capable interns, but they're lazy and avoid work in the exact same way. as human interns would. So I totally agree. It's remarkable.
Starting point is 00:03:28 And I think it also speaks to how, you know, the reality is that Open AI is making tons of tweaks all the time. And they can't always predict what those tweaks are going to do to the models. And that's, you know, I think a problem for users. And we just also don't have a transparency system or transparency norms. You know, there's no release notes. There's no like, you know, hey, we made a big tweak today, like you might see this, right?
Starting point is 00:03:56 Like we have no established system for communicating with people about how these models are going to change over time or, you know, what the impact might be. And so I really want to see how that is going to develop over the next few years. Right. And it sort of brings us to an area which is like become more discussed since the open AI situation, which is that we seem to be heading towards this commoditization of all these bots. It was discussed at Deal Book with Andrew Rossorg in the
Starting point is 00:04:28 conference this week about how they all seem to be coming together and starting to mirror each other and you're not going to have one chatbot that's extremely capable and one that's not. Just the functionality seems to be colliding. What do you think about that? Totally. I think that right now, you know, especially enterprise customers have a lot of different options for the types of models that they want to use. And a lot of the companies that I've spoken to say we're not going to commit to a single model, you know, it's too early, we want to see what works best for us, what's the most affordable for us, you know, where we get the best customer experience. So I think that when there's, you know, rumors flying about the chatbot getting
Starting point is 00:05:07 lazier in that kind of environment, that's a bigger risk for an open AI, right? Because, you know, it used to be, oh my God, chat GPT's down, like, what am I going to do? Am I going to have to write my own essay that's due tomorrow? Whereas now I think it's a lot easier for people to say, okay, tragedy is not working for me. I'm going to try, you know, Meta's Lama 2. I'm going to try something from Amazon. I'm going to try, you know, a different open source model that's free and I'm going to run it myself. So I think that's really exciting, but it also means that differentiating and ensuring that you kind of just have the basics of like good customer service, a good interface, you know, reliability. Those boring things are going to become more important than just the
Starting point is 00:05:48 capabilities. Right. And maybe customizable as well, which is where these open source companies can get a heads up. But you really just kind of lead in perfectly to this story that I've written this week about Anthropic and their governance because it does really seem that it is Anthropics moment. They've reased $7 billion, including multiple billion-plus dollar rounds since late September from Google and Amazon. They have a bot that works extremely well called Claude. They sell the underlying technology as a model.
Starting point is 00:06:20 They've released an update in the middle of the. this Open AI catastrophe. And they're so well positioned to be that, you know, number two or that alternative that people sub in if they don't like what's happening with Open AI. And then you start to look at the governance. And that's where things get interesting. Because they're not a nonprofit. They're actually a public benefit corporation. But the people that choose the majority of its board will eventually, and this means within a few years, be people that are, made up, people that make up this long-term benefit trust that Anthropic has set up. And the trust has five people on it, initially picked by Anthropic, but eventually they'll
Starting point is 00:07:04 select replacements. And that trust within four years is going to be able to pick three of five Anthropic board members. Now, the board members are going to have this fiduciary duty, so it's not a nonprofit like Open AI, which you would imagine insulates them a little bit from the type of thing that happened at OpenAI, but it's just so fascinating that you have this company, which was set up by OpenAI expats, right? People who had left Open AI, and they wanted to build a company more focused on safety.
Starting point is 00:07:34 So, of course, they're not going to go with a traditional for-profit board structure, but they build this long-term benefit trust. The trust, their mission, is to make sure that the AI is developed safely, and the way that they live out their mission is by selecting people who would effectively channel it as members of a for-profit board. And it just puts Anthropic in this very fascinating position where it can be this number two. It can be this opening, this second, or yeah,
Starting point is 00:08:04 this opening or this hedge for companies that are trying to build away from OpenAI. But that being said, the governance is fascinating, and it's not as clear cut as a standard for-profit. You know, hearing that, I'm curious like what you think about Anthropics' position and should, you know, when people, when they're thinking about this company, you know, should Google and Amazon now kind of have their eyebrows raised, given that the structure is not the traditional for-profit structure. It does have some notes of that open AI structure, although perhaps a bit more stable. I think it's a really good point to make. And it's so interesting to me because when Sam Altman got fired, I looked back at how some of Open AIs investors were talking about the structure.
Starting point is 00:08:47 and they loved it. You know, I think it was beneficial to them, you know, a few days before Altman's ouster, he, Brad Smith, the president of Microsoft, actually said that this nonprofit structure was what made OpenAI more trustworthy than a competitor like meta, you know, that is profit-driven. So I think there's been sort of this rude awakening to what this sort of funky structure will mean. But I think unlike maybe Microsoft, which is Open AI's biggest partner, you've seen Google and Amazon, which are the big investors in Anthropic, hedge a little bit.
Starting point is 00:09:24 You know, Amazon is allowing other models onto AWS. They're developing their own, you know, their own infrastructure, their own AI technology. You know, and Google is also very much doing the same. So I think that by having Anthropic, you know, on their team, so to speak, they're, you know, able to sort of see what's going on to have some, you know, interest in one of, you know, what is being referred to as the foundational models, you know, and Claude is considered one of them. But I think at the same time, you're not seeing that sort of over-reliance, you know, to the same extent, at least, like, you're not seeing, you know, the CEO of Anthropic on stage next to the CEO of Google or
Starting point is 00:10:08 the CEO of Amazon. Like, I think that that branding is not as clear, I guess. And I also do wonder, I think that this firing of Sam Altman did give an opportunity for Claude. There was some reporting that more customers were going over to Anthropic, even if maybe the same governance vulnerabilities are there. But I think the real issue is that OpenAI is a household name. And that's going to be really hard for Anthropic to compete with. Like, you know, even companies that are in the tech space, sometimes I ask them like, hey, like, are you using Claude? Like, what other models you're using, and they look at me and they're like, what's that? You know, and I think that that speaks also to how Altman has just been more of a corporate,
Starting point is 00:10:55 you know, actor, right? Like, he's extremely ambitious. He's sort of positioned the company in a very public way. And I think that a lot of the people who are at Anthropic, you know, are academics, sort of have this high-minded idea about what artificial intelligence should be and what it should do for the world. You know, a lot of them are former academic. or, you know, have PhDs and things like philosophy.
Starting point is 00:11:18 And while I think Anthropic has been really shrewd about getting sort of a front row seat at the regulatory table, you know, they were much smaller than Google or OpenAI, these other companies that were invited to these closed-door meetings at the White House or, you know, invited to testify before Congress. But I don't know if that's going to translate to profits. Yeah, and they're not just academics. So there are also plenty of effective altruists in that. company. And we spoke about effective altruism at length on Wednesday, so we're not going to just
Starting point is 00:11:49 continue to do it so deep here. But I would be remissed if I didn't point out the fact that these trustees on this long-term benefit trust, two of them that I found at least two of them have direct ties to effective altruism. There's Zach Robinson, who's the interim CEO of Effective Ventures U.S., which is inherently tied with the effective altruism movement. And then Paul Cristiano, who's the founder of the alignment research center, who's also a prolific writer on the EA forums. You know, and you've done some reporting on this. You spoke with the CEO of Skype, who said that they're backing away a little bit from these EA ties. Let's just briefly touch on it. Is this something that folks should be concerned about, and how do you view what's
Starting point is 00:12:36 happening to the EA movement right now? So I think the number one thing that is important for, you know, investors, consumers, everyone to realize about the impact that effective altruism and sort of like, you know, related movements like rationality are having on, you know, the development of artificial intelligence is just that it's a very specific view, right? Like when you talk to these people, you know, there's totally variation, you know, in this community, which is, you know, thousands of people on multiple continents. But what I find is that they read the same text, they read the same blogs, they all know the same people. they go to the same effective altruism conferences. There's nothing wrong with that, you know, and I think there are plenty of good ideas that are circulating within these communities, but it's really important to remember that it's one perspective, it's one community,
Starting point is 00:13:28 and I think that this technology is too important, it's too powerful for only one viewpoint to be considered in how it's developed, and I think that eventually something has got to give. And I think that, you know, the good news is that you're seeing, sort of at the government level. I think that the White House, you know, other other lawmakers and other countries are seeing like, okay, we can't just listen to these people to decide how
Starting point is 00:13:52 we're going to steer this, which is great. But I think that the risk for a company like Anthropic is that if they're not letting in these other perspectives, what are they missing, right? What are they not seeing? What are the problems that they're not accounting for? And that's what I worry about when you have a board that is, you know, primarily consisted of people who read the same things, talk to the same people, have the same perspectives. Yeah, and the philosophy, so by the way, we did get a few people writing in after Wednesday's podcast, and I appreciate I've read it already, I'm going to respond, and we're definitely going to try to get somebody who's more aligned with effective altruist thinking on the show
Starting point is 00:14:31 just to talk it through in nuance. But yeah, it's clearly like baked deeply within, into anthropic. And the thing is, and we talked about this on Wednesday, that there is a tendency to sort of act rashly if you're ascribing to this long-term thinking where, you know, the lives of people in an imagined future universe or future Earth are just as valuable as the lives of people today, even though you have no idea what the lives and the society will look like in the future. And so, yes, it's one strain of thought, but it also, we've seen it in places like FTX and potentially even with this open AI board, there's a tendency to, well, not a tendency, but there's definitely moments where the thinking can lead to what seems like rash action
Starting point is 00:15:19 from the members. Does that sound right to you? Yeah, I think that that's definitely true. And what I think is positive that's coming from this moment is that people are realizing that this is a community that shares a lot of the same views and has, you know, its own perspective. because what I think was happening before is that you saw people say, oh, well, like, this company is saying this and, like, this company is saying it. And, you know, that think tank, that research organization, like, oh, wow, like, they all agree. So, like, therefore, there's clearly consensus about, like, what matters the most with AI or, like, what people should be worried about.
Starting point is 00:15:57 Whereas I think now people are saying, oh, wait, like, all of those institutions are, like, people who all know each other and they all are being funded by the same. handful of billionaires, right? Like, that's really important to know. It's really important for lawmakers to know. It's important for consumers to know. It doesn't mean that you can't agree with them or that they don't have good ideas, but, you know, knowing that the three policy papers you read are actually all funded by the same billionaire is important context for evaluating whether you want to believe those claims, right? Or like, whether those claims are like, you know, the totality of the thinking on this issue. Yeah, I was going into some of these
Starting point is 00:16:34 organizations on the long-term benefit trust of Anthropic and digging into their financials. And they have murky statements like 70% of our funding is from one source. And it's like, come on, just name the source. Who's it really helping to keep that opaque? Right, right. And we should just say that in a lot of cases, it's Dustin Moscovitz, who is, you know, an early Facebook employee who is worth billions of dollars. And I think that he is sincerely funding these initiatives because, you know, he genuinely is worried about how AI is going to be developed. But I just think in other contexts, when we see a web of organizations that are being funded by the same person, we don't ignore that fact in evaluating what they're saying.
Starting point is 00:17:18 Great. And yeah, I have asked Dustin to come on the show multiple times and no avail. So, but let's also turn to Open AI because I'm curious to hear your thoughts. about what's happening in the aftermath of the Altman coup and counter coup. You have a story this week saying that a number of open AI employees are looking for job opportunities elsewhere after this cataclysm. So, you know, 700 open AI employees signed this paper demanding Sam back. Sam is back, but some are now starting to explore working at other companies. What can you tell us about that?
Starting point is 00:17:59 Yeah, I think first, one caveat worth noting is just that like these people make oodles of money and they are so in demand. And so, you know, it's not hard to find opportunities elsewhere if you want them, right? But I think that what it demonstrates is just that this was really crazy, right? Like this was, I think, an unprecedented incident in many ways in Silicon Valley. And I think the number one issue is that we still just do not really know why Altman was fired. The only thing we know is that he was not consistently candid in his communications. I still have trouble saying that phrase, which is all that the board really has revealed so far. And I think that the board would say that they're so worried about that and that, you know,
Starting point is 00:18:46 Altman coming back was not, you know, indication that their concerns were not valid. They believe that, you know, there's going to be an independent investigation. he lost his board seat his close ally Greg Brockman who was also back at the company also lost his board seat so I think that the fact that these people stayed on
Starting point is 00:19:08 during this incredibly terrible you know intense five day saga in which headlines were swirling constantly in order to get these concessions I think shows that these concerns were serious and I think until we know more it's difficult for
Starting point is 00:19:26 really talented, really in-demand employees to necessarily bet, you know, the future of their career, particularly like we talked about earlier when perhaps AI models are becoming more of a commodity and there's all these other great companies that, you know, you know that they're going to exist, right? And that like the CEO is going to be there at the end of the day. And maybe that's, you know, better peace of mind for, you know, you and your family and your future. Right. Did you see Amazon speaking of everybody building a chatbot? They have their own chatbot for business now, called Q. They announced it this week at their big AWS Reinvent conference in Vegas. It comes a year after Microsoft backed open. I launched
Starting point is 00:20:08 its chat chbt chat bot. This is according to CNBC. And it has, it's an interesting name. They can't quite determine where Amazon selected Q. They may be from the James Bond movies, maybe from Star Trek. But it's really an enterprise chatbot. It plugs into applications like Jira and Salesforce and Slack, and it seems very useful for business intelligence. And, you know, my instinct on this one was that, oh, this is a boring chatbot. And, you know, of course, Amazon comes out with, like, the least exciting bot. And it's a very enterprise focused bot. But then I'm reminded of this belief that I have that we're going to end up moving from these broad, large language models to more narrow specific use case type bots.
Starting point is 00:20:54 And this actually might be something that can be very successful and useful to people who are using it in a business context. Maybe not as big as chat GPT, but definitely something with staying power. If it could, for instance, you know, help you speak to all your organizations' data and figure out like where you need to get selling better or what's broken and create a ticket right there or potentially, you know, aggregate some of the conversations within your organization happening on Slack and get a pulse right away. What's your perspective on Q? Well, first of all, maybe I just have, like, internet brain worms, but I couldn't believe that Amazon named this Q after we just endured, you know, years of, like, the Q&N conspiracy theory.
Starting point is 00:21:38 But that aside, I think that you are spot on that, look, the reality is that Amazon is not an organization of a bunch of, you know, high-minded researchers who, you know, think that they're building, you know, super intelligence, right? Like, this is a company that the majority of its profit is coming from. It's cloud computing service. It's talking to the biggest companies in the world every day about how they're storing their data, how they're accessing it, you know, what tools can make their business better. And I think that Q is, you know, responding to those concerns, right?
Starting point is 00:22:13 Like, they are trying to build something that will slot neatly into your existing AWS experience. And I totally agree that what people are going to need is something that's something that's. customize something that's fast, something that integrates with the tools they already use. You know, it's kind of interesting that one of the first things that chat GPT did was, or sorry, that OpenAI did with chat GPT was those integrations, right? It was like you're going to use chat GPT to like order from Instacart or you're going to like use chat GBT to like buy things online, right? And it's kind of interesting that for now that didn't really work, right? Like no one, I don't know anybody that's using those integrations where I think you're
Starting point is 00:22:54 definitely going to see enterprise customers using something like Q to integrate with the services that they're already using, right? I think we're a way off from something sensitive, like, using Q to, like, do financial transactions or to access, you know, maybe potentially sensitive customer data or, you know, things like health records. But there's no reason that you couldn't use something like this to say, like, hey, like, what was our profit last quarter, you know, or like, how much should we make from this? Or like, you know, how many customers signed up today or you know these sort of like run-of-the-mill business questions and I think that yeah it's boring but it's going to make more money perhaps at least in the short term than you know
Starting point is 00:23:37 trying to get super intelligence or make something that everybody um you know uses in both in their personal and professional lives cue is here to save us louise you know's the truth not again listen to cue oh i can't do this again Alex Q's finding these patterns that no one else is seeing. All you have to do is trust Q. Like, were these marketing people living under a rock? I just don't understand how that happened. Maybe they knew exactly what they were doing.
Starting point is 00:24:08 True. True. It kind of fits. Lewis Metzakis is here with us. She's a reporter at Semphor. We're going to talk in the second half about Elon Musk telling advertisers to go F themselves. So stay tuned. We'll be back right after this.
Starting point is 00:24:22 Hey, everyone. Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending. More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now they have a daily podcast called The Hustle Daily Show where their team of writers break down the biggest business headlines
Starting point is 00:24:43 in 15 minutes or less and explain why you should care about them. So search for The Hustle Daily Show and your favorite podcast app like the one you're using right now. And we're back here on big technology podcast with Luis Metzakis, a reporter at Semaphore. Let's talk about what might have been one of the more remarkable tech interviews, I don't know, of a decade. I haven't seen many like this. So Elon Musk is sitting up there at Deal Book with Andrew Ross Sorkin. You knew it was going to be good.
Starting point is 00:25:12 And Sorkin immediately starts talking to Musk about this advertiser, drawback, boycott, exodus after they didn't like some of the tweets that he was writing about Jewish people. And Andrew Osorkin was like very like, I would say, tender in his line of questioning, talking not about how Elon Musk needs to be liked, but he did need to be trusted. And must replied in I guess typical must fashion, he goes, if someone's going to try to blackmail me with advertising, blackmail me with monies, go F yourself. Go F yourself. Is that clear, basically telling these advertisers that he doesn't want to play games with them. Either they were wanted to advertise on the platform or they didn't want to advertise there. I will probably sink Twitter's advertising business.
Starting point is 00:26:06 Louise, what did you make of this moment? I mean, you have to laugh, right? I think that Sorkin, yeah, maybe it was a little tender, but I think he also just did a great job of getting out of someone's way who was, you know, making. controversy for themselves. You know, he was really measured and sort of not reacting. Oh, it was terrific job interviewing it. Absolutely.
Starting point is 00:26:30 Yeah. It was, you know, it was sort of a masterclass in how to do an interview with someone who was really volatile and difficult. Yeah. I guess I have to say that I think at this point, what would be helpful for the media, and you've already sort of seen this, is to not let Elon get the shock value. I think a lot of people were like, I saw a lot of people quote tweeting, you know, Linda, the new CEO of X saying, hey, like, what's your response? This is so ridiculous. Like, what are you going to say? Like, how do you let him treat you and your customers like this? And while I think those are all valid questions, the reality is that we've seen again and again for the most part that he gets away with it, right? And I think that he carefully chose a CEO who was going to look the other way or, you know, stand by him in other cases, and, you know, she put out an internal message to employees
Starting point is 00:27:29 saying, like, yeah, we're going to double down. Like, we're not going to let anyone bully us and sort of trying to see what Musk said in the best light possible. I don't think that this is going to result in, you know, a better situation financially for X. But I think that in a paradox way, it's still furthered this very specific brand that Musk has cultivated. You know, you saw a lot of his loyalists saying, yeah, like, that's right. Like, he's telling it like it is. Like, screw Disney. Like, I don't care about these advertisers either.
Starting point is 00:28:05 Like, I like that he's not going to be blackmailed. Do I think that that is a legitimate narrative for what's happening here? Absolutely not. You know, he's long said things that are offensive, wrong, stupid. And that's just, you know, not the way. you can behave when you run an advertising company. That's just, you know, a freshman in a marketing major could tell you that, right? But I don't know if this is going to move the needle, really, in terms of where Musk stands.
Starting point is 00:28:34 I just, I think what I see again and again is that, I don't know if you feel this way, Alex, but like with Elon, people are like, this is finally going to be the thing that like some authority is going to like, get him now right like this is going to be the thing that like he has to answer for um and again and again that's just never been the case right so i think we need to stop waiting for that authority it's like people can back away financially they can cut their ties with him and i think that um i guess we need to sort of investigate are companies doing that for their own benefit and what are their interests here I think rather than using, okay, like, these companies are quitting, like, let's stick that on Elon, right? Or like, that shows that he's wrong.
Starting point is 00:29:24 I think that's the mistake that it is being made with the story again. It's just like, yeah, like, that'll get him, right? It's like, no, like, let's actually look at the interest here. Like, is this advertising effective? Like, what does this mean for companies like IBM or Disney that are pausing or canceling their advertising, right? Like, that's where I think the focus should be, even though I was stunned for sure. Right. No, it was absolutely shocking.
Starting point is 00:29:45 So I think there's a number of things to say here. First, I did think that Linda Yaccarino, the CEO of Twitter, should have come out with a statement afterwards saying something like, yeah, go F yourselves. That would have been so great. I would have loved that. Second, I think you're right that there's a level of do these ads work that matter most to advertisers.
Starting point is 00:30:13 and Twitter's ad effectiveness has always been questionable. So the companies really had to live on brand advertising. And so it's much easier for a brand advertiser to pull out. Like if Mark Zuckerberg said that, they would not leave. I can tell you that for a fact because Facebook advertising works so well that basically Facebook is immune from anything that the company does or the CEO says. And advertisers, well, within some level of reason, and advertisers will still be there.
Starting point is 00:30:42 it's much easier to cancel your Twitter spend. But I think going to the heart of the issue, I think that there's some level of what Elon Musk did that people working in the business side of the news industry have to appreciate in some way. And I'm not saying he was right. Obviously, he apologized for his tweet. I'm not saying he was right for engaging the people that he did.
Starting point is 00:31:10 but I do think that there is this, and we've talked about it on the show in the past, there's this brand safety puritanism that anyone who works in advertising now is just driven by this misguided idea of brand safety where they don't want to appear next to anything controversial, anything that's newsworthy, and that maybe more than anything is driving ad dollars away from the news industry. Certainly it's going to, you know, almost sync Twitter because there's going to be this mass exodus and you know there are things to to say about the way that Elon acted and the way that um and and what precipitated this whole crisis but it's also just like you know I think that anyone who's like celebrating advertisers just saying well we're out of here
Starting point is 00:31:56 um you know and trying to sort of dictate the nature of of you know the discourse in the US there's something a little bit I would say wrong about that And I don't know. I mean, I'm not exactly celebrating what's happened with Musk, but I also feel like, you know, there's something about what he said that should resonate with people trying to run media businesses. Maybe I'm misguided. What do you think?
Starting point is 00:32:22 I couldn't agree with you more. And I am so glad that you brought this up because sort of the irony of, you know, the journalists that are celebrating, you know, these advertisers leaving Twitter is exactly what you said, is that it's part of this bigger. industry that I think is really insidious, which is sort of like these organizations that exist to essentially warn companies that they're advertising in places that are not brand safe. And, you know, there's some nuance here. Like, I will totally acknowledge that it doesn't make sense for a kids, you know,
Starting point is 00:32:56 clothing company to be advertising on Pornhub for sure. You know, like there needs to be some, you know, like, you know, moderation here within reason. But we're at a point now where because of these sort of like brand safety analyses, which we should also say are often, you know, highly inaccurate, where they claim to be able to, you know, deduce the sentiment of a web page using like, you know, these janky AI tools and like, oh, this is negative. So like you don't want to have your advertising next to something negative or, you know, that has to do with a sensitive subject.
Starting point is 00:33:29 I think that that's really bad, not just for reporters and people who are producing the content online, but also for the advertisers. I think the best place for advertisers to be is where readers are the most engaged. And they're often engaged on serious, difficult, sensitive topics. And I think that there's nothing wrong for a company to be next to that content if it's handled with care, right? And if it's accurate, if it's not sensational. Like, you know, there's no reason that these companies cannot be on, you know, a New York Times story or a CNN story or, you know, big technology story. But a lot of times they avoid it because I think that there's this sort of cottage industry that scares them.
Starting point is 00:34:11 And I think a perfect example is what you saw happen with Jezebel, right? You know, basically, you know, and of course, again, I think that Jezebel was mismanaged by, you know, geo-media. But what you saw in sort of their explanation for why they wanted to sunset the site, which I should say has now been bought and is going to come back, is that we couldn't figure out how to sell it to advertisers because, you know, feminist content, content about abortion, content about, you know, these divisive issues that are really important to women, you know, we couldn't find someone who wanted to advertise there. And I think that when you have an ecosystem like that, what do you get? Particularly, like, let's look at women's media. What do you get? You get shopping content, right? You get consumerism. You get this stuff
Starting point is 00:34:55 that's like doesn't really resonate with the most important things in people's lives. And I think that Elon Musk is totally right to be frustrated by that. I just wish he was not the sort of mascot of this issue and that we had other, I think, people speaking up about it. And I think the problem is that a lot of reporters don't understand that this is happening because they don't understand the business model of their companies. Yeah. And we saw layoffs this week at Vox at Kandanask, Bendy Fair.
Starting point is 00:35:23 Washington Post seems to be gearing up for layoffs. I don't want to say it's directly related, but is it indirectly related? absolutely so speaking of elon the cyber truck is finally out right i mean let's talk about something that we can actually celebrate this beautiful vehicle with glorious angles and tremendous pickup i can't wait to drive it uh let's see this is from cnbc he says the musk says the cybrotrocks hard steel body was bulletproof and its windows are rock proof it could tow 11 000 pounds it can accelerate from zero to miles per hour in 2.6 seconds. It has a super tough composite bed that's six feet long and four feet wide. And must says the vehicle is going to change the look of the roads and the future
Starting point is 00:36:12 finally looks like the future. What do you think, Louise? I mean, you live in LA. It's a car driving city. Do you expect to see the cyber truck start to flood the roads out there? You know, how eager are you to drive one of these things? Oh, I will absolutely never get in one. Are you kidding me. No way. I will give it to Musk that it's fun. I appreciate that someone is like being inventive. I just like, oh my God, what if we let, you know, cool people who have imagination beyond sort of like a little boy gave them the resources to build really cool hardware? Like, that's what I always think about is just that I think a lot of the, no offense, Alex, but a lot of the men who are given sort of like the capital to build fun things,
Starting point is 00:36:59 I just think often it's pretty lame or like it's not as, I just like dream bigger. You know, it's like, oh, we got another gigantic clunky car that looks like it's out of a video game, right? Like, it's an impressive feat of engineering, but I bet it's going to have a ton of problems. It's going to be a collector's item, right? And there's nothing wrong with that.
Starting point is 00:37:19 And I will say, like, at least it made it to production, right? that was a question for a long time, whether this thing was actually going to get out there. So, you know, hats off to that. But I also think in the last few years, you know, Musk has sort of damaged his public reputation. And you're already seeing people move away from Tesla, right? Like when I first moved to L.A. in 2020, there were so many Tesla's, you know, on the west side here. And, you know, Silicon Beach, as they call it. And I'm seeing fewer and fewer of them, to be, you know, completely frank with you.
Starting point is 00:37:52 I still think they're great cars in a lot of ways, but it's not really cool to be associated with Elon Musk anymore. And, you know, a Tesla on the road is sort of one thing. But this cyber truck barreling down the street, I just think everyone's going to be like, whoa, who is that guy? You know, it's kind of giving the energy of the people who mod their cars. So I can't remember what they mod the exhaust or whatever. So it like makes a really, that's the same energy, right?
Starting point is 00:38:18 It's like the silent version of that. I'm taking the side of the. a cyber truck. I can't wait to drive this thing. I just spent the weekend, Thanksgiving weekend, driving the Rivian R1T pickup truck back and forth from New York to Boston and had an absolute blast doing it. And I'm here for the electric pickup, absolutely. So welcome cyber truck. Tesla, if you're listening, send me one. I'll get behind the wheel. Can I? Just for a weekend. Don't bribe me with the cyber truck, but just let me test it. One word for you, Alex, Rivian.
Starting point is 00:38:53 Can I get you a Rivian? Would you like to try a different, a different, you know, larger electric vehicle? There are options now. You know, I don't think you have to. No, definitely. Yeah, the Rivian, it really was great. It drove super smooth. It was just a fun car to drive.
Starting point is 00:39:10 I enjoyed it. We're going to have RJ Scurrens, the CEO of Rivian on the show, probably in the new year. So folks can stay tuned for that. We'll talk with him about the company, its finances, its strategy, and what's coming next? seems like they might have some stuff coming up next year on the new new truck front so but that's what we are we're moving maybe we are moving towards electric finally we'll see I hope so I don't think I'll get another gas car I think it would be a really a lot of things would have to go super wrong for that to happen and so you know if the cyber truck you know continues to make electric cooler great I'm I'm all for
Starting point is 00:39:45 that but I will laugh at you Alex when I see ruling by that would be a fair reaction um so So last story for us today. Jack Ma is back. He's been gone for, it seems like, years after feuding with the Chinese Communist Party. I mean, he has not talked at all. And Alibaba has really sunk maybe as a result or alongside this. And they have 22,000, sorry, 2200,000 people working at the company. And Ma came onto this message board. And he said, every great company is born in a winter to the staff and the people willing to reform for the future and the organization is willing to pay any price and sacrifice are the ones that are truly respected. What's your, I mean, you're a very close China watcher. We haven't seen Ma, and it seems like
Starting point is 00:40:37 years. What do you think about this return and what do you make of the significance? Yeah, I think it's been now three years since he's said anything. And this isn't really even a public statement, right? Like, he made this internally to employees, but I think it was so noteworthy that it immediately leaked, because, you know, any Alibaba employee probably, like, you know, immediately texted their friends being like, Ma just said something on our internal slack. Like, what is going on? I think it signals one thing in particular, which is that they are getting crushed by Pinduodwa, which is this competing e-commerce company that a lot of listeners might be familiar with because they own TEMU, which has now for over a year been
Starting point is 00:41:22 one of the most popular apps in the U.S. It's gaining a lot of traction in Europe as well. And it's kind of like Wish or Amazon. It's basically, you know, a discount e-commerce platform where you can sort of get commodities, like a spatula or like, you know, a costume for your dog or like, you know, have you ever ordered anything from there, Alex? Yeah, we're true Timo heads over here. like it's a constant of state analysis. Yeah. Not without its problems, but it is crushing right now. Yeah, I mean, the prices are a lot cheaper than Amazon in many cases.
Starting point is 00:41:57 They have all these deals. They're really good at sort of like gamified e-commerce. So, you know, you'll get on there. You'll kind of be browsing and they'll like give you a discount that you can't resist. And so PDD started as sort of like, it was really not cool. And they were kind of like, it was really looked down upon. It was sort of like uneducated people. people in what China calls, like, you know, third or fourth tier city. So not your Shanghai.
Starting point is 00:42:21 You know, it's like, you know, the grandma playing games on her phone and like ordering junk that she doesn't need. That was sort of the reputation that it had. But overnight, you know, since I think they were founded in 2018, so, you know, much later than Alibaba, suddenly they had, you know, 800 million users in China. And now they have, you know, I think something like 100 million people have downloaded, or at least 100 million times the app has been. Timu has been downloaded in the U.S., so they're just gaining ground, you know, very, very fast abroad, whereas I think, you know, even today, it's pretty niche to order anything from AliExpress, right, which is the equivalent of Timu in the U.S.
Starting point is 00:43:01 And it's Alibaba's, you know, international platform. And the number that just really, really stood out to me was that in the third quarter, PDD's revenue grew 94% where, you know, year over year, whereas Alibaba was only grew nine percent. So when you're looking at your competitor growing 10 times faster than you, you know, I totally get why mob was just like, I cannot, you know, stay silent any longer. You know, we have to do something to catch up. And I think it's just a really good example of, like, these companies that are not sexy often end up sort of smoking the competition over time. And I think you're seeing it again with TEMU here in the U.S. Like a lot of people who I talk to who are
Starting point is 00:43:44 using this app, like you, Alex, but I hear a lot of people in, like, not the coasts, right? It's like the people in Texas, you know, people in the Midwest who, you know, maybe they're seeing their paycheck shrink or they're, you know, being impacted by inflation. And it's really beneficial to them to have this, you know, fun, interactive site where they can order things for more cheaply than they can on Amazon and they don't have to pay $120 a year for a prime, right? And they get free shipping anyway. So, yeah, I think it's really interesting.
Starting point is 00:44:14 definitely something that I've been keeping an eye on. And, you know, the big question, of course, is just like, will this Chinese company end up in the crosshairs of, you know, regulators in Washington or not? It already is, but I think the benefit is that I think what you see again and again is just that, like, lawmakers like brand names, right? Like, we saw all the trouble that Jewel got into, right? Because jeweling became a verb. But the irony of that is, like, now all the teens are smoking vapes from China, right? Like these unregulated elf bars and stuff like that, but elf bar is not a name that any congressperson would know.
Starting point is 00:44:54 And I think that for now, Timo isn't either, whereas TikTok is, right? Like they all know what TikTok is. They all know it's Chinese. But if Timo can kind of just stay a little bit below the surface of a brand, as a brand, I think that that will help protect it. Does, and last question about Ma, Does his return signal any shift in the Chinese government policy towards its entrepreneurs? I mean, we just saw Xi Jinping coming to the U.S., speaking with Biden,
Starting point is 00:45:23 clearly trying to get the Chinese economy on track. Does that mean, you know, they might be backing off this, you know, basically they crushed some of the more visible CEOs. Do you think there's a change coming on that front? I do think that just because of the realities of the Chinese economy, that there's less room right now to, you know, overregulate or to clam down. Like, you know, they need to sort of signal to entrepreneurs that, like, this is a good time to start a company here.
Starting point is 00:45:54 You know, we're going to support you. And I think a good example of that is, you know, it's a good note to end on. It's just that China was, you know, the first country to pass comprehensive AI regulations. And a lot of other countries now are looking to them to see how does that go. And I was struck that at first there were these really rigorous, like, really intense rules. And I was like, I don't know how anyone is going to comply with that. But then, once you saw the finalized regulations, they were a lot softer. And I think that they were a lot more sort of pro-innovation. And that to me was, you know, another sign that there's
Starting point is 00:46:26 definitely been a shift. And I'm not surprised that now is the time that Ma is speaking up. Besides, I think he's also very scared for his baby that he's built, you know, and that it's kind of become an old behemoth that's now struggling. But I think also he was like, you know, I'm not going to have the authorities knocking at my door for this. Yeah, fascinating stuff. Luis Mentecis, thanks so much for joining. Thanks, Alex. It was great.
Starting point is 00:46:49 Great to have you on. By the way, can you shout out where people can find your work and the work of the semifor team? Yeah, you can read my work at semaphore.com. We are not spelled like the word semifor, but it's S-E-M-A-F-O-R. And you can get the newsletter that I write twice a week with Reidel Brigotti in your inbox on our website.
Starting point is 00:47:13 Terrific. Well, it's been great having you on. We had Reid on a few weeks ago, actually right before the Open AI news broke. So we had to go back to the Semmel for while. You guys are doing great work, and it's always great to speak with you. Thanks, Alex. So are you. We're big fans here. Okay. Thank you so much. Thanks, everybody for listening. We'll be back Wednesday with another one of our flagship interviews and then back next Friday to break down the week's news. We'll see you next time on Big Technology Podcast. Thank you.
