Big Technology Podcast - 2023 In Review, 2024 Predictions — With Casey Newton

Episode Date: December 22, 2023

Casey Newton is the editor of Platformer and co-host of Hard Fork. He joins Big Technology Podcast for our annual look back and forward at the year that was and the year to come. In this episode, we both go through our standout moments from 2023, then make predictions on Gemini, GPT-5, the future of Google Search, Apple’s Vision Pro, X vs. Threads, falling in love with bots, autonomous driving, and plenty more. Tune in for a fun, lively discussion to close out 2023.   --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Let's predict 2024 with Platformer's Casey Newton. That's coming up right after this. LinkedIn Presents. Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversations of the tech world and beyond. Well, it's the end of the year, and that means it's time to recap what happened this year and predict what's going to happen next for the third year running. We're joined by Casey Newton. He's the editor of Platformer, the co-host of Hard Fork, and a wonderful repeat guest here this time of year.
Starting point is 00:00:36 So, Casey, great to have you back. Thanks for coming on. Nice to be back, Alex. Thank you for having me. Wasn't 2023 like the most interesting year covering this stuff? For me, definitely. It's been most interesting one I can remember. Yeah, I think every year has its own kind of flavor.
Starting point is 00:00:54 You know, last year's flavor kind of feels like it was chaos. and I think of it mostly as the Elon Musk taking over Twitter year. This was a year where we saw people start to build a lot of new things. And I think for the first time in a long time in Silicon Valley, there was a sense that something truly new and momentous was gaining momentum. Exactly. And it had, I feel like it had everything. There were business stories.
Starting point is 00:01:18 There was like legal stories with the Sam Bickman-Fried story. And then, of course, like the big technology moments with AI. Do you have a single moment that stands up? for you this past year? I mean, I think the release of GPT4 to the public and the beginnings of it proliferating throughout so many different industries, I think will be the moment that we look back on when we think about 2023. Like this was the year that large language models went from pretty good to like often great
Starting point is 00:01:56 And I think the implications of that, you know, we're just going to be dealing with for a long time. One of the interesting moments to me that stands out from this year and we're going to move on to our predictions as opposed to spending too much time looking back. But it almost is easy to forget that this was like the quote unquote year of efficiency and all these big tech companies ended up laying off so many employees and, you know, dealing with some serious business problems at the beginning of the year, only to end the year. and, you know, better shape financially than they've ever been, more dominant than they've ever been. Company like Meta up more than 150% on the year, even Alphabet, right, up 40%. I was looking like we have so many more trillion-dollar companies now than we started with in 2023. It's kind of an interesting footnote in a more in a year that's been more interesting from a technology standpoint, but definitely something that I'm trying to, you know, wrap my head around.
Starting point is 00:02:54 Yeah, I mean, I think that the year of efficiency was really like the year of pretext, and the pretext they were looking for was just get rid of a bunch of employees they didn't want around anymore for a lot of reasons, you know? I'm sure that some aspect of it was wanting to be more efficient, but I think for the most part, they really, really just wanted to get rid of a bunch of divisions and do some reorganizing. Do you think it worked for them? Um, I mean, I don't know, you know, like, I don't get too far into the bowels of company org charts for the most part. Um, I think the, the proof is a culture thing though, right? It's like, did it revital? I mean, that's what people said that sparked their culture and it was like very lazy before. So just not from a broader standpoint. I mean, I don't know. I think that most of the people who got laid off, um, in the past year would do not feel like they were lazy at their jobs. And it's, you know, there's a lot of just, kind of individual cases, and I think it's hard to speak to generally, but we do know that these companies have been wanting to get rid of, you know, whole divisions for a while,
Starting point is 00:03:59 and this year they finally got their excuse. Okay, let's get to some of these predictions, because you've written a handful of fascinating predictions and platformer, which I'm a happy subscriber to, and I think we should talk about them. So one of the first ones that really caught my eye is that you say that Google mostly catches up to OpenAI in large language model quality and begins to neutralize GPT's lead, effectively saying all this marketing shenanigans that we saw with Gemini. Don't read too much into that. The Google LLM is going to be on par with Open AIs and it's going to be productized. Can you expand on that a bit? Yeah, I think for the past year, Open AI has had this lead that has felt
Starting point is 00:04:41 kind of, I don't know about insurmountable, but I just think they've been unquestionably the leader. And whenever anyone releases a new LLM, OpenAI is always the benchmark. And so Google comes out at the end of this year and they say, here's our new LLM and here are the benchmarks. And according to us, it's better than GPT4 in a lot of, excuse me, in a lot of cases, not every case, but it's like better in a lot of cases. And I think that we should basically take them at their word. I mean, sure, let's wait and see what the thing looks like when it actually comes out, and it's easy to say this when you don't actually have to release the product. But I just think that Google has generally been pretty conservative in the way that it talks about its AI. Like, I don't see them as people
Starting point is 00:05:30 who have been doing a lot of hyping and hyperbole. So I'm just sort of inclined to believe them when they say, like, trust us, this thing is going to hit the benchmarks it needs to. But I think the bigger issue is that after that happens, these models are all roughly going to be at parity. And then it's no longer a pure technology game. It's no longer about who hit which benchmark, who's 3% better at which test. It's all about product design and distribution, you know? And I think Google's in a really good position to just put its generative AI in front of a billion plus people. And I think Open AI is two through Microsoft. And I think they're just going to kind of fight it out there. Like obviously, they're going to continue to research these next generation
Starting point is 00:06:11 models and there might be some big leaps ahead and big breakthroughs still to come that'll kind of reset those dynamics. But I see 2024 as a year where these two get kind of closer together in terms of quality and they have to find new ways to compete. Right. And the Open AI I've argued is in the hits business, right, which is kind of going along the way of what you're saying. Do you think they have another big hit in them? I mean, Sam Holman, you know, spoke with you guys about their research. I was talking about how he recently saw like the veil of discovery pull. back. I mean, all this stuff around him getting fired. You know, there was a hint of maybe this was like in the wake of a big breakthrough. Do you think we see that next year?
Starting point is 00:06:51 I mean, well, first of all, I think I would disagree that they're in a hits business. Why do you think they're in a hits business? Well, if everything becomes commoditized, then you have to, I mean, I'm not saying they go away if they can't make hits, but they sort of are in a place where they're going to get caught up to by the bigger players very quickly. I mean, thanks to their partnership with Microsoft, I think they just kind of are one of the big players. I think they have a really powerful API that is being used by a lot of different developers. And I think that API is going to see increasing use for the next year. I mean, the story with them so far is basically that they can't keep up with demand,
Starting point is 00:07:31 maybe sometime in the next year, demand levels out. But I think they have a lot of just kind of very basic stuff they can do to make their API more available, to continue to fine tune it, to continue to explore new applications. for it. And I just think that gives them a lot of runway, like, through the end of next year. You know, when I think hit space business, I think of, like, video games. You know, it's like you spend like 10 years making Eldon Ring and you hope people like it. You know, Open AI has built some pretty powerful underlying technology that I think they can kind of continue to refine and, like, put into new shapes and new products. And so I think it's going to set them up for
Starting point is 00:08:03 success. Okay. Yeah. I know, I think they'll be successful. And the question is, like, to be the leader, right? What's it going to take? That's really, I think you're right to that maybe the hits business is too much into this like Hollywood video games motif, which might not be the proper way to compare it. You think we're going to get to AGI this year? You mean by the end of 2023? Let's go big. No, I think you know.
Starting point is 00:08:30 24, man. I think we have a few more weeks than 12 months. No, probably not. I mean, I think one of the things I predicted for next year is that open out, I will begin to train GPT-5. It seems like it's been about a couple years in between models for them. If they start it toward the end of next year, it'll still be a little early, which is why it's not my highest confidence prediction.
Starting point is 00:08:54 But I think that after the whole Sam Altman drama, they are going to feel the pressure to show that they are still the leader. And I think that's going to lead them to push really hard in seeing whether they can get their next model into training. Nobody who I have spoken to thinks that GPT5 is going to be AGI. And in fact, there are some, you know, technological hurdles to just sort of, they still have to figure out to decide, like, what is GPT5 even going to be? And how do we get there?
Starting point is 00:09:23 But, you know, this is part of the unknown that we are in. If you just continue pouring more compute into these models and just try to keep making them more general and feed them more data, do they become omniscient? And I think the answer is like probably no, but like ask me again at GPT7. Right. So when do you, do you have a prediction as to how close we are to artificial general intelligence? No, I have no idea. I only pretend to be a smart person on podcast.
Starting point is 00:09:52 I don't actually know, you know, when they're going to get there. Okay, let me give it, let me do it this way, over or under 20 years. I'll go first. I'm going over. Let me be really annoying. I don't think we're close to it. Well, let me be really annoying and say, how do you define AGI? that is basically artificial intelligence that's on part with human level intelligence that can
Starting point is 00:10:12 generalize that can well some would say that we already have that today some would say that we already have that today somebody say we've already hit that bar you know I think if you want to talk about something that is like some kind of software that is better that at every single human at everything humans can do that's basically what I think of as AGI and I don't To me, the odds that we get there in under 20 years are higher than 50%. 20 years is a long time. Do you look at the technology you were using in 2003? You didn't even have like an iPod nano back then, right?
Starting point is 00:10:52 So a lot can change in 20 years and, you know, just look at the progress we've made with AI in the past, you know, four or five years. And it's clear that the like, the rate of change is accelerating. So, you know, everyone in the subject always says, like, look, we could hit some ditch, you know, like we could, we could hit some cliff where we can't actually figure out how to get to the next place from where we are. And I think a lot of people would actually be really excited if that happened, including people who work in AI. But it is also possible that we don't really hit that ditch. Or like to the extent that we hit that ditch,
Starting point is 00:11:25 we figure out how to get out of it within like a few months or a year. So there was around the time of Sam Altman being fired and then rehired. There was these stories about this Q-Star model that apparently was able to reason. What type of stock do you put into that? I mean, we're talking about path toward artificial intelligence. Like, what do you think about that model? I mean, is that what we see next year? It's hard to say, and I did a little reporting on it,
Starting point is 00:11:50 and I was told, like, to the extent that that was happening at Open AI, like, it's not something that had come to the board's attention. I do think they're working on experimental models all the time. It would not be surprising to me to learn that they had built something that had shown some kind of new capability that, they got really excited about or they got really concerned about. But, you know, we don't know for sure. I would like to know more about that subject than I do.
Starting point is 00:12:18 The big thing that QSar was supposed to be able to do was to be really good at math. And so far, the LLMs that have been devised are really bad at math because they're not really designed to understand it. So if an LLM got really good at math, then, yeah, that would be something we want to pay attention to. Okay, one more follow-up. So you mentioned one of your predictions is GPT-5 starts training next year. It's always been surprising to me that they don't just start training the next GPT after they release that, you know, their current GPT? Like, what is the, what does GPT-5 look like that's different than GPT-4?
Starting point is 00:12:51 And why haven't they started yet? So, I mean, I'm going to feel a little bit out of my depth here. But, like, you think about what a large language model is. Like, the first thing you do is you, like, get a whole lot of data. and then you run a process against that data that creates this model that is then able to make predictions based on various inputs. And then you sort of go in and you fine tune the model and like you build guardrails. And my understanding is that like this is just a sort of massive project that you
Starting point is 00:13:32 don't just do it over and over again. It's phenomenally expensive to do this, right? So it's like, you got to go get a lot, you got to raise a lot of capital to undertake this process, right? And I think
Starting point is 00:13:48 GPT4 was their best idea at the time for how they could do all of that. And I think in the year since they have put it out, they have not devised all of the next steps. Where are we going to get the rest of the data? How are we going to, you know, reach the sort of next, you know, if the current model
Starting point is 00:14:10 scored 86% on the bar exam, what do you need to do so that the next one scores 100% on the bar exam? And I think there's just like sort of a million questions to answer along the way. So it's not like, you know, you release Microsoft Word 1.0 and then you have a meeting that says like what should be in Microsoft Word 2.0. It's like we've completed this massive science project and solved a lot of problems. And in some amount of time, we will start to think about, okay, let's plan another massive science project. Casey, that was an excellent explanation. Thank you. It was, if anyone actually, if any of your list is actually know anything about AI, I think that answer probably drove them insane, but forgive me. It's been a long year.
Starting point is 00:14:51 No, I think it's, I think it was great. So let's talk a little bit about A. AI created content now, which is going to be something that we really started to reckon with this year and is going to just explode next year. So one of your predictions is that the quality of search results degrades as Google proves unable to reliably detect AI generated content. Now, I have a question for you about this one because if, so Googles aren't, isn't the search engine that Google built all about basically identifying what a quality result is based off of the, way that the rest of the internet sort of relates to it. So, you know, if you get links in from high quality pages, then your, then your page goes up in the results. And my thought on this is if the AI generated pages are good enough to rank high within Google search results, then are they going to actually be low quality? Like, should we really care all that
Starting point is 00:15:49 much? And I'd love to hear you expand upon, you know, why you think that's the case. I mean, I think that framing sort of takes it for granted that Google search results are pretty good today. Most of the people I know feel like Google search results used to be better and that the results that you find at the top of a lot of Google search result pages today are not necessarily the best. They're people who paid the SEO consultants to game the system as hard as they possibly could so that they would show up at the top of results. and that is one reason why I'm not as inclined to believe that Google is prepared for what is coming for them. I think they have been able to get away with not actually caring all of that much about what the quality of search results has been. They effectively have a monopoly. Almost nobody uses Bing, right? And so what they have is a product that is just basically good
Starting point is 00:16:47 enough. It is good enough that people don't lose their minds and not too many people try to build an alternative. But once they get flooded with this crap that they're already being flooded with, you know, you're already seeing mistakes. There have been some interesting and high profile ones this year. Well, then I think, you know, we could be in a little bit more trouble here because I think they don't have the ability to assess what was written by AI and what wasn't. The SEO demons are going to ensure that a lot of bad stuff rises to the top. And I'm just not all that convinced that Google is going to have the tools to stop it. So, you know, we'll have to see.
Starting point is 00:17:31 Would love to be wrong here. But in my experience, I'm just not as convinced that they're going to be able to figure it out. That's a great answer. So where do you stand on this idea that maybe generative AI responses and chatbots could replace the search? So I think it's going to happen. You know, one of the best things that I did for myself this year was I started using this app called Raycast, which is a kind of launcher application so I could launch different
Starting point is 00:18:01 programs from it via my, via my little keyboard here. And I paid to upgrade Raycast so I can type in any question that I have to, you know, about anything. Most recent thing I did this for, by the way, I was watching. Jeff Bezos on the Lex Freedman podcast, and I was like, how old is Jeff Bezos? And so I just said, Command tab, and I said, how old is Jeff Bezos? And I hit tab again. And then it accessed GPT4 to tell me that Jeff Bezos is 59 years old. And that was, you know, useful information. Why is any of that interesting? Well, I didn't use Google to get there. I just asked ChatGPT. Now, chat
Starting point is 00:18:35 GPT might have had to do a little web searching to figure that out. In fact, I think it did in this case. But I didn't go to Google. And according to my little Raycast end of year results, I've done this hundreds of times this year, that things that I would have used to, do Google searches for, I now just ask the LLM directly. Now, maybe you're saying, Casey, isn't that super dangerous? Like, don't these models hallucinate? Like, aren't you sort of exposing yourself to a lot of bad information? And the answer might be maybe, but I only use this stuff for things where I don't care
Starting point is 00:19:04 too much if the answer is wrong. And it turns out that is most of the things that you ask a search engine. You know, we always talk about search like, you know, every search is life or death. A lot of search is like, hey, what? what movies was Adam Sandler in? And what you basically want is a list of eight movies. And if one of the movies is wrong, it doesn't matter.
Starting point is 00:19:23 You basically got what you needed, right? And this is what the LLMs are actually pretty good already at doing, is giving you a good enough answer. And so now, for the first time in my adult life, for the first time, like basically since web search existed, I'm now using a Google alternative all of the time, and I'm here to tell you it's fine. I get an answer directly to my question.
Starting point is 00:19:46 I do not need to see 15 blue pages of links. I just get what I was looking for and nothing else. And the user experience is better than Google. The information quality is just as good. And this is an area where I feel like I am like three to five years ahead of most people, but I'm telling you other people are going to get there. And what Google is going to realize is they're going to actually just let the web decline because their future is in building the experience that I'm using today with Open AI.
Starting point is 00:20:11 They're going to build it with Google products and the web. Let God sort them out. So predictions episode, I mean, I've been trying out this Google generative AI, generative search results. Do you think in 2024 that rolls out to everyone? And that's where they just basically answer your search queries as if they're a chat pop with a long text block. I think they'll keep pushing. I don't know if they'll roll it out everywhere. There'll be regulatory concern.
Starting point is 00:20:36 You know, Google, because it has this monopoly, people are quite sensitive to what those search results looks like. they get in trouble all the time for, you know, what they are and aren't showing in search results and how are they ranking things. Wall Street Journal had a great story today where they said that the publishers have looked into this and they believe that if the search generative experience rolls out broadly, most publishers will lose between 20 and 40% of their traffic. Why is that? Because, well, the AI is just scanning the webpage giving you the answer and now no one has any reason to click anything. So Google might look at that and say, well, this is the best user experience and we're going to do it, you know, publishers be damned. Or it might say it's actually
Starting point is 00:21:15 too risky to do this because we don't want to have to like, you know, go sit in a like regulatory hearing in every country in the world. So we're going to like create some licensing deals with publishers to maybe allow us to not have to do that. Like, so again, I don't know what's going to happen by the end of 2024, but like I do believe that Google will make the sort of bare minimum of licensing deals that it needs to so that it no longer has to care about whether it's setting traffic to publishers. And now we get to the most depressing of all of your predictions, which is that scaled-up AI-generated sludge out-competes many digital media companies for advertising and affiliate-linked
Starting point is 00:21:50 dollars, sparking further waves of job losses and consolidation. And you note the fact that we've lost BuzzFeed News, my former employer and Protocol and vice have all gone away. And, I mean, if we have AI out-competing, basically, you know, publishers putting these AI stories on their sites and realizing it's making them more money than the human riders. That's a serious problem. I mean, we've already seen a few examples this year with SI and Invest.com, and it's happened to me where I've been plagiarized with AI.
Starting point is 00:22:21 How prevalent do you think this is going to be? I mean, the New York Times recently hired someone whose job it is to do generative AI. Where do you see this going? So I would put what the New York Times is doing in a very separate category as everything else that you just said. Like basically they hired somebody to figure out like, Is there any useful thing that we could do with generative AI? I think that's actually a fair and good question for newsrooms to ask.
Starting point is 00:22:42 Everything else, yes, we are in a lot of trouble here. The reason that I believe this is because when you think about a digital media company like Vox Media, where I used to work, BuzzFeed, where you used to work, their core competency was really in the journalism. It was in the writing. It was in coming up with creative, entertaining, informing stuff to show people, right? and they're so good at it, and I want them to be able to do it forever. But there's this other dark side of this world, and the core competency that people have
Starting point is 00:23:13 is not in the writing, it is in the SEO and in the spamming. And I am just making the bet that the spammers are going to do better in this world than the digital media companies, because the things that you would have to do to out-compete a spammer, I'm just not sure anybody really has the stomach for it, right? These spammers are going to take just to name one example. You know, a lot of the internet is just driven by people asking how to do something, right? How do I change a tire? How do I like replace my kitchen sink?
Starting point is 00:23:42 Okay. And we now probably have good answers to like 95% plus questions of that variety that anyone would ask and they're all available for free like on the internet. And, you know, I know that a lot of, you know, some of the publishers that we've been naming in this conversation, they also do their own how to content. And they reap a lot of advertising revenue off it. Often you can put affiliate link revenue in there because, oh, you need to replace a sink. Maybe you need to hire a plumber.
Starting point is 00:24:08 Click this link and we'll get a little spiff if you book a plumber that way, right? So that is just powering a huge portion of the good publisher's revenues. But then along come to spammers and they say, we no longer even need to hire humans to do any of this. We can just have an AI, monitor search trends in real times, spin up a new article about this subject every day. with, you know, and sort of promoted as being even fresher than the last thing and just sort of endlessly win this game. This is their core competency. They do not care about the writing.
Starting point is 00:24:42 All they care about is that you click on the side. They don't care if you replace your thing. They don't care if you change your tire. They just want you to click on the thing, right? And that is the sad thing, is that they are the ones who are in the best position to win here unless Google actually goes in and says, we're going to pick some winners. Or, well, even more. I mean, you might not even need these type of websites.
Starting point is 00:25:04 If you get these how queries to chatbots and they can walk you through step by step, that whole class of publisher goes away. Exactly. Step one is the spammers win. Step two is Google wins by replacing the spammers. And the distance in between those two steps might not even be that big. But like those are the next two turns of the screw that I see coming. Yeah, I mean, I know personally that if I need to go for like a how to at this point,
Starting point is 00:25:28 like number one is YouTube. but if YouTube doesn't get the job done, I'm going directly to these chatbots. Yeah, sure. Why wouldn't you? I mean, I use them to like cook meals now, you know, like give me recipes I can make with these ingredients type of thing.
Starting point is 00:25:41 Again, it's like, it is not perfect. Sometimes they will lead you astray, but it's good enough most of the time that you do just wind up using it. Casey Newton is here with us. He is the editor of platformer and the co-host of Hard Fork, which you can get in your podcast app of choice.
Starting point is 00:25:56 On the other side of this break, we're going to talk about things that are not AI but will also be very interesting in 2024, including Apple's Vision Pro, which is going to be released next year. And we're going to ask, how's it going to do? And also, we're going to talk about whether people are going to get into more romantic relationships with AI. And of course, the Threads versus Twitter battle back right after this. Hey, everyone. Let me tell you about the Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending. More than two million
Starting point is 00:26:29 professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So, search for The Hustle Daily Show and your favorite podcast app, like the one you're using right now. And we're back here on Big Technology Podcast with Casey Newt. of platformer. He's also the co-host of Hard Fork, and he believes, and this is a new prediction, that Apple's Vision Pro is going to be successful enough to revive the interest in mixed reality
Starting point is 00:27:11 and the Metaverse. So, I mean, the presentation looked cool for the Vision Pro. You think this is going to have enough adoption. We were not talking about mass consumer adoption, but enough adoption for people to think about hanging out with others in the virtual and augmented reality world? I mean, where do you see this going? I do. And I will say, I'm a bit over my skis here, because I have not actually tried it. I want to, and hopefully I will someday, but like so far I've not tried it. But the thing is, when you talk to the people have tried it, they were all impressed. And the people who tried it are not all the sort of raw, raw Apple fanboy bloggers. There's some legit journalists and, you know, a handful of hard asses I know, put the thing on and we're like,
Starting point is 00:27:52 this is different. Like, this is cool and this is different. And I don't know exactly what is but it's cool. And so when you think about what does Apple need to do to make this thing successful? Well, according to Mark German at Bloomberg, they're shooting to sell between 400,000 units. Keep in mind, this thing is insanely expensive. The average sales price is going to be $3,700. But you spend your $3,700 and you have this thing. And if they sell, what they want to sell, then they've made a billion and a half dollars in revenue. And I just think if they get there, it's just going to be so obvious to them that there is something here
Starting point is 00:28:25 and they should keep going. I think the road ahead is hard. I think the question of what is this device for is still uncertain, although I have some ideas, but I do think that for a certain kind of nerd, this is going to become a status symbol and we're going to be sort of back off to the races
Starting point is 00:28:43 with the Metaverse. What are your ideas for like what people can you do with it? Here's my big like observation. I shouldn't call it big. Here's my observation. about the Vision Pro. Everyone loves a big screen, right? Like when you bought a TV,
Starting point is 00:28:58 did you basically get the biggest screen that would fit in your space? Oh, heck yes. Yeah, okay, so so did I. This is how most people operate because it's just fun to see things on a big screen. The promise of the Vision Pro
Starting point is 00:29:09 is that there is an IMAX screen that you can put in your backpack and if you're on a plane, you put it on your face, boom, now you're staring at IMAX or you're at work and you're working on a bunch of different things, but you're staring at your IMAX display.
Starting point is 00:29:24 There's a lot of people who will have reason to look at very big screens all the time, whether it is for work or for play, maybe they're traveling. And if Apple can nail this, then I do think it unlocks some new set of uses that go beyond anything that you can do today via a webcam or other existing technologies at the moment.
Starting point is 00:29:51 So that's kind of my, my bull case is a lot of people are going to want to have a portable iMac screen. My bullcase is that the spatial photos and videos that actually like put you back in the scene are going to be so incredible that it will be something that people will buy just for the memories alone because you don't necessarily need to film it on your vision pro. You can film it on your iPhone and then go back into the scene. And I think that's going to be crazy. So my perspective is, yeah, if we do end up getting to a point where this takes off, then that will be it.
Starting point is 00:30:26 It will just, but if that's the case, it will be a very expensive memory device. And I wonder how that fits, you know, the bigger visions that someone like Mark Zuckerberg has put out. Yeah, I mean, it's clearly the Apple and the meta visions are very different. Apple, of course, does not actually call it the Metaverse. Apple seems to be positioning their device as much more of a productivity tool. meta, it's been more of a gaming console. So, you know, they're kind of in different spots there. I imagine it will kind of all converge over time.
Starting point is 00:30:56 But, you know, this is one where I just kind of assume that Apple will do the best job. Yeah. And, you know, I did like the fact that meta made their vision much more of a mixed reality version than like immersive metaverse this year. And that might be what saves this whole project where they talk about how you can be in a room and it might be you and your friend and an AI. there with you. We're not at the point
Starting point is 00:31:20 where you're really going to spend that much time with that AI, but like as we get closer to that or even these new ray bands that they're showing off that have some multimodal functionality where you like tap them
Starting point is 00:31:29 and you say, hey, what am I looking at? And it can kind of, you know, tell you what you're looking at or to translate things. And be like, all right, put these glasses away.
Starting point is 00:31:42 But, but yeah, I don't know. I think the mixed reality movement next year is really going to be fascinated. Yeah, I agree. I will say that the biggest problem I have with these devices so far is that whenever I put one on my face, it's like there's a little countdown timer that's like, how soon can I take this off my face? It's hot. I'm starting to sweat. It's like a terrible time getting the thing to just stay in focus. And then of course, you know, when you're not in mixed reality, you're sort of disoriented. I'm about to like fall down the stairs. So there's just kind of a lot of basic stuff they've got to figure out. And I still kind of think it's like three to five years before that feels like mainstream technology. well i'll tell you something that people will keep uh will have people to keep these devices on their faces and not have that countdown timer it's when there's a artificially intelligent romantic partner that they're hanging out with in the metaverse or mixed reality or stuff and
Starting point is 00:32:35 that's one of your predictions which i i thought this was the most interesting and i really thought this is this is feel this is something that we can really see is that you say the number of people who say they are in romantic relationships with AI companies will increase sharply. So, first of all, are there people today that say that they're in romantic relationships with AIs? Yeah, there's an app called Replica. It's Replica with a K. And they make this product where you pay a subscription and then you can chat with a companion,
Starting point is 00:33:08 get to know it over time. It's all AI. And they had a bit of a controversy earlier this year when they, to use video game language, they nerfed the chatbots. And when I say nerfed, you used to be able to have very sexually explicit conversations with them.
Starting point is 00:33:24 And then they came in and they said, oh, no, no, you can't be that sexually explicit. And there was sort of this outpouring of anger and grief on the subreddit devoted to replica, because for a lot of people, this had become a sort of very important companion in their lives.
Starting point is 00:33:41 And, you know, some people hear that, and they think, like, that's so crazy. Like, how could you possibly be in, a relationship with a chat bot, but I just hear that and I'm like, of course, of course it's this way. Like, you know, I don't know about you, Alex. There are so many people in my life where my entire relationship with them consists of me sending them text messages and then maybe a phone call twice a year. What if I told you that the technology exists where you can actually just simulate that already? And you can. And then you just consider like the loneliness epidemic that
Starting point is 00:34:09 we have in this country and just like sort of how horny people are generally. And the idea that like you wouldn't pay 10 bucks a month for a sort of perfectly kind, empathetic, always available romantic and sexual companion that was into all of the exact same things that you were into? It's like, of course people are going to want that. So the only thing that's really holding it back is that essentially no one has sort of said we're going to go all the way in on making romantic partners, which does mean essentially get as sexually explicit as you want.
Starting point is 00:34:45 But the minute that somebody is willing to do that and they can figure out a way to distribute it because, of course, you know, something like that might not be allowed in Apple's App Store or might not be allowed in the Google Play Store. Once they can figure that out, billions. It's going to blow up. There are billions of dollars to be made.
Starting point is 00:34:59 So do you think that if someone's in a, and I don't have any ideas of this nature, but if someone's in a committed relationship and they develop another relationship with an AI, is that cheating? Good question. I suspect norms around this will evolve. You know, I think there are already things that people do in the course of their relationships that some people think they're cheating and some people aren't, you know?
Starting point is 00:35:23 Their relationships where people are like, while we're in a relationship, don't look at porn because, like, that's cheating, you know? I think most people would say, that seems a little crazy. I think that, like, we can be in a relationship together and, like, you can still look at porn every once in a while. With a chat bot, I think the question is, does it feel more like a person or does it feel more like watching court? And I think that in the answer to that question lies whether it's cheating or not. I think, I mean, I totally agree with your prediction. And I think maybe next year we'll see like these AI romantic bots start to like really ruin
Starting point is 00:35:58 IRL relationships. Like can't you wait, can't you like see in your mind like the wired story where someone's like there's like a big picture of a guy in his driveway and he's like I left my wife for a chat bot and I'm good with it. Yes, because again, think about it. It's like, you know, no, very few relationships are perfect, right? Because we're human beings and we get angry and we get frustrated and we snap at each other. And we are forgetful.
Starting point is 00:36:26 We're not always thought, oh, God, I forgot our anniversary. The chat box is not going to have any of these problems. It is never going to snap you. It is never going to be unkind. It is never going to forget your anniversary. And so, you know, now look, obviously. human beings have a lot of advantages. For example, their flesh, a huge advantage. But like, you know, it's not all smooth sailing ahead for the humans. Casey, can you imagine when we're
Starting point is 00:36:51 in a world where we're like wearing these lightweight glasses that do exactly the same thing, the Vision Pro do? A Vision Pro does. And you're like out to dinner with a friend and they bring their AI girlfriend or boyfriend that you can see through the glasses and they can see through the glasses and it feels like they're really there and they're like insisting that you treat them nicely and this is going to get freaking weird. That's interesting. I hadn't considered that dimension of it. Did like in my
Starting point is 00:37:20 mind, your AI companion is going to be like a private thing and like you might talk about your relationship but I think it'll be in the sort of the way in which you might talk about your relationship with a therapist where like your friends will know like their name and like a few things about them but like most
Starting point is 00:37:36 of the details of those conversations will not be volunteered. That's my guess. So, like, you know, I wouldn't bring my therapist to dinner with my friends, and I'm going to guess most people are not going to bring their chatbot companions. Well, if we're going to hit AGI within 20 years, just wait, Casey. I think, uh, wise man once told me, just imagine how far technology has come past 20 years. See, I already don't want to think.
Starting point is 00:37:57 I already want to go lie down just hearing you talk like this. Last segment is the Threads Twitter thing. So you have a prediction that Threads is going to overtake X. which it's called now in daily users and become the leading text-based social network. I mean, daily users, it's going to kind of be tough because they've had a lot of difficulty,
Starting point is 00:38:20 really, in retaining users on threads. So how do you think they're going to be able to reverse that? Is it just like more notifications with an Instagram? All right. Let me tell you something about threads, Alex. This app came out in July.
Starting point is 00:38:33 Lay down some knowledge. It comes out in July. It rockets to 100 million users. And yes, some of them do disappear. appear. But as of the most recent meta quarterly earnings, Zuckerberg says it has about 100 million monthly users. In any other year, if a new social network had come along and gone to 100 million monthly users, we would all be saying this is the next big thing in social. But we're in a world where, and it's mostly the journalists and then just, you know, the sort of like the late
Starting point is 00:39:05 adopters, but basically anybody who built a big audience on Twitter, and then, again, some these late adopters, they just don't want to think about having to build an audience anywhere else, and they're just determined to go down with a ship that is X. And so that has just led to, I think, a lot of really fuzzy-headed thinking about what the future look like here. What do we know? Every three weeks, there is some fresh outrage on X. Most recently, it was him bringing back Alex Jones. I saw Kanye West have started tweeting today. And I'm just here to tell you, nothing good is going to happen on X in the next 12 months. And so after that process plays out at the same time, you have some of the most competent app builders in the entire world, just
Starting point is 00:39:45 sort of picking off a bunch of low-hanging fruit, working on the product, working on the marketing and the distribution and the partnerships and the community management and the content moderation and everything else. And you fast forward 12 months and you remember that this thing gets to leverage the entire Instagram graph to grow and get engagement. It's like, this is going to be the thing that wins. They've already tested integration with the Fediverse. So my expectation is a year from now. They're going to be a lot further along in that project this week that we're recording this. They just welcome the European Union onto threads. I've seen my following go up a bunch just as, you know, those folks start to get online. So pick your signal. But like this thing is taking
Starting point is 00:40:28 off and I am shocked at how few people are willing to just call it for what it is. Yeah, I like it. threadhead, they're every day. And the thing, though, that might hold them back has been the fact that they don't want news or politics on the thing. It's much more hard to follow. That's just marketing. That's just marketing. It doesn't make any sense. Every day I open up threads, what do I say? Have you ever opened up threads and not seen news? It's not about seeing news. It's about the fact that I'm serious about it. I think that the fact is that the feed is much slower than Twitter. Twitter operates just much quicker, even if you're on the 4U page. Whereas like threads, it's just like, it's just harder to get that in the moment stuff. Maybe it's the
Starting point is 00:41:09 size of the threads. Maybe it's the fact that the algorithm is putting stuff that's a little bit, you know, delayed on your feed. But like in a breaking news event, it's much more difficult to follow on threads. Could just be me though. I think that's fair. That brings up something else that's going to happen in the next year, which is that threads is probably going to release an API, which means that publishers are going to be able to publish it to her directly. It means that emergency services are going to be able to set up little automated bots to tell you when there's a fire in your neighborhood or some sort of natural disaster. And that, I think, is going to bring a lot more of that real-time feel to threads. Now, you know, there are some
Starting point is 00:41:43 philosophical questions to be answered. And I suppose meta could decide, you know, we just want this to look very different from Twitter. But I just think that after that API comes out, it just starts to feel more vibrant and more newsy, more real-time. Is it possible that this text-based network idea is just not such a popular thing and that Twitter exploding the way that it is, I mean, it's only down 10%. But it's not, it's not having a good run. Maybe it just kind of ends like society's interest and need for something like this. Do you forget about when I told you that this thing has a hundred million users a month? Like, why is everyone so convinced that this thing is about to fall into the ocean or that like tweeting is over, text is over? There are
Starting point is 00:42:25 so many people that just want a cute little app on their phone. They can open up. They can see what's happening in real time. They can trade a few jokes with their friends. They can talk about what's happening in the basketball game. Like, that is just a neat. That's the reason why they were able to get to 100 million people in days is because that's how much pent-up demand there is for a competent version of what Twitter used to be.
Starting point is 00:42:45 So, yeah, I am very bullish on this over the longer. Okay, I'll take that point. Do you think you're like, please stop ranting. You're making the guess uncomfortable. No, what, come on. I think that if you get me and I, If you bring up a good point here, I'm not going to be staunching on my initial thought. I'm definitely interested in learning what, you know, what guests bring to the table.
Starting point is 00:43:08 So thank you. Thanks for teaching. My pleasure. Okay. One of mine is that Waymo's going to run away with self-driving. I mean, maybe it's not exactly the boldest prediction. Now that Cruz and Tesla have really hit a low point. What happened to Cruz?
Starting point is 00:43:24 Jesus Christ. Not good. you can't you can't yeah here let me tell you my story with cruise this year step one kevin and i my podcast hard for kevin and i go on a little ride in a self-driving cruise we had so much fun it was great everything worked great okay it was great my wife and i did it too we loved it like we were it was called fossil it drove us around the city felt safe but you guys are married now yeah forgot this that's beautiful congratulations what did that happened this year it did wow that's the real
Starting point is 00:43:57 story of 2023. Congratulations. All right. Thank you. Step two. Kyle Vote, the CEO comes on the podcast. We ask a bunch of questions. Everything's fine. It's, you know, we're having a good time. Step three, there's one crash, okay? And it seems bad. It seems that someone may have lied about the video of this crash. Next thing you know, Kyle votes out, the co-founder's out. Most recently I saw like- 24% of the company gone. It's 24% of the company gets laid off, like all these other top executives leave and another like we're restructuring everything. I was like, all it took to bring this company down was one car accident?
Starting point is 00:44:32 Like, that is a brittle-ass company. It's a great point. I mean, what do you think? Is there a bigger story that we're missing? I mean, what I don't know, I don't have any, like, you know, reported knowledge about this. But you have to assume that they did lie about something and that it looked like they, they, they like, this was not like a lie of omission. It was like they actually were actively deceitful. in a way where they had to clear out, like, the entire executive rights.
Starting point is 00:44:59 And, man, that just does not happen a lot in Silicon Valley. I mean, that's crazy. Yeah. I mean, it definitely feels like the punishment does not fit the crime, especially like it's basically the end of this company. Like, who knows, end, but it's not good. It's not good. But in the meantime, I do know I have Waymo access.
Starting point is 00:45:18 So, you know, and it's so great. And I do love it. But I just will say, it's like, in order to take one of these things, you usually have to walk five minutes to get picked. It's not going to come to where you are. You have to walk five minutes, okay? And then when it drops you off, you're going to have to walk five minutes there
Starting point is 00:45:34 to wherever you're going. And then the one last thing is that there will be like an 80% premium over what you would have paid Uber a lip. But if you're willing to put up with all that, very fun way to get around town. It's awesome. Those rides are smooth.
Starting point is 00:45:49 Got to admit it. Very smooth rides. Very smooth rides. Yeah. Casey, it's a joy as always to speak with you in this festive holiday New Year time. Thanks for coming on. Do you want to tell folks
Starting point is 00:45:58 where they can get Platformer and Hard Fork? I would love to. Thank you, Alex. If you would like to read about whether all my predictions come true next year, you can find me at Platformer. News, and my podcast is Hard Fork. And wherever you're listening to this podcast,
Starting point is 00:46:14 you could probably find it there too. And I encourage you to do it. You know what I always say to people, it's not important to me that you download my podcast? I'm sorry. Oh, I already botched the joke. All right, I'll start over again and pretend that you're going to edit this.
Starting point is 00:46:24 What I always say to people is, it's not important to me that you listen to this podcast? Wait, I already screwed up again. You know what? I'm going to abandon this joke and just say, download a hard fork and listen to it. It's been a long year. Did I mention that? It has. It's been a long, very interesting year.
Starting point is 00:46:39 And always great to end it with you, Casey. Thank you so much for joining. And thanks everybody for listening. What a year. We'll have a show coming up, I think, on Friday. If this is going Wednesday, we'll have it on Friday. If it's going on Friday, we'll have it on Wednesday. So stay tuned.
Starting point is 00:46:54 For that. Thank you, Nick Guantany for handling the audio. And thanks to all of you, the listeners. Again, great having you here. We will see you next time on Big Technology Podcast.
