Big Technology Podcast - Reid Hoffman: Why The AI Investment Will Pay Off

Episode Date: January 29, 2025

Reid Hoffman is the co-founder of LinkedIn, a legendary Silicon Valley investor, and author of the new book Superagency: What Could Possibly Go Right with Our AI Future. Hoffman joins Big Technology Podcast to discuss his optimistic case for AI, the massive investments flooding into the field, and whether they can possibly pay off. Tune in to hear Hoffman's insider perspective on OpenAI's $6.6 billion raise, the emergence of Chinese AI competitor DeepSeek, and why he believes these unprecedented investments will seem small in retrospect. We also cover the evolving Microsoft-OpenAI relationship, tech CEOs gravitating toward Trump, and Hoffman's views on AI regulation and TikTok's future. Hit play for a deep dive into AI's trajectory from one of the industry's most influential voices. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 Reid Hoffman is here to talk about the promise and the business of AI. We'll be back with the legendary investor and entrepreneur right after this. Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. LinkedIn founder, investor, and author Reid Hoffman is here to talk with us about the state of AI and his new book, Super Agency: What Could Possibly Go Right with Our AI Future? Reid, been looking forward to speaking with you for a long time, and great to finally get a chance to do it. Welcome to the show. It's great to be here. Thanks for having me. Definitely. I have a copy of the book with me now. This is the galley, the preview. And I think the book that debuts will look a little different, kind of cooler. I know you're doing some custom covers. We can talk about that. And the book is all about artificial intelligence, your belief in the promise of AI. I love the what-could-go-right subtitle. And I think that a lot of Silicon Valley is asking what could go right. And they have to find things that go right. Because
Starting point is 00:01:00 if you think about the way that these companies have been invested in, it's wild. And maybe it's like the true Silicon Valley story where you're going to lose a lot of money and then hope that you'll make some money on the other side. But Silicon Valley has never done it like this. And, you know, there's no one that knows it better than you in terms of the way this works, the investing and the entrepreneurship side. Just a few numbers. OpenAI raised the largest VC round in history last year, $6.6 billion.
Starting point is 00:01:29 Anthropic raised $4 billion and is on track to raise another $2 billion. Elon Musk has raised billions, you know, in his own right. Nvidia: $3-plus trillion market cap. Don't these investments have to pay off in a way that is going to be difficult for the math to work out? I actually think that those investments will, in fact, be pretty straightforward. It's not to say that there won't be, just like in the kind of early Internet and others, a bunch of crazy investments
Starting point is 00:02:00 that will essentially not make money. But when you think about what AI is going to bring to the transformation of human life and human work, the investments will, I think, seem small in retrospect. And obviously, that seems crazy from a viewpoint when you're saying billions of dollars in venture capital. But actually, think about kind of every single thing that has a compute unit in it. And it's not just phones and computers, which will be huge. But, you know, speakers and lights and thermostats and cars
Starting point is 00:02:42 and, you know, planes, trains, and automobiles. Everything is going to get smarter. And as part of that, you know, kind of intelligence adds a huge amount of value, a huge amount of efficiency. And so the question really isn't if it will pay out, you know, majorly, which I think it will, but which investments and over what time frame? Because those have some variability. But, you know, part of what I encourage people, part of the reason why, you know, I wrote Super Agency, is anyone who's not actually engaged in using
Starting point is 00:03:18 AI today. And not just for, hey, I open my refrigerator, I have a set of ingredients, what should I make for dinner? That's a fun use case, easy. Or a sonnet for my friend's birthday party, or whatever else. That's fun. But for serious things, for important things that range from, like, oh, I've got this health concern I want to talk about personally, to, hey, I'm trying to do this kind of work and, you know, how does it help me with that work? And I think already today, if you play with, you know, GPT-4, Copilot, Claude Sonnet, other things, you will find that it can add real value today. Look, I agree with you on all this. And I think the thing that you mentioned that's most important is the time frame. Because most VC funds, tell me if I'm wrong, usually try to get a
Starting point is 00:04:05 return in seven years. And if you think about OpenAI, right, let's just take that as an example. You were an early investor in OpenAI. It's been a while at this point already. But even the most recent round, 6.6 billion, you know, in seven years, you want to be able to return 10x on that money, so 66 billion, and they're losing a lot of money. They lost five billion last year, according to reports. So how does the math work? So all venture rounds work, you know, starting with the company: frequently in Series A, Series B, there's zero revenue, right? So they're always losing money. And it's kind of the curve of the expectation of revenue, you know, kind of compounding over the cost curve. And so that's just business as usual.
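As a back-of-the-envelope check, the venture math the host gestures at here can be sketched in a few lines of Python. The 10x target and seven-year horizon are the host's hypothetical fund assumptions, not figures from any actual investor:

```python
# Illustrative sketch of the host's back-of-the-envelope venture math.
# The 10x target and 7-year horizon are hypothetical fund assumptions.

def required_exit_value(investment: float, target_multiple: float) -> float:
    """Value the stake must grow to in order to hit the target multiple."""
    return investment * target_multiple

def implied_annual_growth(target_multiple: float, years: int) -> float:
    """Compound annual growth rate implied by reaching the multiple in `years`."""
    return target_multiple ** (1 / years) - 1

round_size = 6.6e9  # OpenAI's reported $6.6B round

print(f"10x on $6.6B: ${required_exit_value(round_size, 10) / 1e9:.0f}B")
print(f"implied CAGR over 7 years: {implied_annual_growth(10, 7):.0%}")
```

At a 10x target, a $6.6 billion round implies roughly $66 billion of value creation on that stake, or about a 39% compound annual growth rate over seven years, which is the kind of revenue curve "compounding over the cost curve" that Hoffman describes.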
Starting point is 00:04:54 Now, it's at a higher scale with OpenAI, you know, kind of having both, you know, a massive footprint already, a massive kind of leadership position, and, you know, kind of revenue that you're expecting to grow in a substantive frame. But this is actually one of the reasons why Silicon Valley tends to create so many of the technology companies that dominate the globe. Because if you take an overly kind of classic DCF analysis, business-school-prof analysis, which much of the world does in investing, it doesn't say, no, actually, in fact, it's a feature that you're investing massively in something that at the moment, you know, like today's investment-banking analysis of it is not pretty, but two, three, five, seven years out. And by the way, venture investments can go out to 10, you know, kind of years. And plus, if it's compounding and growing the right way, it can be, you know, kind of just an epic investment. And it isn't just
Starting point is 00:05:56 AI things. I mean, this is part of, like, you know, Greylock's investment in Airbnb, which I led the Series A of, had similar characteristics of just bleeding money and then later being the amazing business we see today. And that judgment call, on a risk-discounted basis, is part of what you're doing. And, you know, the investment that investors made last year is kind of more of a growth venture round; it doesn't necessarily require 10x. I mean, we have trillion-dollar companies now. So, you know, trillion-dollar companies are definitely within, you know, feasibility. But it doesn't require a trillion-dollar company to make real money, because part of why you want 10x is when you expect zero as a possibility. Lower multiples
Starting point is 00:06:40 are fine when you know that you already have a baseline massive value that you're building on top of. But there are still questions about the ROI here. So basically, if the ROI doesn't come through, and I think you believe it will, but if it doesn't, that could be a real problem. Again, it's thinking about the size. Don't you think that has some blowback? And it's interesting to me, one more thing about that. A lot of the big traditional VCs did not participate in the most recent rounds.
Starting point is 00:07:03 I don't know if Greylock did. I know Andreessen Horowitz, I think, sat out after evaluating. And then you saw Josh Kushner's fund really led the OpenAI round. But the traditional names in the Valley were not involved. Well, there's kind of different risk profiles and different, you know, games. Like, for example, in Valley history, DST led a round in Facebook when the traditional funds were all like, oh, we'll only do a three to five billion pre, you know, with lots of caveats and everything else. And DST came in and said, we'll do a 10 billion post and no caveats.
Starting point is 00:07:42 And it was an epic investment for DST in Facebook, just, like, literally, you know, blazing amounts of money and establishment of a position. And so sometimes when the traditional VCs go, hey, we're only early stage, or we'll only do this, like, new firms come in and show that a smart bet can make a ton of money. And so that is also a historical Silicon Valley position. And, you know, I think the question around, you know, you say, well, when can you invest billions as a venture firm? And the answer is, look, you're trying to hit a return multiple. But it's much easier when you believe, with some vigor and evidence, that your chance of going below
Starting point is 00:08:30 1x is very, very small. And, you know, when you look at OpenAI's position, you go, well, actually, in fact, my position seems, you know, protected on the downside; seems like a good bet. And then, do I believe in compounding, with a risk discount, that I can make a multiple on the upside? That's when, you know, investors make money. And part of venture is, you know, it's traditional in a lot of venture investments that, you know, firm X goes, I'm going to bet on company A, when firms Y and Z go, oh, that's nutty. And sometimes, you know, firm X is right. And sometimes they're not. I mean, this is, you know, like, for example, speaking for Greylock, you know, when Greylock invested at Facebook at a 500 million pre-money valuation,
Starting point is 00:09:19 literally all of the other VC firms were, oh, that's crazy, this college website, Greylock's demonstrating that it's past its prime, et cetera, et cetera. They're risky, of course, but that's how you make epic investments. So then how does this change the AI field? I mean, you're an early investor in OpenAI. The idea was for it to be open. It's not open anymore. I mean, you don't see a lot of open source releases from them. They don't publish as much as they intended to at the beginning. It's kind of a perpetual joke from Elon that they're closed AI. And, you know, this technology that you
Starting point is 00:09:58 invested in, I think, because you thought it's going to be a fundamental building block of the future for humanity, now has such pressure on it to deliver that profit for the investors that put the money in. So how does that change the field? Well, so first, in terms of being open, there's lots of different parameters of open. And OpenAI has been, you know, working to have open, you know, kind of, with obviously identity and registration for safety, OpenAI API access. It wasn't called Open Source AI. It was called OpenAI, right?
Starting point is 00:10:32 Or Open Model AI; it wasn't called that either. And so the question is the parameters of open. And sure, people would say, well, I think what should be open is all your meetings should be Zoom broadcast to the world, and that's what I think open should be. I mean, it's not an outsider's ability to define open. It's the insiders'. And, you know, kind of for the entertainment of it: Elon was part of that decision, you know, as kind of co-CEO in terms of making that decision.
Starting point is 00:10:58 And so that's how OpenAI has always operated. Now, I think that OpenAI continues to stay on its mission, which is, how do we make AGI for the maximum benefit of humanity? And there are various things that it has done, you know, kind of catalyzing safety practices, catalyzing, you know, kind of knowledge about what kinds of things would be the right kinds of design for bringing it into, you know, kind of everyday usage. And I think part of what OpenAI also led the field in was doing this thing that I call in my book Super Agency iterative deployment, which is: make it available to, you know, hundreds of millions of people around the world so they can play with it, give feedback,
Starting point is 00:11:45 you know, kind of say, hey, these are the things that really work, these things don't work as well, and be able to get familiarized and participate in the conversation of it evolving. And I think that is also part of the openness. And so I think that all fits within the OpenAI mission for what they're doing. It doesn't mean that there aren't foot faults. Doesn't mean there aren't corrections. You know, in early days, OpenAI was publishing a lot and then went, well, from a safety perspective, we think we should slow down publishing some, because part of being, you know, for the benefit of humanity is paying a lot of attention to what are called alignment issues within AI. And OpenAI has continued to be one of the major leaders in that.
Starting point is 00:12:25 And so, you know, I'm still very much of the view that OpenAI has stayed true to its mission and continues to do so with vigor. But the money does change things. And you sort of tend to see it as a consumer when you're in an app and you're like, oh, they're growth-hacking me. And that's because they have to return some value to investors and they need you to get back on the app, either to see an ad or they just want to show Wall Street that their daily active or monthly active user count went up. It changes the product. And so I'm curious what you think all this investment does to the product. Like, do we
Starting point is 00:13:01 get a different product because of this need to show returns? Well, one of the things, look, there's kind of a little bit of a trope within a bunch of media that money is somehow anti the interests of individuals or anti the interests of society. It's actually how we power everything in society. And so, you know, having products... I'm not anti-money. I get it. I'm just, you know... I hear what you're saying. So money itself is not necessarily a problem. It's a question of how the money plays in. So, for example, money playing in within the kind of functioning of society, products and services, you know, integration in industry, et cetera, broadly, most of those things are mostly very positive. Now, of course, from a pure mission standpoint, you'd say, hey,
Starting point is 00:13:48 we could make our service, you know, kind of free for everybody, and that would be a really great thing. And OpenAI is actually, you know, spending a bunch of money making it free or subsidized to get the widest possible exposure to, engagement in, and learning about these products and being able to use them. A lot of OpenAI products are offered below what they cost right now in order to do that. But of course, they will, as an invested company, need to begin, you know, kind of, over time, showing returns. It means that they need to also focus as intensely on their money-making capabilities as their, you know, engage-the-world, you know, kind of possibilities. But ultimately, I think, just like many commercial products, that's not antithetical
Starting point is 00:14:37 to its mission, because succeeding in raising capital, succeeding in building these products, and succeeding in offering them to the world is part of the, you know, pro-humanity mission. And, you know, we live in a capitalist world where these things are decided by capital. So I think that's a good thing. Now, might it say, hey, we're going to focus on some real money-making products for enterprise before we get to the mass market, you know, kind of Global South, you know, things that are, you know, for people who are in less wealthy economies? You could have impacts like that. That's a natural part of how the capitalist system works. But I know that OpenAI's mission for humanity, you know, will allow it to be strategic,
Starting point is 00:15:25 but not distracted from its focus. So you think we can still get AI technology that's going to benefit humanity as the North Star, even if there's an imperative to return money to investors? 100%. And I think that's already what we're seeing in motion right now. You know, as long as we're talking about OpenAI, I have one more question for you, because you are on the Microsoft board. And it's been pretty obvious to everybody watching from the sidelines that, you know, there was this really close-knit partnership, then some governance weirdness at OpenAI.
Starting point is 00:16:01 Sam is fired. Sam comes back. And then next thing you know, Microsoft hires Mustafa Suleyman, who you co-founded Inflection with. And it seems like there's some healthy competition, where Microsoft is trying to start to build some of the same products
Starting point is 00:16:19 that you might see within OpenAI, and maybe insulate itself from some of the risk that it became painfully aware of during the incident with Altman. So what's the status of that relationship? And can Microsoft and OpenAI still form a strong partnership and still have a tight bond, given all that's transpired and where things seem to be heading? Well, I think they're both partnering and competing. And actually, I think that's healthy for both companies and healthy for the ecosystem. So they're partnering because there's a lot of ways in which OpenAI uses Azure infrastructure, which
Starting point is 00:16:57 you know, has been, you know, building, you know, amazing new capabilities through being on this journey with OpenAI. I think that, you know, OpenAI sells things to enterprises and consumers where, you know, it's like, you can pay OpenAI or you can pay Microsoft. And so there's competing in that. And I think that's healthy for the ecosystem and for both companies as well. But, you know, they also have ways in which, by partnering, they both potentially, you know, get a lot of benefits from that. And so I think it's a good ecosystem. I think when the history books are written, this will be one of the kind of epic
Starting point is 00:17:39 partnerships that, you know, kind of plays out in the field. And, you know, there's now a growing number of competitors. You know, so it's not just Google, not just Anthropic. You know, Amazon's beginning to do stuff directly itself as well. There's, you know, DeepSeek and others within China. And so the competitive sphere, you know, as anticipated, is growing. Yeah. And you're setting me up perfectly for the next question, Reid. So thank you for that. Which is, you started off by saying the question really is time frame and who's going to win. Who do you think is going to win? Well, the good news is I think there's going to be multiple winners.
Starting point is 00:18:22 and, you know, maybe, you know, one company wins at one thing, one company wins at another. You know, the incumbents are obviously trying to bring AI into the areas where they have strong positions. So, you know, Microsoft's very focused on AI for the enterprise. Google's very focused on AI for search. And also, that's not the only thing that any of these companies are doing. You know, part of what I think, you know, OpenAI is doing is saying, hey, we need to have a kind of a third strong position with kind of an AGI-for-humanity mission. And so I think there are going to be multiple winners.
Starting point is 00:19:01 And I, you know, and Greylock, have continued to invest in a variety of startups because we think there's going to be a massive number of startup positions that will be really interesting as well. And so I think that there's going to be lots of winners. And so the kind of who's-the-winner question is, I think, actually, in fact, like: who's the winner of the internet? Well, obviously, Google had a really strong position, as did Amazon, as did Facebook. But there were a bunch of others as well. And so I think that's similar.
Starting point is 00:19:34 There's going to be many winners. But do you think there's room for all these foundational model companies to succeed and pay off? One of my perspectives is that, like, there's going to be some company that's just going to sort of collapse under the weight of what it needs to deliver, whether that's an Anthropic, I don't know, or OpenAI. It just doesn't seem to me like they can all succeed. But maybe I'm wrong. What do you think?
Starting point is 00:19:57 Well, I do think that there will be efforts at foundational models that won't work. And especially, I think, late entrants. Like, late entrants have a harder, you know, catch-up routine and a lot of kind of expense to go at it, especially given that the early folks like OpenAI have a very broad base. Like, to some degree, if you ask the average, you know, worker or person on the street, what is AI?
Starting point is 00:20:23 They'll say, ChatGPT, right? And so that gives, you know, OpenAI kind of a strong, you know, kind of initial position. But, you know, I think that the later entrants to frontier models as startups may have particular challenges. Does Elon Musk fall into that category? I mean, he's a late entrant with XAI, but he's building a million-GPU cluster out in Memphis. TBD. Now, it's a huge expense line, you know, prior to kind of a revenue position. But, you know, that, as we talked about earlier, can be a classic Silicon Valley position. So I think it's, you know, TBD. Now, at least in the U.S., if I talk about AI and I don't mention DeepSeek, I get a lot of comments. It seems to indicate to me that China... so for folks who are listening who don't know about it, it is a large language model
Starting point is 00:21:28 out of China. According to its own benchmarks, so take this, you know, for what it's worth, it outperforms Meta's Llama 3.1 and OpenAI's GPT-4o. It seems like China's in the game. Do you agree? Absolutely. It was one of the conversations that I'd frequently have the last couple of years, where they'd say, oh, you're just bringing up China competition as kind of the bugbear. We're like, no, no, part of my job as an investor, as an entrepreneur, as a theorist, is to predict where the puck is moving to. And the Chinese are strong and vigorous. There's lots of very capable, very hardworking entrepreneurial folks with technological depth. And so it doesn't surprise me that DeepSeek has entered the field
Starting point is 00:22:17 with vigor. And I think that, you know, when you look at kind of what is going to be playing out with AI, China, and multiple companies within it (you know, I haven't yet evaluated MiniMax, which also just came out recently), will have a number of very strong contenders in the world technology landscape. And so, you know, the game is on, or the game is afoot, if we want to be Sherlock Holmes. Every little thing that the U.S. and the West have done to hamper China's ability to play in cutting-edge technology doesn't seem like it's working. There have been restrictions on the silicon that they can use in China, and now DeepSeek has shown they're in the game. Now, look, maybe it started with a Llama model, and they've just kind of Frankensteined it.
Starting point is 00:23:10 But anyway, they've done it. It exists. Then you think about phones, right? There have been efforts to make sure that the most advanced processors within phones don't get into the hands of Chinese phone manufacturers. I think the Huawei Mate 70 is a real advance and maybe on par with the iPhone in some ways, or maybe U.S. smartphones in some ways. Anyway, it's definitely cutting into the margins of Apple. Apple's struggling within China. U.S. handset makers are struggling there. And it's almost like a point of national pride there.
Starting point is 00:23:44 I was there recently, just for a very brief stop, but it seems to me like the government ban of iPhones within Chinese government offices is legit. And, you know, part of the way that they can do that is because they've gotten around these restrictions. So talk a little bit about why the West has failed on that front. But I wouldn't necessarily say it's failed, because most of the restrictions are, kind of, call it, headwinds; they're kind of slowing, not stopping. Right. And so the question is, to what degree has it actually successfully slowed? Now, part of the reason why I'm a supporter of the CHIPS Act and other kind of efforts is because I think part of the thing that we, as the world, including America, have to kind of demand from China is kind of a level playing field. So if they say, hey, we can sell our stuff everywhere in the world, you know, and we can subsidize it in various ways that make it, you know, less, you know, kind of free-market competitive, and when you're restricted from how you can sell things here,
Starting point is 00:24:50 that's a kind of a mercantilist policy that you have to essentially respond to. And I think that part of the CHIPS Act is to say, well, if you're going to be playing this mercantilist policy, we're also going to put some brakes on, you know, kind of your ability to develop the kind of large-scale, very effective clusters that we think the next generation of AI models are going to be very dependent on. Because it doesn't surprise me, I mean, GPT-4 was built on A100s. It doesn't surprise me that with the current kind of A100s and other things, you could build another GPT-4 model, so that's not as surprising. And, you know, we have H100s going to Blackwells, and the question is, what is the speed at which we're going to be getting into that?
Starting point is 00:25:40 And I think that's part of the game that's, you know, still afoot. And so I don't think it's a failure. And I do think that it's the right thing for the U.S. to be doing, given that China is generally taking a kind of mercantilist competitive position. And I think that's something that the U.S. and other kind of countries, Europe and everything else, should be opposed to. You mentioned Blackwell, which is NVIDIA's cutting-edge chip. All the headlines have been showing that they've been having some trouble getting them to production, or getting the yields they want, and there have been some delays. I feel like, Reid, you'd probably have a pretty good window into what's going on there.
Starting point is 00:26:24 Maybe not like what's happening in the halls of NVIDIA, though you might, but certainly what type of delays we're seeing, given that you work with Microsoft and you invest in startups. Like, is this a real concern? Is this going to be a bigger story over time if they can't get their yields up and produce the Blackwell chip the way that everybody is anticipating? Well, it's definitely... if they can't get their yields up and produce it, that will be a, you know, concerning thing for NVIDIA, and it'll slow down some of, you know, kind of AI's development, because faster chips in a dense environment are actually, in fact, instrumental and helpful. And is that a legit concern? It slows it down. It doesn't stop it.
Starting point is 00:27:04 But, you know, on the other hand, look, I think this is, again, one of the things where the actual timing matters. It's actually not atypical for either NVIDIA or chip manufacturers to sometimes have six-to-12-month error bars in the stuff they're doing. And even though public markets tend to be the, oh my god, this quarter, it's like, well, the real thing is
Starting point is 00:27:28 what does the next two to three years look like? And so, you know, maybe this is my longer-term venture perspective, but I tend to go, well, a quarter here, a quarter there, doesn't really matter in the three-to-10-year kind of time horizon. And so maybe I'm a little bit too casual on this, but, you know, time will tell.
Starting point is 00:27:48 Yeah, you can be chill given your position, but public market investors will freak out. But I guess we'll cross that bridge when we come to it. Okay, I want to speak more about your book, which I think is sort of like in some ways a manifesto to the world talking about the positive uses of AI and why that book needed to be written. So we know that there's some issues with trust in AI, and maybe that's the reason. I'm going to ask you a bunch of questions about it when we come back right after this. Hey, everyone. Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending. More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Starting point is 00:28:33 Now they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now. And we're back here on Big Technology Podcast with Reid Hoffman. Reid has a new book out called Super Agency: What Could Possibly Go Right with Our AI Future? And as I teased before the break, I think it's going to be quite interesting to see this
Starting point is 00:29:03 question: what could possibly go right? Because a lot of people seem to be thinking about what could go wrong. Every survey you look at, I was just looking at one beforehand, has a remarkable level of distrust in AI. I think it was a PwC study that I just looked at that said 61% of people do not trust artificial intelligence. So I'm curious if you could comment on why there's so much distrust here. And then again, your book is all about what could go right. There's a great chapter in it all about, like, thinking about the best-case scenario. And so tell me a little bit about the motivation behind that.
Starting point is 00:29:38 Well, very broadly, most human beings start with distrust of a new technology. You could see similar dialogue, you know, in recent history with the Internet or the mobile phone, and kind of go, oh, my gosh, this is going to, like, corrupt our children. This is going to, you know, interfere with our ability to have, you know, quiet time to think. There's going to be mayhem. And so that's always, with new, massive technologies, the first human response.
Starting point is 00:30:13 That human response is reflected in everything from, you know, individual consumers, what do you think about this thing you don't know? It's like, I don't know, to, you know, journalists, to academics, you know, to political leaders, and even sometimes technologists. And so that's the kind of typical pattern. So part of the reason why I, you know, kind of wrote Super Agency was to say: by the way, this pattern goes back to the history of the written word and the printing press, and has run through everything from, like, the power loom and the Industrial Revolution, cars, planes, you know, the entire thing. And so this is actually, in fact, a common pattern of, this technology means the end of, you know,
Starting point is 00:30:50 what I think of as human agency. And I think actually, in fact, at the end it ends up being super agency. And as opposed to that, it's like, no, no, we create great things with these. Now, that doesn't mean there are no concerns. The transitions are almost always difficult. We want to learn from the past to make these transitions now better, even though there's, you know, new difficulties as well in the transitions. And part of the reason why a focus on agency is not just because we have this discussion with AI
Starting point is 00:31:25 as being this new agentic technology, which, you know, adds a little confusion for people. But it's because when you think about the kinds of things that people are worried about, whether it's privacy, whether it's misinformation, whether it's jobs, you know, other things, this all kind of actually revolves around questions of human agency. And actually, when you look at both the historical things with these, you know, massively disruptive technologies and also AI, I think we're going to see a great increase in human agency. And it will happen, I think, no matter what, but I think we want to navigate it in even better ways. And that's part of the reason why, you know, I wrote the book. Right.
Starting point is 00:32:08 And so it's so interesting, because a lot of those technologies that you mentioned... You mentioned the printing press, which I think was followed by lots of war because people started fighting wars over ideas. The loom, I mean, the Industrial Revolution wasn't pretty for folks in the immediate aftermath. Like, a lot of the introduction of these technologies kind of sucked for the people who were there during the transition. And I was reading through your blurbs and I found the funniest one. It's from Yuval Noah Harari, who really writes about how tough a lot of these transitions have been. He says, I disagree with some of the main arguments, but I nevertheless hope they are right.
Starting point is 00:32:44 Read it and judge for yourself. But what makes you convinced that we're not going to have a problematic start here? Well, I do think that there will be challenges, and, like, I'm not trying to delude anyone that there won't be real transition issues. We as human beings adopt new technologies fitfully and conflictually and with a lot of, you know, kind of like Sturm und Drang. And so I anticipate problems. Part of the reason to write Super Agency, to kind of say, hey, how do we navigate these transitions, is to say, well, what are the issues we're going to have and what are the positive futures we want to get to? And how do we
Starting point is 00:33:21 navigate that in the most human and most compassionate, wise possible way? It doesn't mean there won't be pain and challenge and suffering. But, for example, a classic one with AI is to say, well, this is going to change a lot of jobs. And it will, because it's kind of a new tool set. Like, if you say, hey, I'm a professional today and I don't use smartphones and I don't use computers, you're like, I'm not really sure you're a professional, right? It's a required part of the tool set in terms of how you operate. And a bunch of the tools within each profession. It's like, well, I'm a graphic designer, but I don't use Figma and I don't use Adobe. And you're like, no, you're probably not a graphic designer, really. And so I think that
Starting point is 00:34:01 AI is going to have that kind of transformation. Now, you say, well, what's going to happen? It's like, well, a bunch of jobs that human beings do are going to be done, or going to be replaced, by other humans using AI. But AI can help with that transition. It can help the human learn how to use it. It can help the human use AI to do the job in the new way. It can help find, if it's like, well, I'm no longer well suited for this, what other jobs might I be able to find? How can I learn and do those? AI can be part of the solution. And that's part of the kind of optimist message. It's not that, hey, everyone's lives, everyone's work stays exactly the same, and now we just have a little bit of salt and pepper of AI. No, no, no, there's
Starting point is 00:34:48 going to be big transitions. But we can be helped with those transitions. And then people say, well, but I don't want to do the transition. It's like, well, you know, I understand, I'm sympathetic. We should try to be as helpful as possible. But if, you know, as I go into some depth in the book, if the Luddites had successfully destroyed all the power looms and said, well, no, we want to have independent weaving, then that would have meant that their children, their grandchildren, et cetera, would have been kind of doomed to poverty and less relevance. And so that's part of the reason why, you know, kind of in the global sphere of technology adoption, it's very important in the cognitive industrial revolution that, not just for
Starting point is 00:35:35 yourself, but for your children, your grandchildren, your society, you're in the forefront of doing this. And that's one of the things that is part of being a responsible citizen, a responsible member of your industry. And by the way, it doesn't mean that your life has to suck. I mean, you can learn the new technology. You can make it work. On that note, AI can code pretty well. Is learn to code good advice? Yes. Although, by the way, of course, coding will be changing. So as opposed to the previous skill set, that's kind of like the example of transformation. It's like, okay, I learn to do very precise semantics of how, when I'm using, you know, Objective-C, how I do memory management, you know, et cetera, et cetera. You'll need to understand
Starting point is 00:36:20 the concepts of kind of code, but AI will be the kind of generation, a massive generator, a massive quality assurance checker, a bunch of other things. You'll still want to be learning the kind of, as it were, pattern of the computing and coding mindset. But now, as opposed to, like, the, oh, I'm really, really good at the rigid syntax, that will be less relevant, not zero relevant, but less relevant. And what will be more relevant is conceptualizing. Like, oh, this is the kind of thing you can do with code. And one of the things I think will be interesting with the AI revolution is all of us will have a coding copilot and assistant on our PC, on our phone, that will help us do these things. And so when we think about, like,
Starting point is 00:37:10 you know, how do we accomplish some kind of professional task, how we do information gathering, information analysis, report generation, et cetera, we will actually, in fact, be using coding assistance to help us do this. And now that will be available to everybody. And that will, you know, kind of speed up the level of informational intelligence. Part of the reason why in the book we call it an informational GPS in terms of how you operate. And so I think, yes, AI is great at code. And yes, that will be a massive accelerant across a wide variety of professions and a wide variety of industries.
Starting point is 00:37:46 Now, as I read the book, maybe this was explicit, maybe it was implicit, but I basically heard your voice through the lines telling policymakers, oh, don't regulate this, or don't make a mistake regulating this. And so I want to ask you just practically, do you think there's a chance that this stuff gets regulated, and if so, what regulation do you think might be put into place? You know, regulation is a hot topic, but I don't think a lot of people have interest in it because, at least in the U.S., our lawmakers talk a big game but they do basically nothing on tech. Maybe in Europe they do, but Europe has been plagued with companies not wanting to
Starting point is 00:38:31 to release products there. I mean, even Apple intelligence isn't available in Europe, and Apple intelligence does basically nothing. So I'm curious what your take is on the regulation side. Well, the problem with most people's default modes of regulation, including a lot of regulators and including a lot of European regulators, is you say, well, our job is to protect bad things from happening. And by the way, the simplest way to do that is to prevent anything from happening. Right. And so you really massively slow everything down, which means that you, you know, disadvantage your own innovators, your own industry. It's part of the reason why the tech industry, you know, tends to be evolving very fast from the U.S. and from China and not as much from Europe because of that kind of
Starting point is 00:39:16 mindset and approach. And then people say, oh, you're saying you should not regulate at all. It's like, well, no, but try to be smart about it. So, for example, when you're building things in the future, and we have this chapter in Super Agency saying innovation is safety, you can actually build a bunch of things in the future that are part of that future safety, and it's important to get to. And by the way, what you want is you want small errors and small problems that you're learning from, hence iterative deployment, that you then fix as it gets to scale. And that's the way you create a safe future. Now, most often when people think about regulating important technologies, they always get to
Starting point is 00:40:00 regulating and preventing bad things from happening at scale, which is an important function for regulation and the government. But they say, well, I'll just start, like, saying thou shalt and thou shalt not right now. It's like, well, actually, in fact, engaging in dialogue, finding out what's happening. If you can articulate what your concern is as a metric that you can run, you say, hey, I'd like to see information about job loss, job displacement, and other kinds of things. I'd like to see what's happening in that, so that if there is such, I can start figuring out if I need to do anything, create an incentive that's different, create a rule that's different, in order to do that. And that engaging in dialogue and measurement first is a great way to enable things to happen. And you have to always be remembering that the tool set of the future can be so much better than the tool set of now.
Starting point is 00:40:54 And we don't want to stop that tool set, not just for its opportunities, but for also what it might mean for the things that regulation cares about, like safety and kind of the well-functioning of society. Okay, so this show is going to air the last week of January. Inauguration has already happened in this world. We're talking right before it. And looking back in terms of the way that regulation has played out over the past four years, I'm curious what your opinion is of the job that Lina Khan did as the head of the FTC. I mean, she's currently still at the FTC. I'm sure that when this airs, she'll either be out or on the way out.
Starting point is 00:41:36 What's your view on her performance? Well, I've made, you know, kind of some, you know, comments on television shows before. And I thought that, you know, she misunderstands kind of the role of how to stop the aggregation of power in large tech companies. Because she says, well, if she stops large tech companies from buying small startups, what that means is venture capitalists like myself, and this was made entirely as a venture capitalist statement, don't invest major dollars in companies that might end up competing with large tech companies. Because you need, as we were talking about OpenAI, you need to have
Starting point is 00:42:16 that exit possibility of getting bought to make major dollars. So as opposed to stopping the aggregation of major tech power by slowing down all this M&A, actually, in fact, that policy will aggregate power in the large tech by having less startup competition. And that's my personal view. Now, she's done a great job on, like, drug pricing and non-competes and a bunch of other things. So I made kind of a specific point, which is, like, you know, we want to be in favor of investing in startups at every level of scale. And we need to be enabling that
Starting point is 00:42:58 to be creating the diversity of competition, because we want to be not, you know, five to seven large tech companies heading to three. We want to be five to seven large tech companies heading to 20. And venture investment is what enables that. And so then what did you guys do with Inflection? I mean, it was one of the more confusing, I don't know if you call it an acquisition or a migration or what. All right. So, folks, what happened, I think, was that Mustafa Suleyman and a bunch of folks from Inflection went over to Microsoft to start to run consumer products, almost as if Microsoft acquired the company, but the company stayed, you know, standing and is, I guess, still operating. Is this just a consequence of what was going on in the regulatory environment at the moment? And how does that work? Well, the basic thought from an Inflection standpoint is that the original model wouldn't really work, which is building frontier models for doing a consumer agent; that the cost curves to the revenue curves, and what you would have to do, wouldn't work as a startup. And so we kind of looked at it and said, we have to pivot. The company has pivoted to a, you know, kind of a B2B model and is working with a variety of companies now. It keeps its landmark and still very special kind of agent, Pi, live. So, you know, Pi, P-I, personal intelligence, pun intended.
Starting point is 00:44:24 You know, Pi, you can find it on, you know, both the Internet and also on various, you know, app stores. And to do that pivot. And so, you know, part of what Mustafa said is, well, but I really wanted to be creating this agent. And so we could make this deal work, you know, economically, by getting this kind of big licensing deal and ability to kind of do a non-exclusive license of the technology and bring over a set of the team that really wants to continue to be working on the consumer-scale agent, while we'll hire other people who are interested in the B2B model. And so, you know, it's a lot of different moving parts, but, you know, it's something that
Starting point is 00:45:05 investors of Inflection were happy with. It was something that allows kind of Mustafa and the team to continue to build, you know, a broad-based consumer agent. And yes, it was a, you know, kind of a complex deal, but, you know, large-scale deals usually are complex. In a different regulatory environment, that's an acquisition, right? It certainly could be. And by the way, Inflection, you know, who knows, maybe someday it will still be acquired. New FTC potentially on the way. So, yeah. You're politically involved. I mean, we're talking again mid-January. It's been pretty remarkable to see the parade of tech CEOs,
Starting point is 00:45:48 everyone from Mark Zuckerberg to Jeff Bezos, Elon, you know, gravitating towards Trump for the second term in a way that they didn't for the first. What's happening there? What do you think they're trying to get out of Trump? And do you view this as a change in their politics, an authentic change in their politics, or is this pragmatic? Well, I think, you know, one of the things about technology is it's more and more an important global business. And if you're doing global, you have to have good relationships with multiple governments, including, of course, your home country. And I think that that's, you know, part of what you're seeing reflected in what's happening here. And I do think that one of the things that's really important is we focus as, you know, kind of Americans on how do we build American prosperity, American industry, how do we kind of, you know, take a leadership position in the world?
Starting point is 00:46:40 It's really important to be working with the government well. And so I think what you're seeing is people saying, hey, that's part of how we build great things for America. And so they're doing the things that they think are important to do there. So it sounds like you don't really think this is an authentic move, then. I think it depends on different folks. And, you know, look, part of my thing is we had an election in '24. In '25, what everyone should be focused on is how do we build,
Starting point is 00:47:10 you know, kind of more strength into American industry and the American middle class, and kind of our position. And so that's what I'm personally focused on in '25. And so, you know, that's what I think everyone's doing in different ways. But, you know, I think people should answer this for themselves. Do you think they're going to get what they want? You know, I mean, pregame speculation, you know, given that we're so close to being in the game, I think we want to watch to see how it plays out. Okay, that's fair. Elon Musk is now extremely close to Trump.
Starting point is 00:47:48 They call him the first buddy. He's basically living at Mar-a-Lago. He does not have a good relationship with Sam Altman. And there's already been discussion of whether he will use his proximity to Trump to try to challenge Altman and OpenAI. I mean, some ideas have been that the U.S. government looks more deeply into the transition from non-profit to for-profit and, you know, potentially looks at some punishments there. Do you see that as a feasible thing? Like, I mean, I know, again, this is speculation, but you know both of them.
Starting point is 00:48:26 Is that a real concern? Well, again, pregame, about-to-happen, overspeculation mistake. But, you know, on a positive note, you know, Sam has tweeted that, you know, he doesn't anticipate Elon, you know, engaging in that kind of behavior. And Elon has retweeted those sections saying, yep, that's totally right. So that's at least positive data. But again, we're in kind of pregame speculation. So, you know, I think it's really important for us to be building and doing the right thing. And I do think it's really important to have, you know, kind of like a vigorous AI industry
Starting point is 00:49:06 for, you know, kind of American prosperity. And I think that's, you know, hopeful. Okay, yeah, you would imagine everybody would be on the same page on that. So I guess, again, yeah, we'll see it all play out. All right, let's do a quick lightning round with some of your tweets. I think you have a pretty good Twitter presence, and I think it's always nice because the things that people fire off
Starting point is 00:49:33 in a moment, maybe that's pretty telling in terms of what they think about the world. So a couple from you. One, very soon voice will become the main interface, and AI will get better at learning what exactly you're looking for. So you really think voice is going to be it? Why voice? 100%.
Starting point is 00:49:49 One of the party tricks I do is I bring out my phone and put ChatGPT in audio mode to show people. Because part of when we learned to use phones or PCs, we've learned with GUIs and everything else. We've made it softer, but there's a very precise semantics. One of the things that language models allow us to do is to talk to it. And so now, like, you can prompt, you know, ChatGPT with just, like, kind of almost like word salad and still get something very interesting. And so it allows you to be more, you know, kind of inchoate and speculative and brainstorming, and you can actually have a useful conversation and create useful artifacts from that, whether it's on your phone or PC. And so that's part of the reason why I've made that as a
Starting point is 00:50:38 prediction. Okay. Yeah, I'm with you on that. It's pretty amazing speaking with these things. Okay, last one I want to talk to you about is prompting. You mentioned prompting just before. Prompting is a skill that can 10x you as an employee, creative, et cetera. As we learn how to prompt AI, I wonder if we'll be able to better prompt each other. Any advice for prompting? I mean, you have your own AI bot, but we are all trying to prompt better.
Starting point is 00:51:03 What's working for you? So here's a simple thing. And obviously the book before Super Agency was Impromptu, which is trying to show people, not just tell but show how it's amplification intelligence, through kind of some examples of the craft of
Starting point is 00:51:19 prompting. But here's a simple thing: the large language models are very good at role-taking. So you prompt it with, I want you to answer this from this perspective. Now, a classic one is you want to sharpen your thinking. Like, so, I think X, argue against me. That is, take the role of the contrarian advocate. But also, by the way, it could be, I would be curious about what other ways you might argue for my position. But then, as you begin to get into it, you say, well, so I'm writing a book on Super Agency. I'm writing a book on, you know, kind of like, what does the technology mean for the kind of the evolution of humanity and
Starting point is 00:51:58 human agency? So here is my arguments. What would a historian of technology say about this? What would a European historian of technology say about this? And you can begin to get some richer perspectives on this to help sharpen your thinking, because as you tell the AI to adopt this particular kind of prompting, like a kind of informational perspective, It's one of the things that can make it useful and directed and deeper in the things you're looking at. So think about the different kinds of role, like if I was talking to an expert of the mode of, you know, whatever, a marketing expert, a sales expert, a financial service expert, a short trading, a person who does public market trading who specialized in shorts. What would they say here? And you can do all that, and that can really help you in your prompting.
Starting point is 00:52:54 That's cool. And I'm sure if the books aren't already there, you could probably upload a book, like some good sales or marketing books, and say, I want you to read this and now, you know, come up with the strategy based on the book. Like, Hope Is Not a Strategy. You know, throw that in there and then see what it spits out. All right, Reid. So great to speak with you. Let's call the book out one more time. The book is called Super Agency: What Could Possibly Go Right with Our AI Future. It's in all bookstores today; I encourage you to go pick it up. Reid, great to speak with you for the first time. Thanks again for founding LinkedIn. Of course, we're part of the LinkedIn Podcast Network. So, Alex, always a pleasure. All right, everybody, thanks for listening. We'll see you next time on Big Technology Podcast.
