No Priors: Artificial Intelligence | Technology | Startups - The case for AI optimism

Episode Date: December 21, 2023

AI doomerism and calls to regulate the emerging technology are at a fever pitch, but today's guest, Reid Hoffman, is a vocal AI optimist who views slowing down innovation as anti-humanistic. Reid needs no introduction: he's the co-founder of PayPal, LinkedIn, and most recently Inflection AI, which is building empathetic AI companions. He is also a board member at Microsoft and a former board member at OpenAI. On this week's episode, Reid joins Sarah and Elad to talk about the historical case for an optimistic outlook on emerging technology like AI, advice for workers who fear AI may replace them, and why it's impossible to regulate before you innovate. Plus, some predictions. Aside from his storied experience in technology, Reid is an author, podcaster, and political activist. Most recently, he co-authored a book with GPT-4 called Impromptu: Amplifying Our Humanity Through AI. Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @alyssahhenry

Show Notes:
(0:00) Reid Hoffman's bird's-eye view on the state of AI
(3:37) AI and human collaboration in workflows
(5:23) What's causing AI doomerism
(12:28) Advice for white-collar workers
(16:45) Why Reid isn't retiring
(18:25) How Inflection started
(22:06) Surprising ways people are using Inflection
(25:34) Western bias and AI ethics
(30:58) Structural challenges in governing AI
(33:15) Most exciting whitespace in AI
(35:00) GPT-5 and innovations coming in the next two years
(44:00) What future should we be building?

Transcript
Starting point is 00:00:00 Hi, listeners, and welcome to another episode of No Priors. This week, we're joined by my longtime friend and partner, Reid Hoffman. He needs no introduction as co-founder of PayPal, LinkedIn, and now Inflection AI, as well as Microsoft board member and former OpenAI founding board member. He's a prolific author, podcaster, and political activist, and he's also one of my favorite technology optimists, big picture thinkers, and supporters of people and founders. Welcome, Reid. Thanks for doing this. Great to be here, and love what you guys are doing, and, you know, I'm a long-time friend and partner with both of you.
Starting point is 00:00:37 So this is awesome. We will start with a small question, which is: what is your view on the state of AI today? What do we need more of and less of? Well, I mean, the obvious thing about AI that everyone probably listening to this podcast already agrees with is that it's somewhere between the largest, you know, tech transformation of our lifetime and perhaps the largest tech transformation of human history. One of the things I use to describe it is like steam engine of the mind. So just like the steam engine
Starting point is 00:01:07 gave us physical powers, you know, kind of superpowers of construction and transport and manufacturing and a bunch of other things, this will give us a whole bunch of mental superpowers.
Starting point is 00:01:17 It's both the amplification of humanity, which is part of what the Impromptu book was gesturing towards. And also there will be some places where we will create, you know, kind of substitution, replacement of work in various ways. And obviously we'll get into some depth on that. But I think that's the macro picture. And then with that, of course, there's tons of
Starting point is 00:01:42 things that are current status and current needs. And I think everyone tends to overpredict a little bit how quickly things will change, like everything will change next year. And that's not going to happen. But then they tend to underpredict, you know, 10, 20 years out in some ways, and in terms of how the transitions happen. Although, you know, obviously, just like with all technologies, the doomsayers come out first. Whether it's the printing press, electricity, everything else, it's like, this is the end of the world. You can go back and you can find this is the end of the world in each of these things. You know, the printing press was described as degrading human capabilities through cognition and spreading misinformation, as an example. But, you know, what I'd say that probably,
Starting point is 00:02:28 as an arc, the thing that I would want to see more of, and that's part of the reason why I did Impromptu the way I did, in the creation, theorization, and design of what we're doing in artificial intelligence, is more of the kind of symbiotic amplification loop. We tend to, as technologists, say, well, I'm going to have autonomous vehicles and they're going to drive separately, which I think is a good thing in that case, because I think, you know, you don't need an amplification loop. You just need effective logistics, you know, safety, you know, save the 40,000 deaths that we currently have in human-driven vehicles and so forth. You can go in depth on that if that's useful. But, like, the fact is there's going to be
Starting point is 00:03:12 a whole bunch of things that are actually going to be better with people plus AI. That plus is a thing to focus on. And I think we haven't focused on that nearly as much. And that's, of course, part of the reason I wrote Impromptu. Do you have a favorite example of where you already see, like, the amplification loop you're talking about, or AI and humans working collaboratively together? So a friend of mine's kid first started with exposure to GPT-4 and was like, eh, I'm going to do sonnets, whatever, whatever. I don't care. Wait, how old is this kid, for context? 15.
Starting point is 00:03:48 Okay. You know, right kid, you know, really interested in organic chemistry and realized that she could place scientific papers in GB4 and say, explain this to me. I'm a 15-year-old. And already, like, an entire world opened up to an end. And I learned something from that because I was like, oh, yeah, there's occasionally these really complicated papers that I'm looking at going, I don't have the two hours that try to decode, hey, I can do what she's doing. That's smart. Right. And so that exists today. Part of the thing that I try to tell, the various concerned pessimists that say, well, you know, is my job, you know, to limit what the
Starting point is 00:04:34 large tech companies are doing with artificial intelligence? Well, actually, in fact, we have a line of sight to a medical assistant on every smartphone. That's five billion smartphones in the world. And, you know, much less than a billion people have access to doctors. You have a line of sight to a tutor on every subject for every age group, for everybody. Your actual job is to get those five billion people access to all of that, right? This example of being able to use it as a tutor today, if you just apply it a little bit, in amazing ways for everybody, right, who has access, access being the key thing that I would suggest governments should be working on, is, you know, stellar.
Starting point is 00:05:23 I've been working in technology now for about 20 years, and this is the biggest potential impact on global health and global equity that I've seen, and yet it's also the biggest immediate doomerism I've seen. And it feels like the foundation for that doomerism was laid many years ago. Why do you think that exists, or what do you think caused so much almost negative sentiment
Starting point is 00:05:43 or pessimism or calls for regulation so early by a number of people in the AI community? Well... In the AI community, I think a lot of it is very well-meaning, but frequently conceptually flawed. And there's a couple of different arcs you can go into. Maybe one of the most common arcs, especially, gets to the so-called X-risk people, since we're talking about the people who refer to themselves as doomers, P(doom) doomers or whatever. One of the things I find most amusing is people say, first step,
Starting point is 00:06:20 totally agree with this, very strong insight, human beings are very bad at making predictions and instincts off existential curves. Then they go, and then my prediction is, and you're like, well, wait a minute, you should take your first sentence seriously. Because, for example, what they do is they go, well, we have an exponential increase in compute, it's increase in cognitive functions, so then I'm going to hand wave a little bit and say that's an increase in IQ, we're going to have superintelligence, and then this is what's going to happen. And you're like, well, it's unclear. Like, for example, if the increase, and cognitive functions is actually more like an increase in savants of various ways,
Starting point is 00:06:54 then the superintelligence they're describing, and by the way, GPT-4 is already superintelligent relative to a number of human capabilities, is actually not that alarming. Go play with GPT-4. It's not alarming. It's actually enhancing and amplifying in various ways. And I think that kind of thing of going, you know, I come to an observation, exponential curve, and I go, oh, shit, right, and I'm trying to be helpful. And by the way, of course, the calls for regulation are, hey, I shouldn't solo make this decision because I'm a tech creator. I should get, you know, a broader sense of society and the representatives of society involved in this, not realizing, of course, all you're really doing is
Starting point is 00:07:34 calling for panic. For example, when I talked to some of the authors of the six-month pause letter, I was like, well, what did you think was going to happen? And they're like, well, we were just hoping everyone was going to pause. I'm like, okay, I thought I was going to need to talk to you about how tech development works, but let's talk about humanity first. Because I think you misunderstand humanity, right? Like, that's not what's going to happen. You don't send up a flare and say, everyone should pause for six months, and, oh, look, eight billion people paused. On your theory of the universe, the UN would be a highly functioning organization that we would all use for a bunch of things. It doesn't work that way. Yeah. One of the things that is sort of surprising to me is
Starting point is 00:08:13 how clearly laid out, like, the variations of the doomer scenario are, and how little color there is in the optimistic scenarios, right? And so this is one of the reasons I think you've been a really important, like, positive voice on the ways in which humanity will be pushed forward by AI and collaboration. Because, as both of you said, it's entirely predictable that there will be some sort of panic around every new transformational technology, going back to, like, because of the advent of the telephone, no one will ever leave their home again. Yes, exactly. And so I think it's very, and this is around news cycles as well, it's very easy to amplify a negative scenario. Also because, I think, the set of fiction that actually inspires lots of technologists is much more dystopian than utopian in sci-fi, because utopia has no conflict. Yeah. Well, especially in video, right? There is some stuff in writing that's pretty good. You know, Iain Banks, et cetera, et cetera. But the video is always, like, person versus machine. And, you know, the machine has to play the conflict evil role. And one of the things in 2019, I went and talked to all the CA people, saying, look, you're damaging humanity with all these stories. You should put the machine also in the positive category. It could be person plus good machine against bad machine. That's fine. But, like, have some
Starting point is 00:09:43 understanding that there's a good role, that there's a potential good role for this. And, like, when you look at all these technologies, there's a ton of positive. This is one of the reasons why I didn't sign the 22-word statement of, you know, AI should be treated as an existential risk along with climate change, pandemic, et cetera. A bunch of people I treasure signed this letter, Sam Altman, you know, Mustafa Suleyman, et cetera. And it's a 22-word statement. And the reason is because, unlike the other things, climate change, pandemic, etc., they don't have anything in the positive column. When you get to pandemic, maybe the only way to solve pandemic is AI. Well, a certain help for climate is AI. It's net,
Starting point is 00:10:25 I think, strongly in the positive column. That isn't to say that it might not add some existential risk characteristics to the overall portfolio, but your overall portfolio is improved with it in it relative to without it. And the reason, I think, that a lot of people do this, and this is one of the things that, you know, given what we're doing here, is the critics think they're virtuous, because they go, oh, there's this danger that we should trumpet. And you're like, well, actually, in fact, you may be doing more harm than you're doing good in your attempt to be virtuous, because by trumpeting the negativity, you're not shaping where we could be positive. And so my challenge to the critics is to say, you have to be articulating where we
Starting point is 00:11:07 should be going to and what we should be doing. And then we can navigate around them. Now, I also don't think that the, that the, oh, all technology is just great. We don't need to think about safety at all. That's Bozoville, right? Of course, like, it's like, look, you can clearly do things dumbly with technology. There's a bunch of stuff around viruses that people, like, by not being careful and all this, can be really dumb about or genetic manipulation. You're like, no, no, no, you have to be, you have to be intelligent and careful about it.
Starting point is 00:11:36 But going to a future that is so much better than the present, that's the goal, and that's there. And if you're not articulating that that's possible, and how you think your critiques or risks could help navigate to getting there, then actually, in fact, you're being destructive versus constructive. And I think part of the reason why people do this is they go, oh, I know that I'm just being good when I articulate this fear and this risk. And you're like, actually, no, you've got a conceptual mistake. You're actually maybe even being bad, right? So how to get to the good future is actually the hard work. So do the hard work.
Starting point is 00:12:16 Consider the opportunity cost of all the good that we think AI might do in the short and long term. One more thought on this, more in terms of the short term. What is your advice for the many people, increasingly in white-collar jobs, who are concerned about AI replacing them? For various reasons, in the medium and the long term, I'm actually pretty positive, and counter to a lot of very smart AI thinkers who go, oh, my God, we may have rampant unemployment and a bunch of other stuff. And this is one of the medium-term concerns that I actually really respect and think is going to be super important, because, you know, steam engine. Steam engine of the mind. The steam engine, you know, helping create capitalism and other things, had huge human consequences
Starting point is 00:13:01 of transformation of society that's important to navigate. That's as important here, in part because the speed of transformation will be a lot faster. You have 5 billion smartphones; your ability, you know, with computing devices and the Internet, your ability to have that transformation hit is a much more intense and focused wave. And, you know, call it, you know, mid-level white-collar jobs, including some upper-level white-collar jobs, are one of the ones that are going to get transformed first and most ferociously. And so I think first the advice to the folks, which is:
Starting point is 00:13:40 start playing with AI, use it as amplification, you've got to learn it. I understand you may say, hey, look, I'm 40, 50, I've got my nice experienced position, I'm comfy, I do not want my society changed. Look, the person who was driving the horse-and-buggy carriage felt the same way. The Luddite weavers felt the same way. You know, it's like, no, no, you've got to trot out your learning and some curiosity. It doesn't have to be perfect. You don't have to be the A student. You just have to be engaged in learning the tool some. The same way you were learning Excel for doing your accounting thing and so forth, just, you know, learn some on it. Now, here is the good
Starting point is 00:14:21 news. The good news is, and it's a general arc, and this is one of the things I was trying to do with Impromptu, and I'm going to do some more writing on this next year, is that anything that AI creates as a challenge, AI can also be part of the solution. Because you go, well, okay, so it's going to displace a whole bunch of customer service people. Yep. All right. So what can you build for customer service people that can help them figure out what other jobs they might be able to do, how they might find those jobs, how they might learn those jobs, how they might do those jobs? And let's make sure those AI tools are built too, to help people with the transition. You say, well, okay, there's a whole bunch of paper filing in
Starting point is 00:15:01 accounting, or form management in marketing groups, you know, kind of doing stuff. And that's all going to be much less human effort relative to the amount of work that's going on. Those people, how do they learn new jobs? Now, part of the thing, the reason I'm more optimistic over time, the transitions I pay a lot of attention to, but I'm optimistic about where they get to. Because, of course, the exponential people tend to say, well, no, no, no, but then they're just going to be better than all humans, and humans, you know, can't be doing anything. I actually think that these progress by taking on a whole bunch of tasks, and we learn and adapt to other things. And so when you say to an individual,
Starting point is 00:15:44 let's help with the transitions of the individuals to finding other kinds of things. And by the way, the other thing is, it's like, let's use this as a parallel: truck driving. So, you know, Aurora is obviously trying to do the completely autonomous truck. Well, if every truck manufacturer in the world basically started manufacturing AV trucks tomorrow, right, it'll be 10 years before more than half of the trucks on the road are AV trucks, right? That gives you time to adjust. That gives you time to make this work. And I think that there is more time for adjustment than the usual, like, you know, five-alarm fire, you know, ringing the bell, both for the individual
Starting point is 00:16:32 and for organizations and for society. And so it's like, look, let's navigate into it and be paying attention to it and planning and trying to create it. But once again, AI can be part of the solution. I guess speaking of career transitions, you've had, I think, one of the most impressive careers in Silicon Valley. You know, you started a company that was an early social networking company in the 90s. You were at PayPal, initially as a board member and then a senior person there. You started LinkedIn, which is one of the most important social products in the world, and sort of productivity tools. You ran Greylock, the venture fund. And so you've had this amazing career arc,
Starting point is 00:17:05 and usually once people hit your moment in time, they kind of say, okay, I'm done. And, you know, they move to, you know, Tahiti or whatever it is, wherever people park their boats. Instead, you've decided to start Inflection, which recently released the chatbot Pi, which is, you know, focused on empathetic chatbots and human interaction and everything else.
Starting point is 00:17:25 Can you tell us more about your decision and not only, like, why start this specific company, but why even do anything at this point? Well, I'm not very good at being bored. I hate boredom, cocktail parties or waiting in line. And so that's part of it. The other thing, of course, is how do we lead meaningful lives? It's because we leave the world in a much better place than we found it.
Starting point is 00:17:49 And you can work at any level of scale. I think it's very honorable to say, hey, I'm working at my local senior community. For me, you know, obviously Blitzscaling and Masters of Scale, the podcast, and all the rest of the stuff. You know, scale is my particular thing. I just have no idea when I'll retire. And I don't really have that much of an interest in yachts.
Starting point is 00:18:10 I do have an interest in getting to Tahiti at some point. I've heard about it. It sounds kind of an interesting place to visit. Yeah. It's in the middle of the ocean, in case you don't know much about it. I have heard. It's very relaxing. Yeah, some have said.
Starting point is 00:18:22 So you started Inflection. Can you tell us a little about how that came about and the focus of the company? So Mustafa Suleyman and Karén Simonyan and I were talking about this amazing transformation that's going to transform all industries, going to affect every different kind of path where language and cognition plays a role in society. Like, everyone on the planet is going to have a medical assistant if we can just get them access, even on a friend's smartphone. And there's all this stuff that's going to happen. And you say, well, what exactly is going to be happening with AI 10 years from now? And all three of us could make predictions, and all three of us are probably going to
Starting point is 00:19:02 look foolish in two years with whatever prediction we make today, right, in terms of how this works. That's the nature of the game. We go, okay, with startups, you're trying to find things that would live as massive, interesting, independent companies developing a product. And so one of the things we came to was: every individual will have a personal intelligence, a Pi, that's for them, right?
Starting point is 00:19:41 in fact, like something that is kind of a tool companion that reflects off whatever thing that you're particularly grappling with. It can range from how do I fix my flat top? to I had this kind of challenging conversation with a friend and I want to debug it or I'm trying to think about like what I should do next in my work or something else and have a have something that that can be there for you. And yes, it's it has elements of a therapist, but it's not a therapist, right? Because it's, it's actually deeply knowledgeable in the world and it's not supposed to be just reflecting the, you know, Elad, tell me the thing that you were most troubled about in your relationship with their mother. It's, hey, I'm, I'm here to to provide a,
Starting point is 00:20:24 lens on the world and to help you. And, like, for example, unlike the movie Her, where it's like, oh, talk to me, don't go to the world. It's like, if you show up and say, oh, you're my only friend, Pi, it's like, oh, we should help you get other friends. Let's talk about the importance of friendship and people you can talk to. And maybe there are some people you could talk to about it, because it's helping you with the world. And of course, then when you begin to design it, you think, well, what would be the right thing for a lot of the people in the world? Nothing is probably for everybody, but it's like, well, something that's compassionate, kind, something that has kind of a point of view. So it doesn't just, like, reflect. Like, if you show up and say, I'm a
Starting point is 00:21:04 white supremacist and I think race X is evil, it doesn't go, oh, I'm with you, I agree. It goes, well, really, you should think about that. Like, it's much better to be compassionate and to realize that we're all humans, you know, and kind of work with you, too. And so it has a point of view in how it's operating, but with a view of helping you and amplifying you as a way of doing it, and then bringing, you know, the enormous set of resources that these kinds of amazing large-scale language models can bring to it. And that, we said, okay, that product should exist. It'll be one of the fixed points at, you know, X years in the future. And we see that clearly, so we're going to be building towards it. And as far as I can tell, I mean, you know,
Starting point is 00:21:49 you guys are both highly active AI investors. I think on that path, we're the ones, you know, who are most, like, of the serious teams, were the ones who are dedicated to that path versus other paths. How have you seen people use the products so far in terms of other typical types of interactions? Is it reflecting this original intention? Like, how do you view user behavior relative to the product? I think we're getting that. I mean, you know, just like the surprise I kind of shared with a GPD4, which is like, wow, that's an amazing use case that's great.
Starting point is 00:22:24 Like, actually, one of the people at Greylock is a new parent. And one of the things she came up to tell me about was, oh, my God, it's giving me great help with, like, how do I navigate, you know, all the things as a first-time parent? And, like, what are the things I should do? What should I pay attention to? You know, which things should I, like, read more about? Which things should I really obsess about, and which things do I not need to worry about? And it's just, it's there, like, when I go, oh, I encounter this, and I can ask right now and it helps me right now. That's awesome. It's the thing that's useful to you. And so there's just a
Starting point is 00:23:00 whole stack of them. And part of the reason I was using, like, the flat tire example is because I had personally conceptualized Pi entirely conceptually. It's like, you know, how do I help navigate my path through human society, whether it's work and the people I'm talking to, or friends, and da-da. Well, someone went up to Pi and said, okay, how do I fix my flat tire? Right. And it helped. It was interactions with Pi like that that got me to update my recommendation that everyone experiment with AI. Because it's like, look, don't just go try to do something like, well, okay, I'm sitting in front of GPT-4, I'm going to write a sonnet, right? Like, because I haven't written a sonnet, and I've seen them, and blah. Great,
Starting point is 00:23:41 go ahead and do that, there's nothing to say don't do that. But my recommendation to people, and this gets to the white-collar work thing that you raised earlier, Sarah, as well, is like, no, no, try it with something that matters to you, right? Even if you may not expect it to get a good answer. And by the way, sometimes you won't. These things are not perfect in all kinds of ways. Sometimes you go, well, that was kind of lame and useless. Like, when I first got access to GPT-4, you know, months before it was publicly accessible because I was on the OpenAI board, I sat down and said, how can I, Reid Hoffman, make money through investing in artificial intelligence? Because I just wanted to try it. And it was useless. It was the classic MBA answer, like, I don't understand anything about investing, and I'm going to write something that sounds really smart. Like, you're going to study markets and addressable TAMs, and you're going to know which technological transformations matter, and then you're going to go find teams that are doing that. And it's like, no, that's not the way this technology investing thing works. It's
Starting point is 00:24:41 how you might teach it in, you know, a seemingly smart MBA course if you're not knowledgeable, but it's not how it works. You'll find some of the answers are not useful to you, but you'll find others that are. Reid, I don't know. We're just following those steps. It seems to be going pretty well so far. Well, for example, it can be useful when you say, I have an associate, and you go, what's all of this stuff, where should I focus my due diligence? Actually, in fact, giving you a summary on that stuff for an associate can actually be very useful. Which things is it useful for? That's the key thing to start experimenting on. Because some of the stuff, it's great, and some of the stuff, it's like, eh, not so much. And I think you have
Starting point is 00:25:23 a really key embedded point here, which you mentioned earlier, which is: who is it relevant for? And that's very personalized in terms of the specific context of the individual. One thing related to that that you mentioned was that you wanted it to have an opinionated perspective. You know, you wanted it to come with some pre-existing framework or pre-thought-out perspective on the world. And I think the racism one is actually a very cogent one, given a lot of what's happening in the world today relative to universities and the perception of them in terms of, you know, are they doing the right thing or not relative to anti-Semitism or race or other things. Many of the people who actually work on AI ethics come out of these institutions that are now
Starting point is 00:26:01 being viewed increasingly as potentially biased. How do you think about where that perspective should come from, and who should actually decide what the right perspective is? Because you look at, for example, Falcon in the UAE, which is an open-source LLM. And I think one of the reasons they're doing it is because they don't necessarily want the Western perspective to be thrust on every single AI model. And it's a very specific Western perspective. So I'm just curious how you think about the ethical and moral frameworks that should be applied to AI, and who should actually determine them, which perhaps is an even more important question. Well, the thing that I think is very much baseline is
Starting point is 00:26:37 The developers, and we'll get to the full answer to your question, but the developers should be honest, open, and transparent about what they're designing to. And one of the things that frequently a lot of Silicon Valley people say, which I think is Bozoville, is that technology is value neutral, right? I actually think values are embedded in it in various ways, and I don't think that's a bad thing. I just think it's like one of the reasons why I love with the economist is one of my is one of my favorite magazines or the Atlantic is because they don't go, we're value neutral.
Starting point is 00:27:09 They go, here are our values, right? And here's what we're trying to do and hold us accountable to the way that we are articulating what we're trying to say. And it gives you a much more intelligent perspective. And I think that that's what technology companies should be doing.
Starting point is 00:27:24 I think that's what, you know, AI companies should be doing. And I think the kind of AI agents and whatnot are a way of doing it. And so I think that's what's most important. Now, ultimately, you start with, like, when you're doing startups or initial products and there's a field of at least some choice, I think it's the developers of the products. I think it's the companies being transparent, open about things: we're doing X for this reason, this is why we're doing it. And I think one of the challenges for technology companies, of course, is that as they become more ubiquitous and important across all of society, e.g., shaping our collective mindsets, whether it's search or, you know, social networks or, you know, video networks or other kinds of things, as ways of doing this, this does have society-level impact. And so there's responsibility not just to the individual as a participant and a customer, but to society as a customer.
Starting point is 00:28:21 And how do you navigate that? And I think that's important across all these things. And I think that's important as we begin to get the AI stuff to scale. And you could say it's important to have a certain amount of diversity and participation in that, for a set of options and perhaps limitations. Because if you say, hey, I'm going to create an AI that's going to enable terrorists around the world, we're like, well, we don't think that's a good idea. Right.
Starting point is 00:28:48 And we're going to do something about that. Or, for example, you know, a stunning failure of a question: we're going to create an AI that helps people articulate and advocate for genocide. You know, like, no, that is clearly bad. Genocide against any human category, any, is terrible and evil. Full stop. And so I think, you know, there's a dialogue within society about that. I do think that one of the things that, you know, is an uncomfortable truth is that people go, oh, AI is shaping everything, and so everyone wants to put their hands on it in order to shape it. And yet technology is built by small
Starting point is 00:29:30 groups of people doing things. And you just can't have AI built by a UN committee. It just doesn't exist. And it's one of the things that academics mostly don't understand, because they've never, most of them, not all, of course, but most have never built anything, don't really understand how these organizations work, don't understand how technology development works. They think if you just kind of write an essay, then a technology piece will come out of it. And you're like, that's not how it works. You have to understand that, and understand how technology is built in various ways, in order to make that happen. And then you have to try to shape it.
Starting point is 00:30:03 That's part of the reason why the productive dialogue is not, you guys are evil because you fucked up on this bias case, blah, blah, blah, lead the witch hunt. That's not useful. It's like, well, actually, in fact, your stuff on race is not good. Here's some ways you could make it better, and here's some benchmarks that you could use in order to avoid it. And, by the way, if after I say that, you're not listening to me
Starting point is 00:30:28 and you're not making it better in some way, then fine, I'll go to the streets with it, right? Because we should be better on bias and race and all the rest. One of the things that you said that really resonates with me is the idea that, like technology products, they take a point of view. They're built in a certain way by a relatively small group of people. And the way you govern them, if they have impact on society, is you interact with those people, right? And then you hold that group accountable. I think one of the challenges is, I feel like a big
Starting point is 00:30:55 driver of the current narrative is, well, like, because we didn't regulate and control social media companies that ended up being publishers that surfaced or drove certain points of view, we need to get that right with AI very early. I think the challenge is, like, that's true in many parts of society, right? Maybe it is uncomfortable because it is a set of people that are going to have outsized influence on society. By the basic nature of building the thing, like, there's not really a way around that, right? All you can do is interact with them and govern the thing. And I think we should also expect to see that more around academia, or I would ask for it. Yeah, and I think it's a good thought on academia. I mean, look, one of the things that people don't understand is the only way you make progress in technology and get to it is you deploy, you learn, you iterate. And so you're going to have errors. There's no way to not have any errors. I mean, I would love it to have zero bias errors. And in terms of the AI regulation, yeah, I've heard the same thing. It's like, oh, my God, we made this total mistake because we allowed the social networks to go without regulation. And, you know, I think that the, uh,
Starting point is 00:32:10 The problem is you don't really know the shape that you need to navigate it in until you begin to see it. And so, like, I went to the UK AI summit, this safety and innovation summit at the beginning of November. It was a very good summit. The British government, I think, you know, triggered a whole bunch of stuff to kind of go in the right way. But one of the dumbass things that I heard at the summit was, and this time, we will not allow innovation before we regulate. You're like, well, that's dumb on several levels. One, we've already innovated. Two, there's no way to do that.
Starting point is 00:32:45 None of us know how to do that. And what's more, generally speaking, regulation is enshrining the past against the future. And if you look at every industry that gets really intense on regulation, it slows down intensely on innovation. And if you say, well, that's what we should be doing in AI, it's like, look, I think you are categorically wrong and harming humanity. Think about it: let's try to get to the medical assistant for everybody. I guess speaking of benefits, how do you think about the areas of AI where there are the
Starting point is 00:33:18 biggest sort of available startup opportunities? Because often when you look at these technology waves, there's a set of value that goes to incumbents. You know, it's somebody who already owns a workflow for a SaaS tool or whatever, and they just layer on AI, versus things that are greenfield, where suddenly you can do something new and exciting. Maybe that'd be something like Harvey for legal or other areas. Are there specific areas that you are most excited about or keenly looking for companies to exist in, or, you know, alternatively think could be big areas for startup innovation?
Starting point is 00:33:48 There's areas that I think are underdone that I would really like to see: cybersecurity with AI. You know, I think it would be very good to have that relative to society. I think that the notion of, you know, how do we make these transitions for the white-collar workers is, I think, something that, you know, I would like to see more of. I think, you know, the reason we don't is because it's not the best economic opportunity, possibly. And so people are all focused on the best economic opportunity. And by the way, as an entrepreneur and an investor who resembles that remark, right? I'm sympathetic. But, you know, like, how do we get those things as well? But I think there's just tons. Like, I literally come up with a new
Starting point is 00:34:36 AI thing that I think about, oh, I could help get that co-created, like, every week. And if I just had the kind of resources to do it, it would be like spawning new things. Yeah. And then I guess related to that, I think you were really forward-looking in terms of AI as a very important area of technology. And I remember going to an event you organized, I think it was like eight years ago or something, where it was like a small group discussion of AI and the future and things like that. As you look forward to the next generation of AI, so say we go from GPT-4 to GPT-5.
Starting point is 00:35:20 So I think there's two things at least. And I think there's going to be much more. So like always like part of the delight, the reason of the three of us do this is we learn things that we just hadn't thought about. And those really bright entrepreneurs come to us and we're like, oh, that's really great. And because, you know, the thousands of people innervating through the network is, right? One is we're going to get a lot more robust and capable on all the language model transformation. So, you know, whether it's a coding assistant, a legal assistant, a medical assistant, a meeting note taker, a, you know, an amplification of slideshows through Tome or or a workflow with Coda or any of this other stuff, that's all going to get just better. Right. The second thing is, is the part of the superpowers of these things, because it's a scale, compute thing, is breadth. So like, how does the protein folding lead to drug discovery or, you know, or other things like that? Like, things that are very broad space in this will also get special purpose tools that will be, I think, magnificent. I don't know if the result for those.
Starting point is 00:36:35 special-purpose tools will be in a year or two. But with the intensity of the work at the beginning of it, there will be, you know, gold and platinum from that kind of thing over time. And I think that's over a short time, like a small number of years, but we'll begin to see it over the next year or two. And I think the one level of scale, you know, because that's, you know, the 10x from GPT-4 to GPT-5, the one level of scale will unlock some of that stuff. And that's part of it. Now, I'm certain that there are other things that two years from now we'll look back on and say, oh, yeah, that was maybe even obvious now, but it was something I missed in that answer.
Starting point is 00:37:11 By the way, I'd be curious about you guys' answer to that. What would you guys say to that two-year GPT-4 to GPT-5 question? Yeah, I mean, I think there are going to be three steps in, or three areas of, capability improvement. I think one is going to be, to your point, on baseline models, both in terms of broadening the knowledge base as well as the increase in the ability around chain of logic, or the ability to think, or, you know, do simple thinking. So I think there's one thing all around that and how much better these capabilities get.
Starting point is 00:37:40 And you see that, for example, between GPT-3 or 3.5 and 4, where you had big step-ups in medical knowledge and understanding, legal understanding, et cetera. And you can do things on GPT-4 that you just can't on GPT-3.5 or equivalent models. And I think we'll see a step up in other functionality for other fields with that, as well as sort of that chain of logic. I think a second is just augmentation of these things. Augmentation may include forms of memory, so you can actually loop back in a more reasonable way to sort of chain logic or chain actions. Augmentation may be things like RAG, or the ability to bring other types of information or data sets in.
Starting point is 00:38:16 And so I think we're going to see a lot of capabilities around that. And obviously there's a big debate right now in terms of fine-tuning versus just increased context windows and prompt engineering, and how other things play off of each other, and how that affects generalizability, but I think we'll make a lot of progress on those sorts of things. And then third is, I think we'll make a lot of progress in bespoke models for specific application areas. And that may be biotech, to your point, or specific protein folding, or it could be materials or robotics, or it could be all sorts of things. But I think everybody's moving more to sort of end-to-end, both reinforcement learning
Starting point is 00:38:48 but more, like, sort of deep learning approaches, in some places where they hadn't applied them as deeply before and were using more heuristics. Like, self-driving would be an example of that. And it feels like the whole world is flipping to this more model-based approach instead of heuristics. And then maybe I'd actually throw in the last thing, which is, I think there will be some experimentation with new architectures besides transformers, and the question is, will they matter? And I, you know, I don't know. So those would be kind of my four 12-to-24-month predictions, but they may be incorrect, to your point on predicting this stuff.
Starting point is 00:39:10 I think on the new architectures, there's a bunch of stuff that I've been trying to work with, but I don't think new architectures will be a one-to-two-year thing. I'd be surprised if it was. Yeah, because you need to scale them, right? Yeah.
Starting point is 00:39:31 Yeah, I think all of this is clearly going to be wrong for all of us. It's not a judgment on both of you. It's all three of us, given what's happened in the last year. I think the two of us are going to get it right. So I don't know. Exactly. Okay, you two are going to be right? Fine.
Starting point is 00:39:43 Fine. Just me. I'll be. Have convictions, Sarah. Yeah. Yeah, well, here. There's a sign behind you. Conviction. Conviction.
Starting point is 00:39:52 Conviction is making decisions based on those beliefs and plowing ahead, always seeking the truth, but never knowing you're going to be totally sure. So it depends on what you describe as a new architecture, right? I think that there's a lot of experimentation around new attention mechanisms right now, and, you know, credit to Ashish and Niki and Noam and the whole Transformers team originally, and everybody who's worked on scaling it up, but that actually hasn't been that interesting of an area for a while, and I think there's much more interest in that now.
Starting point is 00:40:25 I think the biggest labs are enthusiastic about code and validation for code in some sort of, you know, self-reinforcing feedback loop of improvement, which I think, like, there are obvious reasons to be optimistic about. I think this isn't quite, like, just advancement of GPT-4 to 5, but I'm an investor in Mistral. And the efficiency that you get of being able to do the same reasoning at much smaller scales begets more applications, right? And I think that also, like, you just get much more experimentation in that case. And I'm pretty excited about, like, when you take away that barrier to entry for application developers, you're going to get so much more experimentation, because they can go take the cost of integration and workflow and domain understanding and put all the energy there, right? So, kind of what Reid said about, like,
Starting point is 00:41:30 all of these workflows are going to get a lot better and they're going to get a lot broader. Like companies like Harvey, like you need to go collect very specific data if you want to increase the sophistication of the legal tasks you're doing. You may or may not believe that this falls in the path of core reasoning. But one of the things I'm really excited about is the democratization of like content creation and creativity in general. That has been so dramatic this year. And I'm constantly surprised by what like Hey Jen and Pika are doing in terms of oh okay we can get avatars to walk around and take actions now or we can do we have much better controllability around video and I think for anybody who's worked on a social network as both if you have like you can
Starting point is 00:42:11 create those new content types like you get so much more expression um read mentioned like a lot you and I have talked about this but like the categories that make me most excited about impact on society, like education and health care are two of the areas that have been most resistant to like society's forays to get it to be better for cheaper. But I think the one other that I would mention here if we talk about code generation is, all right, today, you know, software is built by very small groups of people. They're often in Silicon Valley. Sometimes they're in Paris. But if you can enable more people to build software that is useful, I think that will dramatically change society. So I'm excited about that piece.
Starting point is 00:42:54 but, like, we're just going to be wrong. So I have conviction that, like, none of these predictions are exactly right. Yeah, I think Sarah sparked one other thing, which is in the mention of mistral or mistral. I never know how to say it because I can't do the French accent. Mistral. Mistral is that... Oh, my God, Arthur. I'm so sorry. I forget about it.
Starting point is 00:43:18 I do think that there's a lot of questions right now about inference versus cost and what infrastructure to use, and there are all these different folks doing everything from, sort of, like, Stripe for open-source APIs for these different models, on through to different hosting solutions and everything else. And I think in two years there'll be sort of a clear fallout of what are the set of approaches, and how do you do it, and what's the cheapest inference platform. And, you know, I think there'll be a lot of work done in terms of just the basic ability to use these models at high scalability and low cost, you know, across the board. And so I think that's another big shift where people are still kind of figuring
Starting point is 00:43:57 things out right now. But I think it'll be pretty solved in two years. Like what Sarah was bringing up in terms of creativity, I think that part of it is we're going to have a number of superpowers that we don't currently envision. And part of it is, like, for example, one of the, you know, slogans that I've borrowed from Kevin Scott at Microsoft is that the most significant programming language in the next few years is going to turn into English. And then, of course, rapidly followed by Chinese, because of the broad use of the language, and being able to create computational artifacts, code, etc. I mean, you know, even today, someone can go to these AI agents and code up a website where they wouldn't have been able to code up a website before.
Starting point is 00:44:40 And that's part of the reason I'm optimistic about there being a symbiotic relationship, you know, people plus the AIs, because I think there's that sort of direction. You know, part of the thing to do is, don't try to say, no, no, I want to keep the present exactly as it is. It's, what future should we be making? And, you know, you can say, hey, there's a danger over here; like, as we go to this, let's try to avoid that. That's totally good. But, like, it's, where should we be going? What should we be doing? That's the most important context for all of that. Is there anything we didn't cover that you wanted to talk about?
Starting point is 00:45:36 Well, obviously, we'll probably do this again, and I think that there's just a ton. But the way the technology is created is a small group that does something bold, takes a risk, and makes it happen. And most people outside of the tech industry don't really fully understand that. And so we need to help them understand what's going on, but also to have the dialogue about, like, look, raise your considerations and so forth. But frankly, as the cars get steered, only the people in the car really have their hands on the steering wheel. So you have to have a dialogue, just like you're driving down the road and navigating with other cars and so forth, about what you're doing, as opposed to, you know, we will all decide this is what's going to happen. Because, you know, maybe this is top of mind from the EU AI Act stuff, which, you know, always makes me think that they're trying to hold on to the past so ferociously that they're just completely willing to sacrifice the future. Anyway.
Starting point is 00:46:33 When they have such an opportunity, too, given, like, they actually have, you know, great talent in Europe working on AI now. Yeah. No, exactly. Okay. Well, we take no paid sponsors for this program, but legitimately, you should read Impromptu, drink Y3000 AI Coke, and listen to No Priors. Reid, thank you so much for doing this. And until next time. Thanks. Always great to see you guys. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces, follow the show on Apple Podcasts, Spotify, or wherever you listen.
Starting point is 00:47:01 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
