A No Priors clip show: the best of 2023

Episode Date: January 11, 2024

We’re looking back on 2023 and sharing a handful of our favorite conversations. Last year was full of insightful conversations that shaped the way we think about the most innovative movements in the AI space. Want to hear more? Check out the full episodes here:

- What is Digital Life? with OpenAI Co-Founder & Chief Scientist Ilya Sutskever
- How AI can help small businesses with Former Square CEO Alyssa Henry
- Will Everyone Have a Personal AI? with Mustafa Suleyman, Founder of DeepMind and Inflection
- How will AI bring us the future of medicine? with Daphne Koller from Insitro
- The case for AI optimism with Reid Hoffman from Inflection AI
- Your AI Friends Have Awoken, with Noam Shazeer
- Mistral 7B and the Open Source Revolution with Arthur Mensch, CEO Mistral AI
- The Computing Platform Underlying AI with Jensen Huang, Founder and CEO NVIDIA

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @reidhoffman | @alyssahhenry | @ilyasut | @mustafasuleyman | @DaphneKoller | @arthurmensch | @MrJensenHuang

Show Notes:
(0:00) Introduction
(0:27) Ilya Sutskever on the governance structure of OpenAI
(3:11) Alyssa Henry on how AI can help small business owners
(5:25) Mustafa Suleyman on defining intelligence
(8:53) Reid Hoffman’s advice for co-working with AI
(11:47) Daphne Koller on probabilistic graphical models
(13:15) Noam Shazeer on the possibilities of LLMs
(14:27) Arthur Mensch on keeping AI open
(17:19) Jensen Huang on how Nvidia decides what to work on

Transcript
Hi, No Priors listeners, happy 2024. This week, we're taking a look back on 2023 by bringing you clips from a few of our favorite conversations of the year. We had so many insightful guests, and these are really just scratching the surface. We'll list all of the episodes featured so you can go back and relisten to the whole conversation. Up first, we have a clip from our conversation with Ilya Sutskever, the co-founder of OpenAI. We talked with him before all of the drama with the board asking Sam Altman to step down and then his return. So we don't touch on any of that.
But in this clip, we talk about OpenAI's nonprofit roots and their evolution into the capped profit.

So the goal of OpenAI from the very beginning has been to make sure that artificial general intelligence, by which we mean autonomous systems, AI that can actually do most of the jobs and activities and tasks that people do, benefits all of humanity. That was the goal from the beginning. The initial thinking was that maybe the best way to do it is by just open sourcing a lot of technology, and also attempting to do it as a non-profit, which seemed very sensible.
Starting point is 00:01:18 This is the goal. Non-profit is the way to do it. What changed? At some point at OpenAI, we realized and we were. perhaps among the earliest to realize that to make progress in AI for real, you need a lot of compute. Now, what does a lot mean? The appetite for compute is truly endless as now clearly seen, but we realize that we will need a lot. And a nonprofit wouldn't be the way to get there, wouldn't be able to build a large cluster with a nonprofit. That's where we became, we converted
Starting point is 00:01:55 into this unusual structure called cap profit. And to my knowledge, we are the only cap profit company in the world. But the idea is that investors put in some money, but even if the company does incredibly well, they don't get more than some multiplier on top of their original investment. And the reason to do this, the reason why that makes sense, you know, there are arguments, one could make arguments against it as well, but the argument for it is that if,
Starting point is 00:02:25 you believe that the technology that we are building, AGI, could potentially be so capable as to do every single task that people do, does it mean that it might unemploy everyone? Well, I don't know, but it's not impossible. And if that's the case, it makes sense. It will make a lot of sense if the company that built such a technology would not be able to make infinite, would not be incentivized rather to make infinite profits. I don't know if it will literally play out this way because of competition in AI. So there will be multiple companies and I think that we'll have some unforeseen implications on the argument which I'm making.
Starting point is 00:03:05 But that was the thinking. Up next, we have a clip from our conversation with Alyssa Henry, the former Square CEO. We talked about how AI can help small business owners with all the complexity of the parts of the business they don't love. What's so exciting to me about kind of really how the landscape has changed and the technology advances in the last year are how much better the tools have gotten and how much more broadly applicable they are in terms of bringing kind of expert assistance to much larger audience, right? But it effectively unlocked the consumer and started to then show what this technology could do when then further integrated. into domain-specific areas. You go talk to small business owners. Most of them will tell you, gosh, I know I should be doing marketing, right?
Starting point is 00:03:54 Like, I know if I was more effective in doing that and reaching out with my customers, you know, I could drive more business. But I got to tell you, you know, I work all day. And then I come home at night and I've got to take care, you know, take care of my family. And then it's 8 p.m. And I'm starting to think about, gosh, you know, do I just be chill for a minute? Or, you know, am I going to spend the next three hours trying to, create an image and write text for the campaign and everything like that.
Starting point is 00:04:21 And what I tell you is like, I know I should be doing this stuff, but it's just too hard and it takes too much time. And I'm not an expert. Like I got into doing this because I love cupcakes, not because I like writing email marketing, right? And so what's exciting about all this technology, that's one example, but there's so many of these kind of different things where just the ease of use
Starting point is 00:04:40 and the accessibility opens up what previously was effectively just massive white space. right it was customers or people that if it was easy enough to use if it was accessible enough if it was cheap enough they go yeah that would be that would be huge for me but it was wasn't accessible it was too expensive it was too hard to go find and hire a marketing consultant to do it for me and the ROI wasn't there and blah blah so I think this this the evolution that's occurring right now is is exciting in part just because of really the you know previously unaddressed demand that it's unlocking. We also talked to Mustafa Suleiman, the co-founder of DeepMind,
Starting point is 00:05:25 and now co-founder and CEO of Inflection AI, about how his team worked to define intelligence and emotional intelligence and give themselves measurable benchmarks to move toward when building their models. Spent a lot of time with Shane Legg as well. And Shane was really the core driver of the ideas and the language around artificial general intelligence. I mean, he had worked on that for his PhD. with Marcus Hutter on definitions of intelligence.
Starting point is 00:05:52 I found that super inspiring. I think that was actually the turning point for me that it was pretty clear that we at least had a thesis around how we could distill the sort of essence of human intelligence into an algorithmic construct. And it was his work in, I think for his PhD thesis, he put together like 80 definitions of intelligence. and aggregated those into a single formulation, which was how do we, you know, the intelligence
Starting point is 00:06:25 is the ability to perform well across a wide range of problems. And he basically, you know, gave us a measurement, an engineering kind of measurement that allowed us to constantly measure progress towards, you know, whether we were actually producing an algorithm, which was inherently general, i.e., it could do many things well at the same time. Is that the working definition you use for intelligence today? Actually, no. I've changed. I think that there's a more nuanced version of that.
Starting point is 00:06:59 I think that's a good definition of intelligence, but I think in a weird way, it's over-rotated the entire field on one aspect of intelligence, which is generality. And I think Open AI and, then subsequently Anthropic and others have taken up this default sort of mantra that all that matters is can a single agent do everything, you know, can it be multimodal, can it do translation and speech generation, recognition, et cetera, et cetera. I think there's another definition which is valuable, which is the ability to direct attention or processing power to the salient features of an environment given some context, right? So actually what you want is to be able to take your raw processing horsepower and direct it in the right way at the right time because it may be that a certain tone or style,
Starting point is 00:08:01 is more appropriate, given a context. It may be that a certain expert model is more suitable, or it may be that you actually need to go and use a tool, right? And obviously, we're starting to see this emerge. And in fact, I think the key, and we can get into this, obviously, in a moment, but I think the key element that is going to really unlock this field is actually going to be the router in the middle of a series
Starting point is 00:08:26 of different systems, which are specialized, some of which don't even look like AI at all. They might just be traditional pieces of software, databases, tools, and other sorts of things. But it's the router or the kind of central brain, which is going to need to be the key decision maker. And that doesn't necessarily need to be the largest language model that we have. Up next is a snippet from a recent conversation we had with Reid Hoffman. He's talking here about how we should think about the risk of labor replacement and how people can make a plan to best work with AI. I mean, the obvious thing that AI that everyone probably listening to this podcast already agrees with is that it's somewhere between the largest, you know, tech transformation of our lifetime and perhaps the largest tech transformation of human history.
Starting point is 00:09:13 One of the things I use to describe it is like steam engine of the mind. So just like the steam engine gave us physical powers, you know, kind of superpowers of, you know, construction and transport and manufacturing and a bunch of other things, this will give us a whole bunch of of. mental superpowers. It's both the invocation of humanity, which is part of what the impromptu book was gesturing towards. And also there will be some places where we will create, you know, kind of substitution, a replacement of work in various ways. And obviously we'll get into some depth on that. But I think that's the macro picture. And then with that, of course, there's tons of things that are current status and current needs. And I think everyone tends to a little bit overpredict like how quickly things like everything will change next year and that's not
Starting point is 00:10:03 going to happen. But then they tend to underpredict, you know, 10, 20 years in some ways in terms of how the transitions. Although, you know, obviously because just like all technologies, the doomsayers come out first, whether it's the printing press, electricity, everything else is like, this is the end of the world. You can go back. And you can find this is the end of the world in each of these things. You know, the printing press was described as degrading human capabilities through cognition and spreading misinformation as an example. And but, you know, what I'd say, that probably as an arc, the thing that I would want to see more of in the,
Starting point is 00:10:44 and that's part of the reason why I did impromptu the way I did, in the creation, theorization, and the design of what we're doing in our artificial intelligence is more in the kind of symbiotic amplification loop. We tend to, as technologists, say, well, I'm going to have autonomous vehicles and they're going to drive separately, which I think is a good thing in that case, because I think, you know, you don't need an amplification loop. You just need effective logistics, you know, safety, you know, save the 40,000 deaths that we currently have in human-driven vehicles and so forth. I can go in depth in that if that's useful. But like the fact is there's going to be a whole bunch of things that are actually going to be better with people plus AI. That plus is a thing
Starting point is 00:11:32 to focus on. And I think we haven't nearly as much. And that's, of course, part of the reason I wrote impromptu. Our conversations on no priors can range from the philosophical to the extremely practical. Our conversation with Daphne Kohler from in situ was a look into how AI can improve the economics of biotech discovery. In this clip, she's talking about probabilistic graphical models as a precursor to current architectures. So I think that just like in most fields, there is a swing of a pendulum.
Starting point is 00:12:01 A lot of the early work in probabilistic graphical models was hugely influential in bringing artificial intelligence more into the world of machine learning and working with numerical data rather than just symbolic AI. And then I think the advent of deep learning pushed that to the side a little bit because there was so much power that could be gained from basically the kind of pattern recognition from raw inputs, raw images, text, and so on, without having to worry very much about interpretable representations. What I think we're starting to see right now is a pendulum starting to swing back in the sense that there is a greater understanding that you really need a bit of. both. You need that hugely powerful pattern recognition that we get from deep learning, but you also need the ability to reason about things like causality, and you also need some interpretability of your deep learning models so that you can potentially convey to a clinician
Starting point is 00:13:00 why you made the decision that you did. And so what we're ending up with as a really powerful paradigm is some kind of synthesis of the ideas from both of these disciplines coming together. Next, we have a clip from our episode with Noam Shazir, the celebrated Google engineer, and now the co-founder and CEO of Character AI, where he talks about why he's a text nerd and the possibilities of language models. I've just had my head down in language. Like here you have like something that like a problem that like can do like anything. Like I want this thing to be good enough. So I just ask it like how do you cure cancer and it like invents a solution? And you know, like so I've been totally ignoring like what everybody's been doing in all these other
Starting point is 00:13:44 modalities where like I think a lot of the early successes in deep learning have been like in images and people are like all excited about images and I kind of like completely ignored it because like you know an image is worth a thousand words but it's like a million pixels so like the text is like a thousand times as dense so like kind of big big text text nerd here but you know it's very exciting to see it take off in you know and all these other modalities as well and you know those things are going to be great it's like super useful for building products that people want to use. But I think that a lot of the core intelligence
Starting point is 00:14:20 is going to come from these text models. To wrap up our favorite moments from 2023, we have part of our conversation with Arthur Munch, the co-founder and CEO of Mistral, talking about the evolution of collaboration in the AI space and why Mestral's mission is to keep AI open. Models can output any kind of text,
Starting point is 00:14:40 and in many cases you don't want it to output any kind of text. So when you build an application, you need to think on the guardrails you want to put on the model output and potentially also on the input. So you do need to have a system that filters input that are not valid, that you deem illegal, and output that are not valid or that you deem illegal. So the way you do it in our mind is that you do create the modular architecture that the application maker can use, which means you provide the role model, so the model that hasn't been altered to ban some of its output space.
Starting point is 00:15:19 And then you propose new filters on top of that that can detect the output that we don't want. So it can be pornography, it can be hateful speech. These things you want to ban when you have a chatbot, for instance. But these things, you don't want to ban from the raw model because if you want to use the role model to do moderation, for instance, you want your model to know about this stuff. So, really, assuming that the model should be well-behaved is, I think, a wrong assumption.
Starting point is 00:15:51 You need to make the assumption that the model should know everything. And then on top of that, have some modules that moderate and guardrail the model. So that's the way we approach it. And it's a way of empowering the application maker in making a well-guarded application. And we think that it's our responsibility to make very good modules that allow guard-wailing the model correctly. It's part of the platform. And we think it's the way of, there should be some healthy competition on that domain
Starting point is 00:16:22 of different startups working on guard railing the models. And the way you make this healthy competition is not by trusting a couple of companies to do their own safety. It's rather for, it's rather the way you do it is to ask application makers to comply with some rules. So chatbots should not output hateful speech. And so that means that now the application makers need to find a good guard railing solution.
Starting point is 00:16:52 And now you have a competition where there's some economic interest in providing the best garlanding solution. And so that's the way we think the ecosystem should work. And that's the way we position ourselves. That's the way we build the platform with modular filters and modular mechanisms to control the model well. We, of course, have to mention our chat with the amazing Jensen Huang, co-founder and CEO of NVIDIA. Here he talks about how NVIDIA decides what use cases to support and what applications of AI he's most excited about personally. There are a couple of things that our company is shaped and structured to do. There's one part, a very large part of our company is designed to build very, very complicated computers perfectly.
Starting point is 00:17:38 And so that is one of its missions. And that kind of architecture, that kind of organization is a invention and refinement organization. And then we have a whole bunch of skunk works, if you will. And the reason for that is because we're trying to invent things 10 years out that we're not exactly sure whether it's going to work or not. and there's a lot of adaptation, a lot of pivoting. And so our company actually has two different ways of working. One of them is rather organic, shape-shifting all the time. If a particular investment is not working out, we give up on it, move the resources
Starting point is 00:18:25 somewhere else. And so that's the agile part of the company. And then there's a part of the company that's not rigid, but it's really refined. And so these two systems have to work side by side. Thank you all so much for listening last year. If you want to dive more deeply into any of the conversations you've heard today, we've linked the full episodes in our description. We'll be back next week with more interviews with the leading builders and thinkers in AI and technology.
Starting point is 00:18:50 Find us on Twitter at No Pryor's Pod. Subscribe to our YouTube channel if you want to see our faces, follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-dash priors.com. Thank you.
