a16z Podcast - The 80-Year Bet: Why Naveen Rao Is Rebuilding the Computer from Scratch

Episode Date: December 8, 2025

Naveen Rao is cofounder and CEO of Unconventional AI, an AI chip startup building analog computing systems designed specifically for intelligence. Previously, Naveen led AI at Databricks and founded two successful companies: Mosaic (cloud computing) and Nervana (AI accelerators, acquired by Intel). In this episode, a16z's Matt Bornstein sits down with Naveen at NeurIPS to discuss why 80 years of digital computing may be the wrong substrate for AI, how the brain runs on 20 watts while data centers consume 4% of the US energy grid, the physics of causality and what it might mean for AGI, and why now is the moment to take this unconventional bet.

Stay Updated: If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.
Follow Naveen on X: https://x.com/NaveenGRao
Follow Matt on X: https://x.com/BornsteinMatt
Follow a16z on X: https://twitter.com/a16z
Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z
Follow the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Follow the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 I think AI is the next evolution of humanity. I think it takes us to a new level, allows us to collaborate and understand the world in much deeper ways. Naveen Rao is here, an expert in AI. Naveen Rao, probably one of the smartest guys in this domain. He sees things well before anybody else sees them. You had a lot of success doing Nervana, Mosaic, and Databricks. Why start a new chip company now?
Starting point is 00:00:20 First off, not a chip company per se. Most of what we're doing is really kind of looking at first principles of how learning works in a physical system. NVIDIA, TSMC, Google, are these potential allies for Unconventional, are these competitors? I think TSMC is absolutely going to be a partner. Google kind of has everything internally. And NVIDIA, of course, they've built the platform
Starting point is 00:00:39 that everyone programs on today. So are we going to be at odds with NVIDIA going forward? I don't know. We'll see what the world looks like, but there could be a world that we collaborate. Has anyone called you crazy yet for doing this? Oh, yeah, plenty of people. A squirrel brain runs on a tenth of a watt. Our AI data centers now consume 4% of
Starting point is 00:00:57 the entire U.S. power grid, and we need 400 more gigawatts in the next decade just to keep up. Naveen Rao thinks the problem isn't power generation. It's that we've been building the wrong kind of computer for 80 years. Naveen sold his last AI chip company to Intel, and now he's back with a bet most people call crazy. Analog computing, purpose build for intelligence. In this conversation, A16Z's Matt Borten sits down with the Novin to discuss why now is the time for this unconventional bet. Our guest today is Neveen Rao, co-founder and CEO of Unconventional AI, which is an AI-chip startup. Prior to that, Neveen was at Databricks as head of AI and co-founder of two successful
Starting point is 00:01:38 companies, Mosaic in the cloud computing world and Nirvana doing AI chip accelerators before was cool. We're here reporting from Nureps, and great to have you on the podcast, Mavine. Welcome. Thanks. Thanks for having me. So you were kind of at the vanguard thinking about what the proper hardware is for running AI workloads. Absolutely. I mean, you know, it's like when you have a hammer, everything's a nail, I suppose. But the early part of my career was really about
Starting point is 00:02:02 how do I take certain algorithms and capabilities and shrink them, make them faster, put them in form factors that make those use cases proliferate, like wireless technology or video compression. You couldn't do video compression real time on a laptop back then. There wasn't enough computing power. So you actually needed to build hardware to do those kinds of things. So early part of my career was all about that.
Starting point is 00:02:23 And then I went back to academia, did a PhD in neuroscience. And so you still kind of look at it like, hey, can I make something better, more efficient? And so you sold Nervana to Intel and then founded Mosaic, which is a cloud company. It's interesting to sort of cross domains like that, I think, to be able to look at hardware and software. I would sort of argue Mosaic was really a software company. How did you make that decision? And why do you think you have these diverse interests?
Starting point is 00:02:47 Well, I think I was, I don't know, I guess you would call it an OG kind of full stack. Now, full stack engineer means something different than it did back then. I think back then it's someone who understands potentially devices, like silicon, how to do logic design, computer architecture, low-level software, maybe OS-level software, and then application. That was a full-stack engineer. And I actually had touched all those topics. So to me, it's very natural to kind of think across these boundaries.
Starting point is 00:03:13 To me, like, software and hardware are not really natural boundaries. It's just where we decide to draw the line and say, okay, this is something I make configurable or I don't. And it's like, where is the world going to consume something? Where is the problem? And then right-size and figure out the solution to go and hit it. Now full stack means I know JavaScript and Python. That's right. So you've had a lot of success doing both of those things and at Databricks.
Starting point is 00:03:35 Why start a new chip company now? It is kind of crazy. It's one of these things like, actually, first off, I'd say it's not a chip company per se. Most of what we're doing at the beginning is theory and really kind of looking at first principles of how learning works in a physical system. And the reason to go back and do this is just purely out of passion. I think we can change how a computer is built.
Starting point is 00:03:58 We've been building largely the same kind of computer for 80 years. We went digital back in the 1940s. And in undergrad, in the 1990s, when I learned about the dynamics of the brain, like the brain's 20 watts of energy and the kind of computations that can happen inside of brains and neural systems, I was just blown away then. I'm still blown away by it. And I think we haven't really scratched the surface of how we can get close to that.
Starting point is 00:04:24 Biology is exquisitely efficient. It's very fast. It right sizes itself to the application at hand. When you're chilling out, you don't use much energy, but you're still aware of other threats and things like this. And once a threat happens, like everything turns on, it's very dynamic. And we really haven't built systems like this.
Starting point is 00:04:42 And I've been in the industry long enough to know that we have to have an incentive to build things. You can't just say, hey, I want to build this cool thing, therefore I go build it. Maybe in academia you can do that. But in sort of the real world, I can't. And now it's exciting because those concepts are super relevant. We're at a point in time where computing is bound by energy at the global level,
Starting point is 00:05:02 which just was never true in all of humanity. And so for those of us who aren't experts, can you describe the difference between digital and analog computing systems? And like, why do you think the architecture has evolved the way it has, sort of more digital-focused, over decades, as you said? Yeah, I mean, very simply. Digital computers implement numerics, and numerics with some sort of estimation, right? I mean, in a digital computer, a number is represented by a fixed number of bits. And that has some precision error and things like this.
Starting point is 00:05:34 It's just a way we implement the system. If you make it enough bits, like 64 bits, you can largely say that maybe the error is small. You don't have to think about it. And so the digital computer is capable of simulating anything that you can express as numbers and arithmetic. So it became a very general machine. I can literally simulate any physical process. All of physics, we try to do computational physics, right? I have an equation.
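The fixed-number-of-bits point can be made concrete with a quick Python sketch (an editorial illustration, not something from the episode): the same sum carries a visible rounding error at 32-bit precision and a much smaller, but still nonzero, error at 64 bits.

```python
import struct

def to_f32(x):
    # Round a Python float (64-bit) through a 32-bit representation
    # and back, emulating float32 arithmetic with the stdlib only.
    return struct.unpack("f", struct.pack("f", x))[0]

# The same arithmetic at two precisions: the error shrinks with more
# bits but never reaches zero, which is the "precision error" being
# described in the conversation.
err32 = abs(to_f32(to_f32(0.1) + to_f32(0.2)) - 0.3)
err64 = abs((0.1 + 0.2) - 0.3)

print(err32)  # on the order of 1e-8
print(err64)  # on the order of 1e-17, small but still not zero
```

This is why, as Naveen says, 64 bits lets you "largely say that maybe the error is small" without thinking about it, even though the error never actually disappears.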
Starting point is 00:05:56 I can then write numeric solvers that sort of deal with those imprecisions in the number of bits. And so this became, obviously, computer science, the entire field now. And we went that direction actually very early on because we couldn't scale up computation. It's actually kind of an interesting conversation. If you look from back then, not that I was there, of course,
Starting point is 00:06:15 but if you look at the papers and things, they actually looked very similar to today in terms of scaling up GPUs. Analog computers date from the first computers, and they worked really well. They're very efficient, but they couldn't be scaled up because of manufacturing variability. So someone said, oh, okay, you know what? I can actually make a vacuum tube behave as a high or a low very reliably. We can't characterize the in-between very well, but I can say it's high or low. And so that was kind of where we went to a digital abstraction, and then we could scale
Starting point is 00:06:42 up. ENIAC, which was built in 1945, had 18,000 vacuum tubes. Wow. So 18,000, kind of similar to how many GPUs people use now, right, for large-scale training. So scaling things up is always a hard problem. And once you figure out how to do it, it makes a paradigm happen. And I think that's why we went to digital. But analog still is inherently more efficient because it's actually analogous. Analogous computing is the way to think about it. Like, can I build a physical system that is similar to the quantity I'm trying to express
Starting point is 00:07:09 or compute over? You're effectively using the physics of the underlying medium to do the computation. And so in digital computers, we have transistors, just to make it sort of concrete. What kind of substrates are you talking about for analog computers? Yeah, I mean, analog computers can be lots of different things. Wind tunnels are a great example of an analog computer, in a sense, where I have a race car on a track or an airplane, and I want to understand how the wind moves around it. And you can, in theory, solve those things computationally. The problem is you're always going to be off.
Starting point is 00:07:41 It's very hard to know what the real system is going to look like. And doing things with computational fluid dynamics accurately is pretty hard. So people still build wind tunnels. That's actually modeling that. That's an analog computer. I think we still have lots of reasons to build these analogous type computers. Now, in the situation we're talking about, we can actually build circuits in silicon to recapitulate behaviors of neural networks.
Starting point is 00:08:04 So what we're doing today is more specialized than what we were doing 80 years ago, in the sense that back then we were trying to automate generic calculation, which was used to calculate artillery trajectories. It was used to calculate finances, maybe some physics problems like going into space, things like that. Those require determinism and specificity around these numbers and these computations. Intelligence is a different beast. You can build it out of numbers, but is it naturally built out of numbers? I don't know. A neural network is actually a stochastic machine. And so why are we
Starting point is 00:08:38 using the substrate that is highly precise and deterministic for something that's actually stochastic and distributed in nature? So we believe we can find the right isomorphism in electrical circuits that can subserve intelligence. That's a pretty wild idea, isn't it? Maybe unpack it one level deeper, because I totally agree with you. Computers for decades have been sort of the complement to human intelligence, right?
Starting point is 00:09:03 It's like, hey, my brain isn't really great at computing an orbital trajectory. That's right. And I don't want to burn up on reentry. So, like, a computer can help us with this incredible degree of precision. We're now kind of going the opposite direction, right? We're actually trying to encode more sort of fuzziness
Starting point is 00:09:19 into computer systems. So go maybe just a little bit deeper on this idea of an analog and why intelligence is a good fit for analog systems. Well, I mean, the best examples we have of intelligent systems in nature are brains.
Starting point is 00:09:30 And it's often been said, you know, human brains run on 20 watts of energy. That is true. But if you look at mammalian brains generally, they're actually extremely efficient. Like a squirrel or a cat, it's like a tenth of a watt.
Starting point is 00:09:40 And so there's something there that we're still missing. And not to say that we understand all of it, but part of what I think we're missing is we have lots of abstractions in a computer that are quite lossy. In a brain, the neural network dynamics are implemented physically. So there is no abstraction.
Starting point is 00:09:57 Intelligence is the physics. They're one and the same. There's no OS and some sort of API and this and that. So there's some visual stimulus, for instance, that directly activates an actual neural network and produces some semantic response, that sort of thing. Exactly. And those things are mediated by chemical diffusion
Starting point is 00:10:14 and the physical properties of the neuron, the physics itself. So I think absolutely it's possible to build something that's much more efficient by using physics in an analogous way. That is 100% true. Can we do it and build products out of it? That's really the question we're asking at Unconventional.
Starting point is 00:10:33 And is part of the idea that now is the right time because AI is both a huge and a unique workload? Yeah, absolutely. You know, it's interesting, just maybe some stats here. Like, the U.S. is about 50% of the world's data center capacity, and today we put about 4% of the U.S. energy grid into those data centers. And this past year, 2025, was the first time
Starting point is 00:10:56 we started to see news articles about brownouts in the southwest during the summer. And, you know, just imagine what happens when this goes to 8%, 10% of the energy grid. It's not going to be a good place that we're in. So can we build more power? Absolutely, we should. Building power generation is very hard, expensive,
Starting point is 00:11:15 and it's infrastructure. Like, it takes time. You can only bring online so many kilowatts or gigawatts per year. So it's something on the order of four per year. By some estimates, we need 400 gigawatts of additional capacity over the next 10 years to power the demand for AI. Wow. So we have a huge shortfall.
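The arithmetic behind that shortfall is simple enough to sketch. This is an editorial back-of-envelope using only the round numbers quoted in the conversation (reading "on the order of four per year" as gigawatts); none of the figures are independently verified here.

```python
# Figures as quoted on the podcast, not independently verified:
# ~400 GW of additional capacity needed over the next decade, versus
# new generation coming online on the order of 4 GW per year.
needed_gw = 400.0
horizon_years = 10
buildable_gw_per_year = 4.0

required_per_year = needed_gw / horizon_years          # GW/year demanded
shortfall_per_year = required_per_year - buildable_gw_per_year

print(required_per_year)   # 40.0
print(shortfall_per_year)  # 36.0
```

Even under these generous assumptions, demand outruns plausible build-out by roughly an order of magnitude, which is the gap Naveen argues efficiency has to close.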
Starting point is 00:11:34 And so we really just need to rethink this. The, you know, 15-year-old sci-fi nerd in me says, like, wow, we're mobilizing, you know, species-scale resources to, like, invent the future. We are. Then there's the practical. It's like even if we add 400 gigawatts of production capacity, our 1970s-era transmission grid is probably going to melt under the load. So yeah, so there's very serious sort of infrastructure hurdles to this, I think. It's hard to get a lot of humans to act together, right?
Starting point is 00:12:03 It's just a reality. That's what has to happen to solve these problems. What tradeoffs do you think this entails, you know, sort of the path you're pursuing versus the mainstream digital path now? Yeah, I actually don't see it as, you know, it's digital or analog. It doesn't work like that. I think there are certain types of workloads
Starting point is 00:12:20 that are amenable to these analog approaches, especially the ones that can be expressed as dynamical systems, dynamics meaning time. They have time associated with them. In the real world, every physical process has time. And in the computing world, like the numeric computing world,
Starting point is 00:12:36 we actually don't have that concept. You simulate time with numbers. Actually, simulating time is very useful in certain problems. So I think we should still build those things, and we should still have those capabilities for the problems that we need to solve that way. But for
Starting point is 00:12:51 these problems where, you know, like you said, it's a bit fuzzier. I'm trying to retrieve and summarize across multiple inputs. That's actually what brains do really well, right? They can take in tons of data and sort of formulate a model of how those things interact.
Starting point is 00:13:08 And sometimes those models can be actually extremely accurate. Like, look at an athlete. You know Alex Honnold, who climbed El Capitan, right? Just think about the precision that's required. It still scares me every time I see it. It's insane, right?
Starting point is 00:13:22 And if he slips, like, just, he's off by a millimeter in some places. He dies, right? And that's true for, like, every top-level athlete, someone who's, you know, at the Olympic level. Yes, Steph Curry, you know, the story is he set up a special tracking system so he could make sure the ball was hitting the middle of the rim, not just going through. So the level of precision these guys hit with a neural network
Starting point is 00:13:45 that's noisy, is actually quite high. So neural systems can actually do a lot of precision under certain circumstances. But what's interesting about these situations is Steph Curry, when he shoots a ball, is never going to shoot it under ideal circumstances in a game. Always, it's a unique input. And there's a lot of different input variance coming at you.
Starting point is 00:14:05 Like where the other players are, precisely where you're standing. Maybe your shoes are different. Maybe the surface is a little different. Like maybe the ball is tackier or your hands are sweaty. Like, there's so many inputs, and we kind of put them all together and integrate them and still have very accurate behavior. So brains are exceptionally good at this and, you know, that's a set of problems that is actually very useful to solve.
Starting point is 00:14:26 And now we're approaching those problems. But it doesn't mean we don't still use computational substrates to do actual computation. This is kind of an intelligence substrate. And so what types of AI models or data modalities do you expect your hardware will be well-suited for? Yeah, so we're obviously starting with the state of the art today. Like transformers, diffusion models, they work. They do really good stuff. So we shouldn't throw that out.
Starting point is 00:14:55 And diffusion models and flow models, which are actually energy-based models, are pretty interesting because they inherently have dynamics as part of them. They're literally written as an ordinary differential equation. So that makes it such that, hey, can I map those dynamics onto the dynamics of a physical system in some way that's either fixed or has some principled way of evolving? And then can I basically use that physical system to implement that thing and do it very efficiently with physics?
Starting point is 00:15:24 So that's kind of the nature of what we're doing. And we will be releasing some open source and things around this to let people play around. But, you know, transformers are really a big innovation because they made the constructs of the GPU work extremely well. And it doesn't mean it's wrong, but I don't think there's anything natural about it.
Starting point is 00:15:44 There's no natural law about the parameterization of a transformer. A transformer's parameterization is a function of the nonlinearities and the way the whole thing is set up with attention. There's going to be some kind of mapping between transformer parameter spaces and these other parameter spaces. And transformers, I think, have kind of used lots of parameters to accomplish what they do.
Starting point is 00:16:06 I have to ask, just since you mentioned energy-based models, and Yann LeCun has been writing quite a lot about this. Do you think pursuing these sorts of paths that you're talking about gets us closer on the path to AGI, whatever AGI means? Honestly, I do. The reason I feel that way, and again, this is hand-wavy. I'm going to be really honest. That's why I'm putting quotes around AGI.
Starting point is 00:16:31 I think the discussion is necessarily hand-wavy. It's got to be, because we just don't know. But my intuition says that anything where the basis is dynamic, which has time and causality as part of it, will be a better basis than something that's not. So we've largely tried to remove that. And, you know, a lot of times you can write math down. It's reversible in time and things like that.
Starting point is 00:16:52 But the physical world tends not to be, at least the way we perceive it. And so can we build out of the elements of the physical world that, you know, do have time evolution? I think that's the right basis to build something that understands causation. So I do think we'll have something that is better and will give us something closer to what we really think is intelligence.
Starting point is 00:17:13 Because, yes, we have intelligence in these machines. I don't think they're anywhere close to AGI because, I mean, they still make stupid errors. They're very useful tools. But they're not, it's not like working with a person, right? I think most people get that. That's actually really interesting. So the sort of thing that's missing in AI behavior,
Starting point is 00:17:33 which I think a lot of us see that there's something missing but can't quite put a name to it, it sounds like you're arguing part of that is sort of a real sense of causality. Yeah. And that training in a more dynamic sort of regime may impart this kind of apparent understanding of causality better than what we have now.
Starting point is 00:17:50 Yeah. And again, hand-wavy, but yes. I mean, look, you have kids, little kids, and you see them. I mean, children kind of innately understand causality in some ways. Like, you know, this happened and that happened. And yes, I know you can say, like, is reinforcement learning or whatever at some part of it. But there's something innate that we understand causality. In fact, that's how we move our limbs and all of that.
Starting point is 00:18:12 I know if I send a certain command. into my arm. It'll do something. So I think there's something innate about the way our brains are wired, built out of primitives that are, that do understand causation. Put unconventional in the context of the broader industry for me, like Nvidia, TSM, Google, are these, you know, potential allies for unconventional? Are these competitors? How do you think about it? Yeah, I mean, a couple of things that we set out to do when we built, we're starting this company was, see if we can find a paradigm that's analogous to intelligence within five years. And then at the five-year mark, we should be able to build somebody that's scalable from a
Starting point is 00:18:50 manufacturing standpoint. So, you know, you can think about building a computer out of many different things. But if it's not scalable from a manufacturing standpoint, we can't intercept this global energy problem. So we need to have somebody to say, okay, go build 10 million of these things, right? So I think TSMC is absolutely going to be a partner going forward. You know, we met with them recently. And, you know, we want to work closely with them to make sure we get what we need,
Starting point is 00:19:15 get fast turnaround times to prototype and all of that. Google, NVIDIA, Microsoft, all these guys are, you know, at the forefront of where the application space is. Obviously, Google kind of has everything internally. And I think they're working on sort of lower risk, but, you know, continual improvements for their hardware. With TPUs, you mean. With TPUs, yeah.
Starting point is 00:19:38 From what I can see, you know, just publicly, it makes total sense, right? They have a business to run, and they're trying to make their margins better. And, you know, how can I do that with all the tools I have, you know, in front of me? NVIDIA, of course, you know, they've built the platform that everyone programs on today.
Starting point is 00:19:55 So are we going to be at odds with NVIDIA going forward? I don't know. We'll see what the world looks like. But, I mean, we are trying to build a better substrate than matrix multiply. There could be a world where we collaborate on such solutions. And, you know, we're open to all of these things. Where do you personally get the motivation to get up in the morning and build this company?
Starting point is 00:20:18 I mean, you've had a lot of success in your career. This is your own startup. What, you know, what's exciting about this to you? I don't know. It's a weird thing. Like, if you haven't worked in hardware, it's hard. I've been fortunate to work in hardware and software. And, you know, I love writing a bunch of software, and then hitting a compile and seeing it work.
Starting point is 00:20:38 That's a good dopamine hit. But man, when you work on a piece of hardware and you turn that thing on, that's a big dopamine hit. That's like celebration, jumping up in the air, high-fiving. It's a different thing.
Starting point is 00:20:50 And I don't know, you sort of live for these moments, you know? Like, when I was at Intel, I was one of the only execs who would go to the lab when the first chip would come back, and I'm like, I want to see it turn on, see what happens. Sometimes you turn it on and you see the little puff of smoke coming out. You're like, uh-oh.
Starting point is 00:21:08 That's not good. But you want to be there, be part of the moment. But I think that's part of it. I think for me, personally, like, we have this opportunity now that we can really change the world of computing and make AI ubiquitous. I'm the opposite of an AI doomer. I think AI is the next evolution of humanity. I think it takes us to a new level, allows us to collaborate, understand each other, and understand the world in much deeper ways. Totally agree. So every technology has negatives, but the positives to me so far outweigh it.
Starting point is 00:21:39 And the only way we're going to get to ubiquity is we have to change the computer. The current paradigm, as good as it is, and as far as it's taken us, is not going to take us to that level. I think that's such a great way to say it. AI actually can help us understand each other better, help us understand ourselves better,
Starting point is 00:21:56 understand the natural world better. I don't think it's at all what some of the doomers think of replacing sort of human experience. That's a short-term thing. I mean, there will be bumps along the way, right? Technology does that. That's what happens when you've seen too many sci-fi movies. That's right.
Starting point is 00:22:13 But what about Star Trek? Yeah, totally, totally. It's great. This is a really big swing, right? Like, this is a very ambitious company. What gives you confidence that it's going to work, or has a reasonable shot of working? There's a number of data points.
Starting point is 00:22:29 Of course, like I said, the brains are an existence proof. But there's also 40-plus years of academic research, which is showing a lot of promise here. People have built different devices, albeit not in the latest technology with professional engineering teams, but they have built proofs of concept that actually show some of these things work. We've also, from a theory standpoint, both from neuroscience and just pure dynamical systems and math theory, started to understand how these systems can work.
Starting point is 00:22:59 So I think we now have pieces at different parts of the stack that show, hey, if I can combine these things the right way, I can build this. And, you know, that's what great engineering is all about: exploiting this thing that someone else built for something else. Engineers are kind of the opposite of theorists. It's like, well, all right, that thing doesn't quite fit. Sand it down and make it fit, right? So it's like, we got to do a little bit of that right now, and then we can build something, put it all together. That's awesome. Has anyone called you crazy yet for doing this? Oh, yeah, plenty of people. Is it like everybody?
Starting point is 00:23:34 Well, I'm used to this at this point. My family would call me crazy. I was called crazy going back to grad school years ago when I had a very good career in tech. So it's fine. I think you need crazy people to go out and explore. I mean, if you think about humanity, out of Africa, all that, the crazy people went out. We would be lost without crazy. You need some crazy in there.
Starting point is 00:23:55 So it's okay. I'm fine with that. And so what kind of people are you looking to bring on to the team of a very ambitious goal? who should be interested in joining you? Yeah, I mean, I think some of the traditional, traditional issue I want to say traditional, I mean over the last five years, this field of AI systems is of all, like people who are really good at taking algorithms
Starting point is 00:24:17 and mapping them very effectively to physical substrates. Those folks who understand energy-based models, flow models, gradient descent in different ways. You know, this kind of thing is what we need there. We need theorists who... can think about different ways of building coupled systems, how I can characterize the richness of dynamical systems and relating that to neural networks.
Starting point is 00:24:41 So there is a theory aspect of this. Then there's folks who are like at the system architecture level. It's like, all right, here's what the theory says. This is what I can really build. How do I bridge that gap? And then there's the people actually physically building this stuff, like analog circuit people, actually digital circuit people too. We're going to have a mixed signal here.
Starting point is 00:24:58 So that's the whole stack. And the stack is hard, because these are all things that no one's really pushed to that level. Like, when we build this chip, our first prototype, it's going to be probably one of the larger, maybe the largest, analog chips people have ever built, which is kind of weird. First time you do something, things don't usually work the way you think they will.
Starting point is 00:25:17 So you can get in on that Cerebras/Jensen game where they're each pulling the biggest possible wafer out of an oven. Something like that, yeah, yeah, exactly, right? Put a few vacuum tubes on top for effect. Yeah, I need blinking lights. Yeah, exactly. We're not going to have cool heat sinks. It's going to be super cool. It's going to be cold.
Starting point is 00:25:35 Like, you don't need big heat sinks, you know. So I hope they make something that looks interesting here. This is a funny time for top AI people, right, where you have sort of the option. If you want to start a company, there are a lot of venture capitalists who probably would fund you. If you want to get a cushy job at a big company, you can get a very cushy job and kind of do some interesting things.
Starting point is 00:25:58 Or, you know, people can join a startup like Unconventional that, you know, has a lot of the nice aspects people look for in AI careers and is taking super big swings. I'm just sort of curious, you've been on all sides of this. Like, do you have any advice for younger people starting out in their careers, or how do you think about this? I think you get such a breadth from working in a startup at the beginning of your career that it will pay dividends later on. Because like I said, the reason I can think across the stack is because I did all those things very early in my career, you know? I built hardware. I built
Starting point is 00:26:30 software, build applications. And in big companies, it's not, it's not aimless fault. It's just a way it is. Like, you get hired to do a thing and you do that thing over and over again. You're really good at doing that thing. And that's fine. You need people who are really good at doing specific things. But if you want to be prepared for change in the future, being really good at one thing is probably less valuable than being very good at, but slightly good at a lot of things. Yeah, that's interesting. Is it fair to say unconventional sort of a practical search lab is that kind of the culture you're going for absolutely yeah i mean first few years it really is open-ended i don't want to close doors like i'm really specific about this like i always try to bring
Starting point is 00:27:09 the conversation back because those people like oh that's going to be hard to manufacture's like stop don't think about that will it work first come up with existence proofs then we go back and try to engineer it and you know all the tradeoffs they're in but if you make those tradeoffs up front you don't go into a good place so yes we're really thinking wide open but with an eye on future we are building a product. And to your point, it takes not only people with diverse skill sets, but people with kind of high agency to try new things and learn new things and kind of integrate across the stack. Yeah, I mean, I think what I've done really well across the companies I've built has been going after hard problems, which kind of lends itself with smart
Starting point is 00:27:50 people wanting to come in and try solve them. They see a challenge. It's like, here's the amount of risk of climate. But then giving them agency, and I sort of look at it like, what decisions can I make as a leader to increase agency of the org overall. Like me making top style decision may be a global, globally better for the company in the short term. But I think long term, we will do better if more people have agency and can try more things out. So personally, I like to find ways to get out of the way when I see people who are very
Starting point is 00:28:22 passionate about trying something. He's like, okay, well, you really want to do this. That makes sense. Go for it, you know. And then you own it. You own both the good and the bad, right? That's agency to me. It's like, you got to say, like, okay, I fucked up.
Starting point is 00:28:33 That's okay too, but give people the room to do that, you know? Anything else you want to say before we wrap up? I mean, I think this is like an opportunity to do something that is generationally will be felt. You know, to me, that's what gets me up in the morning is, you know, you can go work on a product and make a tweak and people will use it. That's great. But like, in five years, many times people forget those things. But if we are successful here, the world will not forget this for a very long time. This will be written in history books.
Starting point is 00:29:05 And so I feel like those opportunities are rare. Thanks for listening to this episode of the A16Z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or review and share it with your friends and family. For more episodes, go to YouTube, Apple Podcast, and Spotify. Follow us on X at A16Z and subscribe to our substack at A8. a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only. Should not be taken as
Starting point is 00:29:39 legal business, tax, or investment advice, or be used to evaluate any investment or security and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
