The Infra Pod - Let's review our favorite infra hot takes in 2024!

Episode Date: December 31, 2024

This time around, Ian and Tim sat down to talk about what they learned interviewing the great guests we had on this podcast, and picked our favorite moments and takeaways. And of course, our own spicy hot takes for 2025!

Transcript
Starting point is 00:00:00 Well, welcome to the Infra Pod. It's Tim from Essence VC, and Ian, let's go. This is Ian Livingston, angel investor, pretender of knowledge about infra, and just overall lover of dev tools and the future of how we build apps. Tim, how excited are you to do our year-in-review recap? Both of the podcast itself, but just infra
Starting point is 00:00:26 and all of the fun that's occurred in the last year. Yeah, you know, I don't think we ever really plan out our episodes at all, right? But the themes we talk about, the things that keep recurring in all our conversations, seem to settle into a pattern I've been observing. So I'm excited.
Starting point is 00:00:46 We're excited. Let's just do this. Do this thing? Cool. Well, on this episode, we'll talk a little bit about our favorite moments, but more importantly, our takeaways from the last year. What have we learned?
Starting point is 00:00:59 What have we noticed? What are the big trends? What are the things we think are cool? And we'll finish off with a little prediction on what we think is going to happen with infra and apps and everything in 2025, based on our purview and where we end up. And I think that's all pretty spicy and a nice little holiday snack for all of our great listeners out there. So Tim, what's your favorite moment over the last year? I think we've somehow put out like 20 episodes or something. Where are we sitting?
Starting point is 00:01:26 What are some of your favorite moments? My favorite moment so far is when we got Guy to talk about what he's doing at Tessl. To me, the depth of what he was able to describe, and both of the things he's doing, it's just so memorable for me, you know? Because getting Guy on, of course, he has a company that raised a lot of money. But on the flip side,
Starting point is 00:01:53 seeing somebody with that background and sort of like the ability to kind of change the world in that way, you believe there might be something really happening, right? And not saying that all the other guests are not but they're all building something really interesting right but like he's yet to set on some new journey here and so that's pretty memorable for me because i remember we were chatting about like hey you know let's get a little bit more prepared we're ready for somebody
Starting point is 00:02:21 that becomes more memorable right we're like, this is a much bigger deal. Let's get ready to do it. Totally didn't disappoint at all. Like talking to Guy in that episode, it was the longest episode we had. Like it was an hour fully. We recorded longer than an hour, right? We cut it down to like 50-ish minutes.
Starting point is 00:02:38 Time flew fast on that one. It didn't feel like a long chat to me. Not at all. I mean, Guy's a phenomenal storyteller. I think, without a doubt, one of the best storytellers we have in dev tools and infrastructure, in terms of the way that he thinks
and the metaphors he creates and his ability to communicate it. I love that episode. And I think the thing that's so interesting about what Guy was doing, what he's really talking about, is the interface change and the trust required as a result
Starting point is 00:03:03 to enable that interface change to occur. And what is that interface change? From the IDE to the specs, or to a higher level of abstraction, moving away from structured code to something that's less formal, more readable, more human-digestible, which is sort of this idea of these specs and this composite system of specs,
Starting point is 00:03:22 with AI that reads the spec and then maintains the system for you. It is both just radically different from the way things work today, but also a very democratizing mission if true. And so I agree with you. That was a pretty great episode, a very interesting endeavor in infrastructure, very bold, and something you'd totally expect from a third-time founder. Yeah. And what are your favorite moments, then? Mine is, well, it's hard to follow up on Guy. But look, I think one of my favorite episodes was
Starting point is 00:03:51 when we talked to Akshay Shah, CTO of Buf. It was such a great episode. He had a very specific view of the world and why the world should be the way that it is. And it was just one of the episodes where I got a lot of feedback from people who were like, this is incredible. This guy gets it. It was so clearly delivered. It was interesting. It was intriguing.
Starting point is 00:04:12 And I think what we've learned, and we were just going through everything we talked about this year, is that the best guests, and we said this on another episode, the best guests are the ones that come in with something that's really bold and opinionated, but backed up with experience and story. Incredible data, based on their experience, about why they believe the future should be the way it is. And so that's by far one of my favorite moments. I remember in that episode, he said it was his very first time doing a podcast, right?
Starting point is 00:04:39 I can't actually believe that. He was like, I've never done a podcast before. And I couldn't tell. He was pretty poised and didn't really need any retakes or edits. He just went through it. I guess he's just a natural. No rehearsing or practice needed at all to be a good podcast guest, or host, if he ever wants to do that.
Starting point is 00:05:00 Anyway, he's amazing. I think with people that really spend a lot of time digging into the deep end of infrastructure, when you ask them the questions we're asking, they get more and more excited, and it gets easier to answer all the stuff we're curious about too. That's my memorable thing about that episode, at least. Should we dig into our takeaways? We'll stop prognosticating about how amazing we think our podcast is
Starting point is 00:05:24 and start talking about what we think we learned. Or I guess we have to drop some stuff we learned, right? Yeah, so I'll start. I think, you know, we're each going to share two things, right? And I'll probably start with this first one and save the more interesting one for later. And it's no surprise, right? Obviously, the people we talked to this year and even last year have been so much about the future of data infrastructure, around how S3 is becoming so much the standard place
Starting point is 00:05:56 where all the data is stored. And so it's nothing new. But remember, this year we talked to Modal, we talked to Lance, and obviously we talked to Tigris Data. Two years ago, we were talking to WarpStream and CedarDB, and we got Chris on with his materialized view of the space. All of it surrounds this idea that data no longer has to be stored in these separate, siloed databases. We're going to see data become much, much more standardized in the cloud data lakes themselves.
Starting point is 00:06:33 And so for the future of data infrastructure, look at all these companies. Like Modal: the compute layer doesn't really care where the data is, but so many of the customers are going directly to S3. Lance is an S3-native data store. WarpStream, and Tigris building a new S3. All of this assumes the old pattern goes away, the one where, to use my data, I do an export-from-Postgres-whatever thingy to finally get it into a Kafka or wherever. That whole world of needing thousands of ETL pipelines and Kafkas to do anything is probably changing. And it's
Starting point is 00:07:08 quite cool to actually see that our guests pretty much assume it's going to happen. If you want to build a company in this space, you kind of have to assume certain things are going to happen and bet on them, right? Anyway, this is not even a new topic, but it's really cool to confirm people now want data not to go everywhere. We want a much simpler stack. This simpler stack requires, unfortunately, new data infrastructure, because all the older stuff isn't interoperable at all.
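To make that "query the data where it lives" idea concrete, here's a minimal sketch using DuckDB against Parquet files sitting in S3, with no export pipeline in between. The bucket, path, and column names are hypothetical, and it assumes AWS credentials are already configured in the environment:

```python
import duckdb

con = duckdb.connect()
# httpfs gives DuckDB the ability to read s3:// paths directly.
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")

# No export job, no Kafka, no warehouse load: the query runs
# straight against the Parquet files in the (hypothetical) data lake.
top_users = con.sql("""
    SELECT user_id, count(*) AS events
    FROM 's3://my-data-lake/events/*.parquet'
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
""").fetchall()
print(top_users)
```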
Starting point is 00:07:38 So that's kind of fun for me. It's super fun. I mean, it's fun for me too. I think one of the things that's cool is that you now kind of have a prescription for how to actually build data infrastructure in a way that customers will be capable of adopting. Even with WarpStream, Kafka-compatible APIs, and S3,
Starting point is 00:07:55 there are starting to be recipes for what it looks like to build data infrastructure and sell it. You know it's got to be S3. The data plane has to be in the customer's cloud, with a vendor-side control plane. That's cool. A lot of these topics, it's becoming known.
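For a rough picture of what that split could look like, here's a sketch of a BYOC-style agent: the vendor's control plane only hands out job metadata, while the data-plane worker runs in the customer's account and only ever touches the customer's own bucket. Every endpoint, bucket name, and field here is hypothetical:

```python
import time
import boto3
import requests

CONTROL_PLANE = "https://control.vendor.example/assignments"  # vendor side: metadata only
BUCKET = "customer-data-bucket"                               # customer side: data never leaves

s3 = boto3.client("s3")

while True:
    # Ask the control plane what to work on; only job metadata crosses the boundary.
    job = requests.get(CONTROL_PLANE, timeout=10).json()
    # Read and write the actual bytes entirely within the customer's bucket.
    body = s3.get_object(Bucket=BUCKET, Key=job["key"])["Body"].read()
    s3.put_object(Bucket=BUCKET, Key=job["key"] + ".compacted", Body=body)
    # Report completion back, again metadata only.
    requests.post(CONTROL_PLANE, json={"job_id": job["id"], "status": "done"}, timeout=10)
    time.sleep(5)
```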
Starting point is 00:08:11 Whereas three, five years ago, it wasn't clear what was going to work and what wasn't. Do you do SaaS first and have it all run inside yours? We're starting to get recipes that work, which means that, broadly, the cloud is maturing. And to be honest, it kind of flows into my next topic, which is, at the end of the day, my biggest takeaway. This isn't directly from an episode, but it's something I've definitely realized as a result of having all these different conversations,
Starting point is 00:08:38 is that the moat that the major cloud vendors have, which is IAM and the VPC, is eroding away. They still have data egress fees, for sure. So the data is going to stay in S3. But the idea that IAM or the VPC locks you into their cloud, which was their original real moat in terms of security,
Starting point is 00:08:57 it was like, oh, it has to be in IAM, it has to be in the VPC. That's starting to fall apart. Because at the end of the day, with some of these systems, you're giving cross-account access into your core systems, right?
Starting point is 00:09:08 So it's not holistically owned by your cloud account anymore. And sometimes those things are runtimes that exist outside AWS or GCP. They're running on their own thing, right? We've seen the rise of vendors like Fly, things like Modal,
Starting point is 00:09:21 which are just absolute rocket ships. Vercel is going to be IPO-ing next year, another absolute rocket ship. All of that compute, all that runtime, lives outside your IAM or your VPC. And these companies are starting to eat into the enterprise as well. And so a big takeaway for me was: actually, over the next three to five years, the way that we build and sell software, all the things that earlier on prevented adopting this way of thinking about architecture,
Starting point is 00:09:48 that's all going to shift to truly more of a composite set of vendors. And there's going to be a bunch of norms and substrates and expected practice underneath how you connect it all together. And so that's one of my big takeaways from this year: oh, the pieces are actually starting to fit together, and we have a clear driver as to why this is occurring. And it's all about AI, AI, AI, which is more to say: you want to add a little bit of intelligence to your app. We now have LLMs that are easy for anyone to pick up and use
Starting point is 00:10:16 and build some chat-based feature, some natural-language-based feature. And you want to build that into your app. Whether it's a support chatbot or some type of net-new native experience, you want to do it because now you have to compete on it. And so that's the reason for all these enterprises, for the business, to make big investment cycles into actually adopting these vendors.
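The barrier to entry really is that low now. A minimal sketch of the kind of chat feature being described, using the OpenAI Python client; the model name, product name, and prompts are placeholder assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a support assistant for AcmeApp."},
        {"role": "user", "content": "How do I rotate my API key?"},
    ],
)
print(response.choices[0].message.content)
```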
Starting point is 00:10:36 So I think that's really exciting, and also a reason for investment into new infrastructure. Yeah, yeah. Well, that's a pretty good teaser, sir. I think it's so interesting, because people assumed back in the day that all this stuff would either be a feature in the platform, or that there were only ever going to be a couple of clouds.
Starting point is 00:10:54 But like I said, we're going to see not just the Vercels of the world and the Flys of the world, but specialized clouds and all kinds of different platforms growing. Even talking to our guest today,
Starting point is 00:11:07 even mentioning Lambda, you know, serverless is becoming like a default option. It's so hard to embed all the old-school identity stuff everywhere, not even just VPCs and IAM, right? It's all going to change in some way. I think maybe that will be something we can predict: how it will change, and when. Talk about AI, AI, AI.
Starting point is 00:11:34 All right, I'll jump into mine. It's something I've really been thinking about recently, just given the Guy episode and all that kind of stuff. Because even just summarizing the episodes, we talk about AI in pretty much every single damn one at this point. But in the last one we had, with Moderne, we talked about how AI is changing code refactoring, and their differentiation was the lossless semantic tree: they were able to capture a lot of context to feed the AI, versus just me looking at code
Starting point is 00:11:57 and doing all kinds of code refactoring without all the compiler context, right? Guy is also kind of arguing we need a new spec. We need a new spec to help guide the LLM to know exactly what to produce and how to validate it, right? And all the necessary pieces that go into it.
Starting point is 00:12:16 Even GPTScript is trying to add more context than just me generating random code. We need descriptions of how to guide it, test it, add-only functions, and stuff like that. And so from all of these conversations, one way or another, my biggest takeaway is: with AI, we've all been focusing so much on the LLMs themselves these days. How big are my parameters?
Starting point is 00:12:40 How much context window? What kind of fancy inference, fancy new architecture, all this kind of stuff. But then there's the actual thing you want to do really, really well, where you bring it into code, you bring it into security, you bring it into development, you bring it into whatever, you know, or natural-language question answering. What I'm hearing, not even just on our podcast, but outside, talking to a bunch of founders, is that the thing that really empowers LLMs to work is almost the data structures around them. And these data structures, these contexts, or whatever we call them, are the hardest things to generate and maintain. We don't even know what they look like.
Starting point is 00:13:22 We don't really have AI systems everywhere yet, so we don't even know what a spec looks like, right? You know, Guy talks about how we'll probably have a bunch of things there. It's a community effort, right? There's probably not going to be one single file that holds everything. There's going to be a brand-new level of definitions and data structures required to make AI work. But it's just so cool to hear that, if we look at how AI actually becomes real everywhere, the LLMs will continue to get smaller and cheaper, or whatever they're doing. But the most important part, beyond just the LLM vendors doing their thing and selling more
Starting point is 00:13:58 GPUs, is that we actually need to figure out, for the problem we're solving, what is the data structure we need to contribute to and write to that can do a way better job of what we want, right? Because we either need validation, or guiding principles, or something context-related to help the LLM see all the relevant information, almost like RAG on some level. But that RAG isn't just stuffing all my Notion and Google Docs somewhere. It has to be much, much more specific. And that information, that context,
Starting point is 00:14:38 will require a very different culture, process, and technology: a different kind of compiler that can't compile code without that data, maybe a completely different company trying to come up with a new standard, some level of information or knowledge graph or whatever people have been trying to figure out. That is the fun part to me.
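As a toy illustration of what one of these "data structures around the LLM" could look like, here's a hypothetical spec object that travels with the code it describes and serializes into model context. The shape and every field name are invented for illustration; this is not any guest's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureSpec:
    """Hypothetical spec: the human-maintained source of truth the model reads."""
    intent: str                                             # what the code should do, in plain language
    constraints: list[str] = field(default_factory=list)    # guardrails for generation
    examples: list[tuple[str, str]] = field(default_factory=list)  # (input, expected output)

    def to_prompt(self) -> str:
        # Serialize the spec into context a model can act on and be validated against.
        lines = [f"Intent: {self.intent}", "Constraints:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append("Examples (input -> expected output):")
        lines += [f"- {i!r} -> {o!r}" for i, o in self.examples]
        return "\n".join(lines)

spec = FeatureSpec(
    intent="Normalize phone numbers to E.164",
    constraints=["pure function", "raise ValueError on non-numbers"],
    examples=[("(415) 555-0100", "+14155550100")],
)
print(spec.to_prompt())
```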
Starting point is 00:14:58 Talking to all the practitioners, it's just so much more refreshing that we're not reducing our problem set to just LLMs. Because I think that's just so boring in my mind. As if I just want to keep thinking about 70 billion parameters, 800 billion parameters, and NeurIPS is the only thing you should attend and nothing else.
Starting point is 00:15:15 It seems like our world just reduces down to two conferences on Earth you need to go to: NeurIPS and something else. ICML. Nothing else on Earth matters anymore. And we're just, you know, in our whatever, 30s, 40s, 50s, still holding on to Kubernetes.
Starting point is 00:15:34 Grumpy infra oldies, you know? But the world around the LLMs is still important, you know? I just don't think we talk about it that much. I think that's a great takeaway. I mean, that episode with Moderne was just so interesting. And one of the things you really called out there is, there's a couple of things we learned this year, right? We basically exhausted all of the public data available to train on.
Starting point is 00:16:00 But more importantly, I feel like we're broadly in the pets-not-cattle stage of AI, in the sense that all the models are massive, the cost to run them is huge, and the systems are very much not repeatable. Everyone is basically building one-offs. The hardware is hyper-specialized, and we continue to pursue more specialized hardware. There are lots of new inference chips
Starting point is 00:16:21 and all these other different things coming down the pipe. Broadly, that's because we're pursuing the ability for compute and prediction, the use, the inference, to be cheap, right? And so that's the next phase: how do we get these AI systems to be cattle instead of pets? How do we get them to be repeatable?
Starting point is 00:16:40 And that's how I've been thinking about it. The last couple of years have been, okay, let's just scale, scale, scale: specialized hardware, specialized compute, specialized everything. It was just tune, tune, tune. And over time, well, look at what happened in the early 2000s; Google is a good example of this. Everyone was deploying specialized hardware.
Starting point is 00:17:00 There were a bunch of pets all over the place. And Google basically was like, no, no, no, we're just going to rack and stack commodity hardware, and we're going to invest in building the software layer that makes it possible for the hardware to be treated like a bunch of cattle. And if a hard disk crashes, whatever, there are a thousand other ones that have a portion of the data someplace, or are up there serving compute. And so it feels like with AI, or LLMs more broadly, we're at the pets stage and we need to get to the cattle stage. And then I think the unique
Starting point is 00:17:29 thing that you said: the unique value in AI for the next phase still just comes down to what unique data you have. But more importantly, what's the unique data structure you have that enables these systems to actually interpret what's going on, right? We're still in the phase of, how do you model the problem? And I think the lossless semantic tree episode, talking about code migration, shows that actually, we still haven't discovered what the right modeling of the problem is
Starting point is 00:17:56 so we can actually solve it. And I think that's what I learned, at least, from Jonathan at Moderne. It's like, oh, actually, this problem space is really complicated, because it's really a context issue: even having the context to understand what a potentially good solution looks like, and then how to evaluate whether a solution is good.
Starting point is 00:18:12 That was really quite fascinating. Yeah, I think so too. And also, just as a last footnote, it's almost like when we had no computers, right? We had to use human labor everywhere to do everything. But once we had computers and computer scientists,
Starting point is 00:18:29 once we had compute primitives, we had to start thinking about how we could even turn this into a set of APIs or programs, to solve shipping, phone calls, and all the related things we always did by hand. And I think AI is almost like the next generation. Okay, AI can solve this now.
Starting point is 00:18:51 But you can't just throw this at the computers; computers have no idea what the hell you're talking about. AI still has no idea what you're talking about either. So there's this new abstraction layer that's required. And that's hard. I don't think we have an easy answer at all. That's an opportunity for so many companies to come in and actually solve a broader set of problems. That's a great segue into my final takeaway,
Starting point is 00:19:17 which is: now NeurIPS has occurred. I wasn't there, but there's the death of pre-training; we kind of talked about it. We've run out of public data. We've gotten to the point where more compute and more data doesn't equal a better model. We're at diminishing returns, if not worse, and now it's about what we can do on top of the model. So we have the rise of these composite AI systems. The best example of this is o1 from OpenAI. It's just like, oh, we're going to do a bunch of stuff on
Starting point is 00:19:43 top of the model, build a bunch of systems, a bunch of RAG systems, a bunch of training, so it can get better at planning. So we're going to use these different models and different ways to generate these plans and then execute through these plans to derive an answer for you. So we have some concept,
Starting point is 00:19:57 like we're basically taking chain-of-thought, and we're going to actually build a system out of chain-of-thought, right? That's basically what o1 is at the end of the day. And so we're now in this phase where the models will still keep getting better, but they're certainly going to get cheaper faster than they get better now.
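A toy sketch of that "chain-of-thought as a system" idea: one model call drafts a plan, each step gets executed with earlier results as context, and a final call synthesizes the answer. This is just the general shape being described, not how o1 actually works internally, and call_llm is a hypothetical stand-in for whatever model endpoint you use:

```python
def plan_then_execute(question: str, call_llm) -> str:
    # 1. Ask for an explicit plan instead of a direct answer.
    plan = call_llm(f"List short numbered steps to answer: {question}")

    # 2. Execute each step, feeding the accumulated notes forward.
    notes: list[str] = []
    for step in plan.splitlines():
        if step.strip():
            notes.append(call_llm(f"Carry out this step: {step}\nNotes so far: {notes}"))

    # 3. Synthesize the final answer from the working notes.
    return call_llm(f"Question: {question}\nNotes: {notes}\nGive the final answer.")
```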
Starting point is 00:20:10 And now the question is, how can we use the models, plus the data we have, and build systems that actually produce better results and can tackle net-new problems? Because unless we have a breakthrough in architecture, or we somehow stumble across another massive set of data, new tokens that are trainable,
Starting point is 00:20:28 where do we get the next level of intelligence bump from? And so one way to think of it is the rise of AI engineering. Another way to think of it is just the rise of the fact that, hey, now we're in systems problem land. We're very much more similar to the rise of microservices and service-oriented architecture
Starting point is 00:20:43 last decade, or two decades ago, in terms of AI and compute. Yeah, it's so fascinating. If we can keep getting models even smaller, and have the leverage to call multiple models, like the MoE thing, but at a much larger and more dynamic scale. Because today, MoEs are more of a fixed-model thing. What if I think of models almost the same way I think of npm packages? For example, I can have a thousand npm packages in my app.
Starting point is 00:21:12 I'm just using a bunch of these as calls, or something of that nature. Maybe one day I'll think of all my models as an npm package install, and I'll have 500 models my app just calls whenever it wants to. It'll be fascinating. I don't think we've even gotten close to that
Starting point is 00:21:31 sort of level of understanding or infra. But I feel like we can get there soon, where the models get so much cheaper. You don't need a full model everywhere. Actually, people just need a couple of layers. That's what a lot of the recent research says: you just need a small layer, and it's going to be activated for most calls.
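To caricature that "models as npm packages" idea, here's a hypothetical registry that dispatches each call to a small specialized model and falls back to a generalist. All the model names and the call_model signature are invented:

```python
# Hypothetical mapping from task type to a small specialized model,
# the way a package.json pins a dependency per need.
MODEL_REGISTRY = {
    "classify": "tiny-classifier-v1",
    "extract": "small-extractor-v2",
    "chat": "general-chat-v3",  # generalist fallback
}

def route(task: str, payload: str, call_model) -> str:
    # Pick the model registered for this task; default to the generalist.
    model = MODEL_REGISTRY.get(task, MODEL_REGISTRY["chat"])
    return call_model(model=model, prompt=payload)
```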
Starting point is 00:21:50 Yeah, and this is something we actually learned. It's so interesting. LLMs are going through a lot of the same arc we had with AlexNet in 2017, 2018. It's like, oh, we can do all these things: if you just keep the first couple of layers, you can quantize, you can reduce the encoding. There were so many options you could use
Starting point is 00:22:04 to shrink AlexNet, and now it turns out all those things broadly generalize to this LLM cost problem as well. One point I'd make, by the way, on your MoE point around model composition, and we're starting to see the beginnings of this, but
Starting point is 00:22:19 not in a generalized way. And I think one of the interesting things that came out near the end of this year, from Anthropic and Claude, was the Model Context Protocol, which is: how do I have a pluggable way to provide context into the usage of Claude? And so it's certainly not a full system. It's not a full-fledged protocol yet.
Starting point is 00:22:38 But it's the beginning of thinking about, well, can we have version 0.0.1 of a protocol that allows you to composite a bunch of different intelligence systems together, both in providing them context, but potentially also guardrails and enforcement? That piece is still missing, but it's also a sign of the future of this question: what does the rise of composite AI systems even mean?
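To give a feel for the shape of the idea, here's a toy server that exposes a pluggable tool catalog over JSON on stdio, loosely in the spirit of a stdio transport. To be clear, this is not the actual MCP SDK or wire format; the method names and the tool are made up:

```python
import json
import sys

# Hypothetical pluggable tools a model host could discover and invoke.
TOOLS = {
    "get_weather": lambda city: f"(pretend forecast for {city})",
}

def handle(request: dict) -> dict:
    # The client either asks for the tool catalog or invokes a specific tool.
    if request.get("method") == "tools/list":
        return {"tools": sorted(TOOLS)}
    if request.get("method") == "tools/call":
        name = request["params"]["name"]
        arg = request["params"]["argument"]
        return {"result": TOOLS[name](arg)}
    return {"error": "unknown method"}

if __name__ == "__main__":
    # One JSON request per line in, one JSON response per line out.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```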
Starting point is 00:22:57 Well, one thing I feel, after reading about this protocol from Anthropic, is that it's not even about the protocol itself. That is of course interesting, and what's fun, of course, is you're seeing people adopt that protocol already, right? People are building around it; Cloudflare and everyone adopted it. It's almost like how all the hot AI projects today have come from random places, random people,
Starting point is 00:23:28 but they all caught fire, and some of them keep growing, some of them didn't. But there's almost a demand for protocols as well. We're demanding new architecture, but also demanding new protocols to solve a bunch of these questions.
Starting point is 00:23:41 And it truly doesn't matter. It doesn't have to be only OpenAI that can come up with this anymore. Actually, that's the fun part, where a startup has an opportunity. If you have a way to solve a real problem, and you don't think of it as just a single-vendor thing,
Starting point is 00:23:56 you set up a protocol, you see real applications around it, and you solve them for real, people will start adopting it everywhere. This is the fascinating part. AI funding, everything, is already crazy busy. But if you have a real breakthrough, it will go wild. And it doesn't have to come from certain big places.
Starting point is 00:24:14 Absolutely. This is the opportunity. I think the other thing is, the broadening out of who's actually deploying AI will drive these needs. As we try to drive LLMs and these new interface changes and these new capabilities into the broad base of software, we're going to have to develop these things.
Starting point is 00:24:31 This is opportunity. It's not all going to be captured by OpenAI. It's not all going to be captured by Anthropic. They will capture some of the pie. But the broad-based pie is still going to be in the app layer, which is where intelligence and data meet actual human users and create value. It's so fascinating to see.
Starting point is 00:24:50 This space is so crowded, but still so early. It's so early. Yeah, the real systems thinkers have the real opportunity, right? That's the most exciting part. So I guess we should move on to our spicy hot takes for 2025. We're not going to go out to 2030 or 2035, I think.
Starting point is 00:25:15 Do you want to start with what you think will happen next year? Oh, absolutely. And it builds on what I think happened last year. I think AI doomerism is dead, and I think it's going to finally get kicked out the door by the end of next year. If there was one thing we started 2024 with, it was just obscene levels of: AI is going to eat all of us, the world's over, it's going to take all our jobs, it's going to blow up all the nukes in the silos.
Starting point is 00:25:38 Everyone's girlfriends, all the wives, all the employees. Exactly. Everything's over. Everyone in Hollywood is going to get fired. We're not going to need writers. We're not going to need anything anymore. And yet, if there's one prediction for next year, it's that this is dead. We've stopped really talking about it.
Starting point is 00:25:57 There'll still be people trying to sell the viewpoint, because they make money off of it. But broadly, I think the rest of society will come to the conclusion that those of us who have been spending any time with these systems already have: oh, actually, the limitations here are quite huge. We made a thing that's better at predicting what we're trying to do, which will make us faster, but it doesn't replace the ingenuity and value of humanity. It actually just turns out to be a slightly better paintbrush. And that's pretty fantastic. So that's my prediction for 2025. It's kind of spicy. And I think as a result of that,
Starting point is 00:26:27 it's because more people will actually be exposed to what this incarnation of artificial intelligence is, because it'll be broad-based, more available. And so as a result, they'll see where it's good, and also its extreme weaknesses, which are many that we haven't talked about, but which really exist. How about you, Tim? What's your big prediction? I can't wait to hear it. I guess maybe this is recency bias
Starting point is 00:26:51 because of the episode we recorded today. But I'm going to steal a little bit of what our guest today talked about. It's basically this idea that Kubernetes is no longer going to be where everyone's attention is. Though I don't think it has had everyone's attention for a while anyway. Really, in 2025, and I feel like even in 2024 I already kind of sensed this,
Starting point is 00:27:13 people have been moving away from really wanting to dig in and even wanting to know what's happening in that layer. It's just a thing to run my software. It's not really able to perfectly hide all the complexity anyway, but there's no better tooling out there, so I'm just stuck with it.
Starting point is 00:27:30 So I feel like infra next year, it's almost like the rise of Gen Z infra, you know? That's the way I describe it. I don't know, I don't have a better term. I don't even know how to describe it, right? It's like there's a sort of age group
Starting point is 00:27:44 associated with all the infrastructure that we carry. It has nothing to do with Kubernetes. It has nothing to do with... I bet they may have never even logged into Amazon themselves. They've never seen an Amazon dashboard at all. It's so fascinating. These folks
Starting point is 00:27:59 have largely just reduced their problem to a single... Even a CLI is already complicated. Like, deploy? Oh wow, I still need to open up my little Unix terminal? That's as far as they will go. Everything is just a GUI and some sort of dashboard-y thing, and that's it, right? It's the promise of all these nicer, simpler abstractions,
Starting point is 00:28:24 and they're trying to fit everything they can into that world. And I just think it's fascinating, you know? The old school of us, I feel like, you know, you've got to learn more. You've got to know what Kubernetes is, right? I remember back in the day, as engineers, we always had to answer this interview question, right? The most famous one everybody asks: tell me what happens when you go to a URL
Starting point is 00:28:45 or something. It used to be a real interview question: tell me in as much detail as you can. The router has to do this lookup, my browser has a DNS cache. I don't know, man. It just feels like we're just grumpy
Starting point is 00:29:02 people who still care about how hardware circuits work, right? The gates and logic and stuff. Real computer scientists don't even care about them anymore. And to some degree, I think 2025 is almost the moving away from that lower level, where we have to start embracing the higher levels instead. Because that's where people mostly are, and all these platforms are getting so much
Starting point is 00:29:26 more mature now, right? The Vercels of the world are getting quite mature and do quite a lot of stuff now. There are going to be companies that fully run on Vercel, fully run on Cloudflare Workers, fully run on some of these managed providers, and they don't even want to care about anything else. In that world, I don't know, man.
Starting point is 00:29:42 I don't know what's going to happen, to be honest. But in 2025, I feel like we're going to see more people who just don't even want to think about that layer. And therefore, we're going to see more people who really just want to bet the whole company on this stuff.
Starting point is 00:29:57 It's just fascinating. I didn't really want to admit this was going to happen this soon, but I guess it's coming. I love it. Gen Z infrastructure. It's totally true, though. I was having a conversation, Tim was actually involved, and one of the things I said is I feel like infrastructure is always
Starting point is 00:30:13 generational, but it's more consumer than we realize. We always talk about developer experience being this key thing, and that selling to developers is a consumer-like sale. Initially, at the grassroots, it totally is. You're selling taste, you're selling identity. One of the things that made GitHub very popular, I was 18 when GitHub came around,
Starting point is 00:30:31 and I got a GitHub mug, and I carried around the GitHub mug, and it was my entire identity. And there was a 10-year period where I was hyper-producing tons of code, driving the bus. And then eventually I became not as much of a code monkey, and more of a manager, more of a leader, more of different things.
Starting point is 00:30:50 And so, one of the things is, yeah, it is generational. And what each generation wants from the tools they use is different. And old men like Tim and I will sit around and be like, oh, how dare they? The vendor lock-in is going to be terrible. But then you look at AWS and you realize, well, that's crazy vendor lock-in too. There's nothing stable there. So I agree with you.
Starting point is 00:31:13 The rise of these new compute platforms, things at the edge, and all these new use cases. And I think the biggest one is, think about how organizations buy. Every org in the world right now, enterprise, mid-market, doesn't matter, is saying: 2025 is the year we do something with AI. What's our AI strategy? What's our LLM strategy? And the first thing they're going to do is go to their VP of Engineering or VP of Platform Engineering and be like, what are you going to do that's AI? And they're going to say, well, I'm already pre-committed to this custom roadmap that you already pre-sold me on.
Starting point is 00:31:52 And then the next thing they're going to do is spin up some R&D lab with some free-flowing budget and say, you just have to build AI. They're not going to build on top of the core platform. They're not going to build inside the constraints. Because what's important to the business isn't actually to drive real revenue from it yet, it's to figure out where revenue will come from in three, five, six years. And they're going to remove all of the constraints just to figure it out. Your job is to be top of funnel, to think about what thought leadership looks like.
Starting point is 00:32:13 And we already see that with the Cloudflares of the world, all the other ones. The rise of the R&D lab in the big co is back. And it's better than ever, baby. And I agree with you. Gen Z infra is here to stay. And all of those companies will be buyers of the Modals and the Vercels
Starting point is 00:32:30 and all of the cool startups that we have on this pod, and all the cool ones that we talk to, and all the cool infra trends, because it's going to make building easier. Yeah, yeah. It doesn't necessarily have to be a Gen Z founder building these startups. Like Tigris Data, right?
Starting point is 00:32:43 You know, Ovais is not Gen Z, obviously, but what he's able to capture is a real focus on a set of new things. Try not to include the old world anymore, right? You have to really move on and focus on a new set of people, and I think his customers are mostly Gen Z, right? So, I don't know. To me, Gen Z infra really just encapsulates the whole paradigm shift that's happening. Like I said, the buyers of this won't necessarily be
Starting point is 00:33:13 people our age or older, either. Absolutely. All of this is driven by something we actually spoke about at the beginning of this episode, which is a change in interface, right? That's kind of what Guy's talking about. But what does this all represent? It all represents new things we can compute upon
Starting point is 00:33:30 that we couldn't compute upon before. We have indeterminism, so these statistical models now give us the ability to compute on things we couldn't compute on before. And that's pretty incredible. All these new use cases, all this new spend, and all these new interfaces that will be very disruptive to businesses
Starting point is 00:33:47 and drive tons of new stuff. Couldn't be more excited, honestly. 2025 is going to be wild. I know, I know. It's so fascinating. Just let me leave one last comment about this. I was looking at a pitch deck. Usually they talk about why my product is better, and there was one line that really struck me.
Starting point is 00:34:03 I forget exactly what product it was, but it said all the other products in this space follow the Unix philosophy: one single command does one single function. And that's why everybody is so bad. And I'm looking at it like, Jesus Christ, you know? Because for our age group, the Unix philosophy is supposed to be the modern, right thing to do, right? It was almost religious thinking: hey, this is what the best dev tool is. You do one thing well, you can pipe it. We all followed the same paradigm of thinking.
Starting point is 00:34:34 Not saying this is necessarily right or wrong, but just being able to put it out there, that this is why things are bad, because all of these people who think a certain way are still doing the same stuff in the same certain way. It's really just fascinating to re-challenge our thinking, to be willing to go back to the things we thought were ground truth. They're not really ground truth. They were just suitable for a type of era, with a type of compute and the things we knew of then. But things change. Computers change all the time.
Starting point is 00:35:08 I guess we don't really notice we've lived through enough until we start realizing, oh wow, it's not just our phones looking different anymore; actually, a lot more has changed in the middle since the BlackBerrys or whatever.
Starting point is 00:35:22 Yeah, I 100% agree. The only constant is change. I think that old adage is true. Tim, I'm going to tack on one last segment. I have one question for you. I'll answer it too, but you're going to go first. If you had a piece of advice for builders right now, in this age, about what they need to do, what they need to be thinking about
Starting point is 00:35:36 over the holiday season to get ready for 2025, what would you be telling founders? What would you be telling other venture capitalists? What would you be telling engineers who work at companies? What's something you're like, you would you be telling other venture capitalists? What would you be telling engineers who work at companies? What's something you're like, you really should be thinking about this right now. This is something that you think is important in 2025.
Starting point is 00:35:51 It might not have been important before. This is my opinion, at least. Infrastructure especially has probably been a more challenging place than before, just because everyone's attention is on AI everywhere, and everyone thought AI was going to solve everything. I tell founders kind of the same thing I just mentioned: sometimes when we solve infrastructure, we're always trying to solve infrastructure for everybody.
Starting point is 00:36:14 Solve infrastructure for the old school, solve infrastructure for Kubernetes, solve infrastructure for everyone, right? Treat them all the same and try to solve all the problems. The thing I always challenge founders on a little bit is: hey, for any startup that works, you've got to be non-obvious at the time you start. It's already too late if it's obvious, you know?
Starting point is 00:36:46 So, what's the point? And to be non-obvious and be huge, you're betting on one single thing that can be huge. And that thing isn't going to look like your Kubernetes or old-school stuff. It will be somewhere in the middle, and probably geared way more towards the new world. And so I have this debate with a lot of our companies, and some newer ones: hey, really try to do one thing well. And that one thing has to be non-obvious. Not completely ludicrous,
Starting point is 00:37:04 but it can't just be, oh, 50 people are doing the same thing, and we're going to do the exact same thing with just a little bit of difference. People don't usually want to admit that the little bit of difference they see is everything to them. But I always challenge them: hey, if you can't even find a different way,
Starting point is 00:37:19 or if you have to sit down with somebody for a one-on-one chat to finally convince them it's truly different, that's not going to work for you, you know? And I think that's probably more for founders than investors. Right. I think that's largely true. I was a founder not that long ago, and I think we all sort of walk the same path, where we just
Starting point is 00:37:41 have a pre-made idea of what the best answer is. But in reality, the real thing isn't set in stone. You have to go discover it. It's usually in a different place than where you originally started. And it has to be something that gets your buyer, your customer,
Starting point is 00:37:59 whoever you're targeting, super excited. And so many times, founders will do things that get themselves excited versus what gets those people excited. Do you have one last piece of advice to give to founders? I mean, my last-minute advice for founders, and really anyone thinking about infrastructure,
Starting point is 00:38:15 building companies, or building software, is that there are so many assumptions we've made in the last 10 years of cloud, Cloud 1.0, that made sense in the context of Kubernetes, that made sense in the context of having only one cloud vendor, that made sense in the context of one budget that all rolls up to one person. I think we have another wave coming, and all those assumptions are going to change, and it's going to be complete creative disruption.
Starting point is 00:38:37 And so I think the question isn't what made sense in Cloud 1.0, it's more a question of what makes sense in Cloud 2.0 and what is Cloud 2.0, what does that look like, and how do you fit into it? And I don't think any of us have any inkling of what that is, but if I had one piece of advice, listen to the episode with Guy. That doesn't mean he's right. He could be completely wrong, but it's different. And it's really different. And so you should really understand and be following what that is.
Starting point is 00:39:02 And you should have a thesis for why it is. And it should be backed up for why, whoever your buyer is, why is this going to be 10x better than what we had before? Because we're going to have a massive shift in infrastructure. The old stuff is still going to be there. It's going to get layered.
Starting point is 00:39:16 But the new stuff is what matters. And that's where the spend is. That was good, sir.
