The Infra Pod - AI native development is more than just using copilots! Chat with Guy Podjarny (CEO of Tessl)

Episode Date: November 15, 2024

Ian and Tim sat down with Guy Podjarny (CEO of Tessl, ex-CEO of Snyk) to talk about his new journey building Tessl, which is aiming to drive a fundamental change in AI-native software development. We sat down to talk about Guy's views on what's happening in AI-powered developer tools, why he's taking a bold bet, and how it can change this whole industry.

Transcript
Starting point is 00:00:00 Welcome back to the pod. This is Tim from Essence VC, and let's go, Ian. This is Ian Livingston, lover of DevTools and infrastructure. And I'm super excited today. We're joined by a good friend of mine, Guy Podjarny, who is currently on his next company journey, the CEO of Tessl, former, well, still founder of Snyk. Guy, could you give us a little introduction to yourself?
Starting point is 00:00:28 Well, thanks, Tim and Ian, for having me on here. I'm Guy Podjarny, or Guypo. Someone said I'm pulling a Madonna with the one name. I indeed founded Snyk. I founded Blaze before that and sold it to Akamai. I was CTO there for a bunch of years before founding Snyk. Now I'm the founder and CEO of Tessl, trying to reimagine software development. I'm sure we're going to talk about that a fair bit more here. And I couldn't help it. I'm an active angel
Starting point is 00:00:54 investor. I've got a little bit over a decade, about 100 angel investments. I love the learnings and excitement that I felt from learning from all these other founders. And yeah, that's me. Also a lover of DevTools, a nerd and geek about many, many, many topics, including Dev and AI. I mean, you've been kind of at it with Blaze since the very beginning, right? Like 2011, 2010. And before that, right? Which Watchfire?
Starting point is 00:01:21 Like you worked at Watchfire, I think as well. So you've been here from the get-go in terms of like the main take on the industry. And you've seen a lot of the different sort of like phases of, as well. So you've been here from the get-go in terms of the main take on the industry. And you've seen a lot of the different phases of evolution as well. So I'm super excited. My main question, though, is, Gaipo, what is it that you saw that you said, you know what, I'm going to go and start another company? I think this is a question every multi-time founder gets. You're on company three now. So what is it that made you say you know after especially after the success of sneak like what made you say you know this is the moment where it's time for me to go get back at it i want to get in the trenches and start from zero again yeah it wasn't wasn't an easy decision the sort of the push and pull maybe of it on one hand on the personal front i spent
Starting point is 00:01:59 the last year and a bit at sneak in a slightly more part-time function, trying to figure out what it is that I do. I started Snyk and I was CEO and I brought about five years in, brought in Peter McKay, who's done a great job. And I've kind of focused or sort of took on this sort of product strategy role in there. And then after a bunch of excitements and a period of time, I really kind of felt like I need to reassess what is it that I'm doing in the company and what is my role. And so on one hand, I had my sort of personal journey. We have a family charity, try to figure out, do I want to do that full time? You know, I'm doing angel investment, do I want to do that full time? Do I want the comfy life of sort of some part-time
Starting point is 00:02:36 job in a company I believe in and then promoting? And during that time, I came to realize that what I want to do is found another company. And I was resisting that conclusion. It doesn't make sense from a practicality perspective. We're going to donate all the funds I still have at Snyk. As we're doing it, I've got all the access and such that I need. So there's really no practicality to it. So it took a while to realize that it's okay to do it because that's what I want
Starting point is 00:03:06 to do. And, and that's kind of where my passion came to this realization, that satisfaction comes out of struggle. If you want to feel that satisfaction to feel, you know, kind of that meaning, then you have to put yourself into a thing you might fail in. And, you know, while I learn a lot of these journeys with, with angel investing and such, it's not the same when you don't own it, when you're not in the trenches and needing to do it.
Starting point is 00:03:26 And I guess alongside that, I was focusing a lot on Snyk's AI strategy and Snyk secures software development. And so if you want to figure out what should Snyk do about AI, you have to have some hypothesis about where is software development going because Snyk's role very much relates to that. And the more I dug into that and the more this sort of picture crystallized in my mind, the more I felt I want to build that. That's what I want to do. It also felt like a domain that matched my skills, my passion, but was not competitive to Snyk. And so, yeah, the combination of those two made me make the leap. And I will say my wife figured out that that's what I'm going to do probably a good couple of months before I did. I wanted to talk to her about it a little bit more.
Starting point is 00:04:09 She was like with a smile saying, yeah, go ahead, talk. Like, you know where it's leading, but you know, sure, let's talk about it. And here we are. That's incredible. I always, you know, I've gone through some of that myself as well.
Starting point is 00:04:23 And I always come back to like, builders just want to build. And you know, how do you enable that? Yeah. Was there some insight though that you saw as well around at the time that you were thinking about starting the company and making this transition, you know, working on the sneak AI strategy? Like, what did you see in what you were learning that said, okay, this is different. This is an opportunity to do a generational shift.
Starting point is 00:04:45 I'm sure now, third time founder, you have learned a lot about what to look for in terms of like, okay, there's a big potential opportunity here that I can kind of grab onto is what built up over that time that led to you saying, okay, I see the generational shift or I see something I attach to that gives me an opportunity to know, that's rewarding and is going to be really difficult. But the upside is there, you know, whether it's personal or financial at this point, you know, there has to be something. Right. So I'm really curious to sort of understand, like, is, you know, was there some unique insight or some unique specific observation that got you to like, OK, this is this is a place I'm going to make the bet. And I think, first of all, like what drives me is impact, you know, in terms of like, is it money? Is it sort of product?
Starting point is 00:05:28 Really, I think what motivates me is leaving a dent on the universe and doing something that I think is significant. I felt like with Snyk, we've done that, right? There's a lot to still build on the business, a lot to still do, but I feel like a core part of that mission of getting security more embedded into development
Starting point is 00:05:43 and making that be a norm of a good practice in development like i think a lot of that is is achieved and so the driver is impact and i saw a lot of things you mentioned like funding multiple companies several times like i learned a ton from that process but i also learned a ton from angel investments and from working and accompanying those journeys someone once told me that as a founder you learn in sequence and as an investor, you learn in parallel. I think it depends on your level of involvement with the companies and all that. And I think the combo, at least for me, proved very useful in building some pattern recognition and seeing things that work and don't work. And I guess what I've seen also through the
Starting point is 00:06:19 angel investments, because I was investing in AI dev tools and such, one is people are very short-term minded. I think AI is full of that. There's just so much opportunity to use AI to improve things that people are drawn to the immediate. Two is that startups are just not differentiated. You look around and you talk to two companies that are doing it. It's like, they're really the same. And they have all these false perspectives of how they will will differentiate they're talking about hey we'll have a data
Starting point is 00:06:48 mode it's like really you're going against an incumbent you're going to be around for a year doing this maybe you got seven customers maybe 70 customers and yet somehow you will have a data mode to an incumbent that has 70 000 of those and and so poor differentiation poor differentiation strategy for like for where would you build it, even if you are temporarily differentiated, very short-term orientation. And I felt like everybody was just thinking small. And the ones that were thinking big were thinking big from a tech perspective, not a product perspective. You look at all the top companies in terms of big names in the AI dev space,
Starting point is 00:07:25 and the ones that are really like the go-big players, they're very tech-first versus product-first companies. You see Magic Dev, you see Cognition, you see Poolside. Not to diminish, like I think these are amazing companies and amazing individuals building them. But my sense is that for the most part, they are building amazing technology and then sense is that for the most part, they are building amazing
Starting point is 00:07:46 technology and then out of that building the product. And I'm a product guy. I think about what is the product that is needed? How do I anticipate users changing? How do I anticipate ecosystems and markets change? From there, I build technology in service of the product. I felt these gaps were there. The long-term orientation, thinking big, thinking product first, they were all lacking, and then strong differentiation. I guess I had an answer to all of those. The picture that formed in my mind felt like I have a sense of what is the long-term destination, how to build something towards it
Starting point is 00:08:23 that is good from a user perspective and that would be sustainable. Not all of it is stuff that I can share here on the podcast. Some of those are still strategies that are in-house. But without that, I think founding an AI company right now is actually more risky than founding a company is in the generic sense. Because everything is changing. The landscape is shifting from under you.
Starting point is 00:08:46 Everything is overfunded. And so it's actually a higher risk than typical. So I think you already alluded to this topic. I'm very, very interested to really go down this rabbit hole as far as we can, which is really about this AI native developer theme or we even call it transformation. Because I'm reading really the two big blog posts you put out. about this AI native developer theme or even call it transformation, right? Because I'm reading really the two big blog posts you put out.
Starting point is 00:09:11 One is like AI native, almost like any developer. And there's also like the cloud native comparison. So maybe we'll start with the AI native developer sort of like transformation here. You put up a two by two matrix, right? Comparing like the existing tools and existing AI, almost products and tools. And Tesla is on the far right. This new category that does nobody else is there, right?
Starting point is 00:09:37 Just you. And I think it's intriguing. Everybody read this will be super intrigued, including me and I'm sure Ian as well. And we really don't know what does that mean. Of course, I think you have, like you just alluded to, you have strategies in your head. So maybe it can help normal and bystander developers like us.
Starting point is 00:09:57 Can you help us maybe give us a little bit of mental picture of, okay, how did you, maybe can you quickly summarize what this two by two is? Because I think a lot of people haven't really read that post, perhaps. How do you think about the world? And give us maybe that somewhat of a hint of, okay, the current tool set isn't the truly paradigm shift we're looking for. What could it be? Maybe we can start a little bit from that.
Starting point is 00:10:23 But I think maybe you can kind of talk about this magical 2x2. I think we'll be on a good start here. So I will sort of say this 2x2, and I published it under that title of Charting Your AI Native Journey. So AI native is dominant in its narrative. It's something that I've sort of been brewing on and evolving in my mind over the last really almost two years of exploring investments and thinking about ideas, and then eventually a Tesla.
Starting point is 00:10:49 And of course, it's Neat. There are many, many AI solutions out there. And indeed, some of them are short-term, long-term, and they oscillate between something that feels like, wouldn't this just be the next feature of open AI or anthropic or whatever, to things that feel like science fiction. It's like, no, this will never work.
Starting point is 00:11:06 And so the attempt here with the 2x2 is to give us a little bit of a structure to place companies, or at least current offerings of companies within that. And so the 2x2 goes with two dimensions. One is a dimension of change. So how much does this new tool require me to change the way I work to use it?
Starting point is 00:11:23 And then the second dimension is one of trust. It goes from attended to autonomous. How much do I need to trust it to get it right for it to be useful? So let's maybe look at some examples in the different quadrants here that this creates, right? If you draw it as a graph. On the bottom left, what you'll see is the low change, low trust environments. And those are tools that are very easy to sort of slap on. In the dev space, that might be code completion. You're already typing, right? You're literally already typing. In fact, you're already familiar to auto-completion because of IntelliSense and such. And so it's just a better auto-complete. You eyeball it, you say if it's correct or not, you just continue. There's pros and cons to that, but still very easy, very low trust.
Starting point is 00:12:02 Other domains, it might be something that points out a potential problem on an x-ray or that writes a quick SDR, like a cold outreach type email. All of these things are like, they're small units. They are things you're already doing. And therefore, you're just kind of prompted. This is the bottom left quadrant. It's kind of the low friction. Why wouldn't you use this?
Starting point is 00:12:23 It needs to be useful. So like if it doesn't provide business value, if it doesn't work well enough, it's still worthless. If it does work, then why wouldn't you use it? It just makes you better. That's the high adoption category, as I see it. And it's massively, massively competitive. That's really where you would see, how is one coding assistant better than the other coding assistant? You're going to have a million of them. How is one tool that creates tests for me
Starting point is 00:12:47 or that captures the documentation or like all these things? Like how is one different? It's oftentimes hard because they don't really invent any new methodology. They're just doing legwork and they're doing legwork in small units so you can review them. So that's the sort of the bottom left. If you go up the trust route, that's really an IP evolution. It goes from attended to autonomous. First, maybe examples of what needs to show up over there. So for
Starting point is 00:13:10 example, Intercom have Finn, their support chatbot. And Finn tries to resolve tickets when someone approaches a company autonomously. If a human has to review every response that Finn provides to confirm that it doesn't hallucinate anything, the product would be useless right so you have to trust that it gets it right for it to be useful even more extreme version of that is robotaxis you know like actually very little change nothing changed you're still interacting as you before interacted with someone on a on a chat support conversation here in robotaxis you might open open your Uber and order a taxi and get in the car and get dropped off elsewhere, but might be the high trust that you need to be able to do this. So that's kind of up with the trust route. In development world, that's a lot about this
Starting point is 00:13:54 autonomous development world. And I think it hasn't been cracked. And that's a lot of what I think the sort of the magic devs or the others, like at least from outside statements are trying to crack. It's like, I will just resolve this for you. And it's worth realizing or sort of highlighting that there's an element of the magnitude of task involved in that trust element, right? So if you're writing the next line of code, I can review it every single line. But if you are writing full applications, I can't review them every time. So for you to be useful to me, I need to trust that it works. And if not, then it doesn't serve the use case you're trying to get me to believe that it does, right? Or a lot of workflow creations.
Starting point is 00:14:36 So I think a lot of this autonomous AI engineer, whatever, AI engineer is now unloaded, an ambiguous term, right? But agent, engineer, a lot of ambiguous term, by the agent engineer. A lot of those are really up the trust axis because they're trying to do things more autonomously. And I'll say again that this is an IP game. And so if you're a company and you're trying to build over there, it's because you believe that there's an IP mode that is really hard to replicate. And I think in general, now versus two years ago, I think conviction that you can have a true IP mode in AI implementations
Starting point is 00:15:08 is different. I think two years ago, you might think that people are, they can really be ahead of the game. And now you see that every time someone is ahead, they're just three months ahead or six months ahead.
Starting point is 00:15:17 But you still hear, like you hear a cursor, for instance, which is a different type of company talking about how all they need to be is ahead on the Lex Freedom podcast by a few months, and that can be significant. So that is their strategy, is to sort of try and remain ahead,
Starting point is 00:15:32 which I find to be a challenging strategy. You still aspire to that, but it's hard for that to be your strategy. So I guess, Tim, that's a half. I don't know if I fit the briefly comment, but that's like half the quadrant. And I think it's a very helpful overview because I think a lot of people
Starting point is 00:15:49 we probably intuitively know about it but what's helpful about it, almost like a mental model, is it really helps you think about the existing world and of course it's very helpful to think about what your new world is proposing. I know we're not able to get into the details of Tesla,
Starting point is 00:16:05 obviously about the product because things are pretty early. But I'm very interested because I think this whole new IP transformation and sort of like high trust, right? To be able to generate high trust, it almost requires like this trust earning journey the product needs to require to get to the end.
Starting point is 00:16:25 And maybe hold that thought a sec. Let me sort of talk about the change route. And then we come back precisely to that, to sort of the journey. So like the trust route, like you go into that car and you need that trust. And there's a trust building exercise. Maybe you'll take shorter rides once it's sort of a small, maybe you would only do it after a friend of yours took it, right? Like you might do all these things, but it's still the trust route. I think the bigger, harder path is one of change. So change
Starting point is 00:16:49 is about changing the way you work. And I think probably the best analogy there is the text to video or text to image representation. So I'm fortunate to be an investor in Synthesia. Synthesia is sort of a text to video solution that focuses on training and things like that. One of the leaders in the space. And the way you create the video in Synthesia is entirely different to the way you would create a video pre-AI. Like there's really nothing in common about how you would sort of set up a studio and get actors and shoot with a video and all that and how you would sort of write that text. And because of that, that is dramatic change in who is it that would be able to operate that successfully? You know, what are the skills? What are the workflows in the business that you would build? And there's effectively zero chance that the winner in the like AI generated
Starting point is 00:17:35 videos will be the same companies that are the ones producing the videos in person today. But the difficulty over here is when would you start using it and how would you use it? And Synthesia have actually kind of figured out a good specific niche or sort of slice. So maybe that's a demonstration of a journey around these training videos that already were very methodical in the different languages and such. But when you go and you go to runway or you go to the more loose form generations i mean how do you generate a video in that fashion i mean there are new problems that you would need to deal with that are brand new you don't really know how to overcome them around you know character inconsistency it's only a different person you
Starting point is 00:18:17 don't get that problem right when you're shooting a video and suddenly a doppelganger of a of a of an actor shows up instead of the original actor. You know, someone similar, but not quite the same. And so these are new problems, they're new workflows. But if you succeed in building a new way that is truly better, and text-to-video is a good example of one in which there's like massive advantages, if successful, it's like dramatically, dramatically cheaper and faster, then you can unlock tremendous value and you can really win that space. And so I think that's really the disruption corner. That's the place in which you can really rethink an industry. But the slowdown factor isn't just trust, it's change and people are slow to change. And so I think you can develop IP faster than you
Starting point is 00:18:59 can get people to change. And it's actually quite hard, like if I bring it back to development, it's quite hard to think about what does this mean in development? I think a little bit of that is this notion of chat-based creation. And so everybody immediately gets the code completion, but you'd find conflicting and sometimes hostile opinions about generation with chat, because that's a new way of doing it. You know, it's a different methodology. How do I, do I talk about my document? Like, how do I work with that? And so it takes longer to adopt. Maybe it's better, maybe it's not,
Starting point is 00:19:32 but it takes longer to adopt and it's a difference. I think the V0 or the sort of the bootstrap of an application this way, again, it's a different way of doing it. So it gets you somewhere. Is it the right where? Is it the right starting point? What do designers think about it? Well, some of them have very kind of vicious opinions about them and some of them are in
Starting point is 00:19:51 love with it. And so I think that change route is interesting. And then let me kind of quick, just sort of quickly talk about the top right, which is really just a combination of those and to an extent in what you sort of referred to when you talk about how do you build the trust? Like that's the hardest quadrant, the sort of the top right. It is how do you build the trust like that's the hardest quadrant the sort of the top right it is a new way of doing it and assumes trust and so really at the moment if you just build for the top right nobody would use it they you require them to change
Starting point is 00:20:15 and to trust at the same time it's just not going to happen but eventually every company should understand what is its top right positioning every market market, if you're going to build a product, build a company, acquire a company to any domain, you have to have a thesis about what is in the top right, because that is what we will eventually have is when the technology is trusted and whatever useful changes are there to be had, they have been adopted. We'll get back to Tesla and AI Native Development, But what we're talking about in the AI Native Development model is let's start by imagining the top right. And then there will be many journeys to get there.
Starting point is 00:20:54 But you have to agree first, directionally, about what we think that top right is. And you have to have that be a lively conversation and have a lot of people engaged and try things in it and all that for us to be able to kind of form that destination. And if all your building is for the bottom left, you might get some immediate dollars, but your company is going to become invalid or irrelevant. That was a very short answer. But it was a great answer. The way I think about this is it's a self-driving car model, right? Like, you know, in many ways, like the assisted sort of driving cruise control, right, is a good example of a co-pilot, you know, fully self-driving car where you still have to have, I have this thing on my Subaru or like,
Starting point is 00:21:34 if I have my hands on the wheel, the car will drive itself, but I'm still like there. And then there's, you know, the full self-driving version, which like Tesla's like, ah, geez, you can just do whatever you want, right? And then in future you have, you know, obviously the robo taxis, like if anyone's been to Waymo, that is incredible. And that, you know, you can just do whatever you want, right? And then in future, you have, obviously, the robo-taxis. If anyone's been to Waymo, that is incredible.
Starting point is 00:21:48 And if I were to think about it, the AI-native future in development would be the Waymo version of whatever software development is. One of the things I think about is, today, the example is the co-pilot, the auto-completion or the chat. Basically, this change in how the developer works inside the IDE, and so we've seen 50% speedups. What do you think an example of an AI-native future for a software developer looks like?
Starting point is 00:22:13 Is there a mental model you have or a gap in experience that you've been thinking about that demonstrates the difference between I'm driving a car and I always have my eyes on the odometer in the car in front of me to, oh, now a taxi is driving me around. Have you thought about what that could look like? First of all, it's interesting that in the self-driving car, there are actually examples of two big companies that are taking different journeys to it. Waymo is taking the IP route, and they have no co-pilot. You're jumping straight into self-driving. Their only graduation is the
Starting point is 00:22:43 distances of the cities they operate in, while Tesla is taking the assisted driving route and trying to evolve their features like that. And time will tell what's better. And they might both be viable, but they're very different. So it's interesting, like these are two giants and they're both taking very different routes to it. I think the software analogy to self-driving cars is a little bit tough because self-driving cars have a physical limitation. And so you can't really make a self-driving car 10 times faster, right? Or you can make it much better in various ways. And over time, you can create a reality that is 100 times better if there's no parking lots anymore and all that space has been reclaimed by cities.
Starting point is 00:23:22 And there are fewer accidents and car ownership becomes a non-issue because everybody's just kind of reusing these cars. So you can imagine a better future, but the driving itself, the driving experience is shorter. I think for software development, there's actually a better opportunity because you can both improve the journey itself, get from point A to point B by 100x, and you can improve the ecosystem and the totality of it by another factor that is similar. Maybe I can talk a little bit about what I think is AI native development. Does that make sense? That'd be amazing, yeah.
Starting point is 00:23:57 Again, I think this is a group definition, like in a community, it's a new paradigm and we need to work together as a community to doing it. And just sort of point out as a bit of a plug that we do have a conference that we're funding and operating and all that, but with a lot of kind of bright opinions about it that is running what I think would be late the week that this airs on November 21st called the AI Native DevCon. And we're going to have a lot of bright speakers
Starting point is 00:24:22 over there talking about all sorts of aspects of AI native development. But with that said, let me talk a little bit about our definition. So I think software will today, software development is very code-centric. You get some requirements, you write some code, very quickly you make 100 decisions in the code that never make it anywhere else, or it might be too low resolution,
Starting point is 00:24:41 they might have not been bothered to go out. And the code intertwines what needs to be done with how to do it. It's like literally in the same lines. You read the code and you need to parse out what is it doing and how is it doing it and separate the two. And the LLMs that try to learn the code, they need to do the same. And I believe that we will move to a world that is spec-centric in which we can separate two, and a user can specify what they want, which is not a trivial problem, and AI will handle the implementation. In that world, where AI does the coding and the implementation, there are many, many things that suddenly become dramatically better. For instance, AI-native software will be autonomously maintained. Maintenance,
Starting point is 00:25:27 by definition, is change the software without changing the spec. Keep it behaving the same way, but change the operating system, change the dependency, fix that vulnerability, but don't change the spec. And so if you start from anchoring on the spec and have a good verification mechanism to know that the software is working correctly, you don't need to maintain that anymore. And maintenance is like the productivity killer. So that alone is like massively valuable. It is dramatically more accessible because there are just that many more people
Starting point is 00:25:55 that can specify what they want and can be the judge of that. And even sort of think architecturally sometimes, but are not able to write code. It would be very adaptable to your environment. So you'd be able to create software that, you know, on Ian's infrastructure is optimized for that and on Tim's infrastructure is optimized for their environment or that learns from the data and operations of how your specific users are using the system and your specific
Starting point is 00:26:21 business needs, right? Are you kind of flush with cash and you're really looking to provide the best experience and you want to optimize for latency and such? Or are you more worried about costs and trying to do it? Or maybe different times a day you want different things for different users. And so an extreme level of adaptability and personalization and automation can be there. It can have a deep relationship with data. I read some stats that I'm not sure if it's coherent, but it aligns with my general thinking, which is that software generally is thought that it can be 10,000 times faster
Starting point is 00:26:55 if you really fully squeezed and optimized every aspect of it. But that's expensive because like human time is expensive, but maybe with AI maybe with AI, if you're tied to it. So it's just better on so many fronts. And I guess our conviction is that building software like that, being able to specify what you want, being able to provide and then evolve the verification mechanisms to know, to trust that the software works the way you want. And then thinking about how that software works over time,
Starting point is 00:27:26 like thinking about how do you write the specification? How do you write the verifications? How do you edit those in future versions of it? How do you package those? How do you version something like this? This is language agnostic. You can have a JavaScript version, a Python version of it. Are they the same version?
Starting point is 00:27:41 How do I think about them? What happens if they have some language-specific ecosystem change on it? Is that a new version for both? How do you observe a system like that? How do you know what has occurred? And so all of those require a different software factory. They require a different development paradigm and methodology, and they require a different development platform. What we're sort of perceiving here, and we can go into each of these paths deeper if you'd like, is that on one hand, we think this is a new development paradigm. And we think this is software development. And a lot of the answers, like we will not be the ones thinking about them. And they will also not have one answer, both because of the complexity of software and the world. So it could be that the way you would verify, you know, a mobile game
Starting point is 00:28:29 is just very different to the way you would verify an e-commerce site and to the way that you would, you know, verify, you know, an internal in-house application or any of these others. But also because it's not going to be a point in time, right? Like it will change now, maybe in some magical world, you know, sort of figured out all the ways to doing it, like over time, you know, different organizational concepts, but also new technology comes along, things change, there are opinions and multiple ways to do it. And so we think this AI-native development, like this is a methodology
Starting point is 00:28:57 and it has some, like in DevOps, like in CICD, there are some themes that are recurring. There are some practices that span ecosystems, but there are many tools and they plug together and they work together. And so we think there's an importance for a dev movement around it, which is why we're running the conference, which is why we're looking to build and help foster a community around what is A&A development, what is that top right for development. And on the other side, on the company side, on the product side, is we're trying to think what is AI development, what is that top right for development. And on the other side, on the company side, on the product side, is we're trying to think
Starting point is 00:29:27 what does a development platform for that look like? How do you write those specs? How do you advance it? And how do you plug tools along the way? Clearly, to build a product, it has to be usable. It can't be dependent on something else. So it's useful in its own right, but how do we think about that as something
Starting point is 00:29:44 that facilitates the participation of others versus excluding them? Yeah, so that's my view on AI native development. And I've got many more. I can drill into any of these things. I think they're all kind of deep topics.
Starting point is 00:29:56 Yeah, I'm sure we can have a 24-hour podcast because it's actually really exciting. The more you mention it, it gets a little bit clearer what you're actually trying to achieve. And you think about the future, like you mentioned about Waymo,
Starting point is 00:30:11 once you're able to actually fully trust that this car can actually get you from point A to point B, your whole life changes potentially, right? The whole rules and everything around changes a lot. So I think definitely it's really exciting to think about the future developments. If we can fully trust a particular spec that is actually able to encapsulate
Starting point is 00:30:32 what we really want to achieve, then the implementation details may not matter as much. But of course, the biggest challenge is the spec. It's the creation, maintenance, and the sort of iterations and the team dynamics around the spec, right? And so I'm sure there's so much details and probably we're able to go through most of it. But you already mentioned like
Starting point is 00:30:54 there is this specification of what it kind of does and a specification of how it verifies it actually does the correct things. And there's so many things in between. Can you talk about, you already looked at it a little bit. I'm very curious about the spec aspects. What are the major challenges to make sure a spec can do what it does? Because right now, when I think about it,
Starting point is 00:31:17 we've gone from test-driven developments, all these little small little paradigm shifts of trying to make sure we're getting the right thing to deliver. But none of these are coverageable enough. And cannot keep up with the changes and all the complexity behind the systems. So it's really hard to imagine there's a spec that can actually
Starting point is 00:31:35 take care of everything. So is there a particular mind or pattern or even challenges we had to overcome when it comes to actually figuring out how to get the spec correct? Because I think that could be something to highlight, right? This is not an easy thing at all.
Starting point is 00:31:53 Here's actually really hard things we had to figure out along the way. Yeah, I would add, so I think everything you say is correct. It's hard to imagine this type of spec, and I'll share my thoughts in a sec. I would also add, how do you make it fun to write such a spec? Because if it's an agonizing process, it might be functional, but if it's not fun, people are not going to do it. And so not only does it need to be a powerful spec, it also needs to be one that is fun to create,
Starting point is 00:32:17 or it is a process that makes it fun to create it. There are actually two reasons why I think we'll move to spec-centric development. The first one is probably most obvious, which is machines can write code now. But the second that is as important, if not more, is that machines can fill out the spec. If I tell an LLM, create a tic-tac-toe game, that's a spec. It's a pretty terrible spec, but it's a spec. And the reason it's a spec is because the LLM can fill in the gaps. It can decide whether it's a web game or a mobile game. It can decide whether it follows the rules of tic-tac-toe and what are those and whether there's a multiplayer or machine. And hopefully in a smart system, it can interact with you and know when to ask you questions as well.
Starting point is 00:32:56 And say, well, tell me, do you want this to be a web game or a mobile game? But one way or the other, it can figure out which questions to ask and it can figure out how to fill out the spec. And I think that is a massive unlock that allows us to break this mold that we have today. We have either formal specifications, which are such a nightmare to write, that they only make sense in the most harsh conditions where you really have to have that, when it's a deep medical device or aerospace.
Starting point is 00:33:23 And even there, they're limited, but they're very painful to write. You'd prefer from an experience perspective to just write the code. Or on the other side, you have no code, which are kind of toys, like they're really smart configurations. All the activity, all the ideas have been pre-created, and now you're just choosing how to organize them
Starting point is 00:33:38 and then watch flows to run them. And so it's always been very limited. And as they evolve, they become code, like in Apex and in Salesforce. So I think the ability to fill out the gaps, fill in the gaps, is really the unlock that LLMs introduce that are as important, if not more important, than the ability to create code. And the ability of the system to fill in the gaps correctly is going to be one of
Starting point is 00:34:04 the strengths of specific platforms as they come in so if you're a marketing agency and you've already built five websites for me and i'm coming to you and i'm asking you to build the sixth website i only need to give you a very small spec because you're well familiar with me you're well familiar with the domain and you'd be able to fill in the gaps very well but if you. But if I try to come to the same individual and I'm asking to build a mobile game for me, I might need to give you very detailed specifications. If you're some sort of low-cost agency that I'm engaging
Starting point is 00:34:34 that I've never worked with before for the same marketing website, I might need to provide very detailed specifications. And so I think the ability to kind of fill in the gaps is very important. So that's one important bit about the spec. The second is that I don't think there is a spec. I think there will be multiple specs. Once again, the way you would specify a mobile game is probably very different than the way you would specify a software library, right? Or a marketing website. There will be shared traits, but if it's more visual, you might need something that's more visual. If it's very algorithmic, you might need
Starting point is 00:35:11 an ability to provide those. And it comes back to me thinking about this as a movement, as a paradigm, as an ecosystem. Today, we don't have one. We have multiple languages. We have multiple development environments. We have methodologies that are different around their strengths and weaknesses. Is it more about iteration or is it more about safety? And I don't think that that changes in the AI era. Everything becomes faster, but you still need to choose your trade-offs. You still need new platforms and systems that are able to adapt to new technologies and to new preferences and to the industry's preferences. And so I think there will be multiple specs.
Starting point is 00:35:50 I think they will have many formats. Verification is a subset of the spec because it comes back again, like same statement, right? The way you would verify a mobile game is different than the way you would verify an algorithm trading system than the way you would verify a marketing website. And so you need these verifications there. I don't have a perfect answer to it. I think the ability to state it is important. And TDD helps us and like mocks help us and a bunch of history of what we've been building helps us. I would also point out that just like regular software,
Starting point is 00:36:22 it doesn't have to be upfront. And there's an element of learning from data. And you build a system that works well enough and that you engage with well enough. And then with the data, that system can learn and can evolve and can become better over time. And yeah, a bunch of interesting topics here. For instance, if you said, I want a button,
Starting point is 00:36:41 and the LLM decides that that button will be red, and you might be, well, I don't really care. It's fine. If the button is red or it's blue, I don't really care. But for the next version that comes along, you sort of change the title. You might not be okay with that button changing now from red to blue. And so you might want some visibility into the LLM's decisions and into the persistence of them and whether
Starting point is 00:37:05 they're about to change or usability. And so all of those are the reason we think this requires a different software factory. These are not the types of problems that exist today in the regular tools around software development. They're not just another file in your Git repository. They're not just another test in your build. There are different types of interactions, and we think they need to be figured out. That's a somewhat long-winded answer here. It's a long-winded answer,
Starting point is 00:37:32 but I think it really helps me a lot, to be honest, fully understand your thinking. One of the things I wanted to zoom in on is you've talked about this being a community effort. A mental model for comparison of like the cloud wave, right? That, you know, the 2010 to let's say 2020 period was what we collaborate on. Ultimately, like this sort of runtime operating system of moving the cloud became Kubernetes, right? That was sort of the thing that became the community. The core part of like, well, is that the core of everyone's stack?
Starting point is 00:38:01 Everyone kind of contribute around it with attachment to the GitHub and some type of CI CD. So it talks to like, okay, there's some core pieces of software, there's some core community movement, and that community movement is represented by some shared platform that we're all building on top of. I'm curious, in your mental model, is the spec like a community thing?
Starting point is 00:38:20 Is that company proprietary? What parts of this community movement do you think are, let's say, process methodology? So we all believe that the future of software development is this way and prescribed in this manner, in the same way that the Agile manifesto kind of described the shift from waterfall to an Agile method in software development. And what parts do you think need to be core software that are collaborated upon, like things that just occur in the open source? I think it's just a combination of many of those.
Starting point is 00:38:47 I don't think spec is a single standard. I think there will be multiples. So some specific specs will be company specific and will probably be a bit more closed. They might have to be, like maybe SAP has their own specification methodology that has to do with their systems, right? And specs might compose over time, right? And specs might compose over time, right? And maybe they connect different formats. So I don't know, I can't tell you all of the above. But I think spec is a higher level comment. It's like saying it's all code owned by,
Starting point is 00:39:13 I think there will be multiple standards, and some will not be standard, and some will wish they were standards. Others are not. So I think those will evolve. But I think community collaboration, even in the cloud is actually much bigger than what you've described. I mean, yes, we might have settled on Kubernetes. But before that, maybe there's a settling on containers. And what about microservices? And what about mesh networks? And how do they interact between them?
Starting point is 00:39:39 And what about API standards and how those communicate? What about licensing for software? Maybe that's even an active discussion yet about hosting something on the cloud versus not. What about access permissions and IM models now and jumping like a bastion hosts? So there are all these things that really have evolved as a community. And you can say, well, when I think about cloud, I don't just think about specifically the infrastructure as a service. I think about CACD and microservices and DevOps and the whole way that we develop software that has changed dramatically. There are methodologies there that are continuous. You can say
Starting point is 00:40:15 generally the best practice is frequent deployments or frequent small builds that get deployed and an ability to roll them back. Not everybody achieves it all the way, but I think it's generally accepted as the ideal. You want immutable infrastructure is desirable. Not everybody invests in it because it has some costs and sometimes effort. Observability is a thing you should have. And so there are practices that are part of a high-level best practice that has evolved.
Starting point is 00:40:44 And then there are specifics that are ecosystem- a high-level best practice that has evolved. And then there are specifics that are ecosystem-specific or just opinions. If you talk about observability, many, many opinions. If you talk about CICD and even maybe my comment about frequency, talk about monorepos versus small repos, then it's okay. It's thriving. It's the way progress happens.
Starting point is 00:41:01 So when I think about my perception of, say, like a cognition, right, with Devin, or many of these sort of platforms, they're kind of closed environments. They're maybe more the kind of the Microsoft of old, you know, sort of saying, come into our world garden. They might be building something that's very powerful, but I don't think that's the thing that stands the test of time.
Starting point is 00:41:22 And so that openness comes both from an open methodology, from some open source components, which I think have to exist, and from pluggable and composable infrastructure. So I host a podcast called the AI Native Dev. I'm not very creative with names. I had the secure developer before and now the AI Native Dev. I guess that's what happens when you build dev movements. And I had Matt Billman from Netlify, the CEO and co-founder of Netlify on it. And he made a really, really interesting point around how every new technology paradigm sort of challenges
Starting point is 00:41:53 the open web or the openness because they really kind of drive you towards that closed environment. You see this with mobile and you see this with, I guess, kind of with the internet at the beginning with sort of the chat systems and social networks. And he points out that, you know, I think we're going to need to deal with that now. You're saying, I guess, kind of with the internet at the beginning, with sort of the chat systems and social networks today. And he points out that, you know, I think we're going to need to deal with that now. You're saying, I think in Matt's case,
Starting point is 00:42:10 he was referring maybe more to like the Vercel versus sort of Netlify reality. I'm not sure. I'm putting words in his mouth here. But, okay, do we build these as features of an ecosystem? Or is it that we build this as collaborative capabilities? I mean, I'm mean favor of the open web like that to continue and the uh so i think there's a big evolution there i want to say something if my answers weren't long enough you know i've got something additional to add to it that wasn't even in the question which is i think eventually this notion of software creation that becomes so easy
Starting point is 00:42:42 because you just request what you want and you're able to interact with the system and it gets to learn you. So it fills in the gaps. I think it becomes like literacy. And so the end goal here, which is far, it's far, but the end goal here is really for software creation to be a thing that is just accessible to anyone, right? I mean, I think the two of you and like many developers listening to the podcast probably experienced, you know, like encountering some annoyance in their day-to-day life and sort of solving it with technology or building, whether it's a hobbyist thing or just a small solution or something, some app for themselves. And you want that to just be accessible, to just be a tool that everybody has to solve their problems. And so I do think that there's long-term significant importance over here. And so above and beyond all of those productivity boosts and whatever commercial
Starting point is 00:43:30 advantages, I think there's a societal element over here. When you say that, I think it becomes even further clear why it's important for this to remain open and remain kind of connected and not be just a platform. So while I'm building a platform that I hope will be the leading platform in this domain, I think it's critical that it is a leading platform in an open domain. Amazing. I think you already got too close to what we actually want to go for the next section, which is the spicy future. Spicy futures.
Starting point is 00:44:02 So as you know, we want to hear your hot take about the future. I'm very curious. What is your spicy hot take? Maybe about this whole AI native developer. What is something you believe that most people don't believe in yet? Maybe the one that is a bit spicy to people is that I think coding goes away. It's one of those things that I think many people don't want to believe because coding is fun. I love coding. You know, coding has this sort of video game mentality of, you know, like you can always level up, right? You get a task, you do it,
Starting point is 00:44:36 you complete it. It's, you get your endorphin kind of hit on it. You can always level up or you can do the same level. It's amazing. So super so super super fun and so it's like a little bit unfortunate to go away but i think coding goes away coding is also the translation layer to the machines and it is limiting us in so many ways i don't know precisely the timeline but i think its effect happens faster than we think this is a world in which progress happens in leaps, not in steps. And so I don't think I am advising anyone who is currently a developer to really kind of, you know, reevaluate their life choices. I think there's going to be work for a while for many times. But do I want my kids to focus now on learning how to code as a core competency? I mean, five years ago, I would have said coding as literacy, I would have really talked about that
Starting point is 00:45:23 for the same goal that I've mentioned right now. Now, I don't think so. Now, I think coding as a skill is a short-lived skill. It would remain a hobby. It would remain a specialist skill, but it would not be a prevalent skill. I don't know, maybe in a matter of a decade, something like that. Maybe 10 to 20 years. Maybe a decade is a bit too fast.
Starting point is 00:45:46 I have an opinion here, but I'm curious, what do you think the limiting factors are today in terms of reaching that reality? What's holding us back? What are we missing from the tool set? What are the core fundamental problems that isn't solved today but need to be solved for us to actually get
Starting point is 00:46:01 the spec-based future where anyone can write code? In the same way that anyone can write in a Word doc, they can go and open, you know, a Word doc and type it and they can help lead them through it. I assume that's sort of a version of it or a possible future. Yeah, I'm really curious to understand what you think is limiting us. Like, what are we missing? There must be some fundamental problems we have to solve to get there. Yeah, I mean, you need the Tesla platform to launch on the no, well, maybe there's something a bit more sort of substantive. I think, well, I do think, though, it is the change in the trust that are sort of the axis, and both of them just need to
Starting point is 00:46:33 evolve. So on one hand, the tech isn't there, like this is at the edge of the possible today. LLMs are too unreliable, too hard to kind of wrangle and get to do what you want. They're quite limited. I think they're evolving very rapidly. And so this will change, but they are not there today. And it's a bit hard to state. That's why I kind of use the bigger numbers, because it's hard to say whether, you know, significant progress where GPT-5 arrives in a year or in five, probably not more than that, or at least that's a decent guess, but it might take a while. And then the second,
Starting point is 00:47:08 then probably the bigger slowdown factor is that of change and the notion of how do we interact with code like that. And, you know, some of that is making the system work to the assumptions that we perceive to be necessary today.
Starting point is 00:47:22 So like we're very used to machines being, you know being deterministic. You tell it something, it does the thing you want. And by the way, people that are less technical oftentimes don't have that expectation. They've encountered the finicky machines, and for many of them, one way or the other, it is kind of magic.
Starting point is 00:47:38 I mean, it works or it doesn't work, and the computer doesn't let me do this is a common perspective. But so some of it is getting the new machines to behave to our expectation, and some of it is learning how to adapt our expectations. Sorry for referring to yet another episode in the podcast, but I spoke to Caleb Sima, who's a big security guy. I love his thinking.
Starting point is 00:47:58 And we had this interesting conversation about how to think about security scanning in the lens of AI. Like if you have an ability to scan your software and it does, let's say, typically does a better job at finding vulnerabilities that are there and not giving you alerts about vulnerabilities that are not there. But, you know, two times out of 10 or one time out of 10, it misses things or it just sort of hallucinates code. Hallucinates code is like maybe for false positives, but let's just sort of say it misses things. So you have code, you know, it's been blessed, you've deployed it to production, and it turns out it just forgot to tell you about something or sort of hallucinate. All in all, I think that's a better system if it's better enough in its accuracy, but it's weird.
Starting point is 00:48:41 It's like it's going to require really rethinking how you do the methodology for security. So I think there are just many, many cases like that. So if you go further into society and into people that are not coding today,
Starting point is 00:48:52 that change is even slower because you also have to remove fear and you might need a generation. You might need, you know, the kids that sort of
Starting point is 00:48:59 grow up with this type of tech and build them out. So, you know, on top of that, there's regulations and there's other things. But change, society, humans,
Starting point is 00:49:07 they're the slowdown factors, not the tech. And I think we actually didn't really talk too much about Tessl. I know everything is early. I think we got to the point now, you're working on something about specs and you're working on something around the community. Maybe tie the bow here.
Starting point is 00:49:24 What is Tessl? How do you want to describe Tessl today to our audience? And I guess you were talking about the conference. That's probably one big way to learn about your community. But what will be a way to start getting to learn more about your product as well? Yeah, so Tessl is two things, maybe. On one hand, we are looking to be a driving force
Starting point is 00:49:44 in getting this community that we call the AI Native Dev going. And so, you know, there's the podcast, check it out and sort of tune into that, you know, join the conference on November 21st. And we're going to have more and more activities to just facilitate this conversation, right? Get people thinking far ahead
Starting point is 00:50:01 because like so much of the conversation today is about today. And by the way, a lot of the tools of today are useful facilitators of the conversation, but tomorrow, you know... Like when you think about tests, you think about documentation generation; some tools, I think, are a little bit less so, like code completion, you know, that's maybe a little bit less. The other part of Tessl, the primary part of Tessl maybe, is the platform. So what we're building is a platform for AI native development. We want that platform to be open and usable by many. And so we're building it with that line of sight. It doesn't work yet.
Starting point is 00:50:32 It's still closed. What you can do is you can go to tessl.io or tessl.ai. We'll route you to tessl.io, and you can join the waitlist. And we hope, as soon as we can, to get more and more people in to produce software in this fashion. What I will say is it's a new paradigm and it's big. And I'm a believer that you have to get the product out there and give users the opportunity to tell you that it sucks
Starting point is 00:50:59 so that you can ask them why and you can fix that and you can evolve it. And in the case of a new development paradigm and platform, it takes a while to build even that MVP, but it's still that MVP that we're going to build at the beginning. And so what I would say is if this is interesting, if you want to be part of what is the future of software development, if you want to be a voice in it, join the community,
Starting point is 00:51:19 learn about it, share your opinions and such. If you want to try out what I think will be the first and hopefully one of the key tools in the domain, then join the waitlist and hopefully try out the product. And just the expectation is, we want the early adopters. It's going to have some things where you'd say, wow, this is awesome. And it's going to have some things where you'd say, well, this sucks. And I guess our hope is, instead of just ranting about it or sort of shutting your browser window and not returning again, you tell us this sucks and work with us to fix it. I'm super excited to play around with it myself.
Starting point is 00:51:48 I've been waiting for almost a year. So it's going to be a good time. I have one question. We have a lot of entrepreneurs or wannabe entrepreneurs that are trying to figure out what to do. I'm curious, as you build Tessl and as you've thought about this, do you think this wave, the shift to AI-native software development, do you think this is a wave of creative destruction
Starting point is 00:52:06 that results in many new companies? Or do you think this is a wave that helps, you know, that basically becomes like an enduring advantage to the existing incumbents? I'm kind of curious how you think about that because there's lots of people who are very excited about AI. And, you know, there's an opportunity,
Starting point is 00:52:22 from my perspective, to reinvent a lot of how we think about software development, and that always represents an opportunity for new company creation and new category creation. So I'm kind of curious, for those aspiring folks that are looking to go start or looking to join a new company, how is your mental model there in terms of, is this a thing that's going to really reward incumbents
Starting point is 00:52:39 or is this a thing that is a creative destruction that's going to enable us to reinvent a lot of stuff? Yeah, I think the trust axis is more in favor of the incumbents, because fundamentally it boils down to building IP, it boils down to having data to be able to optimize that IP. And so you can potentially win there. But I think mostly, if you're talking about something that's at the bottom left or going up the trust axis, then I think you're better off thinking about this as an acquisition. All you're doing is you're outrunning the big companies, because they're still bigger, and you're trying to be friendly to them and they might acquire you, and the numbers are big. That's a legit strategy for people founding a company. It's not interesting to me as someone who founded Snyk and looks at something big, but it's legit.
Starting point is 00:53:23 You could say that that's what I did with my first startup. It wasn't as intentional, but it worked out and put me in a good place to start Snyk. So that's one path. The change path, I think, is still favorable to the disruptors. And I think the most common mistake that people make as they found a company is that they don't think long term enough. And this is just a very, very fast moving space. And so you have to have a view that is a bit contrarian, that is a bit hard to believe. If everybody nods when you tell them about the story, something is missing; you're not thinking far out enough. And so I do think that there needs to be some boldness and some long-term path.
Starting point is 00:54:05 But I think that change, the same dynamics that always exist, remain. It is the existing players, the existing incumbents, they control the existing workflows. And it is in their best interest to maintain these existing workflows as long as possible. And everything about those companies is wired to maintain it. Of course, some of them will manage to kind of break out of the innovator's dilemma or this counter-positioning path, but most won't.
Starting point is 00:54:32 And so I think those are the opportunities to go after. They're scarier and they are harder. And as I said, I think starting a company in the AI space right now is actually higher risk than typical. But I think the only two kind of truly viable paths are: one, build a company that outruns the big companies with a plan to be acquired, or two, build something that's a bit wild and that goes further out. Some of this is true in general. When I angel invest today, you know, 100 investments later, I kind of count the leaps of faith.
Starting point is 00:55:05 And clearly, if you talk to a company and you need to have like 10 leaps of faith, whether to invest or not, that's a problem. But if there are zero, that's a problem too. That to me is like a bad sign; it's like something here is obvious. And in the world of AI, it probably means there are like a hundred reasonably funded companies that you just haven't heard of because they're too small. They're like you, right? There hasn't been enough time for any of them to become a player, but rest assured there's funding, there are smart people, and there's a lot of attention,
Starting point is 00:55:33 and so more companies will be around. So you have to have something a bit wild, a bit contrarian, that of course you believe in; you're not just creating a funky tale. If you're committing your time to it, it has to be worth it. And I guess maybe the other advice that I would have is, you know, another common saying that I have for founders is: nobody cares about your product. They care about the problem that you're solving for them. And so in the world of AI, you have to think about problems that will become greater over time, that are real problems. And I think a lot of people find problems today in the AI ecosystem.
Starting point is 00:56:11 Those are in high execution mode. Like, it's obvious to many people that those are problems. They might be solved by the platform, they might be the picks and shovels things. So it's really an execution game over there if you want to win that. But I think the interesting thing is to think about, okay, what would happen when legal reviews are very common, or like are done so much more easily because of AI? Would it make the judicial system collapse? So should I build something for judges
Starting point is 00:56:37 that sort of does this, but what about bias? Or maybe what happens in the pharmacy? I don't know, whatever it is that is in your space, think about it from the lens of: understand the ecosystem, understand the pains, anticipate the best you can
Starting point is 00:56:50 what would be the pains that evolve here, then try to figure it out. Once again, many, many leaps of faith here. You have to have some conviction in it. You have to be convincing about that. But then you can build a sustainable kind of long-term differentiated company. Amazing.
Starting point is 00:57:05 I guess last thing for our audience: where can we all follow the center of the AI developer movement, which is Guy and Tessl? What social channels or places should people sign up for? So for the company, it's tessl.io, T-E-S-S-L dot I-O. You can also go to ainativedev.io. I love the .io domain. I find that it represents development too. So although we bought the .ai, again, it's not about the fact that it's AI, it's about the fact that it's development. So we route it there. Me personally,
Starting point is 00:57:40 I mean, Guy Podjarny. Fortunately, Podjarny is sufficiently uncommon that you can find me on the Twitters and the LinkedIns. And I'm probably most active on LinkedIn. So if you follow me on LinkedIn, that's probably best. We do have a newsletter as well. So if you go to tessl.io, you'll find yourself either able to register for the newsletter or join the conference, or you can sign up for the waitlist. All of those are good ways to get involved. Lots and lots and lots to discover and I think
Starting point is 00:58:12 fascinating conversations. You can be a believer, you can be a non-believer, you can think the path is not the right one, but I've yet to really find anybody that doesn't think the conversation is interesting. Awesome. Thank you so much, Guy. This has been really insightful and Tim and I really enjoyed it.
Starting point is 00:58:29 And I think this is a great one that we've done. Well, I can't wait for the AI Native DevCon, to be honest. Thank you. Thanks for having me on.
