No Priors: Artificial Intelligence | Technology | Startups - Low-Code in the Age of AI and Going Enterprise, with Howie Liu from Airtable

Episode Date: July 25, 2024

This week on No Priors, Sarah Guo and Elad Gil are joined by Howie Liu, the co-founder and CEO of Airtable. Howie discusses their Cobuilder launch and the evolution of Airtable from a simple productivity tool to an enterprise app platform with integrated AI capabilities. They talk about why the conventional wisdom of “app, not platform” can be wrong, why there’s a future for low-code in the age of AI and code generation, and where enterprises need help adopting AI.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Howietl

Show Notes:
(00:00) Introduction
(00:29) The Origin and Evolution of Airtable
(02:31) Challenges and Successes in Building Airtable
(06:09) Airtable's Transition to Enterprise Solutions
(09:44) Insights on Product Management
(16:23) Integrating AI into Airtable
(21:55) The Future of No Code and AI
(30:30) Workshops and Training for AI Adoption
(36:28) The Role of Code Generation in No Code Platforms

Transcript
Starting point is 00:00:00 Hi, listeners, welcome to No Priors. Today we're talking to Howie Liu, the co-founder and CEO of Airtable, which now serves half a million organizations around the world, including folks such as Scale, Benchling, Adobe, Riot Games, Amazon, and Pottery Barn. Recently, Airtable launched a suite of AI features. We're really excited to have Howie on to discuss the state of low-code and no-code AI tools, how he's transformed the business over the last few years, and what's happening generally in enterprise. Welcome, Howie. Thank you. Excited to be here. Most of our users know what Airtable is,
Starting point is 00:00:36 but for anybody who's new, what does Airtable do and where did the idea come from? You know, Airtable's been around for a little over 10 years, and we launched in 2015, spent two and a half years building the product before then. But for the people who knew of Airtable back then, you probably would say Airtable is like a spreadsheet on steroids or a really awesome productivity tool.
Starting point is 00:00:56 And I think those were true statements. But what we've always been underneath the hood is a true app platform that just happened to be really, really easy to use. So we kind of cut across different categories. There's the low-code app platform category that preexisted us, filled with pretty complicated platforms that actually required a fair amount of technical expertise. You had collaboration tools like Trello and then later Asana and so on that were very easy to use, but were more project management centric. And so we came in and did something in between, which is give people the ability to build real apps with a real relational data structure, logic and automations, and then interfaces, but did it in a way that was so much easier to use that it didn't look like your traditional app platforms.
Starting point is 00:01:38 I got the idea basically by working at Salesforce. I had a very small company before then, a startup that was acquired by Salesforce. And working within their company, I saw all the power of the platform model, right? I realized that Salesforce didn't win all these CRM use cases just because they had built all the features for CRM; it was really because they had created a platform that could be customized for every customer's needs. And so, coming out of that, I really wanted to apply that concept, but democratize it and make a much more accessible app platform that could really open up apps to many more people,
Starting point is 00:02:15 citizen developers, and use cases than had been possible before. So very strong conventional wisdom in Silicon Valley will say, like, build a killer app, not a platform. So why doesn't it apply here? Why did Airtable work as a platform from the beginning? And I hate to interrupt, by the way, but I've known Howie from his first company, and when he was starting Airtable, we met, and he walked me through this and all the applications and the ability to have different verticalized applications and to build apps. And I remember thinking, that's crazy.
Starting point is 00:02:47 This is going to be so hard. It's impossible to do as a startup, and of course he pulled it off, which is, like, amazing. So I think to Sarah's point, you really beat conventional wisdom and you did something really outstanding in terms of building out this broader platform. You know, actually, now I'm just going to interrupt you one more time, because it is, and we want the answer, by the way, but it is quite funny. I was talking with a friend in common that I'm sure Elad knows too, Eric from Opendoor. And we were talking about angel investing at some point.
Starting point is 00:03:20 And he was just like, I really didn't think this thing was going to work. Like, it was so poorly scoped, but Howie seemed good. And so, you know, it's a little bit unbelievable. Well, you weren't wrong, any of you; it did turn out to be hard in many ways. I think a few things that we had going for us. One is we did have an existing product and paradigm to sort of compare ourselves to, which is spreadsheets, right?
Starting point is 00:03:50 So everybody has used spreadsheets at this point. I mean, it's like the most prolific app building platform out there. And I think the fact that now we're how many decades in, like four decades in to, you know, the usage of spreadsheets, I mean, they were one of the first core applications for all computers. And maybe at that time it was like kind of a mind-blowing paradigm like, oh, wow, you have like all these cells, you can do whatever you want, you can model any data. But, you know, four decades in, I think the spreadsheet products have paid the cost of, you know, going through that steady march of kind of, you know, creating a new category, right?
Starting point is 00:04:24 and getting people to be familiar with it, such that by the time we came in, we made a really intentional set of design choices to make Airtable as approachable as possible and sometimes literally feel like a spreadsheet. So that was super important, because a lot of the no-code app builders before us, or even alongside us, had a different layout, right?
Starting point is 00:04:46 They would make you go and define the database schema in one interface that looked more like a data modeling tool, or maybe a defined form-based layout, that kind of thing. And for us, it was really, really important that Airtable felt as easy to use as a spreadsheet. So the initial version of the product was very grid-centric, right? And you could even use all the keyboard shortcuts of spreadsheets, like copy-paste. You could even copy-paste data directly in from a Google Sheet or Excel into Airtable,
Starting point is 00:05:13 and it just kind of worked. So I think what we got there was the ease of getting people to shift away from this existing product that everyone uses. And a lot of use cases of spreadsheets really should be databases or apps, right? Anytime you're dealing with something that's not number crunching and instead is some kind of tabular data, a workflow, a database-like thing of customers, or it could be inventory, which turns out to be most use cases of spreadsheets, we wanted to make it really, really easy to port it over. So that was definitely part
Starting point is 00:05:43 of it. And then second of all, I think we later did find that it was really important to build use cases into Airtable. So templates were kind of our initial version of that. And so, in a way, we did end up having to build apps. We just got to build many different apps, and each of them represented only a small part of the long tail of use cases we could go after. The platform has evolved from the original, very simple experience to something still simple, but much more powerful. The company as well has just become much more enterprise facing in recent years.
Starting point is 00:06:17 Like, how did that evolution happen? Like, what did you have to change most to support that? And when did you decide it was time to, like, go do that if there was a decision point? So that was always part of the master plan. And we actually wrote this like vision deck and kind of like business plan or as close to it as we got back in 2012 when we started working on this. That laid this out. And we said, look, like generally, it's probably harder to start with a really complicated product. Like you're not looking at SAP and saying, okay, over time, they're going to make it simpler.
Starting point is 00:06:45 Whereas it is very common, or at least more intuitive, to start with a very, very simple product and then make it more powerful, customizable, complex over time, right? And actually, I think I got this terminology originally from Mike Krieger, but we like this idea of: let's start with a really low floor, get the floor as low as possible. So we really are coming in and undercutting all the existing low-code app platforms entirely. We're undercutting Salesforce, ServiceNow. We're undercutting these old school products, like QuickBase and so on.
Starting point is 00:07:17 And it's just going to be so much easier to use. But then over time, we can improve the ceiling, right? And initially, we're going to get some lightweight, medium-weight use cases. But over time, we want to improve the data scale. So literally just making it possible to store hundreds of thousands of rows or objects in Airtable, and soon millions, rather than just the thousands or tens of thousands. And then also adding new layers of extensibility. So literally code extensibility into the platform.
Starting point is 00:07:44 So you can write your own integrations code or scripting logic that runs within our platform's serverless environment with access to your data or automations, that kind of thing. So a lot of it has been in the works for a while. And I would say in the past few years, we've really been able to lean much more into enterprise, partly because we've already been making these platform investments, right? So I think had we tried to go really hard into enterprise in the first couple of years, it would have been difficult because some of the bigger, larger scale use cases would have just broken the product. In fact, we did see customers kind of pushing it to the limit. So it took us that
Starting point is 00:08:22 time to actually make the platform scalable and robust enough to go after the most ambitious use cases. And then I think the second thing that happened for us is we started to get enough organic adoption. The whole product-led growth engine that we had been compounding for so many years in the early days actually resulted in enough usage and enough high-value use cases emerging organically within the enterprise that we could actually start to lean into those. And rather than having to invent a completely new use case or vertical solution on the product like some companies do, we always kind of got the cheat sheet within our own customer base. And so now being able to say, like, we understand what global content production
Starting point is 00:09:03 looks like at media companies, and the fact that we can actually architect an implementation of Airtable to solve that end-to-end process, allows us to also double down on marketing and selling to that use case, as well as making sure that our platform continues to support it. So I think it sort of was like a petri dish that had a lot of blooms of growth emerging. And then we just got to look at the areas, the hot spots, and say, okay, now it's time to really double down on these areas and sell more repeatedly to them. Howie, I asked a few people what question we should talk about that would gain from your wisdom or from the Airtable journey. And Fenton had said, like, you know,
Starting point is 00:09:47 his views of product management seem to have changed a great deal over the past few years. I guess, how so? And what has that meant operationally? I think product management is, first of all, just a really hard discipline to do right or to do well. Because a lot of companies have some flavor of it that ends up only solving for one of the multiple hats that ultimately you need to solve for. I found it really compelling that Brian Chesky talked about this on a podcast somewhere recently, where this was, I think, misinterpreted, but there was a big buzz around the statement that they had made around doing away with product managers, right? But what they really meant was they were kind of splitting the role into two constituent pieces and actually making those into explicit roles that were complementary, which are product marketing and then program management, right? And I think those two reflect two really important hats of a PM.
Starting point is 00:10:49 So, for instance, on the product marketing side, it's really about understanding what is the market for this thing, right? I think there are a lot of PM functions that are more inward looking and just focus on what are we building, what's going to be hard about it, how do we keep the technical capabilities on track or make sure it fulfills the technical requirements, how do we stay on pace from an engineering sprint standpoint or on timeline, et cetera. But that's more the program management side of things. Super important, because you have to know what the requirements are and keep on pace against them. But the product marketing side is really interesting because I think it's often neglected in PM. And it's basically about starting from the customer, starting from the market, saying, who is our competition and what is the problem we're trying to solve? I mean, the JTBD or jobs-to-be-done framework was ultimately meant to really put the emphasis on that.
Starting point is 00:11:42 But like all frameworks, I think it's great in theory, hard to actually implement well in practice, right? The framework alone doesn't solve the problem. And then I think for us, maybe even more so than a consumer company like Airbnb, there might be a third hat to be worn, which is really around thinking creatively about more complex UX, when you think about the amount of just pure informational density in our product, right? Or the fundamental complexity of some of the concepts that we're trying to model, right? There's just a certain unavoidable amount of degrees of freedom and complexity and nuance, whether it's how do you build these AI primitives or even our existing features like automations and so on. And so that to me makes a third bucket. And I think for that third bucket, actually, the Google and Meta, or Facebook as it was known at the time,
Starting point is 00:12:39 kind of PM disciplines did a pretty good job of cultivating for that. I think in the early days of Google PM, that's when you had so much emphasis on finding people who could think about those kinds of hard design problems, right? The informational architecture and how to handle the UX of a more complicated interaction. So you might put that as a third thing, which almost crosses over into some amount of UX design, et cetera.
Starting point is 00:13:08 But to me, those are at least three of the really, really important buckets. And I think what we've done as a company is start to recognize, even if not to explicitly split out these roles, first of all, the importance of all three, and making sure that someone is covering each of those three. And it doesn't have to be the same person. Sometimes in a group, maybe the design lead actually fulfills more of that third bucket,
Starting point is 00:13:28 right? And maybe the eng lead fulfills more of the program management bucket. And maybe the PM fills more of the product marketing bucket. But just making sure that we are thinking with all of those hats on for most things that we build, especially those that demand more of one or the other.
Starting point is 00:13:50 Right. And there are some functions or features that are a little more dry and more straightforward to build, so maybe we don't care as much about the product marketing side of really understanding the customer requirements and market dynamics. But I would say it's more about really recognizing PM is not a single art. Just saying, hey, look, we're going to hire PMs who were great at other companies, will not necessarily mean success here. We really have to start cultivating our own definition of what makes product so hard here.
Starting point is 00:14:23 By the way, I was just at an onsite at Nvidia where Jensen came out, and he's always so inspiring in terms of some of his philosophies, although some of them are completely inapplicable, I think, to other companies. To mortal companies. I don't think I could have like 50 reports and take no one-on-ones with any of them. But one of the really inspiring things was this idea that they love to solve hard problems, right? And in fact, if a problem is not hard enough, they almost don't want to go into it, because it's going to be a commoditized space, right? And it's going to be about other factors that drive success in that market, maybe go-to-market excellence or so on.
Starting point is 00:15:03 And, well, I wouldn't say we're quite as hardcore as Nvidia in that regard. We're not solving problems nearly as hard as the scalable compute problems that they deal with. But I do think we're recognizing more and more that part of our success and our culture is rooted in solving problems that are uniquely hard from a UX standpoint and from understanding the market, the true job-to-be-done, in a novel way, where we're not just trying to build a slightly better version of some other person's product, right? Or the literal solve for one thing, because then we would look more like a vertical SaaS company than a platform. And so we're now trying to apply a lot of that same philosophy and those principles of product management to how we go and attack AI. And that ultimately will either be our differentiator, or we won't do it well enough and we won't be able to win in a big way. But it is the unique take that
Starting point is 00:15:58 we have on seeing the capabilities of these models and ultimately thinking about how to productize them into our platform. You folks were pretty early to this AI wave in terms of early iterations and thinking about it and thinking about how to integrate it. And I think you're always very thoughtful on the product side in terms of like, how do you actually take something and convert it into something that has, you know, real user value? Can you tell us a little bit more about that journey?
Starting point is 00:16:22 Like, how did you first become aware of some of the things happening in generative AI? What made you decide it was different from prior waves of ML, and then how did you think about progressing with it? So I actually was really interested in neural networks in college. It was kind of ahead of the current wave of exciting breakthroughs, right? This was back in, like, '05 through '09. So still kind of in the wintry phase, I would say. And ImageNet had definitely not come out yet.
Starting point is 00:16:49 This was not the time when we were seeing, year after year, amazing new capabilities from these models. But at that time, I still found it really fascinating, more intellectually. It was just this academic concept that, wow, instead of having to go and laboriously write all the code to tell the computer what to do, whether it's interface code or business logic or whatever, you could just basically have this approach where you tell the computer: here's all the data, here's all the patterns I want you to look at, whether it's basic Netflix-recommendations-engine-type things or, in the future, images. Look at all this data, and I just want you to figure out the patterns.
Starting point is 00:17:35 And I'll tell you what I want the output to be, and you figure out what the rules should be, right? And I think I just found it fascinating because, in many ways, I think the best or the most curious software engineers are actually fundamentally lazy at heart, right? Because you're trying to find ways to build the meta-solve for the thing that you're trying to do. And in a way, Airtable is kind of a meta-solve for applications, right? It's the database, the interface, and the logic layer that allow you to build any application, rather than us trying to go and actually build a hundred different vertical SaaS products. And I always found AI to be a really interesting meta-solve for a lot of software problems at large, right? And so I've been kind of studying it from arm's length really since college. And I was really excited to see some of the image breakthroughs in terms of convolutional networks
Starting point is 00:18:24 being able to actually start to classify images, which historically was a pretty hard problem, as well as humans could, but now very, very cheaply and scalably. And in fact, I interned at a company called CrowdFlower, which you both know well, whose founder, Lucas, is now the founder of Weights & Biases. And I think it was like two or three people at the time that I first showed up in this alley in the Mission. And they were already starting to do some really interesting work with labeling data that eventually would go towards labeling data for AI applications. But I found it just kind of interesting to see there, okay, we are reaching
Starting point is 00:19:04 the tipping point. Initially, a lot of the workloads were about labeling, or actually manually just doing image classification or image moderation for, say, social media companies. And that was the steady state solution. It wasn't even about training the AI model. It was about, hey, let's just do this at scale, cheaply and scalably, through this giant farm of basically click work, right? And I think in the future, or gradually, it started to shift to, oh, wait, let's,
Starting point is 00:19:35 let's actually use this content to train models that have actually now reached human level. And so that was one little sliver of seeing some breakthroughs firsthand. And then much, much later, as we started to see the text-based application of transformer models, that became really interesting. Playing around with some of the models like GPT-4 really early on, but also even just ChatGPT as soon as it came out, I was able to see the reasoning capabilities there. I think a lot of people were enamored with the fun use cases, like compose me a sonnet, these fun, fanciful things. I was more enticed by the fact that it felt like you could actually do some really interesting and meaningful reasoning work, which is, in my view, a massive, massive unlock that still has not been fully exploited even today. I think we could pause model development today and still get a million times more economic value and impact from today's generation of models than we fully realize.
Starting point is 00:20:40 How do you think about that user impact? Because I feel like there are a lot of false starts when people first start using this. Did you all go through a similar thing? Were there things that you thought would make for compelling products, and then it turned out that was the wrong direction? How did you think about what was actually important to do? Yeah, so we are in a somewhat interesting position because not only do we need to think about how to use AI in our product to enhance the product experience, the same way that, let's say, Figma uses AI to make it really easy to design, right? It can kind of co-design with you.
Starting point is 00:21:17 It can generate decks now and so on. So we have to think about that way of integrating AI into our own product experience and changing the user experience around it. But separately, because we are a very meta app platform that enables our customers to build their own apps, a big, big part, and probably the biggest thing that we're excited about, is our opportunity to make it possible for our customers to build AI apps, right? So taking the meta approach first, I think what's really interesting is that when we first launched our AI capability in beta, it was really about that runtime
Starting point is 00:21:52 capability, right? So we put out effectively a wrapper around the OpenAI models, and later Anthropic and so on. But we wrapped around them and then made it really easy to use these Lego pieces to build an AI call into your data and into your workflows, right? So Airtable is all about having your first-party data in a very usable form, and having humans interact with it, collaborate on it, and perform workflows around it. This made it really, really easy for you to add a workflow step, either as an AI field that took inputs, let's say taking the inputs for a product feature and then generating
Starting point is 00:22:31 the first draft of the PRD. Well, these are things that technically you could have done in ChatGPT separately, but because it's embedded into your data and workflows, it's a lot more recurring and automated. And you can have prompts that are predefined, right?
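To make that concrete, here is a minimal sketch of the "AI field" idea, assuming the OpenAI Python client as the model wrapper; the template, field names, and ai_field helper are illustrative stand-ins, not Airtable's actual API:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt template; the other fields of a record (row) are
# rendered into it, which is the essence of a predefined AI field.
PRD_TEMPLATE = (
    "You are a product manager. Draft a first-pass PRD.\n"
    "Feature: {feature}\nCustomer insights: {insights}\nTarget release: {release}"
)

def ai_field(record: dict, template: str, model: str = "gpt-4o") -> str:
    """Compute the AI field's value for one record."""
    prompt = template.format(**record)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

row = {
    "feature": "Bulk CSV import",
    "insights": "Enterprise admins want to migrate legacy data without IT help.",
    "release": "Q3",
}
print(ai_field(row, PRD_TEMPLATE))  # the generated first-draft PRD
```

Because the prompt is predefined and the inputs come from the record, the same step reruns automatically for every new row, which is what makes it recurring rather than a one-off chat.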
Starting point is 00:22:57 And I think what we quickly learned was that there's a lot of fear and also just intimidation right now, especially amongst enterprise customers, but even amongst the broader B2B landscape of customers. There are just a lot of unknowns around how they can actually use AI. And there's a lot of immaturity of market understanding in terms of how these models work. I mean, not just on a mathematical level, but even in a basic sense: beyond doing
Starting point is 00:23:17 some experimental fun chat prompts in ChatGPT, most people don't really understand what these models are capable of, right? Whether it's translation use cases or categorization or even more advanced things like reasoning and synthesis: take this earnings call from the customer and actually extract specifically applicable insights for our sales team about how we can sell to Nike better, for instance. So I think what we've learned is that this is going to be a really difficult product to just release out there in a horizontal way and hope that everyone figures it out, right? Even though there are so many different applications, and you can apply it to almost any use case, any industry.
Starting point is 00:23:58 I think that the gap right now is in imagination and know-how. So we basically worked on two things. One is we spent a lot of time with specific customers. And we had 1,000 customers in our kind of beta before we launched publicly. And since then, we've gotten many, many more. And we've worked very, very hands-on with a lot of these customers to really not just show them
Starting point is 00:24:22 how to use the feature, like how to point and click and implement the feature, but really help them understand what parts of their workflow they can automate, and even challenge them to be a little more ambitious about where they're aiming, right? So for instance, a top-five law firm is an AI customer of ours. And they're actually coming in with ideas around how to automate a bunch of parts of their contracting workflows, et cetera. But then we want to work with them to challenge even their idea of what they can do, right? So that requires a level of immersion and kind of design partnership with these customers.
Starting point is 00:24:55 And then the second part is, much like the whole premise of Airtable being a very horizontal platform, we realized that as easy as it was to just get started with the product, it was also important for us to build templates and guide people towards use cases. We're starting to do more of that with our AI, right? So we built a bunch of prompt templates that are common to our most popular Airtable use cases, whether it's in marketing or product management, et cetera. And I think that doing more and more to automatically infer and suggest, and make it not just easy to implement the feature, but to break through the imagination barrier of
Starting point is 00:25:35 how can you use this and what can you use it for, is going to be really important. I'm really excited to actually use AI to do a lot of that, right? You can infer from the contents of an app and the data what the use case is, and even then have an LLM suggest what good use cases for an LLM in this workflow might be, right? What are questions I could ask against this data set? Yeah, that's super interesting. I mean, you mentioned earlier that you feel that if we just froze things in time today, there are all these things that could be built and value unlocked through AI and current
Starting point is 00:26:06 LLMs. What do you think are the biggest things that are missing from a technology or functional perspective as you think about how to translate that into product? Like, what couldn't you do right now? Or what is missing that would allow you to do a lot more? I think we're all very centered on this idea of the chat interface as the main UX design pattern for LLMs, right? And it's no surprise. I mean, ChatGPT was kind of the thing that broke through and made this mainstream
Starting point is 00:26:32 and even kind of escalated the world's and every enterprise's attention and urgency around what these LLMs could do. But while chat is really powerful and very open-ended, and you see a lot of companies building RAG use cases against their own internal data: it could be HR data, so that now any employee can have an AI HRBP to ask comp or benefits questions to; you see companies that do that with internal data around product development or whatever, just to make that information more discoverable.
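For reference, a minimal sketch of that kind of RAG loop, assuming the OpenAI Python client; the documents, model names, and prompt are illustrative assumptions rather than any particular product's setup:

```python
import math

from openai import OpenAI

client = OpenAI()

# Stand-in internal documents; a real deployment would index far more.
DOCS = [
    "PTO policy: employees accrue 1.5 vacation days per month.",
    "401(k): the company matches 50% of contributions up to 6% of salary.",
    "Health benefits enrollment opens each November.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def answer(question: str) -> str:
    doc_vecs = embed(DOCS)  # in practice, precomputed and stored in a vector index
    [q_vec] = embed([question])
    # Retrieve the most similar document, then ground the answer in it.
    best_doc, _ = max(zip(DOCS, doc_vecs), key=lambda dv: cosine(dv[1], q_vec))
    prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

print(answer("How much does the company match on the 401(k)?"))
```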
Starting point is 00:27:07 So I think those are great use cases. But ultimately, I see them as just one small sphere in the broader landscape of potential applications for AI. And I think a lot of the more interesting use cases are those that involve some kind of structured recurring process, and being very deliberate about saying which parts of that process you can automate. Now, I think this is happening in certain very narrow solution categories. So obviously support has been a really great one. You have companies like Decagon going and trying to tackle that end-to-end support automation problem with AI, taking an agentic model to do so. But I think that you'll have some use cases
Starting point is 00:27:48 addressed through solution companies that can each individually be very big. And yet, I still think even if you add all of them up, they're still tapping into such a small fraction of all the actual departmental workflows and processes that are happening within the enterprise. So to attack more of those, I think you need an approach that looks more like an Airtable, or, on the very heavyweight side of things, a Palantir, which is going in and doing these AI boot camps, AI workshops, and in a very Palantir-y kind of way going very bespoke and very hands-on in terms of actually building an AI process, or AI-automated process, for a customer. But I think
Starting point is 00:28:30 there are going to be some very, very heavyweight use cases you can do that way, or that you can do with your own in-house teams. But then there's still a massive long tail that, in aggregate, I think is trillions of dollars of economic value or labor-value equivalent, that is just waiting to be solved for with a platform that has data, that has workflows, that has human-in-the-loop capabilities, and yet also allows for really flexible modeling of: where do you put in the AI, what are the inputs, what's the prompt, how do you define the outputs in a way that is resilient to error, so that you can have humans take the output, edit, approve, and then chain that with other LLM calls or human steps or automation steps.
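A minimal sketch of that human-in-the-loop chaining, assuming the OpenAI Python client; the human_review function is a stand-in for whatever approval UI a real platform would provide:

```python
from openai import OpenAI

client = OpenAI()

def llm_step(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def human_review(draft: str) -> str:
    # Stand-in for an approval UI: the reviewer accepts the draft or replaces it.
    print("--- draft for review ---\n" + draft)
    edited = input("Press Enter to approve, or type a replacement: ")
    return edited or draft

def process_call(transcript: str) -> str:
    draft = llm_step(f"Summarize the key customer asks in this call:\n{transcript}")
    approved = human_review(draft)  # the human gate: edit, approve
    # Only approved output flows to the next chained step (LLM, human, or automation).
    return llm_step(f"Turn these approved asks into action items:\n{approved}")
```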
Starting point is 00:29:11 When you were doing this year-long beta with 1,000 customers, trying to get them to, obviously, work on real workflows, structured recurring processes, as you said, did you figure anything out about how to teach other people? Like, you've been looking at this for a long time. Any frameworks for how to give people intuition for what today's models can do, either in Airtable, because you guys have to have the expertise, or in your customers? The short answer is we've been trying to do a number of things to codify that and scale it beyond one-on-one bespoke interactions, right? Because we can't be like Palantir and go really, really forward-deployed for every customer. So one is we actually now run this AI workshop program. It's a lot lighter touch than the Palantir boot camp. But, for instance, we just had one in LA.
Starting point is 00:30:05 We had probably 60 people from all kinds of companies, a lot of media companies, some big retail companies, et cetera. And it's a full-day kind of master class in, first, really just teaching people: what are these transformer models? Why have they gotten so much better recently? I mean, looking at literally a slide of the parameter count of these models from five years ago to now, from GPT-1 to GPT-4. And obviously parameter count is not the end-all, and now smaller models are actually doing really well. But I think it just illustrates to people: what is this thing that now everybody's talking about, and why now? Is it just a fad, or is there a real foundational technology
Starting point is 00:30:46 improvement, sort of like with the 8086 processor, that has made this the time to actually pay attention and care, with no turning back now, right? All the way through to some basic prompt engineering techniques, and then also showing them some use cases that are approximated from real customers. And then, what are the different kinds of AI design patterns in workflow automation, right? So if you think about computer science or software engineering, you have different good design patterns for implementing certain types of business logic.
Starting point is 00:31:18 You have the observer pattern, for instance; these are different ways of shaping the code to solve a certain archetype or blueprint of a business problem. We're kind of doing the same with these AI workflows, right? So here's a chain workflow where, for instance, for translation, we've actually found that creating a pipeline works better: the first AI step generates a first effort at translating whatever it is, then a second step actually critiques it and finds potential errors, and then a third step where either a human reviews it or the AI takes its own edits from the second step and applies them to the first, right? We're starting to see these patterns emerge that we've found actually result in much better quality and get to the desired business outcome: here's the pattern, here's how you solve for that.
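A minimal sketch of that three-step chain pattern, assuming the OpenAI Python client; the model choice and prompts are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def translate_with_critique(text: str, target: str = "French") -> str:
    # Step 1: first effort at translating.
    draft = llm(f"Translate into {target}:\n{text}")
    # Step 2: the model critiques the draft and lists potential errors.
    critique = llm(
        f"Source:\n{text}\n\nDraft {target} translation:\n{draft}\n\n"
        "List any mistranslations, awkward phrasings, or omissions."
    )
    # Step 3: apply the edits (a human review could slot in here instead).
    return llm(
        "Revise the draft translation below by applying the critique.\n\n"
        f"Draft:\n{draft}\n\nCritique:\n{critique}\n\n"
        "Return only the revised translation."
    )

print(translate_with_critique("Our quarterly sales exceeded the forecast by 12%."))
```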
Starting point is 00:32:03 So part of this is that we're doing these training programs. Right now they're in person; we're digitizing a lot of this content. We want to have this Airtable Academy that's not just about how to use Airtable AI, but where, if you go through it, you actually learn a lot about how to use these LLMs in general, right? And theoretically, you can apply that to other custom AI application building with code, not just through Airtable. And then the second is we're trying to productize more and more of it, whether it's
Starting point is 00:32:32 in the form of creating more prompt templates that reflect the different use cases we've seen, and we try to prompt engineer these templates into the best prompt for that need. But we're also thinking of and building more advanced primitives. So the initial primitive was transparently a wrapper around an LLM call. And the real value-add for us is that we're not just a wrapper
Starting point is 00:33:01 like one of the one-off content generation apps; it's a wrapper that then allows you to embed the model into the context of data and workflows and automations. So there is unique value there. But what we're also trying to do is make that primitive more and more robust. So for instance, something we're working on is adding a more native capability to do many-shot prompting in Airtable, which kind of makes sense, right? We have all the data.
Starting point is 00:33:25 So you could say, here are five examples of a product requirements document, a PRD that I wrote or my teams wrote, that are really great PRDs, given this feature and these customer insights and so on. Now, based on these five examples, help me generate a draft of the next one based on the customer inputs, or the ideas or notes, or whatever the inputs may be, right? And what we can then also do is have it learn over time, right? So we have the opportunity to build a great RLHF-style feedback loop where, instead of having to get applied through a fine-tuning run, you can actually just start stuffing more many-shot examples into the prompt. And so those are some of the things that we're excited about building to
Starting point is 00:34:07 make it easier and easier, and more native in the product itself, to improve the AI capability and also make it easier to use. We're also thinking about, like I said before, how we best show you ways to implement AI in Airtable. So not only will we in the future recommend, hey, have you thought about adding this AI field? Because based on the content type in this table, say it's contracts, and we can infer that, maybe it would make sense to extract these terms. If they're VC term sheets: what was the post-money valuation, right? What's the amount raised? Extract that out automatically. And if you take that to the limit, you can imagine new onboarding flows for Airtable where you actually
Starting point is 00:34:50 start with a content source, right? Maybe it's a document folder from Google Drive or Box. Maybe it's a bunch of call transcripts from Gong. You integrate with Gong, and then all of a sudden you get this beautiful tabular display, with the ability to create workflow and interface around it in Airtable, that now can have these AI extractions and such. So you can start rating every Gong call from your sales team, or extracting every competitor that's been talked about, whether it's for marketing or product or sales-oversight use cases. We're doing a lot of this stuff through manual builds right now, both for ourselves as customer number zero and also working with our customers to do so. But as we start to see those patterns emerge, we want to make more and more of it easily buildable through the very same GUI, no-code kind of UX that we've always been good at.
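A minimal sketch of that many-shot feedback loop, assuming the OpenAI Python client; the example records and helper names are illustrative, not Airtable's actual primitive:

```python
from openai import OpenAI

client = OpenAI()

# Approved past examples: inputs plus the PRD a human signed off on.
EXAMPLES = [
    {
        "inputs": "Feature: SSO. Insight: IT teams demand SAML.",
        "prd": "Overview: add SAML 2.0 single sign-on. Requirements: ...",
    },
    {
        "inputs": "Feature: audit log. Insight: compliance needs activity trails.",
        "prd": "Overview: immutable audit log of workspace events. Requirements: ...",
    },
]

def draft_prd(new_inputs: str) -> str:
    shots = "\n\n".join(
        f"Inputs:\n{ex['inputs']}\nPRD:\n{ex['prd']}" for ex in EXAMPLES
    )
    prompt = (
        "Here are examples of great PRDs written from feature inputs.\n\n"
        f"{shots}\n\nInputs:\n{new_inputs}\nPRD:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def record_approval(inputs: str, approved_prd: str) -> None:
    # The feedback loop: each human-approved output becomes another shot,
    # improving future drafts without any fine-tuning run.
    EXAMPLES.append({"inputs": inputs, "prd": approved_prd})
```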
Starting point is 00:35:25 One thing that you said at the beginning is that the no-code enterprise app platform category is the thing that came before Airtable, and to some degree you are that. How does it change your thinking about Airtable, now that code is becoming easier to generate?
Starting point is 00:36:07 Well, you know, will codegen basically replace the need for vertical software, and will it replace the need for no-code, because now even code is so easy to generate? I have a very specific point of view on this, which is: sure, you can generate small snippets of code very easily, and maybe that's getting better and better with the more advanced models. I think code is obviously one of the core capabilities of all these LLMs, and it has some nice properties of being simulatable, so you can actually do a good job with synthetic data and training
Starting point is 00:36:43 on it, and there's just so much of the corpus of code out there that there are some really interesting things you can do with training to make the models better. And there are some really interesting innovations happening out there, right, with not just the big companies but the startups, the Magics and the Poolsides of the world. That being said, and this may come down to as much of a religious debate as how close we are to AGI, I think it's going to be fundamentally hard, like really, really hard, to generate really sophisticated end-to-end process automation type apps. So think about a bespoke solution for content production. I mean, this is basically like an ERP for digital content production at a company like a Netflix or an NBCU, et cetera. It's very complicated, right?
Starting point is 00:37:20 There are so many different steps. There are so many nuances to the business logic, to the data modeling, et cetera. And I think that codegen makes a lot of sense as an augmentation to human developers, because the output is easily inspectable and verifiable by the developer, right? You can just look at it, you can reprompt it, you can edit it and take it over. You understand what the code is, right? In the same way that,
Starting point is 00:37:42 if you're generating text output or knowledge output, you can look at it, and since we all know English, you can reason about it and say, oh, I don't actually like the idea that you came up with for me for this marketing campaign, right? If that's what you're generating with text. I think for app development, if you want a non-developer
Starting point is 00:38:03 to have that same interactiveness with the code output, it's got to be outputted in a format that they understand, right? And by definition, that is no-code. When an app is generated in Airtable, and we're actually about to launch this in a pretty exciting way, the idea is that you can immediately, as a non-technical person, understand what's going on.
Starting point is 00:38:27 You can see the data schema; it's not like an ALTER TABLE or migration script in SQL that a non-technical person would have a really hard time understanding. And even the business logic is literally built in a very human-readable way, right? We have an automation UI that feels like if-this-then-that, where you can see a very visual flow diagram of, hey, here's the conditional logic, et cetera, for this automation. And then even the interface layout is very easily inspectable, right? And if you want to overwrite it or modify stuff around, instead of having to sometimes frustratingly reprompt the LLM to
Starting point is 00:39:03 take what it generated and refine it, you just want to go and directly manipulate the thing, right? So I'm just very strongly of the belief that, short of AGI, we're going to have a really hard time having fully automated codegen agents that replace the need for no-code, because you actually want to generate the outputs in no-code. For a long time, it's going to be about augmenting the human creative director of the app, the architect, if you will, of the app, or at least the business requirements definer of the app.
Starting point is 00:39:57 And without a professional developer in the loop who understands the outputs and can guide and refine them, you're going to need to generate the outputs in no-code. Yeah, that makes a lot of sense to me. You know, even if code generation gets much better, it doesn't really help non-technical folks if the generated code is opaque to them, right?
Starting point is 00:40:27 Because software is just such an iterative process. You even have this issue where a requirement is communicated and the developer is like, I thought you meant X, right? And so if professional developers can't zero-shot applications, it seems unlikely that non-developers will be able to, right? Like, how would that magically happen? Yeah. And I mean, you can imagine a Codium-style, or not Codium, a Cognition-style workflow where it's like, okay, it's going to generate the spec for the thing, and then it's going to try to
Starting point is 00:41:07 build tests for it, and then it's going to try to generate the code that passes the tests. So I think that works, again, for simpler use cases. But when you think about really complex business applications, there are just so many nuances that unless a human is inspecting the requirements along the way, giving feedback like, wait, no, you've got the logic here wrong, or you got the interface here wrong, in a very iterative fashion, and has the ability to directly manipulate and edit it in a very precise way, I think it's going to be very challenging to generate apps of any meaningful complexity. Howie, this is a great conversation. Thanks for doing it. Thank you so much. This was fun. Thanks. Great to see you.
Starting point is 00:41:39 Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces, follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
