a16z Podcast - Big Ideas 2026: The Agentic Interface

Episode Date: December 22, 2025

AI is moving from chat to action. In this episode of Big Ideas 2026, we unpack three shifts shaping what comes next for AI products. The change is not just smarter models, but software itself taking on a new form. You will hear from Marc Andrusko on the move from prompting to execution, Stephanie Zhang on building machine-legible systems, and Sarah Wang on agent layers that turn intent into outcomes. Together, these ideas tell a single story. Interfaces shift from chat to action, design shifts from human-first to agent-readable, and work shifts to agentic execution. AI stops being something you ask, and becomes something that does.

Resources:
Follow Marc Andrusko on X: https://x.com/mandrusko1
Follow Stephanie Zhang on X: https://x.com/steph_zhang
Follow Sarah Wang on X: https://x.com/sarahdingwang

Read all of our 2026 Big Ideas:
Part 1: https://a16z.com/newsletter/big-ideas-2026-part-1
Part 2: https://a16z.com/newsletter/big-ideas-2026-part-2/
Part 3: https://a16z.com/newsletter/big-ideas-2026-part-3/

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed.
For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 I chatted with the head of IT recently, who told me for the first time in his two-decade-long career, he believed that IT support was fundamentally going to change. If all of us want this software to be doing work for us, ideally it's doing work with at least, if not more, competency than a human could. We're no longer designing for humans, but for agents. The new optimization isn't visual hierarchy, but machine legibility. And that will change the way we create and the tools that we use to do it.
Starting point is 00:00:32 Every year, we step back and ask a simple question. What will builders focus on next? Our 2026 big ideas bring together the themes our investing teams believe will shape the coming year in tech. This episode is built around three big ideas that, together, explain where AI products are actually heading next. The shift is not just that models are getting smarter. It's that software is changing shape.
Starting point is 00:00:54 AI is moving from a tool you consult to a system that can understand intent and take action. You're going to hear three different perspectives on that transition. What it means for the interface, what it means for how we design software and information, and what it means for how work gets executed inside organizations. The first big idea is that the prompt box is not the final interface for AI. Mark Andrusco argues that the winning products will feel less like chat and more like proactive teammates. They'll notice what you're doing, anticipate what you need, and propose actions you can approve. Here's Mark.
Starting point is 00:01:28 I'm Mark Andrewsco, a partner on our AI apps investing team. My big idea for 2026 is the death of the prompt box as the primary user interface for AI applications. The next wave of apps will require way less prompting. They'll observe what you're doing and intervene proactively with actions for you to review. The opportunity we're attacking used to be the $300 to $400 to $400 billion of software spend annually in the world.
Starting point is 00:01:55 Now what we're excited about is the $13 trillion of labor spend that exists in the U.S. alone. It's made the market opportunity or the TAM for software about 30 times bigger. If you start from there and then you think about, okay, if all of us want this software to be doing work for us, ideally it's doing work with at least if not more competency than a human could, right? And so I like to think about like, well, what are the best employees do? What are the best human employees do? And I've recently been talking about this graphic that was floating around on Twitter. It's a pyramid of like the five types of employees and the ones with the most agency and why they're the best. So if you start at the bottom
Starting point is 00:02:30 wrong of the pyramid, it's like people who identify a problem and then come to you and ask for help and ask what to do. And that's like the lowest agency employee. But if you go to the S tier, like the most high agency employee you could possibly have, they identify a problem, they do research necessary to diagnose where the problem came from. They look into a number of possible solutions. They implement one of those solutions and then they keep you in the loop or they come to you at the very last minute and say like, do you approve this solution I found? And that's what I think the future of AI apps will be. And I think that's what everyone wants.
Starting point is 00:03:00 That's what we're all working toward. So I feel pretty confident that we're almost there. I think LLMs have continued to get better and faster and cheaper. And I think there's a world in which the user behavior will still necessitate a human in the loop at the very end to sort of approve things, certainly in high-stakes contexts. But I think the models are more than capable of getting to a point where it's suggesting something really smart on your behalf and you basically just have to click accept. As you guys know, I'm pretty obsessed with the notion of an AI needs. native CRM. And I think this is like a perfect example of what these proactive applications could look like. So in today's universe, a salesperson might go open their CRM, explore all
Starting point is 00:03:37 the open opportunities they have, look at their calendar for that day and try to think about, okay, what are the actions I can take right now to have the greatest impact on my funnel and my ability to close deals? With the CRM of tomorrow, your AI agent or your AI CRM should be doing all these things on your behalf in perpetuity, identifying not only like the most obvious opportunities that are in your pipeline, but going through your emails from the last two years and harvesting, you know, this was once a warm lead and you kind of let it die. Like maybe we should send them this email to drum them back up into your process, right? So I think there are so many ways in which drafting an email, harvesting your calendar, going through your old
Starting point is 00:04:13 call notes, like the opportunities are just endless. The ordinary user will still want that last mile approval almost 100% of the time. They will want the human part of the human in the loop to be the final decision maker, and that's great. I think that's like the natural way in which this will evolve. I can imagine a world in which the power user is basically taking a lot of extra effort to train whichever AI app it's using to have as much context about their behavior and how they perform their work as humanly possible. These will utilize larger context windows. These will utilize memory that's been baked into a lot of these LLMs and make it such that the power user can really trust the application to do 99.9% of the work or maybe even 100,
Starting point is 00:04:54 and they'll pride themselves on the number of tasks that get done without a human needing to approve them. Mark gives the interface shift, from prompting to execution. The second big idea follows naturally. If agents are the ones navigating software on our behalf, then we have to start building software to be understood by them. Stephanie Zhang calls this machine-ledgible software. In an agent-first world, visual hierarchy matters less and structure matters more.
Starting point is 00:05:22 The advantage shifts to products, content, and systems. that machines can reliably interpret and operate inside. Here's Stephanie. Hi, my name is Stefan Dang, and I'm an investing partner on the A16Z growth team. My big idea for 2026 is creating for agents, not for humans. Something I'm super excited about for 26 is that people have to start changing the way they create.
Starting point is 00:05:44 And this ranges from creating content to designing applications. People are starting to interface with systems like the web or their applications with agents as, as an intermediary, and what mattered for human consumption won't matter the same way for agent consumption. When I was in high school, I took journalism. And in journalism, we learned the importance of starting with the five Ws and H in the lead paragraph for news articles, and to start with a hook for features. Why? For human attention. Maybe a human would miss the deeply relevant, insightful statement buried on page 5, but an agent won't. For years, we've optimized for
Starting point is 00:06:23 predictable human behavior. You want to be one of the first search results back from Google. You want to be one of the first items listed on Amazon. And this optimization is not just for the web, but as we design software too. Apps were designed for human eyes and clicks. Designers optimized for good UI and intuitive flows. But as agent usage grows, visual design becomes less central to overall comprehension. Before, during incidents, engineers would go into their grafana dashboards and try to piece
Starting point is 00:06:52 together what was going on. Now AISREs take in telemetry data, they'll analyze that data, and they'll report back with hypotheses and insights directly into Slack for humans to read. Before, sales teams would have to click through and navigate sales force or other CRMs to gather information. Now agents will take that data and summarize insights for them. We're no longer designing for humans, but for agents. The new optimization isn't visual hierarchy, but machine legibility.
Starting point is 00:07:22 And that will change the way we create and the tools that we used to do it. It is a question we don't know the answer to what agents are looking for, but all we know is that agents do a much better job at, you know, reading all of the texting an article versus maybe a human would just read, you know, the first couple paragraphs. There are a bunch of tools out there that different organizations use to just make sure that they show up when consumers are prompting chat dbt asking for the best corporate card or the best shoes to buy.
Starting point is 00:07:52 And so there's like a bunch of what we call GEO tools out there in the market that people are using. But everybody is asking the question what AI agents want to see. I love this question. When humans may choose to exit the loop entirely, we're already seeing that happen in some cases. Our portfolio company, Decagon, is answering questions for a lot of their customers already autonomously.
Starting point is 00:08:15 But for other cases, security operations or incident resolution, we typically see a little bit more human in the loop where the AI agent takes first stab at trying to figure out what the issue is, running the analysis and serving to the human's different potential situations. Those tend to be cases of higher liability, more complex analyses that we see humans staying in the loop and will probably stay in the loop for much long. until the models and the technology get to incredibly high accuracy. I don't know if agents will be watching Instagram Reels. It's really interesting. At least on the tech side, it is really important to optimize for that machine legibility piece, optimize for insight, optimize for relevance, especially, versus in the past it was more about hooking people in, capturing attention,
Starting point is 00:09:15 in flashy ways. What we're seeing already is case of high-volume, hyper-personalized content. And maybe you don't create one extremely relevant article, extremely relevant and insightful article. But maybe you're creating extremely high volumes of low-quality content, but addressing different things that you may think an agent wants to see, almost like the equivalent of keywords in the era of agents
Starting point is 00:09:48 where cost of creation of content kind of goes to zero and it's really easy to create high volumes of content that's the potential risk around just high volumes of things to be able to try to capture agent attention. If software becomes machine legible and agents can execute tasks across tools
Starting point is 00:10:08 that the biggest challenge is not cosmetic, it's organizational. That leads to the third big idea. Sarah Wing describes the rise of an agent layer that sits above the traditional system of record and becomes the place where work actually happens. It collapses the distance between intent and execution and changes which software systems control the flow. Here's Sarah. I'm Sarah Wang, general partner on A16 Z growth, and my big idea for 2026 is that systems of record start to lose their edge. A passive system of record layer stops making sense
Starting point is 00:10:39 when agents can independently execute on a signed intent. I expect to see a new dynamic agent layer that actually make sense for employees to replace legacy systems of record. This is a very exciting development on the long road of inserting intelligence into companies. I don't say that systems of record are losing primacy lightly at all. I used to work at a firm that almost exclusively invested
Starting point is 00:11:01 in ERPs and other systems of record because of the stickiness of the data gravity, There was a wave of SAS 2.0 that was well-funded and tried and failed to take on the system of record, mostly through a better UI. This is the first time that we've seen a genuine threat to that, and that's because the distance between intent and execution is collapsing. And that's creating not a 20 to 50% better experience for the user, but how you get to that magical TEDx. Let's take the concrete example of ITSM, IT service management. This has traditionally been the domain a powerhouse company service now.
Starting point is 00:11:40 I chatted with the head of IT recently who told me for the first time in his two-decade-long career, he believed that IT support was fundamentally going to change. It will look completely different in five years. So why is that? If you think about the way that the old systems work,
Starting point is 00:11:56 how long it takes do something like request access to new software in the firm, and you contrast that with the ITSM agents that are arriving. They plug into your staff, and this type of request becomes nearly instantaneous. Through advancements in LLMs, you can now extract intent, you can classify the request type, you can map it to a known workflow,
Starting point is 00:12:17 identify user entities, and the request from the user becomes fulfilled in a way that is efficient and accurate. So we think there's a couple of valuable layers in this new paradigm. Of course, there's the foundation model layer. We believe that stays valuable. But it's really the emerging agent layer that sits as close as possible to the user and is collecting data on that user, understanding user preferences
Starting point is 00:12:38 that we think accrues value in the future. Based on everything that we're seeing in the wild, we believe this is a huge opportunity for new players to come in and win. Why is that? We're in a phase right now where the product is getting better on a weekly, if not daily basis,
Starting point is 00:12:54 and you need teams that move fast. If you're going to collapse intent and execution, what bridges that is actually having an accurate, reliable solution for your customer. Otherwise, they're not going to use it. They're not going to trust the agent that you're building. That's why we're starting to see even agents built on top of classic, iconic platforms like Datadog lose to some of the new AISRE companies like a resolve or a traversal.
Starting point is 00:13:17 We're extremely excited about this opportunity, and 2026 is going to be the year that the dynamic agent layer overtakes the system of record. Taking together, these three big ideas form a single story. First, the interface shifts from chat to action. Second, the design shifts from human first to agent readable. Third, the workflow shifts from systems of record to agent layers that turn intent into outcomes. This is what agentic really means here. AI stops being something you ask and becomes something that does. Thanks for listening to this episode of the A16Z podcast.
Starting point is 00:13:55 If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube. Apple Podcasts and Spotify, follow us on X at A16Z, and subscribe to our Substack at A16Z. com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only. Should not be taken as legal business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Starting point is 00:14:30 Please note that A16Z and its affiliates may also maintain investments in the company's discussed in this podcast. For more details, including a link to our investments, please see a16Z.com forward slash disclosures.
