No Priors: Artificial Intelligence | Technology | Startups - AI Superpowers for Frontend Developers, with Vercel Founder/CEO Guillermo Rauch
Episode Date: August 31, 2023Everything digital is increasingly intermediated through web user experiences, and now AI development can be frontend-first, too. Just ask Guillermo Rauch, the founder and CEO of Vercel, the company behind Next.js. In this episode of No Priors, hosts Sarah Guo and Elad Gil speak to Guillermo about their AI SDK and AI templates, and why Vercel is focused on making it easy for every frontend engineer to build with AI. They also discuss what applications Guillermo's most excited about, how to prepare for the world of bots, whether the winds are changing in web architectures, and why he believes in the AI-fueled 100X engineer. Prior to Vercel, Guillermo co-founded several startups and created the JavaScript library, Socket.io, which allows for real-time bi-directional communication between web clients and servers. Show Links: Guillermo Rauch - CEO & Founder of Vercel | LinkedIn Vercel Vercel AI Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @rauchg Show Notes: (0:00:00) - Vercel's AI Strategy and Future Plans (0:10:36) - AI Frameworks, Observability, and Bot Mitigation (0:17:24) - Crawling the Web and Architecture Changes (0:27:54) - AI's Impact on Web Personalization
Transcript
Discussion (0)
So much of the web runs on Vercel, and now it'll run on Vercel and AI.
Elad and I are super excited to welcome Guillermo Rauch, the founder and CEO of Vercel.
It's one of the most popular developer infrastructure and framework companies and is widely used by Adobe, Okta, eBay, and others.
We unpack the company's AI strategy, what's next for the web and more.
Guillermo, welcome to No Priors.
Thank you. I'm excited to be here.
Just for people who are not super familiar with Vercel, can you give us a quick explanation of the company?
Yeah, you described it well. We're basically a web infrastructure company. We provide the frameworks, tools, infrastructure, and workflows for companies to deploy the most dynamic and ambitious websites on the internet.
So we power anything from the technology behind ChatGPT, which in fact is powered by Next.js, our open source framework, to websites like UnderArmour.com or Nintendo,
where we provide the infrastructure to serve all their traffic and help them iterate
on their web presence.
And what's the sort of founding story of this?
Technically, I started it at the very, very end of 2015, but I kind of settled on the idea
and launched some of my first prototypes at the beginning of 2016.
Yeah, it's hard to imagine even not existing now.
At the time, what drove your belief that this was a different defensible product than the incumbent
clouds?
So there's an interesting duality in me. On one hand, I'm basically a missionary of the web. I want the web to win. I want open platforms to win. I want developers to win. On the other hand, I really love Apple and companies that invest a lot in design and integration and making things really easy. So in many ways, the inspiration was, can we create a developer platform that does for the cloud what maybe like the
iPhone or the MacBook did for personal computing?
And at the time, I had just sold my company to Automattic, the company behind WordPress.
So I had this idea in my mind of just making it really, really easy for developers to deploy an idea to the global web.
And to start focusing on the front end, which is sort of my strength.
I've been a front-end engineer for the vast majority of my life.
There's always been sort of this, you know, almost like disdain in engineering for front-end,
and it's like the last thing you worry about.
But we've kind of turned that upside down and we've made the case that front-end is the most
important thing that your company has because that's where you meet your customer.
That's where you can accelerate your website to drive more conversion, more sign-ups,
more sales.
So I wanted to also create a company that focuses on this last mile of end-user experience
and kind of work backwards into, you know, all the integrations
and back ends that you need to bring in to create a full-stack application.
And that's what Vercel has become, basically.
It's to me, it's a portal into the web and into a new way of building software.
Speaking of working backwards then, like, when did you begin to think about AI and get
Vercel into it?
So I've actually been a fan of AI for many, many years.
As an angel investor, I was one of the early investors in companies like, I guess, Scale AI.
to me, AI is just another important step, huge step, of course, but another important step
in this idea of automating all the parts that we don't want to deal with when we're in
the pursuit of a creative endeavor. And, you know, it's very true to the spirit of Vercel
that incorporating any back-end, any new technology into your site, especially into dynamic
web applications like the ones we power, should be really, really easy. And the other insight
for me was it's clear that a lot of these AI foundation models almost feel like Cloud 2.0
where, you know, tremendous SaaS businesses have been built on top of companies like AWS.
Snowflake, I think you all had their CEO as a guest recently. Snowflake is a good example of
maybe you don't need to reinvent all of the infrastructure. You can create a great cloud
native company. My new insight is there's going to be a lot of great AI native companies that are built
on top of this new infrastructure,
let's call it like Cloud 2.0,
which is these foundation models.
These are the new backends
that are going to power
the most exciting front-end engineering,
you know, applications.
And to that end,
we created this Vercel AI SDK
that is now powering a ton of different startups.
We just heard about a bunch of awesome companies
that joined the AI accelerator.
A lot of them are being powered by this SDK.
It's basically the easiest way to create an AI app
without having to reinvent the back-end wheel, right?
Like you can connect to OpenAI, you can connect to Hugging Face, Replicate.
So we're really focused on that idea of ease of integration
and really the easiest way to put AI into the hands of users
and creating actually valuable products.
I always advise the team and folks that I work with.
I'm not for random acts of AI, like just like, you know, checking a box,
but creating really useful products.
And I'm a big believer that that last mile
of integration can be where a lot of the value accrues.
Can you talk a bit more about what sort of products
Vercel offers on the AI side?
I know you have the AI SDK, which is a development kit for AI
apps. You have a chat and prompt playground, which
compares the performance of various LLMs. It'd be great to just hear more
about the different tools you have and what companies
have started using them and how. So also go into some
crucial infrastructure advantages that Versailles
brings to the world. One is we have this product called
edge functions. We allow you run compute as close
as possible to the user.
A lot of these AI applications are required this idea of streaming content to the end user.
So when you type in something, if you just sit there waiting for the server to respond,
this is quite a new thing on the web, right?
Like most e-commerce websites, most back ends are sort of optimized for responding within
100 milliseconds.
AIs can sometimes take like 15, 20 seconds to actually, you know, fully bake a thought.
So a lot of our infrastructure in this Edge Functions product is sort of empowering these long sessions of dynamic streaming of responses from as close as possible to the visitor, so the edge of the network.
And this has actually played a crucial role in sort of making apps that integrate with AI not just really easy to build, but really performant.
The user experience feels really good.
So if you go to the AI SDK, we actually show you what the application would feel like
if you just use, like, a traditional back end
and then it's blocking,
versus what it feels like when you're leveraging
these streaming technologies.
So the SDK currently also
plugs into all these sort of text-oriented
LLMs, but we're planning
to add voice, audio, image generation,
sort of to bring more tools
into the toolkit of front-end engineers.
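The blocking-versus-streaming contrast described here can be sketched in a few lines. This is an illustrative sketch, not the actual Vercel AI SDK API: the model is faked with an async generator, and `onChunk` stands in for whatever the UI does with each token as it arrives.

```typescript
// Illustrative sketch (not the Vercel AI SDK): a fake model that yields
// tokens one at a time, the way a streaming LLM response arrives.
async function* fakeModel(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) {
    // In a real integration each token arrives over the network.
    yield t;
  }
}

// Blocking: the user sees nothing until the full answer is fully baked.
async function blocking(tokens: string[]): Promise<string> {
  let out = "";
  for await (const t of fakeModel(tokens)) out += t;
  return out;
}

// Streaming: the UI can render each chunk the moment it arrives.
async function streaming(
  tokens: string[],
  onChunk: (chunk: string) => void
): Promise<string> {
  let out = "";
  for await (const t of fakeModel(tokens)) {
    onChunk(t); // e.g. append the delta to the DOM immediately
    out += t;
  }
  return out;
}
```

Both functions produce the same final text; the difference is that the streaming version surfaces partial output during the 15 to 20 seconds a model might take, which is the user-experience gap the Edge Functions product is addressing.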
In the Vercel template marketplace,
we actually have a lot of different apps.
Some of them have already even gone viral,
like RoomGPT, where you can sort of redesign your bedroom
or your living room by using an image generation model.
So that shows you how you can take an open source model.
I believe in that case, it's hosted by Replicate.com.
And you can sort of create an application
with a turnkey subscription model, sort of log in and sign up,
and deploy it in basically seconds.
So a lot of what we're doing is just putting
AI into the hands of as many developers as we can.
Where do you think this is all headed?
So if you think ahead on the roadmap or strategies, is there anything else you can share in terms of future products or future things that you're going to be releasing? I'm a big believer that folks have still under-explored the integration side and just creating new AI-native products, you know, for entrepreneurs or startups that are listening in.
I do believe that you don't have to train or fine-tune a new model in order to create a legitimately useful product.
I've been just looking at startups like Jenni.ai, where they went from a million in ARR to
1.5 million in ARR over the past two months on creating a very specialized product to assist
researchers in writing research papers. And so I think a lot of what you're doing there is creating
the right product from the point of view of what is this problem that's already existed,
what would it look like to solve it if now I have AI as sort of my input in the design space.
And I think that's a radically different way of thinking compared to, I'm going to add AI to an existing product.
I'm going to add AI to a word processor.
So I think there are a lot of exciting avenues to explore in that direction.
You know, a lot of the productivity tools that I use on a day-to-day basis could certainly benefit from being rethought from the ground up.
And our perspective at Vercel is, you know, start with the front end, start with the AI SDK, which, like, saves you a ton of time on the AI integration side.
One of the
big things that we believe at Vercel
is that we're going to build the best
possible products if we're customer zero
of our own products, right?
So we build
the entirety of Vercel.com using
the Vercel platform itself.
It pushes us to make a better
Next.js. It pushes us to make better
infrastructure. It pushes us to make
the builds of our websites faster
because we've dramatically increased our
engineering headcount and we want to
optimize for their productivity and so on.
So on the AI side, we want to do the same thing.
We're starting to think about if you had the ability to automate a lot of the work
that front-end engineers in particular do on a day-to-day basis, you know, creating forms,
creating UIs, creating layouts, a lot of this is almost like statistical in nature.
You know, the expectation of a good user interface is that it has to be familiar.
It can't be completely novel; you can't, you know, pursue your own journey every time you sit down to create a new front end.
So we're basically dogfooding our own AI SDK to think about the next frontier of automation and generative AI,
but applied to the domain that we know really well, which is UI and front-end engineering.
I know that you're a very beloved company in terms of developer adoption, and I think it's one of the most
popular developer-centric companies in the world right now.
What do you think is lacking from an AI developer tooling perspective more generally?
There's a layer of instrumentation that I think is really critical.
Typically, when you look at the successful sort of monitoring and observability companies of the Cloud 1.0, I'm going to use Cloud 1.0 and Cloud 2.0 to denote the new AI Native wave that we're seeing.
If you look at Cloud right now, a lot of the best products in the observability space were born out of: we understand what frameworks and primitives you're using,
and we're going to integrate extremely well with them.
I remember the first time I used Datadog, I was blown away
because the onboarding process was so well-tuned
to, hey, let's not let you move on from the onboarding page
until you've sent us a data point.
And instead of giving me a not-so-familiar way of sending them data,
they sort of enumerated all their integrations.
I actually just checked out Zapier's onboarding from scratch.
I'm just an onboarding diehard.
Zapier is one of our customers.
They run all of Zapier.com on Vercel.
And they have the same thing.
It's just so awesome.
Like you sign up and then they take you to, like, tell us what software you work with.
Tell us what integrations you work with.
And it wouldn't even let me click on the Zapier logo.
I was so deep in the funnel.
It was beautiful.
Datadog does the same thing: sort of, oh, here's Kubernetes, here's Next.js, here's all the things that you already know.
I don't think that that's fully landed for AI and the new,
like sort of topology is different.
The frameworks that you use are different.
Of course, there's the AI SDK,
there's LangChain, et cetera, there's a ton of new frameworks,
and the things that you're monitoring are different as well.
Yeah, there are a few different companies, I feel,
that are starting to work in this area,
you know, tackling different pieces of what you're saying,
and to your point, it really feels like a very active area
of sort of developer tooling, right,
that's being developed right now.
So, yeah, it's really cool.
I definitely think the overall, like,
let's say, monitoring, test,
observability, like feedback collection space is really nascent, but important.
Really exciting. I think in, like, Cloud 1.0, it's almost like a nice-to-have.
Of course, you need observability to ship and maintain and evolve a production-grade application.
It's like letting you provide a great quality of service.
But in the AI realm, it's just so mandatory.
Like your V0.1 already needs that critical feedback loop.
Whereas I think maybe some engineers that are moving fast,
as in the early days, start out maybe more lenient with how much they observe their endpoints and so on.
So the other hot take that I have is I think a lot of the early frameworks that we're seeing,
the more opinionated frameworks that we're seeing, they're probably going to have to evolve a lot.
And I think we're probably going to see a second generation of frameworks that come out of actually building and deploying AI at production scale.
I think a lot of the DX tools for AI that have emerged so far are more rooted in, I have to get the job done.
I don't know if it's the best way yet.
We haven't really run the application in prod for that long.
My insight there is there's probably going to be significant evolution in the frameworks for AI space.
And I'm not talking about sort of the training tools, the PyTorches; obviously those are very well baked.
I'm talking about sort of the last mile: everything that has to do with agents, everything
that has to do with indexing and retrieval, and more of the novel integrations of AI applications.
If you think ahead in terms of where the web is heading, at least a subset of the interactions
on the web are probably going to become agent-based, right?
So you'll have an agent that represents you, an agent that represents a company or a product,
an agent that represents the government, and you'll basically have your agent go and act on your
behalf, and it'll just interact programmatically through APIs or other means.
What impact does that have for Vercel, and does that even matter?
I think it matters already tremendously.
So one of the key investments that we're making is in security products.
So when GPT-3 came out and folks were sort of, like, dying to integrate it and launch it...
OpenAI is by far the most popular back end.
We have sort of aggregated anonymized telemetry on, like, what are the back ends that
our serverless functions are talking to, and OpenAI's is sort of the biggest.
What happened was a bunch of folks published, you know,
whether it's ChatGPT clones or demos or prototypes and whatever.
And then sort of the abuse began of folks that wanted free tokens, so to speak,
and started like running proxies at scale to basically just,
it's almost like extracting intelligence.
Like I want free intelligence.
I'm just going to write, instead of writing a script...
let's call it Scraper 2.0: I run a bot that tries to get free GPT-4, basically.
So this is still a huge problem, by the way.
A lot of products have integrated AI in such a generic way that they've opened up their
token; even if they have authentication in front, they've essentially opened up this source
of intelligence to the entire internet, including countries in which these AIs have
already been banned, or companies where the use of AI has been banned. So there's definitely a
security challenge there that we're giving tools to developers to address, whether it's integrated
tools to facilitate rate limiting, bot detection, and all kinds of technologies also for reducing
the cost of deploying these AIs, like integrated caching of a lot of the OpenAI responses
that are cacheable and so on. So I think on one hand,
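One common way to implement the rate limiting mentioned here is a token bucket. The sketch below is illustrative, not Vercel's actual implementation; the class name and the numbers are invented for the example.

```typescript
// A minimal token-bucket rate limiter sketch: one way to protect an AI
// endpoint from scrapers extracting "free intelligence". Each caller gets
// a bucket; a request spends one token, and tokens refill over time.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,     // maximum burst size
    private refillPerSec: number, // sustained requests per second
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request is allowed, false if it should be rejected
  // (e.g. with an HTTP 429).
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A per-key map of buckets (for example, one per IP address or API key) turns this into a basic endpoint guard; caching the cacheable model responses, as described above, then cuts the cost of the traffic that does get through.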
we already have that issue at internet scale around how do you protect your own investment
in AI?
How do you also potentially protect your own unique IP from adversaries and so on?
The other one, I think, is the one you're calling out that is related to the bot detection and
mitigation problem, which is, how do I actually tell a good bot from a bad bot?
And how does a website owner at scale sort of have an understanding?
of what is the right ratio.
Already, we're seeing that a lot of these AI companies are very strict in blocking any
kind of bot activity because of the threat of abuse.
So I think we're going to have to continue to find more sophisticated defenses.
It's almost like the AI and the counter AI.
We're going to need to deploy more and better AI to sort of detect the bad bots and
keep them at bay, while also allowing, to your point, the authenticated good ones
that are going to become your agents that represent you in your ability to crawl the web.
The other challenge that emerges as well is this idea of, like, is my content AI-generated content or not?
And what does that mean for SEO in the future?
The traditional conception of SEO is I'm going to optimize keywords for Google.
And I'm going to make my site really performant so that Google crawls it and then boosts my results based on the signals of performance that they've aggregated from visiting
my website in the past. There's a world where there is an intermediary to your content that
is no longer Google, right? And obviously, this world already exists with GPT-4, but there's a cut-off
date problem and so on. But now we have folks like Perplexity where, you know, they're basically
real time. So the question that'll emerge is, how do I get SEO right for these retrieval
engines? Do you feel like you have customers that are already working on or planning for this, or
thinking about how to handle it, especially if they're more content-oriented companies? Yeah. So on the
bot mitigation and abuse prevention thing, every single customer that's deployed AI at scale,
at any scale, a product that actually works has already faced this challenge. And of course,
we continue, sort of... in some cases, you're playing cat and mouse. In some cases, you're just
advising the customer how to implement better protections and better tools and finding that balance
of, you know, how do I actually deliver a good experience for everybody while also protecting my
business? On the SEO side, I think mostly I'm just hearing a lot of questions from people, right?
Like, is Google still king? Are the rules of SEO still the ones that apply to me? So I think those
are the main ones. But again, my perception is there's a lot more people entering the crawling game
and doing this retrieval process,
whereas before it felt like you had to delegate all of that
to Bing or the Google Search API.
And I think creating protocols to negotiate content
and to make it more accessible and more distributable,
it really depends on your business model to a great extent, right?
For us, I would love it if every single AI
got the most recent Next.js APIs correct,
which is not the case right now.
If you ask ChatGPT how to solve a problem with Next.js, it tells you the solution for 2020.
And I would love for that to be the solution for 2023.
So please go and help yourself to our docs.
I can give you whatever format you want.
But for other companies, it's going to be a challenge, right?
Because they're expecting a different type of content negotiation.
It seems like that's another place where tooling can become really valuable in terms of, you know,
the ability to understand whether content that's provided in a corpus, you know,
falls under certain copyright laws or has other issues around it, or there may be other sorts of
tools that we increasingly will see from content owners, or for content owners, in terms of how
you actually deal with this on the web. And we've already seen some early-days versions of that
around image gen and some of the image generation models and people not wanting certain content
included in that. Like Getty Images, I think, famously pulled a bunch of data specifically to avoid
this sort of issue, or asked people to pull that data. I wouldn't be surprised if the APIs that we
use today, which are basically, here's your stream of words that answer the prompt,
become a duplex stream of the content and the citations, right? Because a lot of products
actually require it. I might be, now or in the future, legally required to log, you know,
where that content that I gave to a certain user came from. So I may want to give you a little
UI component to explore the citations, maybe you want to hover a part of the text and understand
where it came from, or simply you just wanted to, like, throw it into a log file for future
reference, like, what are resources that your users keep coming back to and that are worth
exploring more?
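The "duplex stream of the content and the citations" idea might look something like the sketch below. The event shape (`StreamEvent`) and the `collect` helper are hypothetical, chosen only to illustrate interleaving text deltas with citation events that a hover UI or an audit log could consume.

```typescript
// Hypothetical event shape for a response stream that carries both text
// deltas and citation events, rather than a plain stream of words.
type StreamEvent =
  | { kind: "text"; delta: string }
  | { kind: "citation"; url: string; span: [number, number] }; // character range cited

// Consume a finished stream: reassemble the text and gather the sources.
// A real client might instead render deltas live and attach citations to
// hoverable spans, or write them to a compliance log.
function collect(events: StreamEvent[]): { text: string; citations: string[] } {
  let text = "";
  const citations: string[] = [];
  for (const ev of events) {
    if (ev.kind === "text") {
      text += ev.delta;
    } else {
      citations.push(ev.url); // e.g. log where this content came from
    }
  }
  return { text, citations };
}
```

The discriminated union keeps the two channels in one ordered stream, so a citation event can refer to the text that immediately preceded it.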
Yeah, it makes a lot of sense.
I think this idea that you were talking about of more people getting into the crawling game
is a really interesting one.
I think we all have some exposure to, like, search tech.
and search companies, but it seems to me, like, really challenging that agents...
we're going to have more agents, and they're going to need access to the web, or many of them will,
to be really useful, right?
Google is not going to give you their index.
Bing is going to be expensive and, like, not up to par on some things, right?
And you can also just, like, imagine technically an index that's just better for an agent
to interface with, right?
If I'm not trying to serve people, I'm trying to serve an agent.
But I guess from recent experience, and you guys would also know this, like,
to have an index, you need ranking and coverage, and the web is very, very big, right?
So fresh coverage of a trillion URLs is a very expensive value prop.
And I would love to see somebody with, like, smart ideas about if there is some way to go about this problem that doesn't require, like, full coverage.
But maybe some team needs to figure out how to get there.
One idea is that you delegate the full coverage to the initial sort of pre-training of the large models,
and then you complement it with your own up-to-date, you know, indexing of the sources that are relevant to your domain-specific queries.
So I also use a product called Phind, p-h-i-n-d.com, also a Vercel customer, where what they do is they really focus on high-quality developer results.
So when I have a very tactical question about a vendor, it's given me amazing results.
And I think there's a version of this where, like, Casetext or sort of, like, any search engine for a particular knowledge-worker type will have that, you know, need for this specific crawling.
And that makes the web a lot smaller, right?
Another insight that I like to share with folks is there's this dataset that Google sort of open sources called CrUX, the Chrome User Experience Report,
and it's basically all their anonymized telemetry
of the highest traffic websites on the internet
and it doesn't tell you exactly what the rank is;
it tells you by cohort.
For example, in the top 1,000,
you already have ChatGPT and Character.AI.
They're in the top 1,000 most trafficked websites
of the public internet, so to speak.
And you can actually notice this crazy power-law distribution
where you have the top 1,000 websites of the internet, you know,
amounting to basically like 50% of page views,
and mobile is even more slanted than desktop.
So there's an argument that you can create crawlers that target, you know...
even if you target just the top 10,000, you've covered the things that most people actually use.
Now, in that top 10,000, you also have dark matter of inaccessible internet.
But the point stands that you can do a lot of crawling
of sort of the open-access internet,
and going back to the changes
that could happen to SEO,
you also have this opposite problem
of a lot of things that used to be crawlable
are no longer crawlable.
You have to pay some huge API penalty.
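The power-law argument can be made concrete with a back-of-the-envelope calculation: if page views follow a Zipf-like distribution, a crawler covering only the top-ranked sites captures a large share of real traffic. The exponent and site counts below are illustrative assumptions, not figures taken from the CrUX data.

```typescript
// Back-of-the-envelope: share of page views captured by crawling only the
// top N of `totalSites` sites, assuming Zipf-like traffic weights
// (weight of rank r is proportional to 1 / r^exponent).
function topNShare(totalSites: number, topN: number, exponent = 1.0): number {
  let total = 0;
  let top = 0;
  for (let rank = 1; rank <= totalSites; rank++) {
    const w = 1 / Math.pow(rank, exponent);
    total += w;
    if (rank <= topN) top += w;
  }
  return top / total; // fraction of all page views covered by the top N
}
```

With a Zipf exponent of 1, covering the top 10,000 of a million sites already accounts for roughly two-thirds of the weighted page views in this toy model, which is the intuition behind "target the top 10,000 and you've covered what most people actually use."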
Where else do you think web architecture changes
overall, given these changes in AI,
or other things that you're really thinking
about deeply at Vercel relative to all these shifts?
Yeah, there's a huge push for dynamic
and away from static
in sort of the previous buzzword of Jamstack architectures.
It's very clear that content already changes very rapidly.
You had your CMS, and you had a bunch of people working on your CMS,
and they pushed content changes.
And what really didn't work for the web is static generation.
Like, every time your content changes, you rebuild the entire site.
And that's what's really created a kind of weird experience
for a lot of folks on the web, where in order to actually get a change
live at scale in 2023, it might take you an hour because there's all these layers
of caching. There's this huge build process. There's a lot of static site generation. So for a lot of
folks, behind Next.js is a lot of this traction of moving from static to a dynamic
architecture. But now I'm seeing, for example, all the headless CMS vendors add AI capabilities.
Of course, you also have the content hubs or content collaboration platforms like Notion also adding AI.
So if the rate of content change in production continues to increase, the need for more dynamic infrastructure and architectures continues to increase.
The other one is just generally speaking, we're all going to have more access to AI and therefore we're going to increase the amount of personalization on the web, right?
So I think we're going to continue to see more of a web that's just for you and also delivered very quickly.
And then is there anything else you would predict in terms of changes to front end or AI UI that you think is going to come in the very near term?
Yeah, there's a really weird meme in front end, which is that front-end engineers change their tools every weekend, or every week, based on, like, what framework comes out on Hacker News.
Funny enough, the reality has been the opposite of
it. Like, if you actually look at, like, what are the Fortune 5,000 doing? What's happening at
scale? When we crawl that Google Chrome report of, like, what are the technologies that are actually
being used? Frameworks like React, Svelte, and Vue very clearly seem here to stay.
Especially React has sort of dominated at the top of the web. So I actually expect to not see a ton of
change there. And the innovation will switch to, like, what are the AI tools
that can actually generate that code?
A lot of what makes Midjourney so good at what it does
is that almost every prompt yields something
that's a statistically pleasant piece of artwork to look at.
And I think the way that we built for the web
will sort of go much more in that direction.
You don't start with the empty canvas every single time.
But also crucially, when I say
don't start with just a blank page and rebuild every element and place every element like you're a caveman,
I think a lot of folks already don't do that and they say, well, I use templates, right?
Now you have a phenomenon that happens a lot on Hacker News and startups, which is every startup has the same template.
There's this sort of, like... if you're really tuned in, it's this, like, purple-ish thing
that has a headline in the middle with some gradient and, like, box, box, box.
And then so you have these two problems, right?
Like either you have to, like, reinvent the wheel hard from scratch and handcraft every pixel
or you have an internet that looks the same for everybody.
And I think AI is definitely going to give us the best of both worlds.
Like you're going to get started really, really easily.
And you're going to have this sort of stochastic novelty that AIs
are so good at introducing, with the ability to refine based on your own taste.
So I actually recently tweeted the funny meme of Rick Rubin saying, I don't know how to play
music.
Artists hire me because of my taste and my confidence in what I like and I don't like.
I think I see a world where the product engineer role evolves to become that.
I like this.
I don't like this.
Okay, let's refine it.
Let's reprompt.
Okay, this looks too much like the average website.
I don't want it.
And of course, you can sort of dive more into the code if that's, you know, what you need
to do to solve this problem.
I find that akin to using a lot of image generation tools that still require a lot of heavy
editing on top, especially in the video space.
But I see that totally happening to UI engineering.
And I think we can do it with, you know, a lot of the tools that already exist and not so
much, you know, significant breakthroughs.
One question I have, relative to your point on
frameworks being reasonably static:
And obviously, there are certain types of programming languages that have also been with us for a while now.
JavaScript obviously came shortly after the inception of the modern browser and things like that.
Python's been used for a while.
There's obviously more modern languages as well that are getting widespread adoption.
If you look at the evolution of machine-generated code, for example, I've heard claims that
40% of the code in repos that are associated with Copilot is being generated by GitHub Copilot versus a person.
It's actually being generated by AI.
Do you think eventually human-derived programming languages are replaced by more efficient machine-driven versions?
In other words, do we actually have to shift that basis for the language in which we code just so it becomes dramatically more effective?
Or does it not really matter in the context of AI can just generate these things that will compile well no matter what so it doesn't matter?
Yeah, I think it's really tricky.
On one hand, I believe this is a productivity race and you have to meet the world where it is.
I think part of co-pilot's success is that it did exactly that.
It met you where you are.
I was already in VS Code.
I was actually in NeoVim, but they actually shipped a plugin for NeoVim, so kudos to them.
And sort of incrementally evolved from there.
So I believe the figures around that kind of code generation, because developers frequently struggle with the liability of bringing in a package. Anytime you take on a dependency on a third party, you're almost basically contaminating your supply chain. You get this, like, bag of surprises and so on. So in many ways, what's fascinating about what's happening is that there's almost like a return to copy and paste, right? You know, the world of the last 10, 15 years of the ecosystem was the rise of the package manager. We saw this for Python, we saw this for Ruby, we saw this for JavaScript, we saw this with Rust and Cargo. But fundamentally, what we've been doing is copying and pasting strings of code from the nearest CDN into your computer. And I think in many ways what's fascinating is that AIs are now making copy and paste so ergonomic that you have to ask: do you actually need that package, right? And one thing that's also really interesting is that in the UI world, folks have actually been leveraging copy and paste more than packages, because with UIs, it's really hard to design the perfect API that actually allows you to have that creative freedom on top.
I kind of touched on that problem, where the UI that's really easy to create all looks the same.
This goes back even to the days of, like, Win32, Java Swing: people would make these tremendous investments into these UI libraries, and then no one would use them, because then everything looks the same. But now we're seeing a return to copy and paste. Literally the most popular way of creating React UI today, which is called shadcn/ui, the author literally told people to just copy and paste from the web browser into their editors, and that was a breakthrough. There's a great phrase that I love, which is: copy and paste is always better than a bad abstraction.
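The shadcn/ui pattern Guillermo describes, vendoring a component's source straight into your own repo instead of installing a package, can be sketched with a tiny class-name helper. This simplified `cn` is illustrative only; the real shadcn/ui helper builds on the clsx and tailwind-merge packages.

```javascript
// Sketch of the "copy and paste over packages" pattern: instead of
// installing a class-merging dependency, a small helper is vendored
// directly into the codebase, where it can be read, audited, and edited.
function cn(...inputs) {
  return inputs
    .flatMap((input) => {
      if (typeof input === "string") return [input];
      if (input && typeof input === "object") {
        // Include object keys whose values are truthy, e.g. { hidden: false } is dropped.
        return Object.keys(input).filter((key) => input[key]);
      }
      return [];
    })
    .join(" ");
}

console.log(cn("btn", { "btn-primary": true, hidden: false })); // "btn btn-primary"
```

Because the helper lives in your codebase rather than in node_modules, you own it: you can strip what you don't use and reshape its API freely.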
And a lot of the worst code bases are the ones that are over-abstracted. So I do believe that AI will help us, sort of, again, it's like that idea of the 100X engineer that almost doesn't even need an ecosystem to exist. You just write everything and you know everything.
A quick question on what you just said, because there's a number of companies that are focused on supply chain security, so things like Snyk or Socket, where they basically monitor open source packages and say, is there something nefarious that's now been inserted in it? Do you think that functionality just goes into developer tooling, where, you know, there's companies like Magic that want to ingest your entire repo and then provide sort of a mega copilot on top of it, right? Do you think that type of functionality just ends up there?
Absolutely. A security copilot, right? Like, you didn't free this memory allocation: use-after-free. I think those already exist, but there's probably a lot of potential to audit whatever it is that you're auto-completing in real time, right? That's another argument for going back to copy-paste. Right, because if you actually own the code, you can optimize and secure the code, and you don't necessarily need any of this dependency management and cleanup. Overriding a third-party package is always a pain in the ass, right? You have to go, oh, okay, I can no longer use it as it is because it has a vulnerability.
So another thing that we talk a lot about at Vercel is monorepos. And we've built tooling for making it really, really easy to adopt monorepos. It's called Turborepo.
And this also comes from the observation that the largest companies, the ones that have written the most successful software on the planet, have always worked in massive monorepos. They didn't scatter their engineering workforce, like, okay, welcome to Elad and Sarah's startup, we have 100 repos here: if you want to touch this feature, go to repo 99; if you want to touch that feature, go to repo 38. No, it's like, here's the code base. It just works, right? And most of those companies don't actually depend on, they just don't use, the global package managers of the world. First of all, there's too much liability. Second, it's just so easy to copy and paste the code into the monorepo. And now you've assumed ownership over it, and now you can do much better auditing of the code as well.
So there might be, again, a swing back of the pendulum to vendoring, and AI generating a lot of this code. And to your point there as well, now the AI that scans the code base also has an easier time, because it has full visibility of every dependency in the critical path.
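The monorepo task tooling Guillermo mentions can be glimpsed in a minimal `turbo.json` sketch. The `tasks` key follows Turborepo 2.x (1.x called it `pipeline`), and the specific task names and output globs here are assumptions about a typical setup, not Vercel's own configuration:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"]
    },
    "lint": {}
  }
}
```

`^build` means "build my dependencies first," which is what lets the tool see, cache, and parallelize the full dependency graph inside a single repo.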
And then I guess the last question was just around this, you know, machine-derived languages. Is that a thing?
Yeah, I come back to this: a lot of what GPT-4 seems to be extraordinarily good at right now is a function of the available data on the Internet. So it's really, really good at writing JavaScript. It's really, really good at writing Python. And that's because folks have created a monumental amount of content on those two languages. I don't know how good it is at writing more niche languages, like Nim, for example. It's not very good at writing CUDA.
Sure.
But I think these things are more the question of what happens in four years or five years versus today.
Because I absolutely agree with you.
Like, there's the training set that it uses to basically become performant at certain things. And so it's really good at things where there's lots of data. It's gotten better at things where there's sparser data and it sort of has to extrapolate, but it's still, you know, the early days of that, right? Maybe it's GPT-6 or 7 or something where you really get this more advanced functionality. But the question is, will that functionality even be relevant? Like, does it really matter to get to that sort of level or layer? One thing that's more immediate that has come up
for us is that the ability to be very, like, efficient with your token usage has definitely favored more terse syntaxes. So you're just wasting a lot of time when you output HTML, for example. You could make it more compact. You don't need all these, you know, redundant closing tags and so on.
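The closing-tag overhead Guillermo mentions is easy to see by comparing the same structure in HTML and in a terser, indentation-based notation; character count is used here as a crude stand-in for token count, since real token counts depend on the model's tokenizer:

```javascript
// The same three-item list in two notations.
const html = "<ul><li>Home</li><li>About</li><li>Contact</li></ul>";

// A Pug-like indentation syntax: nesting is expressed by whitespace,
// so no closing tags are needed.
const terse = "ul\n  li Home\n  li About\n  li Contact";

console.log(html.length);  // 52 characters
console.log(terse.length); // 36 characters
```

The redundancy of closing tags also grows with nesting depth, so the gap widens for deeper UI trees.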
So I definitely believe that AIs could operate in a more pure layer of logic that then gets converted back to whatever problem at hand you have. We've certainly already done some simplistic versions of that, basically to make our systems more efficient.
Is there anything else that you want to cover that we should be asking about?
No, check out vercel.com to get started building your own AI apps, and nextjs.org to check out our framework.
All right. Great. Thanks so much for joining us today. It was a real pleasure.
Thanks, Guillermo. Thank you, folks.
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.