a16z Podcast - The 1000x Developer
Episode Date: February 16, 2023

A small minority – likely less than 1% – of the world can code. Yet it's also widely known that the skillset tends to yield outsized returns, with developers generating some of the highest-paying salaries out there. But the field is quickly shifting, especially with the advent of wide-scale AI. In this podcast, we get to chat with Amjad Masad, founder of Replit, about these foundational shifts. We cover how Replit has integrated AI into its platform and the implications on both current and future developers. It's easier than ever to learn to code, but is it still worthwhile? Listen in to find out.

Timestamps:
00:00 - Introduction
02:04 - What is Replit?
04:15 - Stories behind Replit
11:10 - The software hero's journey
13:09 - Making coding fun
15:58 - AI powering software
19:37 - Training your own models
22:36 - Building UX around AI
24:16 - The developer landscape
26:23 - The 1000x engineer
30:40 - Should you still learn to code?
34:41 - What does AI enable?
40:54 - Developing on mobile
43:24 - A software labor market
45:53 - Differentiating a marketplace
48:23 - Building new market dynamics
50:45 - Looking ahead

Resources:
Replit: https://replit.com/
Replit Ghostwriter: https://replit.com/site/ghostwriter
Replit Bounties: https://replit.com/bounties
Find Amjad on Twitter: https://twitter.com/amasad

Stay Updated:
Find us on Twitter: https://twitter.com/a16z
Find us on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. For more details please see a16z.com/disclosures.
Transcript
We're seeing people making software in 30-minute increments.
Someone the other day put a request for a landing page up,
they attached a Figma file.
They got it back in 30 minutes.
That's really never been done before.
And why shouldn't it have existed like that before?
It's hard to find concrete numbers,
but some estimates predict that less than 1% of the world knows how to code.
Yet, it's also widely known that the minority of people
who know how to not only consume but create software yield outsized returns,
with some developers generating some of the highest-paying salaries out there.
But even this field is shifting quickly,
especially with the advent of wide-scale AI.
And in this interview, we get to chat with Amjad Masad,
founder of Replit, an integrated development environment
that allows you to code live in the browser.
Here we chat about how Replit has tackled the difficult problem
of making coding fun,
but also how it's now integrating AI into its platform via Ghostwriter
and the implications of these shifts on both current and future developers,
in addition to the applications that can be built.
As a personal anecdote, I actually taught myself to code in 2018, and to this date, it's one of the best decisions I've ever made.
And despite screaming this from the rooftops, still many other people find it extremely daunting.
My journey took around 300 hours. Yes, I tracked it. But with the advancements in tooling and technology, it may actually be easier than ever to learn to code.
And I was surprised to hear from Amjad just how quickly he thinks people can get up to speed today.
Let's get started.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.
For more details, please see a16z.com/disclosures.
Amjad, thank you so much for joining the a16z podcast.
It's my pleasure. I've been listening to this podcast for a real long time, so it's great to be finally on.
Yeah, well, we're happy to have you. So just to set the tone for listeners, you are the founder and CEO of Replit.
I think you're also the head of engineering now, but we'll get to that.
Can you just share what Replit is? And I'm probably getting a little ahead of myself, but how it differs from maybe other similar products or development environments.
I like to think about Replit as the technology that reduces the distance between an idea and a product.
The moment any person in the world gets an idea for a piece of software,
the distance between having that idea and making something in the world is typically very large.
The internet has really reduced things down further.
And we think Replit is the sort of the ultimate answer there,
where we want to get to a place where the moment you get an idea, you just replet, you know,
and that's sort of where the name came from.
And we do that in a number of ways.
One way is that we just simplify the development process.
Crucially, we don't make it stunted.
Like a lot of tools sort of think that to simplify is to reduce the power.
We actually keep the power while also making the process a lot more enjoyable and learnable.
We also superpower it with AI.
If you don't know how to code,
we have various courses and things that can teach you how to code.
If you don't think coding is for you,
then you can hire someone from our community to make the program for you.
So we have this thing called Replit Bounties
where you can put a price on a project you want to get done,
describe the project,
and then a combination of an AI and a human being will get that done for you.
And so it's really ultimately about that idea of like,
getting from idea to product, and helping everyone in the world have access to that
superpower that we call software. I love the way you put that because you've kind of represented
the different modalities depending on where someone is and their proficiency in terms of being
able to code. And regardless of where they sit, getting them to that end product, that end vision
that they want. And we'll get to Ghostwriter, we'll get to Bounties. But I want to give people a real
sense of some of the stories behind Replit, because that's something that's really gripped me
as someone who's been following the company. And I'll call it two, but then I want to play a
quick game so that we can share more of these stories with the audience. So two things that I
remember. One is a recently shared story of a 13-year-old kid that set up a sizable crypto mining operation
via Replit. Another example that I saw a while back was these two kids, they looked maybe six years
old crashing a computer science teacher conference and sharing their startup pitch. So I've
come up with five or so different scenarios. I have no idea if these scenarios have actually
happened on Replit. I haven't shared them with you prior to this. But I think they might
stimulate either other stories that have happened on Replit, or you can tell me if these things
have indeed happened. So let's give it a shot. The first one is someone has started learning to code
via Replit. They haven't gotten a computer science degree. They haven't done a boot camp. They've
learned within Replit and since been hired as a developer at a FAANG company. Has this happened yet?
I would say it happened. Yes. So we've had someone who learned to code on Replit, primarily on
their own, became a very productive community member and applied to work at Replit. And we thought
they're very talented, but at the time there wasn't like a really good fit for them. And so I helped them
apply for other companies, and they ended up at Google.
That's awesome. Let's move on to the next one.
Someone has used Replit to build over 50 projects.
Now, you can use your own definition of project here.
These don't need to be full-blown startups,
but have you seen someone build that magnitude of projects on Replit?
Yeah, I think that's a fairly reasonable number of projects.
So there's a 16-year-old developer.
His name is Rayhan.
He's one of our most prolific programmers.
He actually built a bunch of interesting projects
where he reverse engineered how Replit works,
and he built an unofficial API.
One of his unofficial APIs powers a security program
that searches people's Repls to find Discord tokens.
So a lot of people build Discord bots on Replit,
and they copy and paste their
tokens in clear text as opposed to putting them in our encrypted service, the Secrets Manager.
And so he would find those tokens. He would invalidate them because Discord has a service
to invalidate those. And then he would send them a notification. He would say, like, hey,
like we found that you've exposed your token. He was, and still is, one of the most
prolific bounty hunters. And he completed one of our earliest bounties, which was for a startup that
wanted to build a Stable Diffusion-based t-shirt generator.
So it would generate a t-shirt based on a prompt and then get it printed and sent to you.
Like, there isn't one week where I don't see Rayhan producing a new piece of software.
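As an aside, a token scanner like the one described might be sketched in a few lines. This is a hypothetical illustration, not the actual program; the regex only approximates the classic three-segment Discord token shape and may not cover every format.

```python
import re

# Rough pattern for the classic Discord token shape: three dot-separated
# base64url segments (an encoded ID, a short timestamp, and an HMAC).
# This is an approximation for illustration only.
TOKEN_PATTERN = re.compile(r"[\w-]{23,28}\.[\w-]{6,7}\.[\w-]{27,}")

def find_exposed_tokens(source_code: str) -> list:
    """Return any strings in the source that look like Discord tokens."""
    return TOKEN_PATTERN.findall(source_code)

# Example: a (fake) bot token pasted in clear text instead of a secrets manager.
snippet = 'client.run("NzkyNzE1NDU0MTk2MDg4ODQy.X-hvzA.Ovy4MCQywSkoMRRclStW4xAYK7I")'
print(find_exposed_tokens(snippet))
```

A real scanner would then call Discord's invalidation endpoint and notify the author, as described above.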
That's amazing.
So I would say like 50 is probably on the low side of things here.
Yeah, I undershot it, I guess.
Let's rapid fire through the last three.
The third one is someone has built an app on Replit and through that app has since made a
million dollars or more from it.
A million dollars, I don't think that happened yet.
There are people that are on trajectory to doing that right now.
And so there are bounty hunters that are making thousands and thousands of dollars a
week, and I think that will continue to grow.
There are startups that are starting entirely on Replit today.
There are a couple of AI startups that have their entire stack built on Replit.
And presumably one of those projects,
will maybe get to a million dollars.
There are a couple of earlier examples of startups
that sort of prototype their projects.
For example, Fig is this command-line autocomplete tool
from this YC startup, I think it was like YC20 or something like that,
or YC21.
And it's used by tens of thousands of engineers.
And the original product was entirely built on Replit.
They're starting to sell.
I don't think they're making a million dollars,
but maybe that's another example
of being on the path to a million dollars.
But I would say it would be a success if someone, like an individual person, just made a
million dollars. I think that's really my dream.
And I hope we can get to that this or next year.
That's awesome.
And I should mention there's a time element to this.
I believe Replit was founded in 2016?
That's right.
So we're around six to seven years later.
So let's see how things progress.
The next one is someone over the age of 90.
I went a little crazy with this one.
Someone over the age of 90 has used Replit.
Has this happened?
Someone who ended up working at Replit,
their grandpa in India is like a physics PhD.
They do a lot of physics writing.
And in one recent paper they published,
they used Replit to write a physics simulation.
And the Repl was published as part of that paper.
I don't believe they're 90,
but they're definitely 70 plus.
Okay, final one.
Strangers have met via Replit.
Maybe it's the forum,
maybe working on a project together,
and have since gotten married.
Hmm.
Man, that would be awesome to know.
I haven't heard of it if that happened.
I know there are some community members
that recently got married.
I don't think they met on the community.
But, you know, I've certainly, like,
seen some people date and things like that,
but married, I haven't heard that.
yet. Well, I had to ask it. Like I said, I hadn't shared these with you beforehand. So I went from
what I think are maybe more ordinary scenarios to maybe more extraordinary. One story,
let me mention. There's a story in the news about the world's youngest Microsoft Azure AI
certified programmer. And naturally, someone looked them up and they found that they're
on Replit. This kid is six years old. Oh, my goodness. And if you go to the Replit profile,
they have a thousand followers on Replit and they have games with thousands and thousands of runs
and they're just completely prolific. And I just can't wrap my head, especially now that I have
kids, I can't wrap my head around a six-year-old doing this every day. And it's like
pretty freaking amazing. That's incredible because six years old, that's grade one, I believe. In grade
one, I was learning to trace letters so that I could write the alphabet. Wow, that's incredible.
Are there any other stories worth mentioning here? Any top of mind scenarios like a six-year-old who's
coding on Replit? One of my favorite stories is one day I wake up and I see my Twitter blowing up
and I go on Twitter, I find that this Indian mom and dad are tagging me and saying that I corrupted
their child. Their kid is totally addicted to Replit. They're supposed to go to IIT. As you know,
IIT requires a lot of preparation, but that kid is not interested in it, so he's not studying at all
and just programming all the time. And I actually felt pretty shitty about it because it started
going viral. A lot of people were like pretty negative about the parents. Apparently it hit a nerve
in some discourse in India about how parents push their children really hard on IIT. And so,
I reached out and tried to help.
I tried to talk to the kid and tell him like,
hey, you should listen to your parents.
And then a few months after that,
India was hit pretty hard with COVID,
if you remember the issue there.
And that kid wrote the application that first responders were using
to find equipment,
to find oxygen,
to find all these things.
And it went totally viral.
It brought down our website.
That's how viral it went.
And in my mind,
I was like,
his parents should be really proud now.
And then something even more fantastic happened.
Out of that experience and him going viral using his skills,
he got a job.
And now he gets paid more than his entire family.
And so you go, it's sort of this hero's journey.
You go from getting blamed for someone ruining their future,
to it turning out that they actually did a huge favor to their future.
And I think that's the sort of power of software and data.
That's beautiful.
And something that also stands out to me from that story, which we'll touch on throughout this interview, is this idea of coding being fun.
Like, this kid really just wanted to code all the time.
And as someone who taught myself several years ago, prior to learning, I thought coding was really dull, just something more prescriptive instead of creative and artistic.
And again, using this word fun.
And after I learned to code, I saw it in a different light.
And I think that's really important is this idea where it's like, how do you make coding fun?
So what are your thoughts there?
How have you been able to design Replit to actually enhance people's creativity and make them want to come back and make them see this skill in a new light?
So I think most things that are fun tend to be devoid of a lot of drudgery and routine work, right?
You know, when you're playing a video game, you're not like building the video game every time or setting up the TV or doing some rote IT task, right?
When you are doing a sports hobby, you're in the flow, you're doing the thing you're excited about doing.
The problem with coding is that a lot of the maintenance around the development environment and the packages and the integration of all the different components was the thing that engineers were spending most of their time doing.
The moments they were coding, they were in absolute bliss.
But those moments were actually a small slice, if you think about the pie chart of what it meant to work as a programmer.
So the first thing that Replit does is remove the need to do all this setup.
That in itself made programming a lot more fun.
And then at the collaborative aspect, like a lot of what we find fun in life has to do with other people.
We're just social animals, right?
And so if I can share my program with you with just a link, that's really fun.
That's what makes Figma fun.
That's what makes any other collaborative tool fun, is that I can just, like, send you a link and you're in there with me, or you can play it and try it out.
And then finally, there's a lot of explicit gamified elements of Replit.
For example, we have a built-in currency called cycles.
Cycles allow you to earn, you know, from bounties.
You can spend it on AI or compute, or you can cash it out.
And it's built with a mindset of like it feels like Roblox.
It feels like one of those in-game currencies.
Well, I like your analogy of the setup, or the admin, that goes into coding at times being a lot of overhead.
Where if you compare it to other activities out there, like I love playing soccer, for example,
if playing soccer required me to like put on my shoes and shin pads for an hour before
every hour-long game. I'd be like, man, this is really frustrating. Like, I just want to get on
the field. And so that ratio is actually important. Let's take a left turn and go straight to
AI, which is, you know, the theme, I feel like, of the last six months with many companies.
But why don't you give listeners a quick overview of what Ghostwriter is and how it works?
So Ghostwriter is like a pair programmer that is an AI. It helps you in every aspect of
software creation, whether that's typing code.
So we provide this ghost text to either complete
lines of code or entire functions or entire classes.
So you're going to see that happen as you're typing.
You also can generate sort of entire programs.
So you can right-click, hit generate, and give it a prompt.
And the same way you could give Stable Diffusion or DALL-E a prompt
and generate entire images, you can give it a prompt and generate entire programs.
Entire programs that run, entire applications. And then
And you can highlight a piece of code and you can right click and you can transform it.
Finally, the ability to talk to Ghostwriter is the final piece here.
And we're actively working on it.
And it's our number one priority.
It'll be like a chat box inside Replit.
And it'll both chime in when things happen that it could help.
So, for example, if you get an error, it would say, like, I can help you with that.
Like a Clippy thing, hopefully not as annoying.
But Clippy was way ahead of its time.
So it'll be like, you got an error, like if you want, I can try to help you with that.
Or you can just sort of ask it, like, I wrote this code, can you write tests for it?
It's really remaking the IDE as we know it.
I would also say it's less binary in a way.
Something I've been thinking a lot about as I've learned more about how AI is being implemented with code is when I learned, one of the most frustrating aspects of learning was just the binary nature of code.
When you get into a bad error, it's just like, no, no, no, no, no, no.
And then eventually you hit a yes.
And sometimes that takes way longer than is comfortable.
And so what I like about this idea of having this sidekick is just the nuance to that of
AI being able to say, oh, have you tried this yet?
Oh, did you mean this by your code?
Like this is what your code is trying to do?
Is that what you intended?
Getting suggestions.
And again, having the learning curve be less like a no, no, no, no,
oh, yes, and more like a gray that evolves into a yes.
Like a gradient descent, sort of how you train an AI, actually.
So, yeah, I 100% agree with that.
I think it humanizes the process, as opposed to this mechanistic trial-and-error thing.
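The gradient-descent comparison made here can be taken literally with a toy example: instead of a binary pass/fail, each step yields a graded error signal that shrinks smoothly toward zero.

```python
# Toy gradient descent minimizing f(x) = (x - 3)^2.
# The feedback is a gradient, not a binary yes/no: each step reports
# "how wrong" we are, and the answer improves gradually.
def grad(x: float) -> float:
    return 2 * (x - 3)  # derivative of (x - 3)^2

x, lr = 0.0, 0.1
for _ in range(50):
    x -= lr * grad(x)   # step against the gradient

print(x)  # converges toward the minimum at x = 3
```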
It allows us to iterate a lot faster on tools.
Traditional IDEs are very hard to build.
They're super complex classic algorithms.
I'm sure you remember, but like some of these IDEs are super
big. Like, you download
IntelliJ, that's like a couple of
gigabytes of stuff. And they're
clunky, they take up a ton of
RAM. Like, try starting
Xcode, you know, it just
consumes your computer. And the
cool thing about the AI revolution is that
the AI is going to be running in the cloud.
You're going to give it a prompt, like, you know,
what you're talking about, what your code is about.
And it'll be able to
implement suggestions and
implement features and tools
without having
this heavy algorithmic, hard-to-maintain sort of piece of software. So I'm really excited about
that. And as a startup, you always want to catch a new platform shift. And with Replit, I feel
like we're catching this platform shift. And the older IDEs will not be able to adapt as fast as we can.
Well, something that I've heard you talk about is also a decision that you made, which was to
build Ghostwriter on top of your own models. So something like Copilot is
built on top of GPT-3, to my knowledge.
And that's a decision to be built off another platform, but you went a different route.
So can you speak a little bit more to how you made that decision and what kind of inputs
led to that output?
Well, first of all, how crazy is it that Microsoft had to use another company, whereas Replit built
our own thing.
There are like multiple ways to answer this.
One of them is UX.
UX is inherently inseparable from the infrastructure for how a product works.
I think most people think of them as separate things, but if you're serious about making products,
you know, there's the famous Alan Kay quote; he told Steve Jobs,
if you're serious about making software, you have to make hardware.
And the reason Apple is a full-stack company is because they think about everything from the transistor to the touch, right?
And so I think for us, it was like, if this is going to be a core interaction with our platform,
we have to be able to optimize it and we have to get the latency down to the point that we feel
it's going to be a really great user experience.
And we weren't able to really get that when we were hitting something over an API because
the latency will be all over the place.
We couldn't get the caching right.
We couldn't get the location right.
We didn't have control over any of these things.
That's a huge downside of being a consumer of a mere API.
And then the other part is a strategic part,
which is if you believe that this is a primary platform shift
and this is going to be a core part of your technology,
then you have to build it.
If you call yourself a technology company,
that means you build technology, right?
It doesn't mean you're just like building glue code
on top of like existing technology.
Finally, we think that we have a bit of a data advantage.
And that data advantage will compound over time.
And so it will allow us to train more advanced AIs over time.
So all these three reasons just made sense for us to bring at least part of it in-house.
I should say that we still use OpenAI for a lot of the bigger workloads that require really large models.
Something that I found really interesting was Daniel Gross and Nat Friedman were on the Stratechery podcast.
And they talked about how they ultimately,
they're investors, they're not the creators of Copilot,
but how Copilot ultimately got to the interface that it now is.
And originally they actually wanted to create it as a chat bot.
They thought, oh, people will run into an error.
They're going to want to talk to someone and ask,
hey, how do I fix this?
And they're going to get a response and implement it.
But ultimately, they ended up kind of pivoting to what you might imagine
as like a robot on your shoulder that only speaks up when it has confidence.
And so I know it's the early days of Ghostwriter, but thoughts on how you got to the specific interface that is Ghostwriter today and how you, as you said, kind of linked the UX that people see to what's happening in the back end as you're building these models.
So I think there's two modalities. One is pull and one is push, right? So pull is the human knows what they want and they're going to ask for it. You write a prompt, you're going to wait a little bit, and you're going to get it. And then there's push, which is the robot
on your shoulder that is continuously suggesting improvements. And there are tradeoffs
to both. The push model is actually fairly expensive because you're computing all the time.
It needs to actually be fairly low latency, so that means you need a smaller model; you can't use
a super large model. On the other hand, on the sort of pull model, like I'm asking something,
I'm actually going to formulate my question in a way that the AI could understand it better.
I'm going to be able to wait to give the AI time to think or to compute.
And so I would say it's not either or you have to do both.
And I think that's the fundamental UX innovation we brought to the space, what we call
a society of models.
Copilot uses a single model.
Replit uses like three different models of different sizes.
So the smallest model is the model over your shoulder continuously giving you suggestions.
Then a medium-sized model to do the transformations and things like that.
Then a super large model, the kind of model that you would want to talk to,
a ChatGPT-style model.
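The three-tier "society of models" setup described here could be sketched roughly like this. The tier names, latency figures, and routing rules below are illustrative assumptions, not Replit's actual architecture.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    typical_latency_ms: int  # assumed numbers, for illustration only

# Three tiers, as described: small for inline "push" suggestions,
# medium for code transforms, large for conversational "pull" requests.
SMALL = Model("inline-complete", 150)
MEDIUM = Model("code-transform", 800)
LARGE = Model("chat", 3000)

def route(task: str) -> Model:
    """Pick a model tier for a request (hypothetical routing rules)."""
    if task == "completion":  # fires on every keystroke: must be fast
        return SMALL
    if task == "transform":   # explicit right-click action: can wait a bit
        return MEDIUM
    return LARGE              # chat: user expects to wait for a good answer

print(route("completion").name)
```

The design point is the tradeoff in the passage above: push interactions buy latency by using a smaller model, while pull interactions can afford the largest one.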
That's great.
And I guess that shows a concrete example of the capabilities you get from developing your own models
and really being able to fine-tune them to specific use cases.
Let's zoom out a little bit and talk about, you know, right now we're at the beginning of this phase
where AI is being applied to code.
Copilot came out, what was it like a year ago or so?
Ghostwriter came out recently.
But we're in, you might say, the
first inning. So if we extrapolate to maybe say Ghostwriter V4 or Copilot V6 many years from now,
I want to think about how you see the overall environment for developers emerging or evolving.
So on one hand, I could see how people might argue, you know what, having this technology,
being able to just tell a computer, hey, I want to create this app, go basically build it for me
or at least build the fundamentals for me,
is going to create this wave of really low-quality developers
who don't really know what they're doing.
We're just relying on this AI crutch
to be able to do a lot for them.
I could also see an argument, though,
if we're talking about some of these suggestions
within their development environment,
this is actually creating better developers.
They understand a little bit more about how their code is being executed.
Maybe they more quickly pick up new skills,
new languages, new frameworks,
because they have this assistant.
And so just curious to know how you see this all evolving and, again, this kind of ecosystem of engineers changing.
Yeah.
I mean, anytime you make something more accessible, you just get the entire gamut of things.
Like Instagram made photography way more accessible and you get like a long tail of crappy photographs.
But you also discover people who would have never been photographers, right?
You know, everything in the world is like that.
Like YouTube, most of the creators don't get any views.
But then a few creators like PewDiePie, pre-YouTube, would have been just a kid in Sweden, not a super celebrity, right?
And so making things accessible gets you a lot more people involved.
And the tradeoff you're making is that you're going to get a lot of noise, but you're going to discover talent that wouldn't otherwise be discovered.
I think that's good for the world.
And so with software engineering becoming more accessible because of AI,
I do think we're going to get a lot more developers.
I think we're going to get way, way more developers, probably 10x.
So right now there's like 30 million developers in the world.
There's probably 300 million developers by the end of the decade.
And the way I see, so the developer market evolving to accommodate this new technology
is that there's going to be this bit of a bimodal distribution.
So bimodal distribution, meaning there's no middle, there's a large mass on both ends, right?
So the middle is the sort of glue-code plumber type developer that we have today.
I think that'll go away.
And the reason that will go away is because platforms are going to be a lot more expressive.
They're going to be able to be programmed using natural language.
A lot of the cloud platforms are just building better abstractions, things like Replit will just like make backends a lot more accessible.
And so the middle, I think, will probably disappear because of that, because of pressure from both sides.
The front-end engineer is just going to get way, way more powerful.
So front-end engineers will be able to build full-stack products just because they have access to all these really powerful platforms.
And they're going to be able to just produce a lot more, be able to use AI in every part of the coding process.
Whether it's testing, CI/CD, design, everything is going to be powered by AI and just made a
lot better, including quality control, by the way.
So that's the front-end side.
And then on the sort of back end, low level sort of platform engineering, I think those
people are just going to get a lot more powerful.
Like imagine John Carmack, right? John Carmack is what we call a 10x developer today.
Imagine giving him an army of AI developers that he could delegate work to, that he could, you know,
ask questions of. You're just going to make him 100x, a thousand x more productive.
And so you can have maybe fewer of those sort of low level 10X engineers, but they're
going to be a thousand X engineers.
And so maybe a single company would need two or three elite engineers and then maybe
dozens of front-end engineers kind of building all these products and maintaining them with customers.
But for that core group of elite engineers, their impact is just going to be tremendous.
They're going to be demanding a lot more money.
They're going to be making a lot more money.
So if engineering is really your craft, it's not going away.
And you're going to be able to actually accentuate your power.
I think it's funny that you mentioned the 10x engineer because a lot of people make fun of this concept of a 10x engineer because you don't see, for example, a 10x plumber, because you're a little bit limited in the leverage that you can get with your
time in a scenario like that. But we know that one of the reasons there truly can be 10x
engineers is because software can actually give you leverage. And so if you enhance the power
that software can provide, it's actually not crazy to your point to imagine a 100x engineer
or a thousand X engineer. If you basically have these like robot developers that you can
rein in and apply in a specific direction. It's scale, right? Yeah, exactly. It's the concept of scale.
Yeah, it's kind of like this is what technology has allowed us to do since the dawn of humanity.
You know, you go from collecting crops manually by hand and doing everything to using animals
to then using robots and now a single farmer can maintain entire acres of farms just because of all
the technology they're using.
That's technology, right?
I mean, the 10x denialism is kind of funny to watch.
Like every one of those people that say 10x engineers don't exist
knows in their heart of hearts that 10x engineers exist,
and they probably worked with someone they hire,
respect and admire.
The reason they say that is just politically motivated.
They just don't like the fact that some people can be better than other people.
And that's just the fact in life.
It's uncomfortable to accept that some people actually do bring more value
to different organizations.
All right.
So with the bimodal description that you've
shared, I have to ask the question, which is just simply, is it still worth learning to
code? To me, it feels like it's more worth it today because it's a tide that raises all boats,
right? Like we said, the 10x becomes a thousand X, but like a 0.1x becomes a 10x, right? So if you're
someone whose impact with coding was previously going to be very minimal, now it's
going to be meaningful because there's this rising tide.
Previously, like, you learn a bit of coding.
You learned how to plug together some frameworks and create some UI.
Going from that to, like, doing a little bit of parsing, for example, parsing some text and whatnot, is very hard.
Right now, you can just use GPT-3 to do parsing.
Right?
So you're doing the basic coding.
And then any time you find something that's a little difficult, you can plug GPT-3 in in its place. And so I actually think that programming becomes more fun and impactful because of that.
So it's like totally worth doing, even more worth doing than before because you're going to get
more done, not going to get as stuck. Yeah. And I think, too, about how quickly people can probably learn.
I wonder if you have any data on this. But when I learned, I don't know why I did this,
but I decided to track how many hours I was spending from when I first started to when I actually
felt proficient. And for me, that number ended up being 300 hours. Maybe I was slow, maybe it was
fast. I don't know. But it kind of shocked me in a way because if you actually distill 300 hours down
to if you were learning full time, which I know not everyone can do, that's less than two months.
And now I think to the tools we have today, and I'm like, gosh, like we can probably do it
way faster than 300 hours. So any thoughts on how quickly someone can actually get to proficiency
with the tools that we have?
So I know you're a fan of Balaji,
and he was on your podcast.
You know, he has this beef with media.
And so he wanted to show
that the New York Times is a bunch of bots, right?
And so he put in a bounty on Replit.
He was one of our early adopters of bounties.
And he wrote, it's like,
I want someone to build the GPT Times.
It's sort of like New York Times,
but just totally based on tweets.
You give it a tweet, and it goes and generates an article written in the style of the New York Times.
So actually, it should be up now.
So if you go to thegpttimes.com, that's basically the site that was built using the bounties.
And the person who built it was on day 80, I think, of 100 days of Python.
So 100 days of Python is one of our programs to learn how to code.
And we say you can do a day of Python in basically 20 minutes.
So if he was on Day 80, spending 20 minutes, how many hours is that?
That's 26 hours.
So in 26 hours, they were able to build an entire website using AI and earn $1,800 from Balaji.
So there's your answer.
I mean, maybe that guy was an outlier, but we're seeing it all the time where it takes just weeks for someone to get to a place where they can build things.
And I think that's really what matters.
I think that's incredible because I think one of the best decisions I made was learning to code because to me it really was like a foundational shift in understanding the world around me.
Because again, so much of what we do is digital, it is online.
And so even if you don't want to go and be the developer that's hired to build products, just having that understanding.
And it's kind of crazy when you think about it, if you were to actually position that like before and after to people and say, well, now it only takes 30 hours.
you can do that in a week. You could take four days off from your job, go through this. And I know there's a difference between the 20 minutes a day and, you know, a straight shot. But that's pretty incredible. So the final thing I want to ask you about when it comes to AI is how this might shift not just how quickly we can code or how quickly we can learn to code, but how this may fundamentally change what we can do with code. So let me give you two examples to kind of shape up the question. The first is when we,
learned that computers were better at chess than humans, we didn't just learn that fact.
We also learned that there were all these other moves or modalities to chess that we had never
considered, right? It kind of reframed the way we saw the game. And then another example
is one that our games team shared recently, where basically if you've heard of the flight
simulator game, you can, in the digital world, fly and land an airplane. And it's got this
3D model of the earth that was built off of the 2D model from Google Earth. And that 3D model was built with AI, and it only could have been built with
AI. So those are two examples of how this technology not only maybe made things faster,
but actually made things possible that weren't previously available. So have you seen
any glimpses of that or any thoughts around, you know, we could position it as somewhat of a
superpower, having that superpower accessible? What does that change? And this is such a brilliant
question. I actually haven't thought about it as much, and I think I should. Your example with chess also happened with Go, with the Lee Sedol game. Do you remember that move? That basically
what happened is the AI made a strange move that gave it a disadvantage in the near term
and a huge advantage in the long term. And they were so confused about it because it's almost like
alien intelligence intruding on this thousand-year-old game.
and producing this fundamentally novel move.
So I don't think we've seen that entirely in programming yet,
but I'd be definitely on the lookout for that.
What I would say we've seen is that tasks that previously would require a ton of work, an insane amount of laborious work, are getting done just like that.
For example, the parsing question: GPT-3 is incredibly good at parsing. If you give it malformed JSON, it will still parse it. Writing parsers is one of the hardest things you can do in programming. Writing parsers with GPT-3 is one of the easiest things you could do. You could spend 15 minutes in the OpenAI playground. So really, that goes from a task that requires hours and maybe days and weeks of building and testing to something that takes 15 minutes. And so that's a fundamental phase shift in how we do things, and that's actually quite clear. I think there are ways in which we haven't totally explored how to use LLMs in programming.
Like, can you create backend as a service using LLMs?
So basically, Firebase, but entirely using natural language.
Firebase is this great project that Google acquired. And Firebase allows you to have a backend without any backend knowledge.
You just start storing data and retrieving data, and it just works.
Can you have a backend that's completely programmable using natural language? Can I describe my application and just write the front end for it, and just have the back end taken care of? I think that if that's possible, that'll cut down on
how many engineers you need on your team. That'll cut down on time to create a prototype.
And so I think that will be incredibly exciting. But your question was more like, what is something
fundamentally not possible that became possible? I don't think we've seen that yet.
But I do want to think a little harder about it and really keep my eyes wide open for it.
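To make the earlier parsing point concrete, here is a rough sketch — not anything Replit or OpenAI ships — of how you might hand malformed JSON to a completion model instead of writing a tolerant parser by hand. The model name and request shape below are placeholder assumptions:

```python
# Illustrative sketch: "parsing" malformed JSON by asking an LLM to repair it.
# Nothing is sent over the network here; we only construct the request you
# would pass to a text-completion API and then json.loads() the reply.

def build_repair_prompt(malformed: str) -> str:
    """Wrap broken JSON in an instruction the model can follow."""
    return (
        "Fix the following malformed JSON and return only valid JSON:\n"
        f"{malformed}\n"
    )

def build_completion_request(malformed: str) -> dict:
    # Shaped like a generic text-completion call; the model name is a
    # placeholder, not a recommendation.
    return {
        "model": "text-davinci-003",          # placeholder model name
        "prompt": build_repair_prompt(malformed),
        "temperature": 0,                     # deterministic output for parsing
        "max_tokens": 256,
    }

broken = '{"name": "Amjad", "company": Replit,}'  # unquoted value, trailing comma
request = build_completion_request(broken)
```

A real version would send `request` to a completion endpoint and attempt `json.loads()` on the text that comes back, retrying with a follow-up prompt if that fails.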
Maybe one direction that could happen is the action model.
Have you looked at action models at all?
No, no.
It'd be great if you could describe what they are.
Yeah, so transformer models are the models underpinning large language models, right?
So that's what everyone knows, GPT.
GPT stands for generative, pre-trained transformer.
So you take a transformer, that's a type of model architecture, and you throw a huge corpus at it, and it learns a ton of things, and then you can program it using prompting, right? That's basically what GPT is. Now, you can do a different type of transformer, where instead of throwing a ton of text at it, you throw a ton of actions at it. So what are actions? For example, actions could be all the mouse and keyboard events in a browser. So you just take a raw stream of data and just train a transformer on that.
Now the transformer encodes knowledge about how to use a browser.
That's wild.
All right, what can you do with that?
You can now instruct the transformer to go book an Airbnb for you,
to go do more complex tasks like find the place with the best weather in this time of year
and book an Airbnb for me and my family.
So that's quite interesting.
I would say that wasn't possible before.
So I think that's a fundamental area.
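As a toy illustration of that idea — purely hypothetical, with invented event names and token formats — here is what serializing raw browser actions into a trainable token stream might look like, the step that would come before any transformer training:

```python
# Toy sketch of the "action transformer" idea: recorded browser events are
# flattened into a token sequence, the kind of corpus you would feed to a
# sequence model instead of text. The schema here is made up for illustration.

def encode_event(event: dict) -> str:
    """Flatten one mouse/keyboard event into a single token."""
    if event["type"] == "click":
        return f"CLICK@{event['x']},{event['y']}"
    if event["type"] == "key":
        return f"KEY:{event['key']}"
    raise ValueError(f"unknown event type: {event['type']}")

def encode_session(events: list) -> list:
    # One browsing session becomes one training sequence.
    return [encode_event(e) for e in events]

session = [
    {"type": "click", "x": 120, "y": 44},   # e.g. clicking a search box
    {"type": "key", "key": "a"},
    {"type": "key", "key": "ENTER"},
]
tokens = encode_session(session)
```

A model trained on enough of these streams would, in principle, learn to emit the same tokens itself — which is what would let it operate a browser from an instruction.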
If action transformers became mainstream and as powerful as GPT,
then I think it'll unlock a new programmable platform.
Because now it's almost like everything has an API suddenly.
In the past, it's like a specific database has an API that you can plug into.
But now there's this idea where anything in the browser, anything on the web, could potentially be transformed into its own API without setting up the specific API yourself; the language model could actually figure that out.
That's fascinating. And I think, to your point, we haven't seen these specific examples of wasn't-possible-now-possible yet because
we're really early but even just as we've discussed so far the amount of foundational development
that is required from engineers today that will soon be abstracted just opens that time up that brain
capacity up to apply to something new. So I think it's a little bit inevitable for something to
emerge. We're just not sure what that something is. I want to ask you very quickly about mobile as
well, because up until now, desktop has really captured most of development. I can't think of
many developers who code on their phone. But it sounds like this might be changing. In fact,
I heard you actually say that you think millions of people will code on their phone.
If you think about what we've been talking about with AI, the idea that your primary development experience will include a big portion of chat.
And like the best way to do chat is on your phone.
Everyone texts on their phone.
Everyone talks on their phone.
And I think that being able to generate software by talking to your phone is going to be a very clear thing that will happen.
At minimum, being able to instruct your AIs while you're on the go and review their work, that's obviously going to happen.
But as typing becomes less of an issue on the phone,
then actually making complete pieces of software
just will make a ton of sense.
Because you're just prompting
and you're reviewing and then prompting
or reviewing or prompting,
that sort of loop is very clearly going to work very well on the phone.
And actually, it could be more delightful on the phone because you could do a lot of swiping sort of activity.
Like, you know, with my team, we joke, we call it like Tinder for Code, where you prompt the AI and it gives you a piece of code, and you can say yes or no. If you say no, then it gives you another piece of code, and maybe you give it another prompt, or maybe it'll ask you another question.
So that iterative piece of making software using AI, I think really lends itself nicely to the concept of a phone.
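That prompt-review loop can be sketched in a few lines, with a stubbed generator standing in for the AI — the snippets and the accept rule below are invented for illustration:

```python
# Minimal sketch of the "Tinder for code" loop: an AI proposes snippets, the
# user swipes yes/no, and a rejection triggers the next proposal. A real
# version would call an LLM; here the generator is a stub.

def propose_snippets():
    """Stand-in for an LLM yielding candidate implementations."""
    yield "def add(a, b): return a - b"   # buggy first attempt
    yield "def add(a, b): return a + b"   # corrected attempt

def review_loop(candidates, accept):
    """Present candidates one at a time until one is accepted."""
    for snippet in candidates:
        if accept(snippet):       # the "swipe right"
            return snippet
    return None                   # ran out of proposals

# Simulated reviewer: rejects the buggy snippet, accepts the fixed one.
chosen = review_loop(propose_snippets(), accept=lambda s: "a + b" in s)
```

In a real product the `accept` callback would be the human swipe, and a rejection could carry a follow-up prompt back to the model.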
And then the other thing is that it's not just the phone, it's the tablet. It's kind of crazy that, you know, we don't have good IDEs on the tablet. It's kind of surprising that Microsoft hasn't made VS Code for the tablet. So where's the first kind of major IDE for the tablet as well? And I think you can do everything that you could do on desktop on a tablet. You can attach a keyboard to it and you can go to any coffee shop in the world.
Everything happens in the cloud.
Your storage is in the cloud.
Your AI is in the cloud.
You don't need that much local capability, and you could just write software on that.
Yeah, I haven't thought of that, actually, but it's true.
I have never seen someone coding on a tablet, just like I've never really seen someone
coding on mobile, but let's see how that evolves.
Another feature you've developed that you alluded to earlier was Bounties.
So can you tell listeners a little bit more about what this is, this idea of Bounties, and also
just a little bit more about how it's going so far.
Yeah.
So Bounties is part of our sort of portfolio of products that make it easy to make software.
So we realize that not everyone in the world wants to be a coder.
And I think a combination of the network effects on Replit and being able to discover a lot of software engineers, like the kind of guy who made the GPT Times for Balaji, and the AI revolution and, you know,
sort of what's happening in currencies,
whether cryptocurrency or centralized currency,
there's a lot of interesting things.
I think it's a sort of a bit of a trifecta
that allows us to kind of build
what I think is a fluid software labor market.
There's this theory, the theory of the firm, by a famous economist named Ronald Coase.
And the fundamental observation is that
full-time employment is a bug.
It's not a feature of the market.
The reason why full-time employment exists
is because the transaction cost of doing something is really high.
Uber is an example of something bringing the transaction costs almost down to zero.
And by doing that, it creates a huge amount of flexibility in the market.
So anyone can enter the market, anyone can do the work,
and they can do it at their own terms,
and there's no binding contract between the different parties.
Software has been something that's been very hard to actually contract out.
And when you do contract it out, you get a lot of problems. There are a lot of issues running the software. You get a lot of quality issues. So the fact that Replit is a fully integrated place with high-quality software engineers using it allows us to be a place where someone can go put up a description for a piece of software that they need to get done. And then a high-quality developer can go and, using AI, make that software very quickly.
And we're seeing people making software in 30 minute increments.
Someone the other day put a request for a landing page up; they attached a Figma file.
They got it back in 30 minutes.
That's really never been done before.
And why shouldn't it have existed like that before?
And so I think the combination of all the technologies we're building allows us to create this marketplace.
How are you thinking about this marketplace among the existing marketplaces?
Because it sounds like, and let me know if I'm wrong, this is more peer-to-peer.
I post a project, someone within the community puts their hand up for making it, or maybe they're matched. But when I think about the existing marketplaces for developers out there,
their job in many of these cases is to vet a bunch of developers and say, okay, you have this
skill set. You're this good of a developer. And then also vet their clients and say, okay,
this is a good project. This is worthwhile for us to introduce into this marketplace. And then
they match people. And then, of course, those marketplaces take some commission for doing so.
And so since many of these already exist, like, how do you see bounties as an ecosystem
differentiating or maybe providing something new to people within the community?
I think it's already differentiated.
And the reason it's already differentiated is because the development environment is built into the system.
Imagine if Uber was a marketplace that just connected you to people, and then they had to go get their own car, right? So you'd have to go meet someone at a coffee shop and then have them go get a car.
That's an absurd example,
but that's what happens at Upwork
or some of these marketplaces.
Like I asked for a piece of software
and then you go make it in something that I don't know.
And then you send me a zip file.
And then what do I do with that?
Right?
In Replit, I just send you a link.
A link to a computer that's running your application.
That's like fundamental innovation on top of that.
And then like all the services just being integrated right there.
Like your OpenAI API key, the cloud runtime, like all that stuff, the database. Replit being this complete platform just makes this process a lot more efficient.
That's actually a great point, because when I think
about other scenarios where people hire developers, I think one of the massive gaps is, as you
said, the standardization. But it's also, if we think about the AI tools that you're integrating,
if someone gets a project and they can actually read the code and maybe they're not a developer,
but have some level of understanding
of like this was what I got back.
I think that's actually massive differentiation
because in the past they'd just get back
this code that they can't understand at all.
If they need to refactor anything
or change anything
or get a new developer to work on it,
they really struggle,
and they really also struggle
to have some sort of sentiment
in terms of how good the code is.
Yep, exactly.
And we want to create more market dynamics
in the future that are more interesting.
Like, for example,
this concept from crypto called staking, and the idea of people being able to stake their money in order to say, I'm going to build this, and I'm going to build it better than anyone. So someone will put up a bounty for a thousand bucks, and a developer will say, I'm so sure of my ability to do that, I'm willing to put up a hundred bucks that I'll be able to deliver it on time. And if I don't, you can take my hundred bucks.
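One minimal way to model that staking mechanic — an assumption for illustration, not Replit's actual design — is an escrow where the poster's reward and the developer's stake are settled together on delivery:

```python
# Toy escrow model of a staked bounty: poster escrows the reward, developer
# stakes alongside it, and settlement either pays out both to the developer
# or returns both to the poster. Amounts mirror the example in the text.

from dataclasses import dataclass

@dataclass
class StakedBounty:
    reward: int          # e.g. $1,000 put up by the bounty poster
    stake: int = 0       # e.g. $100 put up by the developer

    def stake_on(self, amount: int) -> None:
        self.stake = amount

    def settle(self, delivered_on_time: bool):
        """Return (developer_payout, poster_refund)."""
        if delivered_on_time:
            # Developer earns the reward and gets the stake back.
            return self.reward + self.stake, 0
        # Poster gets the escrowed reward back and keeps the forfeited stake.
        return 0, self.reward + self.stake

bounty = StakedBounty(reward=1000)
bounty.stake_on(100)
payout, refund = bounty.settle(delivered_on_time=True)
```

The interesting design question is who holds the escrow; on a platform like this it would presumably be the marketplace itself rather than either party.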
I love that. And so
I think there's a lot of innovations to do in
markets. And you can also integrate some kind of
AI things on top of that. So, for example, you can have
a sort of AI project manager
where now AI takes one bounty and splits it into 10 bounties. So we have this experimental product that I tweeted about recently, but basically the moment you put in a bounty, we actually generate the scaffolding for the code. Scaffolding in programming is basically the structure of the code. And then every function that is not implemented, that function could be its own individual bounty.
So I think once you add
all these sort of innovations together,
I think you're going to get this super
fluid market
of AIs and humans
that you can go from an idea
to a product just like that.
And crucially, like you said, the product that you're going to get is something you're going to be able to iterate on in the future and get more bounty hunters to contribute to. It's a living artifact that's working on the platform, as opposed to, again, a zip file tossed over the wall to you.
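The scaffolding idea can be sketched roughly like this: detect the unimplemented functions in a generated skeleton, each of which could become its own bounty. The skeleton and the stub-detection heuristic below are assumptions for the example, not how Replit actually splits bounties:

```python
# Sketch: turn every unimplemented function in a code skeleton into a
# candidate bounty, using Python's ast module to find stub bodies.
import ast

SCAFFOLD = '''
def fetch_tweets(handle):
    ...

def write_article(tweet):
    ...

def publish(article):
    print("published")
'''

def unimplemented_functions(source: str):
    """Names of top-level functions whose body is just a stub (`...` or pass)."""
    stubs = []
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            body = node.body
            is_stub = len(body) == 1 and (
                isinstance(body[0], ast.Pass)
                or (isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and body[0].value.value is Ellipsis)
            )
            if is_stub:
                stubs.append(node.name)
    return stubs

# `publish` already has a real body, so only the two stubs become bounties.
bounties = unimplemented_functions(SCAFFOLD)
```

Each name in `bounties` could then be posted as its own small bounty, which is the splitting behavior described above.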
I love the idea of
people being able to like legitimately bet on themselves if they want to participate if they want to
take on a bounty. Because actually, when I talked to Balaji in our interview, we talked about this idea of evidence versus confidence. So someone can have a lot of confidence and say, hey, I have these credentials on my CV or I have this degree from somewhere, and that instills a lot of confidence in other people that they can get something done. But what's better than confidence is evidence
that you can actually get it done and if you can show hey I've done this project before like
literally here is my work. And also, as you said, put $500 down to say, hey, if you don't
like it, like you can have my $500. Like that is more than confidence. That is like evidence that
you can do the work and you're willing to put your money where your skills are. So I love that
idea. With that said, let's move on to the very final section and we'll just rapid fire go through
these questions. They are questions that I've actually seen Sam Altman from OpenAI talk about
as four different important questions that he thinks are relevant to this advent of AI,
especially as it progresses, again taking the lens of not just Ghostwriter V1 but Ghostwriter V10, quite a ways away from now, and how good, how proficient it can get.
So I'll just read you each question.
I just want to get your like raw take on each of these.
So the first question is with that technology, how do you think that fundamentally
changes society?
Something I care deeply about is equality of opportunity.
So I think just the idea of anyone in the world being able to contribute and build something is just beautiful and just amazing.
It allows us to include everyone and anyone who's willing to work hard.
And that's like the real Americanism, in my opinion.
That's like what America's built upon: the idea that if you're willing to do the work, you're going to get the chance at doing the work. And if you do the work right, you're going to get rewarded for it. And I think that's
beautiful. However, I think that comes with also increased inequality. I think the pie is going to be
bigger, but also there's going to be differences between, again, like the 10 X difference becomes
a thousand X difference. And that's going to create some political issues because I think it's going
to create envy and it's going to accentuate some differences. And I think that part is not going to be fun, and I think reasonable people should be able to say it like it is. And that's why I think, you know, the 10x denialism is a small symptom of a larger issue, the issue of just being uncomfortable with people being rewarded more because they have a better ability at doing something. So that's just going to be a big issue that society is going to have to deal with.
I agree. Exponential technology means potentially exponential gains for certain people and not others, and that's not necessarily inherently bad, especially if the pie is growing. But yes, I agree with you, that's going to be something that is divisive. That actually relates to question number two, which is: how do we ensure that it benefits us all? Now, that does not mean it benefits us all equally, but how do we ensure that this technology is something that widely benefits all the people participating in society?
I think as much as possible, having competition is going to be important.
I think a world in which just one company controls the biggest AI models is not going to be great,
but neither is a world with just two or three companies.
I think the lesson we learned from the last generation of big tech is that,
especially if there's like a monoculture like Silicon Valley,
you're going to have similar decisions made.
And so how do we prevent that in the world of AI?
That's something that's very important.
I think open source plays a part in that.
I think as much as possible,
pushing the technology to be easy to run and easy to develop,
but not necessarily something that's sort of closed off
and like a few people can control.
I think a lot of what's called AI alignment today
is not really aligning with what the average human being wants.
It's aligning with, like, what the sort of average Silicon Valley sensibility is, which I don't think is good.
Like, I think that we should try to build as much as possible neutral technology.
So my bias is going to be more freedom, more decentralization.
Ultimately, these models are a tool.
The tool itself is neutral, but the application of the tool or the technology doesn't always happen to be neutral.
And they hate the fact that it can be neutral, and they think that neutral is bad.
And I think that's where we really need to push back is that, no, actually, it should be neutral.
And then the user has to decide what to do with it.
Yeah, I think that's a good modifier.
I'm going to combine the final two into one question.
You can maybe touch on either or both of them.
But the question is, given, again, if we imagine the exponential nature of this technology, the evolved form of it, how do we deploy it safely?
And you can kind of interpret the word safe in your own terms.
How do we deploy it safely and also maybe how does governance play a role on that?
You know, I just don't like the word safety. And I think a lot of the problems in the world today, and the way our world is shaping up, come from this concept of safetyism, this concept that, like, companies need to ensure our safety, and then they encroach on our freedom to ensure our safety.
It's such a stupid topic, but there was a recent gas stove debate, right? Some government official said they might consider banning gas stoves. That kind of impulse of, we know better than you, sort of the nanny-state impulse, I think is bad. And actually, every atrocity governments have created has come from that idea. Like, the government wanted the American people to be safe from the Japanese; that's why they created Japanese internment camps, right? You can use safety to do the most appalling things in the world. And so I think we need to be skeptical. When anyone's talking about safety, they probably want
something bad for you. That would be my bias. At least when they want to impose something under the
guise of safety, I would be super skeptical. And then on the question of governance, I would also be
skeptical of that. So maybe I'm showing myself a libertarian impulses here. But I don't think
anyone is inherently responsible for governance of technology. Look, I think government
has a role to play. The government
sort of moves slow, and
that's good that it moves slow because it needs
to learn what's happening.
You know, if the government had moved really fast on regulating cars, they would have gotten it super wrong.
And it's now like making
self-driving move a lot slower than
it should be because the regulation just
mounted up. But I think the process
in the U.S. is actually fine
for regulation, just like move it
slow, and let's learn
what's happening in the real world,
and then let's have a reasonable
debate and discussion. I think a democracy, at the end of the day, will arrive at the right
decisions if there's sufficient freedom. And I'm a big fan of debate and vicious debate
in order to arrive at the truth. But that's just going to take time. And the problem now is
that there are some people talking about AI takeoff being so fast that you need to react
to it really quickly. And I think that'll get us into trouble, actually. If you're going to
force politicians to understand this technology before it's even deployed, you're really asking for
trouble. Yeah, but I do think on the flip side of that, something beautiful that's happened in
the last year or so is that the technology has taken off really quickly, but it also has done so
in a way where it's gotten in the hands of consumers. And you see this, if you look at the charts
of like chat GPT, for example, their speed to 10 million users was so much quicker than, you know,
Facebook or Snapchat or other apps in the past. And so I think, while it is inviting some skepticism, to say, oh my gosh, this is happening so quickly,
This is scary.
We don't know the implications.
I also think you're going to see a level of pushback, right?
Because at that point, you're taking something away from people, this superpower as we talked about it.
It's important to notice that nothing catastrophic happened, right? This technology is now in massive deployment, and nothing as bad as what the sort of LessWrong, AI-safetyism part of the debate said was going to happen has happened. And so it's important to recognize this, because
the people that are going to be arguing for extra controls and everything are going to always
paint a future picture of catastrophe. But like the question to ask them is like, okay, now there are
millions of people already using this. Nothing really bad happened. So, you know, what are you
actually worried about in concrete terms? Yeah. And I think your time dimension is so important here
because to your point, you look at technologies like cars
and they were worried in the early days
about cars scaring horses
and I think just having a layer of humility
to say, we don't know how this is going to shake out.
Absolutely.
I'm super optimistic about the future of the technology
and everything that's happening.
I think that's really lovely.
It's like the best time to be a builder.
Thanks for listening to the A16Z podcast.
If you like this episode,
don't forget to subscribe, leave a review, or tell a friend. We also recently launched on YouTube at YouTube.com
slash A16Z underscore video, where you'll find exclusive video content. We'll see you next time.