a16z Podcast - Beyond Avatars: How AI is Reshaping Online Identity
Episode Date: April 24, 2023

In a recent episode, we delved into the first wave of technology that was used to create digital influencers like 'Lil Miquela back in 2016. Fast forward to today, we're presented with a whole new set of tools that enable almost anyone to build their own digital influencer. In this episode, Sinéad Bovell and Danny Postma discuss the transformative impact of artificial intelligence on the world of modeling, online creation, and self-representation. From the rise of AI-generated photos to the democratization of creativity, they discuss the potential of AI in shaping the future of digital expression.

Topics Covered:
00:00 - Introduction
03:08 - The key insight
08:00 - The photo studio of the future
09:44 - Programs and models behind deep fakes
10:58 - How the technology is used today
12:36 - How real is today's content?
15:29 - The cost of a virtual model
17:13 - Industry benefits
19:28 - Market appetite
23:25 - Evolving tech
25:15 - Unlocking a new wave of creativity
28:38 - Future predictions
30:23 - Keeping up with tech
32:02 - Finding a moat
34:09 - Stacked AI models
35:25 - Running your own GPUs and models
37:33 - Product ideas

Resources:
Find Danny Postma on Twitter: https://twitter.com/dannypostmaa
Headshot Pro, ProfilePicture.Ai plus all of Danny's projects at: https://www.postcrafts.com/
Find Sinéad Bovell on Instagram: https://www.instagram.com/sineadbovell/?hl=en

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. For more details please see a16z.com/disclosures.
Transcript
I barely sleep at night because every time I wake up the next morning, there's something new that's launched.
In the arc of innovation, a few years can feel like a long time or no time at all.
Because early on, things move slow. People ask, are we there yet? And also jump to point out that
we're not exactly where people thought we would be. But when technologies hit that exponential phase,
things move really quickly. Sometimes so quickly that we forget that tools like Midjourney
or stable diffusion literally did not exist one year ago.
Now, in a recent episode, we discussed the first wave of technology used to build
digital influencers, and we did that with the creators of Lil Miquela.
Miquela was a first mover, and she was built in 2016, but at the same time, she was built with
CGI.
But today, we're actually presented with an entirely new set of tools, these AI tools,
that give almost anyone the opportunity to build a digital influencer of their own.
So in today's follow-up episode, I thought it'd be interesting to bring in two new voices, seemingly
from different worlds, but also converging on one topic. That topic being, how does artificial
intelligence fundamentally change the way that we all represent ourselves online? The first
voice is Sinéad Bovell. Sinéad describes herself as a futurist, but she also wrote an article
titled, I am a model, and I know that artificial intelligence will eventually take my job. But
here's the thing. She wrote this three years ago, which again, depending on the prism that
you have on the world, can both feel like yesterday and forever ago. Our second voice is Danny
Postma. Danny has explored the many ways that AI can reshape how we express ourselves online
by building the tools to actually enable that. So that ranges from the memes that we post,
to the tattoos that we put on our body, to the headshots that we all post on LinkedIn. And over
the last few months, yes, months, not years, he's created seven different projects, and that
ranges from the things I just mentioned, tattoos, memes, headshots, but also a virtual
modeling agency. And honestly, if you've been on AI Twitter recently, I can almost guarantee
you've seen at least one of his projects. So we brought in Sinéad and we brought in Danny,
and together we explore how these tools may impact the world of modeling, but also creators
and just about any forum that we as humans show up online.
We'll discuss the deflationary nature of these tools, what they unlock, how to build a moat, and so much more.
And I'll also say that Sinéad unfortunately had to drop a little bit early, but Danny stuck around with us till the very end.
All right, let's get started.
As a reminder, the content here is for informational purposes only.
None of the following is investment, business, legal, or tax advice, and please note that a16z and its affiliates may maintain investments in the companies discussed in this podcast.
Please see a16z.com slash disclosures for more important information, including a link to a list of our investments.
I am so excited to have you two online today. As you both know, we did an interview with the creators of Lil Miquela just before this. And Lil Miquela, it was started in 2016. And now,
it feels like we're going through a second wave of sorts. It feels like maybe anyone can create
these characters online, and that could be characters that are net new, that are not necessarily
representative of them, or it could be representing their own personality, their own likeness
in a new way, using, I feel like, the technology of the year, AI. So before we get into that
technology and what people are building there, Sinéad, you wrote an article in 2020 with Vogue,
that I feel like was really prescient, it was called, I am a model, and I know that artificial
intelligence will eventually take my job. And here we are in 2023. Maybe this does not
sound surprising three years later. But back then, that was a statement. And so tell me a little
bit more about that article and what inspired you three years ago to write it. Yes. So I also
worked as a futurist, so a lot of my time is spent buried in data, tracking trends. And I had come across a company called Datagrid, using a form of generative AI called generative adversarial networks, which we would know more informally as deepfakes. If this technology can so accurately
create artificial identities, artificial beings, what industry could that have a lot of ramifications
for? And the clear cut for me was fashion, especially when it comes to e-commerce, a line or
a segment of fashion where it's not necessarily about these creative big movements for a fashion
model, but instead just small variations in the posing. And that in some ways is somewhat robotic
already, we don't have a lot of variability into the types of poses we can do or the things that we
can try when it comes to e-commerce. It has to look good on a website. And so that to me was a very
clear trend line that I was able to draw. And I did also see, as I researched more and more, fashion companies like Zalando, a big e-commerce giant in Germany, also researching the technology.
And then when you combine it with the fact that Lil Miquela was also a big influencer back then, this was like 2019 when I started to research, all signs pointed to a future where society might
be okay with artificial beings representing clothing and where it could be a much more cost-effective
future for fashion companies. Totally. And we're seeing this be extrapolated, not just to fashion, and we'll get to that. But I want to hear, when you wrote this article, what was the response? And
maybe break that down both by the people in the modeling industry and then also outside.
Yes. So it was definitely a shock to a lot of people. And I think for a long time, even though
technology has always disrupted fashion, whether that's e-commerce or whatever it may be,
nobody was really talking about the impact of automation on creative jobs at that time and especially
not fashion modeling. So the fashion world, a little bit of fear, I would say, in the modeling
world. But I also try to, I don't like to just drop kind of fearmongering things and leave
everybody to fend for themselves, saying things like, it isn't just models, right? Everybody's
going to have to prepare for the future of work. So providing that perspective, I think
was a little bit reassuring for people, but the amount of responses I received, and not even
just in the modeling world, people in advertising, photography, all sorts of surrounding
ecosystem players in the world of fashion that will inevitably be impacted by a world where we can
digitally generate photoshoots. The article was received as something highly likely, but I think
people couldn't conceptualize how this transformation would actually come about. So it became
suddenly true for people that, okay, in some way, modeling might be impacted by AI, like other
industries would be, but they couldn't actually understand what that could actually look like. It
became hard to see how that would materialize. And then now in this kind of explosion of generative
AI, it's like all the light bulbs are going off connecting all of the pieces. Yeah. And as I said,
it's not just models. I think a lot of people, as you said, it's materializing where they're seeing
how these tools today are already maybe not replacing them, but are able to do things that they
previously thought they were safe from the advancements of these technologies. In fact, that was really
how people viewed it. They were like, if you're creative, you know, that's where the robots
are unable to match you. And we're now seeing that maybe that's not the case. And so, you know,
talking about some of these tools, Danny, let's move on to you. You've created many different projects and
we'll get into more of them. But one of them was really, you know, this like photo studio of the
future, this modeling agency where you really could spin up a model, put them in these different
backdrops, drop in your product, things like that. And so you posted this a couple weeks ago
and it went really viral. I think one tweet in particular reached like two million people,
but you also, I think, saw, maybe similar to Sinéad's article, a range of responses. And so tell
us a little bit about what that product was, but also how people responded to it. I've been dabbling with different kinds of projects, and it all came together and I made this tweet. My product wasn't ready yet. Deep Agency. The tweet went absolutely viral, so within 24 hours I had to come up with something to put live, because otherwise there wasn't a website online. I think it went viral, got 30 million views, as a modeling agency, and it's nowhere close to that. I think there's a lot of other tools that are way better at doing those kinds of things. But as you said, like all the
technology is coming together right now, all these different kind of models, the ability to generate
deep fakes, like non-existing people, but also the ability to put the clothing on those kind of models.
It's like all these separate tracks of development by researchers over the past years that are all coming together in 2023, turning into tools like this.
Since you've been kind of like in the bowels of these tools, Danny, do you think, you know,
let's say like a year from now or maybe even five years from now, is it realistic that basically
you can represent what the modeling industry does today, or what influencers do today, with these technologies, and have it be pretty much a one-for-one match in terms of it being as good?
So what I'm able to do right now, what others are able to do right now is the ability to use
Dreambooth, which is technology to train an AI on top of someone's face and you can basically
make a clone, a digital twin or whatever, of someone.
There are other models, called StyleGAN, which are basically trained on millions and millions of photos, and they can generate faces and heads that do not exist at the moment.
They are non-traceable.
They don't exist.
Combine those two, and you can train a Dreambooth model on a non-existent person and create
a model, for example.
And then there are other deep learning models that are able to take what I believe is called a top-down photograph of a piece of clothing that's not being worn by anyone. And the AI knows how to put that clothing, with the right kind of folds, on top of someone's body, being able to, like, dress the model in that kind of way.
So the technology is there at the moment.
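To make the Dreambooth piece of this concrete, here is a minimal sketch using Hugging Face's diffusers library to generate new photos from an already fine-tuned checkpoint; the checkpoint path and the "sks person" prompt token are illustrative placeholders, not Danny's actual setup.

```python
# Minimal sketch (not Danny's stack): generating new photos of a subject
# from a Stable Diffusion checkpoint that was previously fine-tuned with
# DreamBooth on ~10-20 photos of that subject (real or GAN-generated).
# The checkpoint path and the "sks person" token are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-checkpoint",   # hypothetical local path to the fine-tuned weights
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a professional photo of sks person wearing a linen blazer, studio lighting"
images = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images

for i, image in enumerate(images):
    image.save(f"shot_{i}.png")
```

In a setup like this, the prompt effectively is the photo shoot: new backdrops, outfits, and lighting are just changes to the text.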
I believe it's up to a company that understands the fashion industry to take it to the next level.
Oh, I love that point, because there is, you know, this sense of taste: the people
in the modeling industry are not just the folks who look a certain way or are able to hold a
camera a certain way, but they do have this background, this understanding of aesthetic.
And so, Sinéad, tell me a little bit more about how maybe some of these companies are using
this technology today. Are they leaning in and saying, oh, actually, this is really valuable?
we're going to utilize these new tools because we are in this new era, or are they kind of
resistant to this new wave? Yeah, I think it kind of depends on the type of fashion company.
So if you look at e-commerce and a company like Zalando, although we might not be as familiar with them in America, they're quite a big fashion giant in Europe. For them to be researching this technology and showing a very viable proof of concept as early as 2019, we know that fashion companies are exploring this. And we can also say a company like H&M, they have thousands of
data scientists that use AI to do things like forecast trends and understand and analyze supply
chain. So they already have somewhat of the infrastructure to deploy the next advancements in
what the industry will be adopting. I would say where we might see a little bit more resistance
is potentially in high fashion when it comes to using AI for things like design, although I do
expect as any tool, it will eventually be adopted. But I would argue that there's probably
been instances where any of us on this podcast right now have come across an AI model and just
not realized it, shopping online. So I think the technology has been here for years. It's 2023.
Companies were investing in it in 2018. It's largely here. It just hasn't materialized as broadly
at scale yet, but that's coming. I think maybe one pushback people might have is like, okay,
if this thing is AI generated, it's not quote unquote real. But how real are the photos that we're
seeing today anyway? Like ignore the digital virtual influencer concept. The photos that we're seeing
on a billboard, like how photoshopped are those? How real is the end image that we're seeing
today? I feel like there's a degree of fabrication already. And so maybe you could just kind of
speak to that concept. And maybe again, just like the natural pushback for us to say, well, this is not
human anymore, but how far past the original photograph were we already? Yeah, I mean,
that also depends on how representative the models are of society. So it could be argued that it is
hard to find examples of diverse representation in the fashion industry as it stands today. And so
therefore seeing a piece of clothing on somebody else. How helpful is that if your body is a different
shape or you couldn't conform to the clothes in the same way? And in terms of the actual shoots
themselves, of course, there is a lot of pinning and last-minute stitching and things to make
things appear to fit a certain way. And then, of course, the Photoshop part towards the end.
I'm not the person on the computer doing the Photoshop. So I couldn't tell you exactly how much
Photoshop there is. At the end of the day, yeah, an image definitely does get altered.
So it isn't entirely like this documentary style of photography where it just goes specifically
from shoot straight to market. But at the same time, there are interesting
social and cultural trends that are converging. The idea of photoshopping images is becoming a little
bit more taboo. People are speaking up against it that they want to see maybe some of the stretch
marks or the cellulite or the acne, the things that make people human. And so it's interesting
that those types of social and cultural trends are appearing at a time when we're arguably
becoming less human in some ways or how fashion and culture is represented is becoming a little
less human. Yeah, I mean, I think maybe one of the most interesting aspects of AI is that you can
really prompt it to give you whatever you want. And so you can prompt it literally to say,
hey, I want this person to have acne. I want this person to be of my race or culture or gender.
You can really prompt it to give you whatever you want. But at the same time, I think there is
this yearning for what people, again, call real. And I think that term is going to take on a whole new
dimension as we look forward. But the last thing I want to ask just as we talk about like fashion and
modeling is maybe a sense of the cost. The cost differential is something that ultimately plays a
role in adoption. And so, Danny, let's just use an example, right? So you have this AI model.
This person doesn't exist. In fact, you did create one of these. Her name is Alice. And people can find
this model at this model does not exist.co, I believe, and we'll link that in the show notes.
Let's just use that as an example. If you wanted to do a photo shoot with Alice, like how much would
that cost? And let's break that down between, like, actually training the initial data set or the
initial face and then actually generating a bunch of images. Let's just give the audience a sense
of what that takes. So currently, to train a model on someone's face to be able to generate photos after that, it costs 60 cents. Okay. Yeah, so not even a dollar. And then I think per photo, you're at a fraction of a cent right now. So you could generate hundreds of thousands of photos for less than a hundred bucks. So technology is pretty
cheap already in that sense. So I believe, and this is from what I've seen talking to customers, the number one question in my live chat is from mom and pop shops with clothing brands who do not have the funds to hire models to put on clothing for their web shops, who I think
probably wouldn't even ever be able to hire a model to do the photo shoots for them. So I see this as it's
like democratizing for the lower spectrum of the small business, enabling them to do the fashion
shoots. And yeah, I think it's mostly for those kind of consumers that this is interesting to do.
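As a rough sanity check on the numbers Danny quotes, here is the back-of-the-envelope arithmetic; the per-image price is an assumption, since he only says "a fraction of a cent."

```python
# Back-of-the-envelope check of the figures quoted above.
# The per-image cost is an assumed value ("a fraction of a cent").
TRAINING_COST = 0.60      # one-time fine-tune on a face, in dollars
COST_PER_IMAGE = 0.0005   # assumed: one twentieth of a cent per generated photo

def shoot_cost(num_images: int) -> float:
    """Total cost of training one model plus generating num_images photos."""
    return TRAINING_COST + num_images * COST_PER_IMAGE

print(shoot_cost(100_000))  # 50.6 dollars, i.e. well under a hundred bucks
```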
Yeah, and there's other applications of that. Like, I did a presentation recently, and I used Midjourney
for all of my images within that presentation. And I probably would not have had those images.
I wouldn't have hired someone at the very least to go create them. And so there is, as you said,
this democratizing force, this ability for folks that didn't have access to a specific technology
to use it in these new ways.
Sinéad, let's hear the other side of that.
Like maybe perhaps some of these folks who are models may not be hired for the same roles
or maybe they will still be.
And in the case where they are, what are the benefits of all this?
Like, can you do a bunch of shoots where, you know, your avatar is, quote, unquote,
attending those shoots without you actually physically being
there? Can you save a bunch of money and fuel by not traveling? Can you actually, like, upgrade yourself
in some way? I'm just kind of throwing ideas out there, but how might this actually change the
industry for the better? Is there a world in which a model can be in multiple places at one time
and therefore increase their income stream and have more assets of themselves to be out in the
market? Absolutely. That could be quite profitable, quite lucrative if the industry does
evolve that way, in a world where we want to see the same influencers and people we look to
in fashion to continue to be the face of brands and the clothing. But if you look at the history of
technology, technology is often quite deflationary over time, especially when it comes to wages.
So in a world where a model can be in multiple places at one time, yes, they may be able to
have more revenue streams coming in, but do we see downward pressure on prices? Because a lot of
it's created digitally, and there's an increase in supply of identities that could pass for
being human. And so that's where things get a little bit gray, with a little bit of ethical red flags
and kind of caution tape. And I will say, again, it's not that this is unique to modeling.
We're going to see this across the board with AI. It's going to put downward pressure on a lot of
different wages from modeling to programming. But that's where it becomes a little bit
challenging. And then what is uniquely applicable to modeling and maybe acting is that not only
are people getting automated, but their likeness and the communities they represent are also
getting automated, but someone is still benefiting off of that very specific identity. So that's
where it starts to become a little bit gray. Yeah. I mean, it's going to be interesting to see
how that all evolves if, you know, like what regulation gets put in place, what IP people
own or do not own that goes into these models. On the topic that this really does expand
past modeling, I think what's fascinating, Danny, is you've really gone on this sprint over the last few months. I mean, I'm just going to name some of the projects you've created. You did Tattoos AI, which is, you know, an AI-powered tattoo artist. We mentioned Alice, this model that does not exist
that you created. You started, I think, or one of your early projects was Profile Picture AI, which is
similar to Lensa, which most people would recognize. We talked about Deep Agency, Meme Morph. People
can morph themselves into their favorite meme. And then most recently, you're working on
Headshot Pro, which is basically people can create their next corporate headshot or a company can
create a kind of wave of headshots for their entire staff. And so among those projects,
first just kind of tell me what you're seeing as you've launched these projects, what people are
getting excited about, what you're getting a lot of traction with right away without maybe even trying, and where actually, you know, this technology may be interesting, but it feels more like, you know, a trek uphill, if that makes sense. Yeah. So first of all, the whole reason this scales really fast for me and gets a lot of attention is because it's indistinguishable from magic at the moment. You upload a few photos and suddenly you get hundreds and hundreds of profile pictures out of it. So this is why people keep sharing it on Instagram, on TikTok. It goes viral really fast. The second thing, I think what mostly gets solved at the moment is, I get
a lot of messages from folks who say, I don't have any profile pictures, I don't have any photos,
and suddenly I can look really good online. And this is one of the major selling points for
a headshot, for example. Well, like me, for example, I hate going to photo shoots because I always
look super awkward. I don't smile. So for them, this like solves an actual issue that they can look
good on their LinkedIn. Same one with profile picture. You can turn yourself into whatever you want,
Lensa showed how popular that could be.
I believe Lensa did $40 million with that.
I like this as an art. I like to dabble with it. I like to try things out. I get an idea, I have to work it out. It gets easier and easier because it uses the same technology. And the headshot one has been the biggest success, traffic-wise, financial-wise, at the moment.
For remote teams, which is how it's branded at the moment, like, good luck trying to get a photographer. If your team is remote and you have people in Asia, America, Europe, and you want to get the same photo style, you cannot send a photographer all around the world to get the same style. But everyone can upload a photo,
train an AI model on them, use the same styles to generate it, and basically everyone gets the
same headshot for that team. If you're creative enough with it and you try enough, there are use cases. People think it's going to take away the photographer, but I think it's going to enable different kinds of things that weren't possible before with this technology, like having a photographer everywhere in the world, trying out different hairstyles, for example. Things that weren't possible before are enabled now by this technology.
Yeah, it's kind of crazy.
If you go to Headshot Pro, I think you have this live counter.
And last time I checked it said, 450,000 plus headshots were created.
And that's just...
I think it's a million now because I have to manually update it and it's two weeks live now, yes.
Oh, I was going to say, I just checked it last night.
You kind of alluded to this, but there's ways that people are using this product.
Like, you're creating these projects and you're releasing them into the world.
and you're kind of tinkering and thinking,
oh, maybe people will use it in this way.
But I've seen you tweet about the fact
that you've actually been surprised
that as you release this tool,
people are using the tool in ways that you didn't even expect as the creator.
Well, that's like with Profile Picture: I learned about the demand for headshots from it, because we used to do only these artsy styles and people kept requesting, hey, can I get a LinkedIn photo for myself? So we kept tinkering in that direction.
Yeah, and just on the note of what people are being drawn to,
I think maybe one way to have a pulse on that is what people are willing to pay for.
And so among your projects, have you noticed any trends there in terms of things that people
almost seem allergic to? Like, oh, this is cool. This is novel. But like, I'm not going to pay you
$30 for that. Versus maybe the headshot example is something that people have ingrained in their
mind. Like, oh, a headshot is worth X dollars. So with profile picture, we started pretty expensive
at the time. And I've been lowering the price, lower, lower, lower. And I think we're almost around cost price at the moment, $5 to generate them. So, yeah, Lensa basically killed the whole market after they went live.
Right.
So, yeah, I think mostly now it's the high quality output.
And I really wonder where we're going to go in the next few months.
Because these models keep releasing, keep releasing.
Like, I barely sleep at night because every time I wake up the next morning,
there's something new that's launched that already makes the whole technology obsolete.
It's going so fast.
So I don't know where it's going to end up.
Well, I know.
I feel like a lot of people would agree with that sense.
that it's moving so quickly. It's so hard to keep up. On the note of it evolving, though, right, these new models are being released all the time: what ways does this, like, really change or shape the way that people can represent themselves online? I think, you know, if you use the headshot example, that's kind of a one-to-one parallel to what we did before this technology existed. But maybe what are new, exciting ideas that people can look forward to? I mean, one example of this in fashion is, I can't remember how long ago this was, but, like, Bella Hadid's spray-on dress was a new kind of concept within fashion where people are like, wow, we have this new technology that fundamentally changes, like, what we can create. And so, Sinéad, maybe I'll start with you there,
but any ideas on how this really, again, doesn't just replicate what we've done before,
but maybe unlocks a whole new wave of creativity. On the one hand, it puts design tools
in the hands of people who wouldn't have been able to step into the market. And so what does
it look like to have an AI system that's been kind of refined on a certain style?
that you like, that you're able to kind of tweak and then digitally apply those images to
your body or your photograph. So maybe you can't really afford the high brand luxury fashion,
but you can generate your own version of luxury using these systems and digitally apply the
clothes to you afterwards. And I know that the digital application of clothes has been something
that has seemed to be in the pipeline for the last few years, but now it really is possible
to retrospectively apply clothes to images.
And if you look at what the internet and social media did to fashion,
we saw the birth of all of these small brands that finally had a shot.
In some ways, it actually became really cool to be the undiscovered brand
that suddenly kind of was discovered and popped off by a certain celebrity figuring them out.
Imagine what happens when more of us get access to these design tools.
Fashion is really a communicator of culture.
And now we can push the bounds on that almost infinitely,
where we conduct photo shoots, who gets to be in them.
When we go digital, things change a lot.
Of course, again, there are ethical challenges that come with that,
even things like copyright.
Am I allowed to be inspired by Celine, for example,
have an AI system look at the latest collection
and then say, put me in clothes that look like the latest Celine, you know,
suit and shoes, but it's not Celine.
Where does the IP go there?
And, you know, are there real designers on the line who may see knockoffs
at a whole other scale than we already do?
So we do have to figure out what are the boundaries of this technology,
where does IP start and stop,
in a world where we're a lot more digitized.
But I think what could be possible and who gets to be a designer,
I think it could be really interesting.
Definitely. Danny, what about you?
Anything come to mind in terms of how this technology maybe unlocks
a new wave of creativity or new opportunities?
Photoshop shaped this whole industry, where suddenly you could take photos and adjust them afterwards. So you would take a photo, and Photoshop was the post-production.
I wonder how AI can do something, and I think it's going to go there, how AI can be the pre-production, where you generate the photos, you do the real photo shoots to solve for missing content that you cannot make with the AI, and then have Photoshop stitch the pre-generation and post-generation together.
For example, what I would think for Headshot Pro is, like, the customers who get the best output from me did a photo shoot themselves: a very basic photo shoot on a white screen, trained the AI on the photos, and then I could generate hundreds and hundreds of photos of them in a park, in an office, wherever they want to be.
So I wonder if headshot photographers are going to transition towards generating
really, really high quality training images, for example,
and then use AI tools, like the next Photoshop, to put their customers wherever, without having them go outside, go anywhere, to make the perfect photo for them.
I'll also add to that: I think we'll each get to be the model in the shoots that we see. So instead of just looking at the campaign, there might be
an interactive option where you could drop in your own virtual twin to see how you would personally
look in that image. And I think that that could be really, really interesting and maybe be
even more accurate in terms of how things would likely fit on your body if we could get the physics
of clothes right online. And I wouldn't be surprised if any social media is going to do this soon,
because Instagram already has photos of your face, right? They could already train a model on your
face in that kind of sense and you could put the clothing on it that they put in their shopping
carousel, for example. So I think social media platforms are going to integrate this in a few years
or within a year, for example. This is going to be reality. Yeah, I mean, I think if anything,
we'll at least learn a lot about our own preferences, because, I don't know about you, but I don't want to see my face online everywhere, though some people might. But also, you might learn
about what you do want to see. What kind of person do I want to see modeling in front of me online?
Right now, I don't have that option to decide.
But as we do get more of these options, and I think what both of you are pointing towards is, like, we have with these tools eventually more say, and it'll be interesting to see how people utilize that say, right?
How they want to participate in that ecosystem.
And I think we'll learn a lot both individually, but also as a society about what we want to see, because previously there weren't so many people involved in making that decision.
One place that would be great to end off is, I mean, both of you have voiced how quickly this is moving, right, in your respective spaces. And by the way, respective spaces could mean, Sinéad, you as a model, but also as a futurist. Danny, that could mean you as an indie hacker, as a developer, as a nomad. As this evolves in many
dimensions, what are you doing to quote unquote keep up? And you know, you can interpret that as you'd
like. I spend a significant portion of my day buried in academic white papers to see what is likely coming down the line in the next few years, and kind of plotting out those trend graphs
and those trend lines. But I also feel a little bit of a responsibility to bring other people
along with me and to make sure that these conversations are accessible and digestible to
everyone. I think we all have a right to learn about and try to shape our own futures.
And it becomes really challenging if it's only the same few people invited to the conversations
all of the time. For me, I'm completely re-learning how to program different languages. I'm
learning Python, learning all the things to develop it. Also, for me, I'm dabbling in all the white papers. I don't have an academic background, so I'm using ChatGPT a lot to summarize those things and understand them, mostly. What I enjoy the most is that most of AI and machine learning, as you said, is in academic papers, totally incomprehensible for, yeah, regular people, especially me. So I like to try things out, make tools out of it, so it's more comprehensible for the regular Joe. My following on Twitter has grown a lot by just showing what is possible in that kind of sense. It really is what I enjoy doing. So
very little sleep and a lot of building and trying out. Well, I think your Twitter following has
grown because like you said, there's just so much appetite for people to follow along and try
to understand what is happening. And I'm also just curious because you were quite early to the
stage here, especially with ProfilePicture.ai. It's kind of crazy to think about. That was not that long ago: a few months ago, six months ago. October, I think. It's not even been half a year since all this tech came out.
It's mental.
Yeah, and it feels like forever ago.
That seems like a perfect example of where you were early.
But then, to your point, it moves so quickly, this world moves so quickly, that ultimately
Lensa came in. Lensa, I think, ended up, you know, if you were to declare a winner, not that it actually works that way, taking a majority of that market share.
And I think a lot of people also have the question as they debate whether to build with these
AI tools is, well, if it's all based on the same model, then, you know, ultimately, do I
have a moat? Do I have anything that actually will stand the test of time? No, I got a really good wake-up call, because every indie hacker, like, we like to develop, right? We like to build things. We don't do marketing. And seeing Lensa swoop in one month after we had already launched and just do our revenue in one hour or something. Like, it's a wake-up call for a developer. Distribution is everything. Marketing is everything. It's really important.
And it's always been important with product launches, and now especially with AI, where you have less of a moat: hiring TikTok influencers, letting them post it, and it's going really fast due to it.
But I think someone is going to come out who has a way bigger budget for that.
And in that sense, like, you shouldn't build on a hype.
So what I'm focusing on mostly now is SEO, building tools, getting backlinks, trying to get it to stand the test of time longer than just a hype.
Focus on, like, something small, some small problems to solve, and do your own marketing, do the distribution. Don't just build. You've got to sell it.
What might people not realize is surprisingly hard? And so maybe one simple example of that is, as you're building these
models, everyone jokes about the hands. I think that's being solved. But I think there's probably
infinitely more things that you've encountered as you're building these projects that like, again,
people see the end result. And they're like, Alice looks amazing or like deep agencies working for me
or what a great headshot, but what might people not realize if they haven't built these tools
is actually, again, really, really hard.
So it might look easy from the outside: okay, you upload some photos and 20 minutes later the photos are done.
But actually, Profile Picture was just Dreambooth. There was no moat on it.
And what I'm doing with Headshot is basically 15 different AI models stacked on top of each other, in like a sequence of doing things together. Like, they're custom models built right now.
So...
And you said 15.
Yeah, I think at the moment it's 15 different AI models doing things on top of each other to optimize the photo quality, to optimize the training, upscaling, generating, yeah.
If you want to go more unique, yeah, I didn't know Python, I didn't know machine learning, and I had to learn all those kind of things so I could build my own models in that sense.
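As an illustration of what "stacked" models can mean in practice, here is a toy sketch of a sequential pipeline; the stage names and pass-through functions are hypothetical stand-ins, since the real Headshot Pro pipeline is not public.

```python
# Toy sketch of a "stacked" pipeline: each stage's output feeds the next.
# Stage names and functions are hypothetical; the real 15-model pipeline
# behind Headshot Pro is not public.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[List[str]], List[str]]  # takes and returns image paths here

def run_pipeline(stages: List[Stage], images: List[str]) -> List[str]:
    data = images
    for stage in stages:
        print(f"running stage: {stage.name}")
        data = stage.run(data)
    return data

# Pass-through placeholders; in a real system each would wrap its own model.
stages = [
    Stage("detect_and_crop_faces", lambda imgs: imgs),
    Stage("fine_tune_subject_model", lambda imgs: imgs),      # DreamBooth-style step
    Stage("generate_headshot_candidates", lambda imgs: imgs),
    Stage("upscale", lambda imgs: imgs),                      # e.g. a super-resolution model
    Stage("rank_and_filter_best_shots", lambda imgs: imgs),
]

final_images = run_pipeline(stages, ["photo_01.jpg", "photo_02.jpg"])
```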
What's surprisingly easy these days is what used to be hard: deploying those models and running your own GPUs used to be really hard, but there's all these startups springing up since, I think, October, also since the hype. They wanted to capitalize on it: okay, AI is hot, what is hard? It's hosting the models. So that part got solved. So it's easier than ever to start at the moment. And just for those who maybe are at the very earliest stages where they're like,
okay, I want to do this. I want to play around. Where would you start? Like how do you learn how
to train your own models? How do you learn to deploy them? How do you learn to interface with something
like stable diffusion.
So the easiest way would be to go to Replicate.com.
They host all the models.
You can just select what you want.
You could select stable diffusion, for example,
and they have a few simple input fields,
but you can just type in what you want,
and that outputs the photo.
They make it super easy.
They have an API that you could hook up
to a no-code tool, for example, if you don't want to code it yourself. In no-code, I believe Zapier launched their Stable Diffusion integration, so you could just plug that into an email or Discord or whatever. So those two are really easy to get started with for making some simple AI tools.
Yeah, and if you want to go deeper, you probably need some coding knowledge.
But I think 99% of the apps you can build right now with API wrappers and no-code tools.
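For anyone who wants to try the Replicate route Danny describes, a minimal sketch with the official replicate Python client looks roughly like this; the model version string changes over time, so the one below is a placeholder to swap for the current one on the model page.

```python
# Minimal sketch of calling a hosted Stable Diffusion model on Replicate.
# Requires `pip install replicate` and a REPLICATE_API_TOKEN environment variable.
import replicate

output = replicate.run(
    # Placeholder reference: paste the current version string from the
    # model's page on replicate.com; this line will not run as written.
    "stability-ai/stable-diffusion:<current-version-hash>",
    input={"prompt": "studio headshot of a smiling person, soft lighting"},
)

# For this model the output is a list of image URLs you can download or display.
for url in output:
    print(url)
```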
That's super cool because I think a lot of people who don't know how to code can actually participate in that ecosystem.
We kind of veered from the original conversation, but I guess just closing off, because you have built all of these projects: how do you think about what to focus on now, moving forward?
I mean, this ties into our conversation around like what really has a moat,
what people are really willing to pay for,
especially because you are an indie hacker and you do need sleep eventually.
I wouldn't dabble in any image generation in that sense
because Adobe already does it, Canva does it already.
I wouldn't build any chatbots because ChatGPT, they are shipping.
I don't even know how they're doing it, but they're launching something new every week.
Well, hint, Danny, they have more than one person like you.
Fair enough.
But they just ship like a startup, which is exciting to see because they're competing with Google
and all the likes, right?
No, but I would just pick something, choose something that is an issue for you, that annoys you,
where you think, hey, maybe if I put some AI on top of this, it could speed it up.
We're going into a recession, which means people want to save money.
So can you save money by removing some kind of step?
Totally.
And if anyone wants an idea, Danny, would love if you want to jump on this, but a very simple thing
that I still have not seen is the ability to use these text to image tools, but in a way
where you can basically train it on your own brand. So you upload your brand colors. If you have,
you know, let's say a blog with a bunch of sharing images from the past, you train it on that.
And I know people can set up their own models to do this, but if there was a self-serve version
where any brand today can, again, upload the constraints and then spit out new images,
then let me know if it exists, if you've seen it. That, to my knowledge, does not exist in that exact form.
I'm going to 100% bet you that Canva does this in the next three months.
Okay.
You've been following?
Yeah.
Okay.
Well, maybe, yeah, we'll see if Canva does that.
But for me, if I can throw in an idea, I would love to see.
I don't know if we have time.
Yes.
I would love to have.
So OpenAI launched a model called Whisper, which basically lets you transcribe audio.
And all these text-to-voice models are getting really, really creepily real.
I would love to have someone build a tool where I say, hey, I want the top five posts of Hacker News from while I was sleeping, the latest news, and my favorite people on Twitter, and generate me a 100% customized podcast.
One I can listen to for five minutes while I walk my dog in the morning. Customize it for me.
I think that will be really awesome.
And not even that hard to build, I guess.
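As a rough sketch of the Hacker News half of that idea, using the public Hacker News Firebase API; the text-to-speech step is left as a stub, since the choice of voice model is exactly the part left open here.

```python
# Sketch: pull the top Hacker News stories and assemble a script that a
# text-to-speech model could read back as a personal morning briefing.
# The TTS call is deliberately left as a stub.
import requests

HN = "https://hacker-news.firebaseio.com/v0"

def top_stories(n: int = 5):
    """Yield (title, url) for the current top n Hacker News stories."""
    ids = requests.get(f"{HN}/topstories.json", timeout=10).json()[:n]
    for story_id in ids:
        item = requests.get(f"{HN}/item/{story_id}.json", timeout=10).json()
        yield item.get("title", "untitled"), item.get("url", "")

def build_script(stories) -> str:
    lines = ["Good morning. Here is what happened while you were asleep."]
    for i, (title, _url) in enumerate(stories, start=1):
        lines.append(f"Story {i}: {title}.")
    return " ".join(lines)

script = build_script(top_stories())
print(script)
# synthesize_audio(script)  # stub: hand the script to whatever TTS model you prefer
```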
No, I agree.
And actually, I've been seeing some folks, like there's this guy who's been creating the AI version of the All In podcast.
And he just released the fifth one, but you can actually see how, with every subsequent one that he releases, it becomes less popular, because at the end of the day, that novelty factor is wearing off.
However, to your idea, you're actually integrating that with, as we've discussed, like a problem that you have, which is trying to digest a specific set of information, which is the stuff ranking on Hacker News, and actually digesting that.
So that's like a very clear job to be done, versus the novelty of just, like, let's listen to these fake people talk about fake topics.
Yeah, don't build on the hype because you will get some attention on social media and then
it fades away, for example, yeah.
For some reason, this reminded me of back in the day, I knew this guy who used to listen
to Spanish as he slept, and he was convinced that when he woke up, he was better at speaking
Spanish.
He was trying to learn the language.
Did it work?
He thought so.
I don't know if it did.
But anyway, on that note, it's super cool to be watching all the things that you've built. I think many people would agree with that. Just like the rate of shipping
is insane. I'm sure a lot of people are inspired to go create similar things and it's exciting to
see what's being built. 100% agree. Thank you so much for having me on the podcast. Really an honor.
Of course. Well, thanks for joining us. Thanks for listening to the A16Z podcast. If you like this
episode, don't forget to subscribe. We also recently launched on YouTube at YouTube.com slash A16Z
video where you'll find exclusive video content.
We'll see you next time.