a16z Podcast - Google DeepMind Developers: How Nano Banana Was Made
Episode Date: October 28, 2025

Google DeepMind's new image model Nano Banana took the internet by storm. In this episode, we sit down with Principal Scientist Oliver Wang and Group Product Manager Nicole Brichtova to discuss how Nano Banana was created, why it's so viral, and the future of image and video editing.

Resources:
Follow Oliver on X: https://x.com/oliver_wang2
Follow Nicole on X: https://x.com/nbrichtova
Follow Guido on X: https://x.com/appenz
Follow Yoko on X: https://x.com/stuffyokodraws
Follow our host: https://twitter.com/eriktorenberg

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Follow a16z on X: https://x.com/a16z
Subscribe to a16z on Substack: https://a16z.substack.com/
Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
These models are allowing creators to do less tedious parts of the job, right?
They can be more creative, and they can spend, you know, 90% of their time being creative
versus 90% of their time, like, editing things and doing these tedious kind of manual operations.
I'm convinced that this ultimately really empowers the artists, right?
It gives you new tools, right?
It's like, hey, we now have, I don't know, what are colors for Michelangelo?
Let's see what he does with it, right?
And amazing things come out.
One of the hardest challenges in AI isn't language or reasoning, it's vision.
Getting models to understand, compose, and edit images with the same precision that they process text.
Today, you'll hear a conversation with Oliver Wang and Nicole Brichtova from Google DeepMind
about Gemini 2.5 Flash Image, also known as Nano Banana.
They discuss the architecture behind the model, how image generation and editing are integrated
into Gemini's multimodal framework, and what it takes to achieve character consistency,
compositional control, and conversational editing at scale.
They also touch on open questions in model evaluation, safety, and latency optimization,
and how visual reasoning connects to broader advances in multimodal systems.
Let's get into it.
Maybe start by telling us about the backstory behind the Nano Banana model.
How did they come to be?
How did you all start working on it?
Sure.
So our team has worked on image models for some time.
We developed the Imagen family of models,
which goes back a couple years.
And actually, there was also an image generation model
in Gemini before, the Gemini 2.0 image generation models.
So what happened was the teams kind of started to focus more
on the Gemini use cases, so like interactive, conversational, and editing.
And essentially what happened was we teamed up
and we built this model, which became what's known as nanobanana.
So yeah, that's sort of the origin story.
Yeah, and I think maybe just some more background on that.
So our Imagen models were always kind of top of the charts for visual quality,
and we really focused on kind of these specialized generation editing use cases.
And then when 2.0 flash came out, that's when we really started to see some of the magic
of being able to generate images and text at the same time, so you can maybe tell a story.
Just the magic of being able to talk to images and edit them conversationally.
But the visual quality was maybe not where we wanted it to be.
And so Nano Banana, or Gemini 2.5 Flash Image.
Nano Banana is way cooler.
It's easier to say.
It's a lot easier to say.
It's the name that stuck.
Yes, it's the name that stuck.
But it really became kind of the best of both worlds in that sense, like the Gemini
smartness and the multimodal kind of conversational nature of it, plus the visual quality
of Imagen.
And I feel like that's maybe what resonates a lot with people.
Wow.
Amazing.
So I guess when you were testing out a model as you were developing it, what were some wow
moments that you found?
I know this is going to go viral.
I know people will love this.
So I actually didn't feel like it was going to go viral until we had released on LMArena.
And what we saw was that we budgeted a comparable amount of queries per second as we had for our previous models that were on LMArena.
And we had to keep upping that number as people were going to LMArena to use the model.
And I feel like that was the first time when I was really like, oh, wow, this is something that's very, very useful to a lot of people.
Like, it surprised even me.
I don't know about the whole team, but we were trying to make the best.
conversational editing model possible.
But then it really started taking off when people were like going out of their way
and using a website that would actually only give you the model some percentage of the time.
But even that was worth like going to that website to use the model.
So I think that was really the moment, at least for me, that I was like, oh, wow, this is going to be bigger.
That's actually the best way to condition people, like only give them a reward, partially.
Not all the time.
I had a moment earlier.
So I've been trying some similar queries on kind of multiple generations of models over time.
And a lot of them have to do with things I wanted to be as a kid.
So like an astronaut, explorer, or put me on the red carpet.
And I tried it on a demo that we had internally before we released the model.
It was the first time when the output actually looked like me.
And you guys play with these models all the time.
The only time that I've seen that before is if you fine-tune a model using LoRA
or some other method to do that and you need multiple images
and takes a really long time and then you have to actually serve it somewhere.
So this was the first time when it was, like, zero-shot:
wow, just one image of me, and it looks like me, and I was like, wow.
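For readers who want to try the same zero-shot flow, here is a minimal sketch using the google-genai Python SDK. The model identifier and the exact response handling are assumptions based on the public Gemini API, not a description of how the team does this internally.

```python
# Illustrative sketch only: zero-shot, single-reference-image editing through the
# public Gemini API. The model name below is an assumption and may differ.
from google import genai
from google.genai import types

client = genai.Client()  # expects an API key in the environment

with open("me.jpg", "rb") as f:
    photo = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed identifier for Nano Banana
    contents=[photo, "Put this person on the red carpet in an astronaut suit."],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Save whichever parts of the response are images.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data is not None:
        with open(f"edited_{i}.png", "wb") as out:
            out.write(part.inline_data.data)
```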
And then there became these, like, we have decks that are just, like, covered in my face
as I was trying to convince other people that it was really cool.
And really, I think the moment more people realized that it was, like, a really fun feature to use
is when they tried it on themselves.
Because it's kind of fun when you see it on another person, but it doesn't really
resonate with people emotionally.
It makes it so personal where it's like you, your kids, you know, your spouse.
And I think that's your dog.
Your dog.
And that's really what started kind of resonating internally. And then people just started making all these, like, 80s makeover versions of themselves, and that's when we really started to see, like, a lot of internal activity, and we were like, okay, we're onto something.
It's a lot of fun to test these models when we're making them, because you see all these amazing creative things that people make and go, wow, I never thought that was possible. So it's really fun.
No, I mean, we've done it with the whole family, and it's a crazy amount of fun. So to think a bit longer term: where does this lead, right? I mean, we built these new tools that I think will change visual arts forever, right? We suddenly can transfer style, suddenly can generate consistent images of a subject, right? What used to be a very complex manual Photoshop process, suddenly I have one command and it magically happens.
What's the end state of this? I mean, do we have an idea yet? How will creative arts be taught
in a university in five years from now? So I think it's going to be a spectrum of things, right?
I think on the professional side, a lot of what we're hearing is that these models are allowing
creators to do less tedious parts of the job, right? They can be more creative and they can spend
90% of their time being creative versus 90% of their time like editing things and doing these
tedious kind of manual operations. So I'm really excited about that. I think we'll see kind of an
explosion of creativity like on that side of the spectrum. And then I think for consumers,
they're sort of like two sides of the spectrum for this probably. One is you might just be doing
some of these fun things like Halloween costumes for my kid, right? And
The goal there is probably just to share it with somebody, right, your family or your friends.
On the other side of the spectrum, you might have these tasks, like, putting together a slide deck, right?
I started out as a consultant, we talked about it at the beginning.
And you spent a lot of time on, like, very tedious things, like trying to make things look good, trying to make the story make sense.
I think for those types of tasks, you probably just have an agent that you give the specs of what you're trying to do.
And then it goes out and, like, actually lays it out nicely for you.
It creates the right visual
for the information
that you're trying to convey.
And it really is going to be
this, I think, spectrum,
depending on what you're trying to do.
Do you want to be in the creative process
and actually tinker with things
and collaborate with the model,
or do you just want the model
to go do the task
and be as minimally involved as possible?
So in this new world,
then what is art?
I mean, somebody recently said
art is if you can create
an out-of-distribution sample.
Is that a good definition,
or is it aiming too high?
Or do you think art is out of
distribution or in distribution for the model?
There we go.
I think that out of distribution sample, that is a little bit too restrictive.
I think a lot of great art is actually in distribution for art that occurred before it.
So, I mean, what is art?
I think it's like a very philosophical debate, and there's a lot of people that do discuss this.
To me, I think that the most important thing for art is intent.
And so what is generated from these models is a tool to allow people to create art.
And I'm actually not worried about the high end and the creatives and the professionals,
because I've seen, like, if you put me in front of one of these models,
I can't create anything that anyone wants to see.
But, like, I've seen what people can do who are creative people
and have, like, intent and these ideas.
And I think that's the most interesting thing to me
is the things they create are really amazing and inspiring for me.
So I feel like the high end and the professionals and the creatives,
like they'll always use state-of-the-art tools.
And this is, like, another tool in the tool belt
for people to make cool things.
I think one of the really interesting things
that I kept hearing about this model in particular
from, like, creatives and artists was that a lot of them felt like they couldn't use a lot of AI tools before
because those tools didn't allow them the level of control that they expected for their art.
On one side, that was, like, the character or object consistency.
Like they really used that to have a compelling narrative for a story.
And so before, when you couldn't get the same character over and over, it was very difficult.
And then I think the like second thing I hear all the time from artists is they love being able to
upload multiple images and say use the style of this on this character or add this thing to
this image, which is something that I think was very hard to do, even with previous image
edit models. I guess I'm curious, was that something you guys were really optimizing for when
you trained this one? Or how did you think about that? I mean, yeah, definitely. Customizability
and character consistency are things that we monitored more closely during development, and we
tried to do the best job we could on them. I think another thing is also the iterative nature
of kind of like an interactive conversation. And art tends to be iterative as well where you make
lots of changes. You see where it's going and you make more. And this is another thing I think makes
the model more useful. And actually that's an area that I also feel like we can improve the model
greatly. Like I know that once you get into real long conversations, like it starts to follow
your instructions a little bit worse. But it's something that we're planning to improve on and
make the model more kind of like a natural conversation partner
or like a creative partner in making something.
One thing that's so interesting is after you guys launched Nano Banana,
we started to hear about editing models all the time, everywhere.
Like, it's like after you launched it, the world woke up and was like, editing models.
It's great.
Everyone wants it.
And then obviously, like, it kind of goes into the customizability,
the personalization of it.
And then, Oliver, I know you used to be at Adobe.
And then there is also software
where we used to manually edit things.
How do you see the knobs evolve
now on the model layer
versus what we used to do?
Yeah, I mean, I think that one thing that Adobe has always done
and the professional tools generally require
is lots of control, lots of knobs, lots of...
So there's always a balance of...
We want someone to be able to use this on their phone
maybe with just like a voice interface.
And we also want someone who can really,
like a really professional art creative
to be able to do fine-scale adjustments.
I think we haven't exactly figured out
how to enable both of those yet.
But there's a lot of people
that are building really compelling UIs
and I think there's different ways
it can be done.
I don't know.
You have thoughts on this.
Well, I also hope that we get to a point
where you don't have to learn
what all these controls mean
and the model can maybe smartly suggest
what you could do next
based on the context of what you've already done, right?
And that feels like it's kind of prime
for someone to take that on.
So, like, what do the UIs
of the future look like, in a way where you probably don't need to learn a hundred
things that you had to before, but, like, the tools should be smart enough to suggest to you
what they can do based on what you're already doing? That's such an insightful take. I definitely had
moments when I used Nano Banana where I was like, I didn't know I wanted this, but I didn't even
ask for the style. I don't even have the words for what that style is even called. So
this is, like, very insightful on how the image embedding and the language embedding are not one-to-one.
Like, we can now map, like, all the editing tasks with language.
So, oh, go ahead.
Let me take a little bit of a counterpoint, just to see where this goes.
In the end, the question of how complex the interface can be is limited by
what we can express in software, how easy we can make something in software.
But to some degree it's also limited by how much complexity a user is willing to tolerate.
And, you know, if you have a professional, they only care about the result.
They're willing to tolerate a vast amount of complexity.
If they have the training, they have the education, they have the experience to use that, right?
then we may end up with lots of knobs and dials.
It's just a very different set of dials.
But I mean, today, if you use Cursor or so for coding,
it's not that it has a super easy, you know, single-text prompt interface.
It has a good amount of, you know, here,
add context here, different modes and so on.
So will we have like the ultra-sophisticated interface for the power user?
And what would that look like?
So I'm a big fan of ComfyUI and node-based interfaces in general.
That is complex.
And it's complex, but it's also, it's very robust and you can do a lot of things.
And so after we released Nanobanana, we saw people building all these really complicated
comfy workflows where they were combining a bunch of different models together and different tools.
And that's generated some of the, like, for example, using Nanobanana as a way to get storyboards
or keyframes for video models, like you can plug these things together and get really amazing outputs.
So I think that like at the pro or the developer level, like these kinds of interfaces are great.
In terms of, like, the prosumer level,
I think it's very much unknown
what it's going to look like in a couple years.
Yeah, I think it just really depends on your audience, right?
Because for the regular consumer, like I use my parents always as an example,
the chatbot is actually kind of great.
Oh, yeah, totally.
Because you don't have to learn a new UI.
You just upload your images and then you talk to them, right?
Like, it's kind of amazing that way.
Then for the pros, I agree that, like, you need so much more control than, you know,
and then there's somewhere in between, probably,
which are people who may want to be doing this,
but they were too intimidated by the professional tools in the past.
And for them, I do think that there's a space where, like,
you need more control than the chatbot gives you,
but you don't need as much control as what the professional tools give you.
And, like, what's that kind of in-between state?
There's a ton of opportunity there.
There's a ton of opportunity there.
It is interesting you mentioned ComfyUI,
because it's on the other far spectrum of workflow.
Like a workflow can have hundreds of steps and nodes,
and you need to make sure all of them work.
Whereas on the other side of spectrum, there's nanobanana.
you kind of describe something with words and then you get something out.
Like, I don't know what the model architecture is, stuff like that,
but I guess, is your view that the world is moving to an ensemble of models
hosted by one provider doing it all?
Or do you think the world is moving to more of everyone building a workflow?
Nano Banana is one of the nodes in a ComfyUI workflow.
I definitely don't think that the broad amount of use cases will be fully
satisfied by one model at any point.
So I think that there will always be a diversity
of models.
I'll give you an example: we could
optimize for instruction following in our models,
make sure it does exactly what you want.
But it might be a worse model for someone
who's looking for ideation or kind of inspiration
where they want the model to kind of take over
and do other things, go crazy.
So I just think there's so many different use cases
and so many types of people that there's a lot of space.
There's a lot of room in this space for multiple models.
So that's where I see us going.
I don't think this is going to be like a single model to rule them all.
Let's go to the very other end of the spectrum from the professional.
Do you think kindergartners in the future will learn drawing by sketching something on a little tablet
and then you have the AI turn that into a beautiful image?
And so that's how they get in touch with art early on.
I don't know if you always want it to turn into a beautiful image.
But I think there's something there about the AI being, again, a partner and a teacher to you in a way that you, like, didn't have.
So I never learned how to draw, still don't, don't have any talent for it, really.
But I think it would be great if we could use these tools in a way that actually teaches you kind of the step-by-steps and helps you critique.
And maybe, again, shows you kind of like an auto-complete, almost for images.
Like what's the next step that I could take, right?
Or maybe show me a couple of options and, like, how do I actually do this?
So I hope it's more that direction.
I don't think we all want, you know, every five-year-old's image to suddenly look perfect.
We would probably lose something in the process.
As someone who struggled the most in high school out of all my classes,
the art and the sketching class, I actually would have preferred it.
But I know a lot of people want their kids to learn to draw, which I understand.
It's funny because we've been trying to get the model to create, like, child-like crayon drawings,
which is actually quite challenging.
Ironically, you know, sometimes the things that are hard to make are...
Because the level of abstraction is very large.
Right.
So it's actually quite difficult to make those types of images.
You need a dedicated pre-K fine-tune.
Yeah.
We do have some internal evals right now to try to see if we're getting better.
In general, I'm very optimistic about AI for education.
And part of the reason is I think that most of us are visual learners.
Right.
So AI right now, as a tutor, basically all it can do is
talk to you or give you text to read.
And that's definitely not how students learn.
So I think that these models have a lot of potential as a way to help education
by giving people sort of visual cues.
Imagine if you could get an explanation for something
where you get the text explanation,
but you also get images and figures that kind of like help explain how they work.
I think everything would just be much more useful, much more accessible for students.
So I'm really excited about that.
On that point, one thing that's very interesting to us is that when Nano Banana came out,
it almost felt like part of the use case is as a reasoning model.
Like you have a diagram.
Absolutely, yeah.
Right?
Like you can explain some knowledge visually.
So the model not just doing an approximation of the visual aspect.
There's the reasoning aspect to it too.
Do you think that's where we're going to?
Do you think all the large models will realize that, oh, to be a good LLM or, like, a VLM,
we have to have both image and language and audio and so on and so forth?
100%.
I definitely think so.
The future for these AI models that I'm most excited by
is where they are tools for people to accomplish more things.
I think if you imagine a future
where you have these agentic models that just talk to each other
and do all the work, then it becomes a little bit less necessary
that there's this visual mode of communication.
But as long as there's people in the loop
and as long as the motivation for the task they're solving comes from people,
I think it makes total sense that visual modality
is going to be really critical for any of these AI agents going forward.
will we get to a point where there's actually,
so, you know, I'm asking you to create an image.
It sits for two hours, reasons with itself,
has drafts, explores different directions,
and then comes back with a final answer.
Yeah, absolutely.
If it's necessary, yeah.
And maybe not just for a single image,
but to the point of, you know,
maybe you're redesigning your house
and maybe you actually really don't want to be involved in the process,
but you're like, okay, this is what it looks like,
like this is some inspiration that I like,
and then you send it to a model the same way
you would send it to like a designer.
It's the visual deep research.
It's like visual deep research, basically.
I really like that term.
And then it goes off and does its thing
and searches for maybe the furniture
that would go with your environment
and then it comes back to you
and maybe it presents you with options
because maybe you don't want to sit for two hours
and get just one thing.
There's a hundred-page art book on a new house.
Here's your 10-slide deck.
I mean, also I think if you
if you think about like instruction manuals
or like IKEA directions or something,
then like breaking down a hard problem
into many intermediate steps could be really useful as a way to communicate.
So when can we generate Lego sets?
Yeah, soon maybe.
Do we at some point need 3D as part of it?
Right.
There's a whole debate around world models and image models and how they fit together.
Enlighten us here.
What is the short summary of where we'll end up there?
I mean, I don't know the answer.
I think that obviously the real world is in 3D.
So if you have a 3D world model
or a world model that has explicit 3D representations,
there's a lot of advantages.
For example, everything stays consistent all the time.
Now, the main challenge is that we don't walk around
with 3D capture devices in our pocket.
So in terms of the available data for training these models,
it's largely the projection onto 2D.
So I think that both viewpoints are totally valid for where we're going.
I come a bit from the projection side.
Like I think we can solve almost all the problems,
if not all the problems working on the projection
of the 3D world directly,
and letting the models learn the latent world representations.
I mean, we see this already
that the video models have very good 3D understanding.
You can run reconstruction algorithms over the videos you generate,
and they're very accurate.
And in general, if you look at the history of human art,
it starts as like the projection, right?
People drawing on cave walls.
All of our interfaces are in 2D.
So I think that humans are very well suited
for working on this projection of the 3D world into a 2D plane,
and it's a really natural environment
for interfaces and for viewing.
That is very true.
So I'm a cartoonist in my spare time.
And drawing in 2D is just light and shadow.
And then you present it as 3D.
We trick ourselves into believing it's 3D, but it's on a piece of paper.
But then what humans can do that, you know, a drawing or a model can't do is we can navigate the world.
Like, we see a table.
We know we can't walk through it.
I guess the question becomes if everything is 2D, how do you solve that problem?
Well, I don't think, yeah, so if we're trying to solve the robotics problems, I think maybe the 2D representation is useful for planning and visualizing kind of at a high level.
Like, I think people navigate by remembering kind of 2D projections of the world.
Like you don't build a 3D map in your head.
You're more like, oh, I know I see this building.
I turn left.
Yeah.
So I think that like for that kind of planning, it's reasonable, but for the actual locomotion around the space, like definitely 3D is important there.
So robotics, yeah, they probably need 3D.
That's the saving grace.
So character consistency, which you previously mentioned,
I really love the example of like when a model feels so personal,
like people are so tempted to try it,
how did you unlock that moment?
The reason why I ask is that character consistency is so hard.
There's a huge uncanny valley to it.
Like, you know, if it's someone I don't know and I see their AI generation,
I'm like, okay, it's maybe the same person.
But if it's someone I know and there's just a little bit of a difference,
I actually feel very turned off by it, because I'm like, this is not a real person.
So in that case, how do you know where you're generating is good?
And then is it mostly by user feedback or like, I love this or is it something else?
You look at faces you know.
But that's a very small sample size.
So face detection camera, user and.
So even before you ever released this, right?
So when we were developing this model, we actually started out doing character consistency evals on faces we didn't know.
And it doesn't tell you anything.
And then we started testing it on ourselves and quickly realized like, okay, this is what you need to do because this is a face that I'm familiar with.
And so there is a lot of sort of eyeballing evaluations that happens and just the team testing it on themselves.
And just generally people they know, like Oliver probably knows my face at this point enough to be able to tell whether or not it's actually me when it's generated.
Yeah.
And so we do do a lot of that.
And then, you know, you ideally tested on different sets of people, different ages, right?
Different kind of groups of folks to make sure that it kind of works across the board.
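As a rough illustration of how an automated check could complement that kind of eyeballing, here is a small sketch that scores character consistency by comparing face embeddings of the reference photo and the generated image. The embed_face function is hypothetical and stands in for any off-the-shelf face-embedding model; the threshold is something you would tune against human judgments, not a value from the episode.

```python
# Hypothetical sketch of an automated character-consistency signal: cosine similarity
# between face embeddings of the reference and the generated image. "embed_face" is a
# placeholder for any face-embedding model, not a specific library call.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def consistency_score(reference_image, generated_image, embed_face) -> float:
    """Returns a similarity in [-1, 1]; higher means the faces look more alike."""
    return cosine_similarity(embed_face(reference_image), embed_face(generated_image))

def passes(reference_image, generated_image, embed_face, threshold: float = 0.6) -> bool:
    # The threshold is arbitrary here and would be calibrated on eyeballed examples.
    return consistency_score(reference_image, generated_image, embed_face) >= threshold
```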
Yeah, I think that's right.
I mean, that touches a little bit on this bigger issue, which is that, like, evals are really difficult in this space.
Because human perception is very uneven in terms of the things that it cares about.
So really, it's very hard to know, like, how good is the character consistency of a model?
And is it good enough?
Is it not good enough?
Like, you know, I think there's still a lot of improvement we can make on character consistency,
but I think that for some use cases, like, we got to a point, and that's, you know, we weren't
the first edit model by any means, but I think that, like, once the quality gets above a certain
level for character consistency, it can kind of just take off because it becomes useful for
so much more.
And I think as it gets better, it'll be useful for even more things, too.
Yeah.
So I think one of the really interesting things we're seeing across a bunch of modalities, of which
image editing and generation obviously is one, is, like,
I think the arenas and benchmarks and everything are awesome,
but especially when you have like multidimensional things like image and video,
it's very hard as all of the models get better and better
to condense every quality of a model into like one judgment.
So it's like, you know, you're judging, okay, you swap a character into an image
and you change the style of the image.
Maybe one did the character swap consistency much better.
And the other did the style much better.
Like, how do you say which output is
better? And it probably comes down to, like, what the person cares most about and what
they want to use it for. Are there like certain, you know, characteristics of the model that you guys
value more than other things in like making those tradeoffs when deciding which version of the
model to deploy or like what to really focus on during training? Yes, there are. One of the things
I like about this space is that there is no right answer. So actually there's quite a lot of, of
I don't know if it's taste, but it's like preference that goes into the models.
And I think you can kind of see the difference in preferences of the different research labs in the models that they release.
So, like, when we're balancing two things, a lot of it comes down to like, oh, well, I don't know, I just like this look better or, you know, this feature is more important to us.
I'd imagine it's hard for you guys too because you have so many users, right?
Like Google, like being in the Gemini app, like everyone in the world can use that versus like many other AI companies just think about like,
we're only going for the professional creatives,
or we're only going for the consumer meme makers.
And, like, you guys have the unique and exciting
but challenging task of, like, literally anyone in the world
can do this.
How do we decide what everyone would want?
Yeah, and it is, sometimes we do make these trade-offs.
We do have a set of things that are sort of, like,
super high priority that we don't want to regress on, right?
So now, because character consistency was so awesome
and so many people are using it,
we don't want our next models to get worse on that dimension, right?
So we pay a lot of attention to it.
We care a lot about images looking photorealistic when you want photos.
And this is important.
One, I think we all prefer that style.
Two, you know, for advertising use cases, for example,
like a lot of it is kind of photorealistic images of products and people.
And so we want to make sure that we can kind of do that.
And then sometimes there are just things that, like, will kind of fall down the wayside.
So for this first release, the model is not as good at text rendering as we would like
it to be, and that's something that we want to
fix in the future. But it was kind of one of
those things where we looked at, okay, the model's
good at X, Y, Z. Not as good at
this, but we still think it's okay to release
and it will still be an exciting thing for people
to play with. If you look at the past,
right, we have, for previous model
generations, a lot of things we did
with like side car models, like ControlNet
or something like that. We basically figured out
a way to provide structured data
to the model to achieve a particular result.
It seems like with these newer models that has taken
a step back, just because they're so incredibly good
at just prompting
or giving a reference image
and picking things up from there.
Where will this go long term?
Do you think this will come back
to some degree?
You know, like, I mean,
from the creator's perspective,
right, having, I don't know,
OpenPose information
so I can get a pose exactly right,
for multiple characters,
this seems very, very tempting, right?
It's like,
but to rephrase it a little bit,
it's like, does the bitter lesson hold here
at the end of the day,
everything's just one big model
and you throw things in,
or is there's a little bit of structure
we can offer to make this better?
I mean, I think that there will be,
there will always be users that want control
that the model doesn't give you out of the box
but I think we tried to make it so that
because really what an artist wants
when they want to do something is they want the intent to be understood
and I think that these
AI models are getting
better at understanding the intent of users
so often when you ask text queries now
the model gets what you're going for
so in that sense I think we can
get pretty far with
understanding the intent of our users
and maybe some of that
is personalization, like, we need to know information about what you're trying to do
or what you've done in the past.
But I think once you can understand the intent, then you can generally do the type of edit.
Like, is this like a very structure-preserving edit, or is this like a free form kind of,
like we can learn these kinds of effects, I think.
But still, of course, there's one person who's going to really care about every pixel
and, like, this thing needs to be slightly to the left and a little bit more blue,
and, like, those people will use existing tools to do that.
I mean, I think it's like, you know, I want an image with 26 people spelling out every letter
of the alphabet or something like that.
That's sort of the thing where I think we're still quite a bit away from getting that right,
you know, the first try.
On the other hand, with pose information,
but potentially, yeah, but then the question, I guess, is like,
do you really want to be the one who's, like, extracting the pose
and providing that as an information?
It's a very good question.
Or do you just want to provide some reference image and say, like, this is actually what
I want, like, model, go figure this out, right?
Yeah, there are 26 people.
Yes, yes, yes, yes.
Yeah, I think in that different style, fair enough.
Yeah, I think in that.
In that case, I wouldn't spend a ton of time building a custom interface for making this picture of 26 people.
It seems like the kind of thing that we can solve.
Just transfer.
Do you think the representation of what the AI images are will change?
So the reason why I ask the questions, as artists, there's different formats we play with.
There's the SVGs.
We have anchor points and Bézier curves.
And on the other side, there's Procreate or, like, Fresco, what have you.
So there's layers that we can also play with.
There's the other parameter, which is what's the brush you use,
like the brush, the texture of it.
So every one parameter, you can write script and actually do something very personal about it.
Do you think, like, pixel is the right representation, the endgame for image generation model?
Or do you think there's a new representation that we haven't invented yet?
That's an easy question, too.
Wow.
I'll say that
everything is a subset of pixels
So text is a subset of pixels
Because I could just render all the text as an image
So how far can we get with just pixels
Is an interesting question
I think if the model is really
responsive and handles multi-turn interactions well
Then I think you can probably get pretty far
Because the primary reason
I think you would want to leave the pixel domain
Is for editability
And so
in cases where you need to have
your font or you want to change the text
or you want to move things around
just like with control points
it could be useful to have
kind of mixed generation
which consists of pixels and
SVGs and other forms
but if we can do it all
if the multi-turn generation is enough
then I think you can get pretty far with pixels
I will say that one of the things that is exciting
about these models
that have native capabilities
is that you now have a model that can generate
code and it can generate images
So there's a lot of interesting things that come in that intersection, right?
Like maybe I wanted to write some code and then make some things be rasterized,
some things be parametric, like stick it all together, train it together.
It would be very cool.
That's such a good point because I did see a tweet of someone asking Claude Sonnet to replicate an image on an Excel sheet
where every cell is a pixel, which is like a very fun exercise.
It was like a coding model and it doesn't really know anything about, you know, images yet it worked.
Yeah, there's the classic pelican riding a bicycle test.
Right, yeah.
Yeah, totally.
I have one on model, like on interfaces, if that's okay.
I don't, sorry if I'm bringing up too much product stuff, guys.
I'm just very curious on the product front.
Like, I guess I'm curious how you think about, like, owning the interface where people
are editing or generating images with nanobanana versus really just wanting a ton of people
to use the model for different things in the API.
Like, we've talked about so many different use cases, like ads, you know, education,
design, like architecture.
Each of those things could be, there could be a standalone product built on top of
nanobanana that prompts the model in the right way or allow certain types of inputs
or whatever.
Is your guys' vision, like, that the kind of the product in the Gemini app is like a playground
for people to explore and then developers will build the individual products that
are used for certain use cases, or is that something you're also kind of interested in owning?
I think it's a little bit of everything. So I definitely think that the Gemini app is an entry point
for people to explore. And the nice thing about Nano Banana is I think it shows that fun is kind
of a gateway to utility where, you know, people come to make a figurine image of themselves,
but then they stay because it helps them with their math homework or it helps them write something,
right? And so I think that's a really powerful kind of transition point.
There are definitely interfaces that we're interested in building and exploring as a company.
And so, you know, you may have seen Flow from Josh's team in Labs that's really trying to rethink, like, what's the tool for AI filmmakers, right?
And for AI filmmakers, image is actually a big part of the iteration journey, right?
Because video creation is expensive.
A lot of people kind of think in frames when they initially start creating.
And a lot of them even start in the LLM space for like brainstorming and thinking about what they want to create in the first place.
And so there's definitely a kind of place that we have in that space, of just us trying to think about, like, what does this look like?
We have the advantage of kind of sitting close to the models and the interfaces, so we can kind of build that in a tight coupling.
And then there's definitely the, you know, we're probably not going to go build a software for an architecture firm.
My dad is an architect, and he would probably love that.
But I don't think that's something that we will do, but somebody should go and do that.
And that's why it's exciting because we do have the developer business and we have the enterprise business.
And so people can go use these models and then figure out like what's the next generation workflow for like this specific audience so that I can help them solve a problem.
So I think the answer is kind of like, yes, all three.
Yeah, I brought that up.
I don't know if you guys have been following the reception of Nano Banana in Japan, but I'm sure you have. It's been insane.
And it's so funny, like, I, now half of my X-Feed is these really heavy nanobanana users in Japan
who have created, like, Chrome extensions called, there's one called like Easy Banana
that's specifically for using nanobanana for like manga generation and specific types of anime
and things like that.
And like, they go super deep into basically prompting the model for you and storing the outputs
in various places, using obviously your underlying model to generate.
these like amazing anime that you would never guess were AI generated
because like the level of precision and consistency
and that sort of thing is just beyond what I've seen any single model
be able to do today.
I guess what are some like to Justine's point,
what are some force multipliers that you guys have seen in the model?
So what I mean by this is, for example,
if you unlock character consistency,
you can generate different frames and then you can make a video
and then you can make a movie, right?
So these are the things
that, if you get it right and get it really well,
there's so much more downstream tasks that can derive from it.
Just curious, like, how do you think about
what are the force multipliers that you want to unlock?
So the next big one, what's the next?
Yeah, big wave of people who can just use nanobanana as a base model
for all the downstream tasks.
So I think one, one actually, is also the latency point, right?
Because I think it's also just like,
it makes it really fun to iterate with these models
when it just takes 10 seconds to generate the next frame, right?
If you had to sit there and wait for two minutes,
like you would probably just give up and leave,
it's a very different experience.
So I think that's one, just like,
there has to be some quality bar
because if it's just fast and the quality isn't there,
then it also doesn't matter, right?
Like, you have to hit a quality bar
and then speed becomes, of course, a multiplier.
I think this general idea of just, like,
visualizing information to your education point from earlier
is sort of another one, right?
And that needs good text.
It needs factuality, right?
Because if you're going to start making kind of visual explainers about something, it looks nice, but it also needs to be accurate.
And so I think that's probably kind of the next level where at some point then you could also just have a personalized textbook to you, right?
Where it's not just the text that's different, but it's also all the visuals.
Yeah.
The Diamond Age, that was basically, yeah, yeah, basically.
And then it should also internationalize really well, right?
Because a lot of the times today, you might actually be able
to find a diagram that explains the thing that you're trying to learn about on the internet,
but it's maybe not in the language that you actually speak, right?
And so I think that becomes just like another way to improve and open up accessibility of
information to just a lot more people.
And again, visually, because a lot of people are visual learners.
Interesting.
How do you think about, like, images generated?
So the reason why I ask is that there's another very cool example.
I've seen someone make it work with Nano Banana, where he wrote a script.
And then he kept prompting the model to say, generate the frame one second after this.
And then it became a video.
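That trick is easy to sketch: feed the latest image back in and ask for the frame one second later. The snippet below is a hedged illustration using the google-genai Python SDK; the model name is an assumption, and nothing here guarantees temporal coherence across frames.

```python
# Hedged sketch of the "one second later" trick: repeatedly feed the newest frame
# back to the model. Model name is assumed; frame-to-frame coherence is not guaranteed.
from google import genai
from google.genai import types

client = genai.Client()
MODEL = "gemini-2.5-flash-image"  # assumed identifier

def next_frame(image_bytes: bytes) -> bytes:
    response = client.models.generate_content(
        model=MODEL,
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
            "Generate this exact same scene one second later.",
        ],
    )
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    raise RuntimeError("no image part returned")

frames = [open("start.png", "rb").read()]
for _ in range(8):                 # a handful of extra frames makes a short clip
    frames.append(next_frame(frames[-1]))
```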
So, and then when I saw it, I'm like, well, is every image just one frame in a continuum?
Like, you always know about the continuum in a parallel universe.
You could have, you know, generated any one of the images.
It's one big directed graph.
Right, exactly.
And then maybe it's video at the end of the day.
So how do you see that?
Where do they, you know, intersect or not?
I think it's very, yeah, video and images are very closely related. And also I think what we're seeing in these kind of what's coming next or sequence predicting use cases is the generalization and world knowledge of the model as well. And this is, and so where do I think it's going? I think that we will have, yeah, I think video is an obvious next.
kind of domain. I think that like
when you have editing
a lot of times what you're asking is like, you know, what happens
if I do this? And that's what video has. It has
the time sequence of actions.
So it's like we have
a slow frames per second video
that you can interact with, but obviously making
something that's like fully interactive and real time
and is the direction this
field is headed.
So you're probably in the zero point, I don't know how many
zeros, percent of the most
experienced people in the world
using image models.
What are your personal favorite use cases?
How do you use it day-to-day if you're not just testing the existing model?
Well, I, so I'm not sure I am in the very...
But I'll tell you what...
I mean, it's like we were saying earlier,
the personalization aspect is the thing that totally drives it home for me.
I have two young kids,
and like, the best things that I do in the model
are the things I do with my kids,
and like we can make, you know, make their stuffed animals come to life and these types of applications.
And it's just so personal and gratifying to see, we all saw a lot of people taking old pictures of their family, for example, and, like, showing them.
And so I think that that's the real beauty of the edit models is that you can make it about the one thing that matters most to you.
So that's what I use it for is my kids, basically.
Very nice.
You're basically making content that you probably would have never made before, and it's like for the consumption of one person, right?
or one family.
And so you're kind of telling these stories
that you would have never told before.
So kind of similar.
Like I do a lot of family holiday cards
and birthday cards and whatnot.
Now anytime I make a slide deck,
I like force myself to generate some images
that are like contextually relevant
and then try to get the text right
and all of those things.
And then we try to push the boundaries around
like, can you make a chart in the pixel space?
Do you want to?
That's another question, right?
Because you also want the bars
in the bar chart to be accurately positioned
relative to one another.
So I think we do a lot of,
of these things. I'm actually really impressed with the people we work with on the team who are just
like very creative. We have a team who just works really closely with us on models that we're
developing. And then they just like push the boundary. They'll do like crazy things with the
models. What's the most surprising thing you've seen here? I didn't know our model can do this.
Yeah. This is even just kind of like simple things where people have been doing like texture
transfer. Like they will take a picture. Yeah. Like you take a portrait of a person and then you're
like, what would it look like if it had the texture of
this piece of wood?
And I'm like, I would have never thought of this being a use case because my brain just
doesn't work that way.
But people kind of just push the boundaries of what you can do with these
things.
That is an interesting example of the world knowledge, because texture technically is 3D, because
there's like a whole 3D aspect of it.
There's a light and shadow of it, but this is a 2D transfer.
Yeah, so that's very cool.
I think for me, the things I'm most excited by and maybe most impressed by are the
use cases that test the reasoning abilities of the models.
So some people in our team figured out you could, like, give geometry problems to the model
and, like, ask it to kind of, you know, solve for X here or fill in this missing thing
or, like, present this from a slightly different view.
And, like, these types of things that really require world knowledge and the reasoning ability
of, like, a state-of-the-art language model are the things that are making me really go.
Wow, that's amazing.
I didn't think we would be able to do that.
Can it compile code on the blackboard yet?
Like, if I take a picture of my, I don't know, like, code on the laptop,
would it know if it compiles, in the image model?
I've seen examples where people give it like an image of HTML code
and have the model render the webpage.
Wow, that was very cool.
The coolest example I saw, so I came from academia,
so I spent a lot of time writing papers and making figures.
And one of our colleagues took a picture of one of the result figures
from one of their papers,
with a method that could do a bunch of different things,
this one, you know, had a bunch of different
types of applications in the paper,
and, like, sort of erased the results,
so you have, like, the inputs,
and asked the model to, like, solve all of these
in picture form, in a figure of a paper.
And it was able to do that.
So it could actually, like, figure out
what is the problem that this one figure is asking for,
find the answer and put it in the image,
and then do that for a bunch of different applications
at the same time, which was really amazing.
That's very cool.
Has anyone built an application on top of that capability yet?
Like, what's the application that will come out of that?
I think that there are a lot of very interesting,
I would say, zero-shot transfer capability,
like problem-solving type things,
that we don't even know the boundary of yet.
And some of these are probably quite useful.
Like, you know, if you want to have a method that does solve some problem X,
I don't know, like finds the normals of the scene or something,
the surface orientations or something,
you probably can prompt the model
to give you kind of a reasonable estimate
so I think there's lots of problems
like understanding problems
and other types of things that we could
maybe solve with zero or a few shot prompting
that we don't know yet.
There's one thing you mentioned that I found super interesting
which is the world knowledge transfer
but in a lot of world models
or video models
there always is something that keeps the state
like, just because you look away
doesn't mean that the chair should disappear
or change color, because that's not what the state of
the world is. How do you see that?
Do you think there's relevance there in image model?
Is that something you even consider optimizing for?
Yeah, I mean, if you think about an image model that has a long context
where you can put other things in that context,
like text, images, audio, video,
then I think it's definitely like you're reasoning over the context
of things you have to produce a final output image or video.
So, yeah, I think there's definitely some model capability
to do this type of stuff
already.
Got it.
I haven't tested it out yet for this
big use case.
I'll let you know.
That's one of my favorite things
about these models. It's just fun.
And I'm sure it's really fun for you guys
and you guys probably have much more of a hint
than we do about what they can do.
But sometimes you'll just see some crazy X
or Reddit or whatever post
about some incredible thing
that someone has figured out
how to do that you would never expect
that the model might be able to do necessarily.
And then other people kind of build on that
and say like, oh,
and then I tried the next iteration of this thing.
And suddenly you have this, like, almost entirely new space
that's been discovered in terms of what the models are capable of.
It must be fun as people much more deeply involved
in kind of building these models
and building the interfaces to kind of watch that happen.
Yeah.
So if you talk to visual artists today,
I personally love this stuff, I post about it on the Internet,
you've got some very skeptical answers.
People are like, oh, this is terrible.
Like, do you have any idea what
triggers this reaction?
I'm convinced that this
ultimately really empowers
the artists, right? It gives you new tools, right?
It's like, hey, we now have, I don't know,
what are colors for Michelangelo? Let's see what he does
with it, right? And amazing things come out. It's of the
similar thing. But what triggers
this strong reaction against it?
So I think it's something
to do with the amount of
control over the output.
So, you know, in the beginning when we had these kinds
of text-to-image models, it would be very
much like a one shot. You put in some text
you get an output, and people would be like,
oh, this is art, this is this thing I made.
And I think that maybe rubs people a little bit the wrong way
who come from the creative community
because most of the decisions that were made
were made by the model, by the data that was used to train the model.
You can't express yourself anymore, basically.
Yeah, exactly.
So as a creative person, you want to be able to express yourself.
So I think as we make the models more controllable,
then a lot of these concerns of like,
oh, that's just the computer is doing everything
kind of may go away.
And the other thing is I think that there was a period of time
where we were all so amazed by the images these models could create
that we were pretty happy to see just like,
oh, this stuff comes out of these models.
But I think humans get really bored fast of this type of thing.
So, like, there was a big rush.
And now if you see an image that you know was just like,
oh, that's just like a single prompt.
Person didn't think about it much.
You can kind of tell, like, that's an AI-generated image,
not that interesting.
So I think, like, there's still this boundary of like,
now you need to be able to make interesting things with the AI tools,
which is hard. But this will, yeah, this will always be, you know, a requirement.
We need someone to be able to do this.
We still need artists.
We still need artists.
And I think artists will be able to also recognize when people have actually, like, put a lot of control and intent into it.
I would still not be an artist.
But it is, there's a lot of craft and there's a lot of taste, right, that you accumulate sometimes over decades, right?
And I don't think these models really have taste, right?
And so I think a lot of, like, a lot of the reactions that you,
you mentioned, maybe also come from that.
And so we do work with a lot of artists across all the modalities that we work with.
So image, video, music, because we really care about, like, building the technology step-by-step
with them and trying to figure it out.
They really help us kind of, like, push the boundary of what's possible.
A lot of people are really excited, but they really do bring a lot of their knowledge and
expertise and kind of, like, 30 years of design knowledge.
We just worked with Ross Lovegrove on fine-tuning a model on his sketches so that he can then
create something new
out of that, and then we designed an actual physical chair that we, like, have a prototype
of. And so there's a lot of people who want to kind of bring the expertise that they've
built and kind of like the rich language that they used to describe their work and have that
dialogue with the model so that they can push their work kind of to the frontier. And it is,
you know, it doesn't happen in like one prompt and two minutes. It does require a lot of that
kind of taste in human creation and craft that goes into building something that actually
then, you know, becomes art.
At the end, it's still a tool that requires the human behind it to express the feelings
and the emotions and the story.
Yeah, yeah, absolutely.
And that's what resonates with you when you probably look at it, right?
You will have a different reaction when you know that there's a human behind it who has
been 30 years thinking about something and then pour that into a piece of art.
Yeah, I think there's also a bit of this phenomenon that, like, most people who consume creative
content, and maybe even ones that care a lot about it, like, they don't know what they're
going to like next. You need someone who has a vision and can do something that's interesting and
different. That's right. And then you show it to people and like, oh, wow, that's amazing.
But like they wouldn't necessarily like think of that on their own. So when we're, you know,
when we're optimizing these models, like one thing we could do is we could optimize for like the
average preference of everybody. But I don't think you end up with interesting things by doing
that. And you end up with something that everyone kind of likes. But you don't end up with things
that people are like, oh, wow, that's amazing. Like I'm going to change my whole like perspective
of art because I saw that.
The avant-garde edition of the model.
Yeah.
If I use it at the town,
I don't know.
What's at the other end of the spectrum?
The marketing edition or so,
where it's very predictable.
Yeah.
Well, since we're coming up on time,
last couple questions.
One is, what's one feature that you know the model is capable of
that you wish people asked you about more?
Interleave?
Yeah, interleave.
I think we've always been amazed that nobody ever posts anything about it.
So interleaved generation is what we call
the model's ability to generate more than one image for a specific prompt.
So you can ask for like, I want a story, like a bedtime story or something,
like generate the same character over these series of images.
And I think that, yeah, people haven't really found it useful yet or haven't discovered it.
I don't know.
Oh, interesting.
Well, if you're listening to the podcast, go try this out.
Try.
Yeah.
Yeah.
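If you do want to try it, an interleaved request is just a single prompt that asks for text and several images together. Below is a minimal, hedged sketch with the google-genai Python SDK; the model name is an assumption, and how many image parts come back is up to the model.

```python
# Minimal sketch of requesting interleaved output: one prompt, several images plus text.
# The model identifier is an assumption; the count of returned images is up to the model.
from google import genai

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed identifier
    contents=[
        "Tell a four-part bedtime story about a fox astronaut. For each part, write "
        "one short paragraph and generate one illustration, keeping the fox's "
        "appearance consistent across all four images."
    ],
)

parts = response.candidates[0].content.parts
images = [p.inline_data for p in parts if p.inline_data is not None]
texts = [p.text for p in parts if p.text]
print(f"Got {len(images)} images interleaved with {len(texts)} text segments.")
```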
And what's the most exciting technical challenge that you look forward to tackling in the next, I don't know,
months, years?
So I think that there's really a high ceiling in terms of quality for where we're going.
Like I think people look at these images and say, oh, it's almost perfect. We must be done.
And for a while, we were in this like cherry pick phase where we would, you know, everyone would pick their best images.
So you look at those and they're great.
But actually, what's more important now is the worst image.
We're in a lemon picking stage because every model can cherry pick images that look perfect.
So like now I think the real question is like how expressible is this model and what's the worst image you would get, given what you're trying to do?
so I think by raising the quality of the worst image
we really open up the amount of use cases
for things we can do.
There's all kinds of productivity use cases
beyond this kind of immediate creative tasks
that we know the model can do
and I think that's a direction we're headed.
We're headed to where if these models can do more things
reasonably, then the use cases will be far greater.
So that's the moral equivalent of the monkeys on typewriters,
basically: any model, given enough tries,
will eventually produce something amazing.
But the other way around is hard.
Yeah, the other way around is hard.
One monkey writing a book would be very hard.
It would be a very good monkey for that one.
What are the applications you think that would come out when we raise the lower bound?
So the one I'm most interested in, we mentioned this before, is education and factuality.
I have, you know, I don't know how many times a month I want to use these models for creative purposes,
but, like, I have way more use cases for information-seeking, factuality, kind of, like, learning, education-type use cases.
So I think, like, once that starts working, then it'll be opening up all these new areas.
Amazing.
There's also something about, I think, taking more advantage of the models context window.
So you can input a really large amount of content, right, into these LLMs.
And some companies, you mentioned a few before, they will have, like, 150-page brand guidelines on, like, what you can and cannot do, right?
And they're, like, very precise, right?
Like, colors, fonts, all of it, right?
And, like, the size of, like, a Lego brick, maybe.
And so being able to actually, like, take that in
and follow that to a T when you're doing a generation,
that's, like, a whole new level of control
that we just, we don't have today, right,
to make sure that you're actually kind of, like, following that to a T.
I think that will build a lot of trust with, you know, very established
brands. We have a second creative compliance review model.
the projects between the liquid music
The model should do it on its own, right?
Like, it just kind of has to have
this loop. Yes, it should have this loop. It's like, okay, I generated this, but then page 52
says that I shouldn't have, right? I'm going to go back and try again. And then two hours
later, it'll come back to you with a response. Yeah. So we saw with the text models how this
inference time scaling, how much it can help, right? Being able to critique your own work.
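For the curious, here is a rough sketch of that generate-then-critique loop: draft an image, have a text model check the draft against the brand guidelines sitting in its context window, and fold the critique back into the next attempt. Model names, file names, and the retry policy are all assumptions for illustration, not anything described in the episode.

```python
# Hedged sketch of a self-critique loop: generate, review against long guidelines,
# retry with the feedback. Model names and file names are assumptions.
from google import genai
from google.genai import types

client = genai.Client()
IMAGE_MODEL = "gemini-2.5-flash-image"  # assumed identifier
CRITIC_MODEL = "gemini-2.5-pro"         # assumed identifier

guidelines = open("brand_guidelines.txt").read()   # e.g. a 150-page rulebook as text
prompt = "A photorealistic hero shot of our water bottle on a mountain trail."

for attempt in range(3):
    draft = client.models.generate_content(model=IMAGE_MODEL, contents=[prompt])
    image = next(p.inline_data for p in draft.candidates[0].content.parts if p.inline_data)

    review = client.models.generate_content(
        model=CRITIC_MODEL,
        contents=[
            types.Part.from_bytes(data=image.data, mime_type=image.mime_type),
            "Guidelines:\n" + guidelines + "\n\nDoes this image violate any rule? "
            "Answer PASS, or list the violations.",
        ],
    )
    verdict = (review.text or "").strip()
    if verdict.upper().startswith("PASS"):
        break
    prompt += " Fix these issues: " + verdict   # fold the critique into the next try
```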
Yep. So this, this feels really important. Boy, an incredibly, amazingly exciting future for...
Yeah, and congrats on all the amazing work. Thank you. Thank you.
Thank you so much for coming on the pot.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcasts, and Spotify.
Follow us on X at A16Z and subscribe to our substack at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only,
and should not be taken as legal, business, tax, or investment advice, or be used to evaluate any
investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in
this podcast. For more details, including a link to our investments, please see A16Z.com forward slash
disclosures.