Big Technology Podcast - Predicting AI’s Next Advances — With Suhail Doshi
Episode Date: March 20, 2024
Suhail Doshi is the CEO and founder of Playground, an AI image generation and editing software company. Doshi joins Big Technology Podcast to discuss where the next generative AI breakthroughs might come from. Tune in to hear Doshi's insights on the innovation within language models, image generation, video, and 3D creation. We also cover the business opportunities, the competitive landscape among tech giants like Google, Meta, and Microsoft, and the debate around AI's true capabilities in understanding physics and reasoning. Tune in for a nuanced exploration of the cutting edge of AI from one of the leaders driving it forward. ---- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
A leading AI CEO and entrepreneur joins us to talk about the state of the field, the research, products, competition, and where this is all heading.
All that and more coming up right after this.
LinkedIn Presents.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
Today we have a great guest for you, someone I've been trying to bring on for months, and I'm very excited to have him here.
Suhail Doshi is here.
He's the CEO and founder
of Playground, which is an AI image generation and editing software company, somebody that I've
been following on Twitter pretty religiously to get a sense as to like where this is all
heading. So I'm thrilled he's here. Suhail, welcome to the show. Yeah, thanks for having me,
Alex. Thanks for being here. I always end up learning about like the latest breakthroughs
through your Twitter account. You're like definitely on it. You're talking with the right people.
You have a sense as to where this is going. So just to start this conversation on that,
that theme, where are we right now in terms of the curve, you know, the curve of the innovation
curve? Are we at the early part? Are we kind of troughing here? And I'm also curious to hear
just kind of a two-parter. Where do you think the business cases for this stuff are going to land?
Because I think that's still kind of an open question. I mean, you run an image generation
company and editing company. So that's something I'm sure you think about because you have to figure
out who you're going to sell to. So yeah, let's just start real broad to begin with.
Yeah, in terms of where we are, gosh, I am always surprised how, you know, I used to have
this tweet last year that there were AI breakthroughs every single week.
Right.
And eventually I got to the point where I think, like, Elon Musk tweeted every single day.
And the interesting thing is that it still feels like that's happening.
I try to follow just the right people, researchers.
You know, I trawl through this place where all the research papers get uploaded called
arXiv, and I just read through things that are interesting that are kind of taking off.
Sometimes these things are good demos, and they're nothing more than a good demo,
but sometimes they are truly breakthroughs.
So right now, the pace continues to seem relentless.
You know, I'm often surprised by what new thing happens.
And what's interesting is sometimes a breakthrough will happen in a week, and it's only like a couple days later that something beats that thing in terms of performance or capability or some sort of prize.
But if I were to take a snapshot of roughly where we are right now, it has been over a year since GPT-4 has basically been out.
I view it as a little bit over a year because I was a lucky person and I had access around October to GPT-4.
And I think it kind of came around, came out around February.
And so it's interesting, I think it's interesting to point at that milestone because we don't know what has been internally happening at a company like Open AI over the last year.
And they're definitely training a new model.
So the question, and so I had this thought today that was kind of like, we don't really know how far behind everybody else is in language.
Because we have no idea what OpenAI or Anthropic or some other company is training internally.
We have markers for things like Gemini from Google or we have markers from Mistral.
But we really don't know how far behind they are.
We only know where they are matching compared to last year or the year before that in October.
That's in language.
And in images, images is interesting because it's probably a couple years behind a GPT-4
moment, right? And now audio is starting to happen with a company called Suno that I actually
tried out this weekend. I'm a producer, so I was making songs. So I was trying that out.
So I've some weird thoughts about that. And then I think the last area is the companies that are
doing 3D are just starting to get started. There's a friend of mine who's starting like a 3D
foundation model to do like Pixar level type of creation. Wow. I don't know if I can name them yet,
so I'll probably avoid doing that for now. Yeah, but even the fact that that's happening is...
But it's getting... And there's video. Yeah, there's video. Yeah, now we're getting, like, minute-long sequences that are not kind of artifacty. They're sort of more coherent, with the right character consistency. We are at the very beginning, I think.
So where do... I mean, what is the north star for this stuff? I'm trying to think about a chatbot, like, or a GPT model, right? So, okay, it's already pretty good. Like, it does a great job of synthesizing information and spitting stuff back. Like, where does it end? Where are you... we're, like... I'm trying to...
Because you mentioned, okay, they're working on a new model.
Well, how's that new model going to be an improvement from what we have?
And then where do we end up getting to if this keeps on getting better and better?
I mean, Sam's, Sam Altman's response would be like AGI, right?
Yes, what's yours?
I think there's, like, another version of his belief. One time when we were talking with Sam, just kind of at a dinner party, I think he said this thing that he believed that everybody would just, like, have a thousand employees.
And I think we all thought he was crazy.
Still do, by the way.
And he may prove to be right.
But I think chatbots are just like a very like sort of very basic primitive thing that
will end up getting.
You know, my general feeling is that one of the, I was talking to someone at OpenAI
who was working on the robotics team there back in the day.
And I was starting to get into AI robotics a little bit.
I was kind of curious where things were.
And the general, you know, to summarize kind of like where the field seems to be, I'm not an expert, but I've talked to enough people that, you know, broadly, robotics kind of asymptoted and hit a ceiling about like three or four years ago.
And the research still isn't like kind of on a trajectory that's amazing.
But the reason why I'm bringing up robotics, because I want to answer the question about where I think things are headed, is that there was a belief for a little while.
Someone at OpenAI at least had this belief, can't say that it was everybody there,
that the ceiling due to robotics was in part because maybe the solution to solving it
was actually through large language models first. Maybe if we could find a model that could reason
to the extent of language, that maybe that could help the robots sort of navigate
through some of the toughest problems that they're having trouble with. And so it's kind of
like the sequence is sort of like first make language great. And now,
we're starting to see, and then the second thing we start to see is image models and graphics
becoming great. And now we're, and then the next piece is now we're starting to get a sense
of multimodal vision plus language plus maybe audio. Can we make a multi, a very powerful multimodal
model? And if we can do that, maybe those will surface and cause many breakthroughs, one of which
could be in robotics, one of which means that you'd have a robot, not a Roomba, like a robot
that, you know, maybe embodies us.
Like, you know, there are a lot of humanoid startups right now.
And by the way, the reason why the humanoid startups are humanoids and not different
looking robots is because we know that humans are already able to generalize to lots
of human-related things, right?
We know that if we're human, if we look like a human, then we can hit a printer button
and take a box and, you know, do all these different activities.
So I think that if the models get more powerful, we're probably going to see, we're probably
going to see some kind of Westworld version of the world. We're going to go way beyond a chatbot.
And what about reasoning? Because OpenAI had this Q-star thing that people were talking about.
There's another company that says they've been able to reason. Is adding reasoning into large language models, is that like another new bound? Or is that just a way to get us to this reality that you're talking about?
And reasoning in AI is kind of like this really big philosophical word, I think, amongst researchers. You work with a research
team, you know, you talk to researchers. Reasoning is really tough for anyone to really prove
whether that's actually happening or not. Right, because the big question is, is it just like
spitting back the next word or is it really able to like work through problems on its own,
which? I mean, how do we know you're reasoning or I'm reasoning right now? How do we know we're not
doing that? How do we know we're not just reacting to our surroundings and predicting the next
token? We don't really know. Interesting. But then why do you think that there are
are AI companies that are working on this problem. I mean, it's not an abstract thing. Like,
there's actual research programs and progress that are being made on this question itself. You know,
I think the Q-star example from what I've read is that it basically can solve complex math
problems on its own. So that requires being able to conceptualize and reason through a problem
as opposed to, like... and it can do novel problems. So as opposed to, like, taking
what you've seen before and spitting out something that looks like an answer to a math problem.
Yeah, I mean, just to dive into the philosophy of reasoning, just the tip of the iceberg:
I think that, um, you know, just because something is able to articulate its reason for getting to an answer doesn't mean that it is necessarily reasoning.
Interesting.
The one way that you could prove reasoning is maybe, uh, you know...
Because it's possible that it's just really tightly fit to its training data, and it just happens that the next token ends up being step one and then step two and step three.
We don't really know.
But I think one way that you could maybe litmus test reasoning is if you gave the model something
truly out of distribution, right?
Like, an example of something that's truly out of distribution that humans faced was COVID,
a pandemic that we had not yet seen before.
And then we had to reason about how we would go and deal with that kind of odd current
event.
The question would be like, could you, if you gave a model something that it was truly not
trained on, you could prove that it was not trained on, and you gave it, you asked it
for a solution, could it really figure out the right solution?
That might be really hard, we might find that really difficult.
And humans don't necessarily do that either, right?
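(A minimal sketch of the kind of litmus test being described here: hold out problems the model provably could not have been trained on, ask it for solutions, and grade the answers. The function names and placeholder problems below are illustrative assumptions, not a real benchmark or API.)

```python
# Hypothetical sketch of an "out of distribution" reasoning check.
# query_model, grade, and the held-out problems are placeholders used to
# illustrate the protocol; they are not a real benchmark or model API.

def query_model(prompt: str) -> str:
    """Stand-in for a call to whatever model you want to test."""
    return ""  # replace with an actual model call

def grade(answer: str, reference: str) -> bool:
    """Stand-in grader; in practice a human or a rubric would judge the answer."""
    return answer.strip() == reference.strip()

# Problems chosen so they provably post-date the model's training data,
# e.g. puzzles or events published after its training cutoff.
held_out_problems = [
    {"prompt": "A problem the model could not have seen during training...",
     "reference": "the expected solution..."},
]

solved = sum(
    grade(query_model(p["prompt"]), p["reference"]) for p in held_out_problems
)
print(f"solved {solved}/{len(held_out_problems)} truly unseen problems")
```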
Well, it's interesting because when people talk about like artificial general intelligence, it's like, well, who's your baseline?
Because I don't know, we'll see. I mean, AI can definitely exceed humans in some areas, and, um, in others it can't.
So anyway, it'll be something that we'll all be talking about for a while. Um, obviously, like, image generation is something that requires some understanding of the world. You're doing it at Playground, right? The image model will understand, like, let's say you say, you know, show me a monkey sitting on a beach ball, it will understand
that there's some physics in the world and the monkey has to sit on top of a beach ball.
So I'd love to hear your perspective on the state of image generation right now.
You have an update that you're releasing or you have released by the time this goes live.
Obviously, it's an exciting time, but you're also coming up against some very big companies that are trying to do this as well.
Midjourney's been at it for a while.
DALL-E 3 is pretty impressive.
I use it through Copilot from Microsoft.
Google, of course,
has tried it, but they've had some problems there.
So talk a little bit about it.
And also, I'd love to hear like the business case here because, you know, for LLMs,
that's one thing.
Like you can say, all right, it will read contracts, understand them, help us spit back.
But for images, you know, is it that it will replace design, that it will democratize design
and make it available to everyone else?
Like, I'm curious to hear your perspective on why that's a problem to work on.
Right.
Yeah.
Yeah.
You know, images are definitely behind in terms of overall capability and utility relative to language.
You know, like, I think at the end of the day, all of these models, for the time being, have their kind of somewhat narrow utility, right?
You know, there are a lot of things posted on Twitter about how people are using language models, but the predominant use case continues to be homework.
And then there's kind of this other one that's like coding.
For images, it just turns out that the predominant use
case is making art. And it just turns out, and it's very surprising, but it just turns out that
millions of people are very excited to make art. And art can be, it's not art that you're going to
necessarily always put on your wall, but it's art that could be used in marketing. Like maybe
you post it on your Instagram or maybe make an icon and you use it as an icon for your app or
maybe it's a YouTube thumbnail or it's an image that you put in a blog post. Maybe it's
just a fun meme that you send a friend. That's some of the state of images right now. It's really
interesting imaginative art. But I think that it hasn't quite gotten to the utility of language.
And I think there are a number of things that are probably coming for graphics. I do think
it's going to be about democratizing graphics for people. I mean, our company is trying to
help people make graphics like a pro without being one. You shouldn't have to, if you ever
open up Photoshop, I mean, there's a dizzying amount of menus, right?
There's all these icons.
You have to go to YouTube and do a really sophisticated tutorial to be good at Illustrator or Photoshop or Lightroom.
I had to take classes on both of those Photoshop and Illustrator, semester-long classes to be able to do that stuff.
Right.
Yeah.
There was a summer where I grinded just making logos and then I would upload them to SitePoint and try to win logo contests to get better at my own skills back in high school when I was a lot younger.
And it doesn't have to be that way anymore, right?
So I think the first thing that's going to happen is that a lot of graphics are going to be possible in the model.
Maybe if you have wedding pictures and you wish you could like color grade them somehow, maybe you would use Lightroom or something for that.
I think an image model will be very good at that.
I know you're about to move on, but graphic designers, do they become like an extinct profession or where do they go?
Because you can not only create images within Playground, you can edit them.
No, I mean, look, like, Walt Disney started as a, you know, person that drew pictures, and then he
worked somewhere where he animated them, and then that just, like, evolved, and eventually we got to
2D cartoon movies like Snow White and things like that, right? And then things
like Pixar came about and we built, like, a 3D rendering engine. So did all the people
that, um, you know, were people that drew the 2D cartoons
lose their jobs and, you know, was that the end of an era?
Definitely not.
People retooled.
The stories that came from them were still really material to their creative process.
Story matters more for a company like Disney and Pixar than the animation itself.
So I think in this case, you know, the graphics matter, but I think that people will retool.
The question is, are we giving people enough time to retool?
Yeah, so like if you're doing one sheets, like I used to work in marketing before I went into reporting.
And we used to do one sheets.
And if you're doing one-sheets, right, like, get the headline image, you know,
and pass it off as a piece of marketing collateral, like, that seems like you might want to
try to invest in some new skills.
Yeah, I also think that there's something, there's something beautiful about the person that
is creating the thing.
Like, let's say you're doing a piece of writing or maybe you're a music artist and you made a song.
I think there's something really beautiful about that person being able to connect their art,
to their graphics as closely intertwined with the other kind of art that they're making.
Like, if I'm a music artist, I want to be able to choose the exact album art.
I don't want to have to always outsource that to somebody who may not really understand
what I want.
Yeah, it totally resonates because, like, I've been using image generation for big technology.
And, like, we, I'm a one-person shop.
Like, I couldn't afford to do graphic design for every single story.
I mean, I was hardly making it work with, like, whatever iStock photo.
And every now and again, I would,
you know, pay for the image. But like now it's like these, we get perfect, almost perfect
illustrations every time for every story because this technology has made that possible.
Right. You're able to marry the creative process of your podcast with the graphics and the thing
you want to show people. And only you, you can go through as many iterations as you want to find
the perfect thing that you think is like the right mapping. So I think, though I think, like,
and I have actually talked to people, like,
artists or people that draw who hate this stuff.
Last year, I think like a year or two ago, I basically got almost canceled like on Reddit
and Twitter and everywhere.
Back when AI art was, you know, people hated the idea that you would even say AI art is art.
And so one of the things I decided to do is be really curious.
And I said, let me go talk to some of these people that, you know, basically they're sending
me death threats on Twitter or something like that.
And, you know, some of these people love drawing.
it doesn't matter that you offer them a better tool.
They love the idea of picking up a pencil and drawing.
And so for those people, certainly, you know, that's one way of making art.
And some people will treasure that and enjoy that.
But that would be taking something that they enjoy.
Well, they still can do it.
They can still enjoy it.
It just might mean that it kind of doesn't evolve, perhaps, with the times.
So there will be some people.
I think it does matter to think a little bit about how fast the technology is moving and how people
will deal with that.
Right.
Definitely.
Okay.
Sorry,
I didn't mean to break your momentum.
No worries.
Yeah, so I think that those things will be possible.
I think the utility of graphics is going to increase significantly in the next year.
I think that we haven't really thought through editing, for example, right?
You know, a lot of this stuff is generating synthetic images, but we haven't really thought
about like, what if I have an image and I want to add my dog who's not in the image for
my holiday card?
Could I take a dog?
I take my dog from a different image and then just like insert it where it gets all the lighting and the shadows and the color and all the ambiance. Correct. What if I want to do fantastical things? Like what if I want to see what you would look like if you were the Incredible Hulk? Big green, but it really had your face. And you thought that that face was your face. Doesn't take much imagination to tell you the truth. I'm kidding. Yeah. I think that, what if I want to make a logo? Logos are really hard. I mean, I remember having to pay, I have paid people
$50,000 to just generate, you know, to make, hand-make a bunch of logos that I want
to use to brand my product.
Sometimes you can only get five of them, though.
Why can't I get 100 of them?
You know, I think with graphics, it's, graphics is on this never-ending cycle where we
never feel like it's good enough.
If you can, you can think a little bit about like PlayStation 1 and then PlayStation 2 and then
three, four, five, right?
Grand Theft Auto, the first Grand Theft Auto to now maybe what five looks like, we
can see that graphics is still improving 30 years later. So I think by giving people tools
where they can do incredible feats of graphics, it's going to be really exciting. But I think
graphics is only a subfield of a bigger plan that at least our company has. I don't know
if there are other companies that care to do this, but our company cares about creating a unified
vision model where we can create and edit and understand anything with pixels, a single
unified model. This is missing in vision, but definitely kind of exists in language.
In language, we can solve hundreds or thousands of different tasks. But in graphics, but in vision,
it's all separated. It's kind of like where language was back three or four years ago,
where there'd be a model that could summarize and a model that could do sentiment analysis and a model
that could do these different little tasks. But there wasn't a unified single large language
model. But there's no equivalent for vision. What is a large vision model? We don't really
have a term for that. So my feeling is that vision as a field is going to significantly expand.
Why can't a robot look at images and navigate the world like a self-driving car, right?
That's one thing. Why can't we understand images or what's going on better?
You know, we've seen early glimpses of that. I think there's like a famous picture of Barack Obama stepping on a scale, and the model knows that Barack Obama is, like, trying to increase the scale's reading, like it's a joke.
But can models, like, really understand what's going on in these images to a much deeper level?
So there's large vision models that are starting to incorporate language and images.
But vision, there's no real, all-encompassing, multitask vision model.
So I have a couple of questions for you on this.
First of all, on the vision part, does that sort of play into, like, a lot of people have been talking about how, you know,
one of the biggest applications of this current generation of AI is going to be in augmented reality.
Right? And, like, Meta has those glasses where, like, you don't have an overlay right now,
but you can talk to their AI bot, and it will look at the world and then give you a sense
as to, like, what you're looking at, or you can even just ask questions about things and it will talk
to you. So I'm curious, like, how seriously you take this new era of augmented reality
that we seem to be heading into. Because, speaking of one of your tweets,
you wrote, it's going to be hard to beat a computer in your pocket you can use inconspicuously
when you need to. So it sounds like you're a believer that the phone is going to be the
way that we're going to interact with computing for a while. But maybe there's something I'm
missing. Yeah, I mean, I think the phone is a really good form factor of computing.
You know, I've talked to lots of different friends who've tried the Vision Pro and such.
You know, it seems like that's still kind of early in terms of its use cases and its utility.
So we'll see what happens over the next year or two.
I tend to be like more optimistic no matter what because you never know about these things.
I think one thing that meta is doing, regardless of where VR is headed or AR is headed,
is they have one of the most world-class teams for graphics.
And they have to because of all the stuff that they're doing in VR.
But yeah, you know, I think.
It's kind of unclear what the right form factor is.
Is it on your face?
Is it somewhere else?
Is it a thinner video screen?
I'm not sure.
But one thing I do feel pretty confident in is that we will care a lot about being
able to use AI to manipulate graphics, regardless of the form factor.
Like, I'm somewhat form factor agnostic.
Is it a TV?
Is it a watch?
Is it glasses?
Is it some new thing?
I don't know.
but it seems very likely that we're going to care.
You know, like an example would be, I wish I could just go into a store, stand in front of a mirror,
and then just sort of swipe for like a jacket that I'm wearing.
You know, I went to, I went this weekend to the de Young, and it had this, like,
sort of fashion, San Francisco fashion exhibit.
And there was this, you know, powered by Snapchat thing where it would, like, put on a dress.
And so I was in a dress at the exhibit.
But it was really cool. And it's very obvious that this thing could be higher fidelity.
Right. And that was like a really cool AR experience. But why can't I have that for jeans and
a jacket or anything that I want to wear without having to try it on? So I think those kinds of
experiences seem inevitable regardless of the form factor. Yeah. And then talking about the
limitations of image generation models today, you know, it just seems like they all end up generating
images that look so similar.
And you know, when I said before that like I generate the perfect image for each story,
conceptually, yeah.
But you can still tell that it's been generated by an AI image model and not a graphic
designer.
So I'm curious, like, why so much of these AI generated images from your perspective
look so similar?
Is it because they're using the same underlying technology, using the same training
set?
Is it just that they're not quality enough that people can pick them out?
What do you think?
Yeah, I mean, language models have this problem, too.
Right.
Like, the way that we know this is that language models are kind of overly verbose, right?
They talk a lot.
They talk a lot, right?
So that's kind of the little tell for language models.
For images, you know, the tells are a little bit different.
I mean, they have overly crazy bokeh or they are super lush in ways that you don't need them to be lush, right?
But I think with images, you know, what's happening is, is that maybe the models are a little bit too curated.
It's at its infancy, but I think that the models are probably too curated and maybe overfit to be based on human preference.
And human preference isn't your human preference.
Your preference, I mean.
It's some kind of average of human preference.
And so, you know, in art, there's art that we like in the modern time.
And then there's kind of avant-garde art.
And maybe you prefer that, right?
You want something more ostentatious.
Maybe you want something more minimal and laid back.
And I think what we've kind of discovered is that actually, like, there are just huge
wide varieties of preference.
And then there's the average.
And so I think with image models it's somewhat twofold.
It's that we're not catering to people's personalized preferences and styles.
I think that's one problem or the niches, right?
And I think the other is that quality is the lowest it's ever going to be starting today, right?
Right.
So the quality is going to get incredibly good.
But it's also, it's interesting, because, having worked at publications where we did have graphic artists, right?
Like, one of the interesting things was you would give a prompt, or you'd write a story.
And that artist would then sort of take it back and based in their own style end up creating an image.
And I love doing this because I was seeing what they came up with because I was always surprised by what they'd built because they would do it through their own lens and focus.
But what AI does, I think, is it tends to sort of take everything into account and spit back the average, right?
Like kind of like the average type of words or average image.
And that's where I sort of say, you know, sometimes I'll be surprised by what an AI image generation engine will create.
But oftentimes it's like, yeah, that sounds right.
Or that's close enough.
Let's put it in the top of the story.
Right.
Yeah.
I mean, it's interesting.
Humans, human graphics designers are also kind of overfit, right?
They have their own particular style.
Like anytime I've reached out to a graphics designer, sometimes they'll say, hey,
why did you reach out to me?
Like, what did you like that I did?
Or an interior designer or whoever, right?
So they all have their style.
These human graphic designers are somewhat less robust designers in some sense.
Like, they are very skewed to something, and then you pick them.
And that's cool because they can lead to brandable things.
The models, if you prompt them simply, then they will be an average style.
An average style that represents something. You will probably get, at this point,
something that is beautiful.
But it may not be, it might not be like stretched, it might not be headed in a stylistic
direction because everybody uses it.
Then it feels kind of fatigued.
It's like a, you know what I call it?
Internally, I call this pop.
It's like pop art.
Yeah.
There's pop music, top 40 music.
Exactly.
And then you have, like,
it's a great way to think about it,
like growls and noises and sounds, right?
But then there's pop music.
There's like Justin Bieber, that kind of thing.
What you're getting from the image models is pop.
And people love pop.
We know that.
Love top 40.
But it's hard to market pop all the time because it gets tiring.
So yeah.
And in this case, you know, it is definitely,
the models are capable.
It's just you have to have, like, this perfect
alchemy of figuring out the right prompt engineering.
But then that's where it gets interesting then.
Because then let's say, okay, you know, I like art in the style of a specific
contemporary graphic designer.
Yeah.
And let's say the model's trained on that art.
And I say, all right, you know, create an image of like a robot playing tennis in the style of,
you know, person Y.
Sure.
But then you get into some really tricky questions.
We have to figure out a way to either compensate these people, because it's like it really becomes
like some sort of intellectual property theft. So I'm curious, like you're running a company
that does image generation. So how do you think about this? Oh, yeah, super, super interesting
issue and really complex. You know, these days, we don't really, we're not really seeing
customers, you know, wholesale be like, I want this one
person's name, you know, which is good.
I think it's a good thing.
You know, Greg Rutkowski is someone that I think about in this case,
because a lot of people add that person's name.
He makes really amazing fantasy art that's on DeviantArt,
and maybe he's helped out some video game studios and stuff.
I'm not sure.
But his name is kind of quintessential in this debate.
And, you know, the reason why people are,
it's important to understand the reason why people are doing it.
People aren't doing it because they're trying to copy Greg Rutkowski.
They're doing it as a shortcut to get somewhere.
Because if you take an image from Rutkowski, it's not easy to articulate.
In fact, there's a reason why there's a phrase called, you know,
a picture speaks a thousand words.
It's very hard to describe his art completely.
We can come up with some words, but it's a vibe.
It's a style.
And so people are using it as a shortcut.
And there's other people like H.R. Giger, who does, like, kind of more eerie,
horror-type stuff.
I've learned about a lot of artists because of how people are... there's a lot.
And there's no easy way around this.
But I think that this thing is going to go away in the next year, this year.
I mean, we're working on something.
I can't talk about it exactly right now.
But I think this idea is that users are doing this
because it's a shortcut to get to a very difficult to describe style.
And what they really want is to say, I like this, like I want to reference this.
I want to reference, actually I want to reference five of these different things and get to this, get to an image.
Because actually with graphics and a lot of images and art, what's happening is like, it's like remixing a lot of things.
Like even I, because I make music, I can kind of relate to it because it's sort of like, you know, if your inspiration is Kanye West and then your other inspiration is, you know, Dr. Dre.
And then your other inspiration is, you know, 6ix, who
produces music for Logic, and you want to combine, like, the drum rhythms of this person and the
instrumentals of this person and the lyrics of this person, did you copy them? I mean, all of these
people were inspired by people. And so I think in this case, people are just feeling inspired,
but they're using a shortcut. So the question is, how do we get them away from just copying
actual Rutkowski? Because that's definitely the wrong thing. We definitely don't want that in the world.
Nobody should be copying Kanye, you know, wholesale. That's bad, too. Right. But you just kind of, it's
difficult to eliminate the prompts completely.
Like, let's say you did have audio generation, you know, and you could say, write me a song
about, you know, I don't know, my girlfriend in the, you know, style of Kanye West.
Yeah.
I don't know why you'd want to do that, but you could.
And, you know, you sort of get into those issues.
You do, yeah, you definitely do.
But I don't think, I don't think that's people's true intent.
Let me ask you this.
Do you think that the artists that are, whose work is being trained on should be
compensated?
I think we need to find some solution for
them. Yeah, you know, we do a small, you know, it's not clear like, you know, every time you
generate an image, they get like, you know, some Spotify streaming payment, you know, one thousandth of
a penny. I don't think anyone's going to be happy in that circumstance. But, you know, we try to do
something small. Small. We don't think this is, like, a solution. We don't think this is enough
of a remedy, per se, by any stretch of the imagination. But one thing we do that nobody else seems
to do is we actually link back to a lot of these artists. Yeah. When an image
gets generated, we say additional credit, Greg Rutkowski, and it links directly back to his
DeviantArt page so that people can find him, learn about him, pay him, donate to him, whatever they
want, right? We even link back to, like, Wikipedia for artists that, you know, are no longer... there are artists
that are on Wikipedia that are not living, just so people understand what they're doing.
Right. Yeah. I think that's a good start. Okay, so let me ask you this.
You're, like, you're doing an image generation startup. You're very focused. Like, you'll tweet often
about how it's so important to stay focused, and I do think there's something to be said for that,
because there are so many other companies that are just kind of going all over the place.
Do you regret not doing video, though? Because what we've seen out of OpenAI's Sora and others
is just kind of, you know, jaw-dropping. It's pretty amazing. So is that something that now you think
you should have done? I mean, it's too soon to tell. You have to ask me in a year to find out
if I regret it, right? Right now, I don't have any regret. It's funny. Where we are with video
is kind of where we were with images. And I don't know if people remember, but about two years
ago, DALL-E 2 came out in April. And the world was amazed. Totally amazed.
But if anyone goes and looks at a DALL-E 2 image today...
Those images are awful.
Horrendous images.
You would laugh.
Right.
Right.
So when we see something like Sora come out, you know, I have this belief.
I've been having this personal reaction or a moment with all of this stuff, which is that my baseline for quality instantly resets.
Like 15 minutes after the technology comes out, I'm just kind of like anything worse than this is unacceptable.
And my feeling is that we're only at the very beginning of video.
And the truth is, if you could probably go talk to real video people, they'll be like,
yeah, this is not good enough.
This is definitely, I can't use this.
People are going to have a lot of fun doing it.
But the utility is probably not there yet.
It's probably, we're really just at the beginning, I think, for video.
So I think my feeling is my bet to people, you know, sort of listening is that in a year,
we will think, you know, we'll think something like Sora was not even close.
Right. Yeah, it is amazing. I mean, that transition, you talked about from DALL-E 2 to DALL-E 3. I mean, even going from DALL-E 2 to Midjourney, I was just like, I'll never type the word DALL-E in a Google search ever again. I don't want to get anywhere close to it. And it is, it's amazing. I think you've pointed this out, how fast we're moving, that these jaw-dropping breakthroughs become obsolete or kind of looked at as unimpressive a few months later. That's the speed that this stuff is moving at.
Totally. And I think with, you know... so that's video. So I don't have any
worries about video, because I think video is still early. Like, there's still maybe a moment
where we can do video. There's nothing...
It seems like you'd be able to somewhat easily take what you've learned with images and go to video.
Maybe underrating it, but yeah. Yeah, I think, you know, without saying too much, I feel like
probably where we're headed with images is not going to be, like, a
completely... we have to restart and throw everything away to go do video,
you know, to put it simply. I mean, we're trying to work toward a unified vision model that
incorporates 3D and video and everything related to pixels into a single model that's capable of
everything. But I think for now, we're just, we're trying to start with something that's narrow
and sharp that we think is deeply underinvested in. Right. I still think that images have ways to
go. Yeah. Let me ask you something about video before we go to break. Because there's a debate that I've been
trying to wrap my head around, which is kind of this debate between Yann LeCun, who built this
thing called V-JEPA, right, which will black out a portion of a video, and then the model with
its understanding of the world will basically fill in what it should have been. So, you know,
you have a guitar and someone seems to be playing it, black out the hand, and the model will create
the hand in the strings, showing that it has an understanding of the real world. They say that
that's not generative, that that's actual real world understanding.
And then on the other side, you have OpenAI that's created Sora, and it's this pretty
amazing thing where, like, clearly this model understands the physics of what's happening
because the pirate ships are, you know, sloshing around in a cup of coffee ocean.
And it's like, oh, they understand that the ships belong in the ocean, and this is the way the ocean
moves, and this is the way the ships should interact with the water.
It's so impressive, and it seems like it also understands the world.
But you ask the meta folks, and they would say that actually that process of generating these videos is actually limited and doesn't achieve what the AI research community is trying to achieve.
What do you think?
Yeah, I think that, you know, I haven't studied Yann's V-JEPA thing too deeply,
but I get the gist of it.
I mean, I would posit this to you.
Are you sure it understands physics?
No.
Actually, let me stand on the side of that it does
and then you can sort of take this argument down.
I mean, come on, like boats in the water,
the water's coffee, you know?
Right.
That's my argument.
What do you have to say?
Well, I mean, that is not too difficult to refute in part
because, like, just imagine that
there's video, and the video represents the physics of a different world, like Mars, right?
And even though there are natural physics to Mars, they don't necessarily represent the physics of Earth.
They represent some physics.
It just happens to not be Earth.
And so I would say you could just pull that thread a little bit longer and just say, actually, what it's really doing is it's representing the physics it understands in the videos it's being trained on, which could be
incorrect physics.
It understands what it's being trained on. That's kind of the main thrust of my point. And to a human, to us, it looks like physics. It's imitating physics, but it's not necessarily imitating correct physics.
Right. So it's really mimicking an understanding of its training data.
And likely, if there's any training data that's, like, cool CG, or, like, you know, The Matrix, where Neo, like, bent, you know, on his back, that's not real physics
of our world, but it models its training data.
And I think that that's totally fine, though.
For a tool that's meant for creativity, that's acceptable.
But can we really say that it has learned physics?
I can't say that.
I don't think we can, not yet.
You know, maybe lighting, but even the videos that have lighting could have
incorrect lighting.
Right.
It is interesting.
Yeah, I think that the folks that I speak within the AI community are really divided on
this.
Like we had Bryan Catanzaro from Nvidia a little bit back.
He runs Applied Machine Learning there.
And he's like implying some metaphysical capabilities in these large language models
whereas like others would say that it's just predicting the next word.
And this could be the same thing.
We're still so early on and still trying to figure out like what's happening in these advances
that it's still an open question.
Or maybe I'm just giving the people who disagree with you too much credit.
I just, but I take, like, a very different
argument than these, like, two different factions.
Yeah, okay.
I posit, I take the argument that it doesn't, I just like, I don't think it matters.
Right.
Like at the end of the day, we are making, we are making, these models could do it or could not.
But either way, what matters is what utility it brings to humanity.
And if what it brings is this amazing, you know, creative tool to create super slow motion action
shots for the next matrix movie, that's fine.
And if it can truly model physics in the real world,
because we want to simulate what might happen with self-driving cars at a faster speed than
actually having the cars be out in the world, so be it. To me, it doesn't matter. It's kind of
irrelevant. What matters more is its value to us as humans. And I think we're a little like too
deep on a philosophical level about whether it's this or that. The reason why I ask the philosophy
questions is that they matter from my perspective in terms of like what you can do next. Like if it does
understand physics, then you can imagine or anticipate that it will be able to do more than
if it doesn't. But it's definitely interesting. I guess I'm trying to say that it can do both.
Right. No, no. Anything is really like, the options are kind of wide open. Definitely.
Okay, let's take a break. When we come back, I want to do a quick lightning round through
the tech giants and also talk a little bit about one of the tech giants, the state of Google.
So why don't we do that when we come back right after this?
Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled with business,
tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative
takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers
break down the biggest business headlines in 15 minutes or less and explain why you should
care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're
using right now.
And we're back here on big technology podcast.
We're here with Suhail Doshi.
He's the CEO and founder of Playground.
We talked a little bit about image generation in the beginning.
In the time we have left, let's go rather quickly through the tech giants.
Let's start with Google because Google's been sort of like the punching bag of the AI community
for a while.
Suhail, you even had a tweet that says Google's lost its way.
It's the best company to compete with; even investors have stopped asking, what if Google does it?
I mean, Google did just start doing image generation. They had to shut it down. What is happening there?
Oh, man, I wish, you know, obviously, I only have a slight preview into what's going on at Google.
But, you know, my guess as to what feels like is happening is they're in a significant
race where either investors or customers believe that by losing this race, it's an existential
issue. Time will tell, however. And Google's rushing to be a strong leader in that race. And
they have to contend with a significant complex bureaucracy that is not really well attuned
for the velocity that AI is running at right now.
So it's organizational. I've wondered how much of it is because Google sees a threat to search over time if it
pushes the status quo forward too quickly. And right before we were talking, I was on CNBC talking about
the state of Google. And I was absolutely floored by one of the numbers that Deirdre Bosa,
who's an anchor there brought up, which is that I think Gartner believes that by 2026,
we will be doing 26% less searches than we are today.
Our search engines will have 26% less traffic.
I know you're connected with Perplexity in some way.
We just had Aravind Srinivas on.
I'm kind of floored by that number.
I don't believe it.
I think that search is going to continue to be a way that we use web navigation
and AI search like Perplexity will be more to satisfy curiosity
and engage with different topics.
What do you think about the stat
and what do you think about that argument that I'm making?
I mean, I think there's a very high probability.
It is greater than that number in a shorter span of time.
Whoa, for real?
That we'll be doing even fewer,
that search engines will have even less traffic,
an even greater decline than 25%.
That's right.
And that will happen before 2026.
Okay.
Yeah, that's right.
Exactly.
Because let's think about the model jump so far, right?
We've got DALL-E 2 in April two years ago.
Look at the difference between that and any cutting edge model.
We can look at GPT-3, which was four years ago.
And now we have GPT-4.
GPT-5 is probably slated imminently this year.
The jump from three to four was incredible.
And I think the reason why I believe this,
perhaps this very surprising thing,
is because I don't think people quite,
internalize how big of a jump can be had still. Like, we're still so at the
beginning, the early phases of this thing, that, um, it is moving faster than Moore's law
by a lot, and the biggest people right now are putting in huge quantities of money. And I think
I already find it annoying to have to go to Google and, like, run through a few links and then click
and then back and then click and then, oh, there's an ad here.
Okay, let me scroll down.
You know, it's already, it's, I think humanity already can tell it's frustrating.
So if you were to, if you were to go, hmm, this thing is already kind of inefficient,
somewhat frustrating.
In fact, like, I just want the answer.
I don't want to have to find the answer, right?
I think that's the problem these things are solving.
And you look at the model jumps, right, over the last three, four years.
It doesn't seem, it doesn't seem surprising that, like,
almost all the traffic would shift to something that is,
I mean, Google has very low switching costs.
I mean, right now it happens to be integrated well in the browser, right?
It happens to be, actually, the funny thing about Google is like,
it has slightly less lock-in and ease on mobile.
And most consumer traffic, desktop is shrinking for consumer while mobile's dramatically
increasing.
They do have Android.
We do have Android.
Yeah.
We do have Android.
But we're talking about whether Google's search matters.
Google could make a model that matters and is relevant.
But it still might spell the end of its search business.
So my general feeling about this is that the UX of something like perplexity,
we've already figured out is like a nice UX.
And you combine that with another model jump like GPT5 or six.
It doesn't seem that crazy to me that we end this desire of going to Google
and then scrolling through links and clicking on each of them.
Is your default search engine Google or something else?
Oh, certainly it's Google, but I've already, but I've already shifted so much.
It's not my first go to.
Right.
Unless I want to go to a very specific site, I mean, people going to their address bar
inside of a browser or phone to search a website that they're trying to go to is just,
they're not really using Google's value.
They're just, you know, it's like my, you know, my dad used to type in CNN.com into Google
and you could just type it in the address bar.
That's not a real search.
Yeah.
Right.
I already think it's not really a great go-to.
Interesting.
Okay, another thing that you said, let's go to meta.
You said the only thing scarier than Satya is Mark Zuckerberg taking AI seriously.
Unpack that.
Well, I feel like Mark has been very focused on VR because he's trying to do something that I think,
regardless of your view of whether VR is going to succeed or not, it's ambitious if he succeeds.
And I think he's like a very relentless entrepreneur and founder.
And he's one of the few entrepreneurs and founders that are like running a trillion
dollar plus company.
Not that many left.
So I feel, you know, I somewhat feel like.
It's him and Jensen, I think.
Yeah.
And he's, and he's very young still.
So I think that, you know, for him to take AI, and the thing about meta is it is super
set up to succeed at this.
They have the world's, they're like the first-, second-biggest
research lab, they have an immense quantity of compute that's only growing. I mean, I think he talked
about having 350,000 H100s by the end of the year, something like that. Yeah, and they're going to have
a total of 650,000 GPU equivalents by, I think, the end of the year, which is crazy.
He's got an extremely ambitious AI research leader. I mean, that's a lot of GPUs. How many do you guys
have?
Not anymore close to that.
I mean, more than a thousand?
Not more than a thousand.
Right.
So it's just crazy. I mean, speaking with ServiceNow also, which is, they're a 150 billion, 160 billion public company.
Like, they wouldn't say that they have in the thousands in an interview that I did with them.
So to have 600,000 is crazy.
Yeah, I just think that, you know, you combine founder with relentless ambition, with compute,
with the best talent, you know, to me, it's a recipe that is hard to, I mean, and then you compare
that with Google, you know, it feels a little like, you know, to me, it feels like they're a force
to be reckoned with in the next few years. Okay, let's talk about Nvidia. Speaking of Jensen,
I want to test an assumption here. I recently found out that basically the software
that they sell, along with their chips, is core to training
AI models. And that makes switching away a lot more difficult. Is that something that you're
finding in your business, that you're using the chips and the software to train models, and you'd
have a hard time switching to like an AMD? Yeah, the software is called CUDA, and it's like their
platform for doing all kinds of the, it's their way of interfacing with their GPUs. And so,
you know, it has lock-in in the sense that there's like a huge developer community around it,
just like x86 or something like that. You know, maybe there's, you know, software that's really
tuned and optimized for x86, so that's what causes people to kind of stay on it. With
CUDA, it's not CUDA that's keeping, I think, keeping a lot of us. It's actually that there is
nothing really dramatically better than Nvidia's GPUs. And so if there's nothing dramatically
better, then, I mean, the reality is the cost for training and inference are so high at companies
at scale that CUDA is not, is not like a big reason why you're going to stay there.
it's going to come down to compute costs.
And so if there were somebody that were really driving the costs down for the rest of us,
we would all flip because we'd be worth it.
So to me, it's not really, it's not just a function of kuda.
You know, I think that does, that is true to some extent.
But I think for the big companies or anyone spending a lot of money,
you know, we're just, we are, we all want there to be someone that can compete with
NVIDIA because one of the problems with NVIDIA was that they, you know,
released their H-100, but they didn't really reduce its cost.
You know, it's, you know, 1.9x faster, but 2x costlier.
And, um, and it technically reduces your cost because you're getting more GPU compute per
node. Like, you have a server; a server costs a finite amount.
Now you can put more of the GPU dense compute per node.
So your costs go down, but they didn't really price their GPUs lower.
So that's somewhat disappointing because it would have been nice if it was the same price,
but double the compute, obviously.
So Nvidia knows what they have. Right. And so what about a company like Amazon? They're
obviously developing their own chip. They're making models available off the shelf. People are using
AWS compute, I imagine, to run models. What's your perspective on Amazon's place here? I mean,
they also have Alexa, which is like, you know, a sleeping giant. Yeah. I mean, I think, I think
AWS has significantly missed the mark, actually, on this.
I think that Azure and GCP are doing better, Azure better than GCP, better than
AWS.
AWS is interesting.
And we were looking for compute last year, and AWS wanted to charge us five times more
for the same GPU than 10 different providers all around them.
Would you stop at a gas station
that cost you five times more than the one that's right next to it?
I would speed by it, throw an egg.
Right.
And that's kind of what, and you know, I think what's happening there is that,
I mean, this is my insights.
So I hope it's helpful for someone.
But my guess is that they have a scarce stockpile of GPUs.
And they know that they can price those GPUs internally.
They can price them to their internal customers at that price.
And the customers will buy it because the customers can't go anywhere else.
You know, maybe because they're not allowed to in their company, so then they can charge five times more.
And that's what the sales reps are doing.
But if you are a new customer, we have choices.
You're not doing that.
You're not going to do that.
But the sales reps will do it because it helps them reach, you know, quota.
That's crazy.
So I think there's a short-term... it kind of feels like ever since Andy Jassy became CEO,
AWS has turned very short-term minded about how it's going to earn revenue from
you. And this is obviously bad because anyone that knows anything about startups knows that
the biggest companies are yet to be built. But they're definitely not going to be running
on AWS if their compute is five times more expensive. Wow. Yeah, that's crazy. And I wonder
what that means for startups like Anthropic that, you know, have billions of funding from Amazon
and are going there. Well, that might be a strategic disadvantage. Exactly. Let's talk Apple real
quick. I wonder what they're going to do with AI. I mean, they're hinting that they're going to do
something at WWDC, like they're going to make a supercharged Siri or, you know, take the
search bar away from Google and then give up all that money they're getting. They have some
incumbency advantage, don't they? Because if they really push hard on AI to, you know,
take up more room in the operating system, then they can build on some of the advantages that they have
today. Yeah, I think Apple is in a really good position because their culture is already seemingly
like one where they wait and see. And their advantages are not easily eroded because they own
the hardware platform and all the network effects that are associated with that. So Apple seems
like they're in a really healthy position to wait and see and build the best things,
not just build kind of aimlessly. And Google feels like it's just
trying to build everything.
They're building Imagen and Gemini and the coding stuff, and they're building an IDE, and
then they've clearly asked all the PMs to integrate it this week into every
imaginable product.
I opened Gmail,
I opened Docs,
I opened so many different random Google things,
and they're all trying to convince me to use AI.
And I think Apple is super well positioned to just let Google do all those experiments,
and then just pluck the ones that are the best ones and use its
massive install base and distribution power to deliver an amazing experience, not a rushed
one.
So I think Apple is behind, but I think that they are often okay being behind, and they
execute very well kind of from behind because they find ways to leap.
Yeah.
Yeah.
All right.
Let's talk lastly about Microsoft and OpenAI.
You know, you gave a pretty strong statement about Amazon.
I'm curious what you think
about the current offering from those companies.
I mean, obviously, you're competing with them on the image gen front.
And also, just from your sense, do you think that the OpenAI situation is stable right now,
or are there going to be more fireworks on the governance side there?
Hmm.
I think that the folks at OpenAI really only care about one thing.
And I think people don't fully internalize this because it seems a little too
crazy. Sometimes when you read a company's mission you're like, whatever, but I think
that Sam is genuinely focused on AGI, attaining that, and I think he does
not care about graphics and, you know, video necessarily. I think those are stepping stones
that help research, you know, get to the next point, but I think he is very focused on
that. And so I think, you know, broadly, we don't tend to worry about that because we're pouring all of
our energy into graphics. I can't say much about Microsoft, but I can just say that I genuinely
believe that OpenAI is trying to pursue that effort. I can't tell whether that'll be three years
from now or 30 years from now, though. In terms of what Microsoft's doing, yeah, I don't know. I'm not
sure. But brilliant play by Satya, either way.
Yeah, no matter what happens.
To be the most valuable company in the world at present.
But they seem like they're the tech giant that's in the best position right now,
which is, wow, given where they were, like, seven or eight years ago.
And also just very surprising, because it's like, if they had not done that,
they would have been maybe in the worst position.
Totally.
I mean, being aggressive sometimes, it matters.
They've learned their lesson, right?
They sat by and tried to ride Windows for as long as they could.
And then people were like, yeah, we don't want to use desktop
operating systems anymore.
And they're like, oh, that's interesting.
Okay.
And the person that led that shift from, you know, one era of computing to another
was Satya in the Server and Tools division.
So here he goes again.
Yeah, you know, he is doing something that I find even slightly
more brilliant, which is not just the OpenAI deal, but if you observe very carefully,
he is actually partnering with everybody.
He is bringing all the models into Azure.
Right.
And he's doing it very methodically.
And I think that he is really setting up Azure to leap and be a lot more competitive.
So I actually think that he's doing a really good job kind of playing every field.
Yeah.
And positioning himself, positioning Microsoft, sorry, kind of in the middle
of all that. So, you know, game recognizes game to Satya.
Totally. All right. Just to end, I want to say that I actually reached out to you initially
when you had a tweet advising founders that, you know, if you're going to speak with a journalist,
speak with someone who's independent. And I certainly am independent. And I DMed you, and I was like,
so? And you lived up to your words. So I appreciate that. And I'd also say that these conversations
are super valuable, and I think that speaking with journalists inside,
well, we're probably not going to agree on this one,
so it could be a different conversation,
but speaking with journalists inside some of the corporate media,
I don't think they're all out to get tech founders,
especially in off-the-record conversations,
and it's sort of like, if there's a divide between founders and reporters,
then the misunderstandings will just grow.
But anyway, that's just my piece.
So I appreciate you being here, though, but go ahead.
Yeah, I think that sort of the real issue is not that, you know,
the individual reporter is a bad person.
Yeah.
I think they're all well-meaning, well-intentioned.
If you have conversations with them, right, a conversation over drinks
or dinner, whatever, they're obviously good people, well-meaning people
working hard.
That's not so much the issue for a lot of us, you know, who basically think that
you should largely stop talking to the institutional media.
It's not that we think they're bad. It's that we think that their institutions are
bad, and their institutions create incentives that create bad situations.
I think we should also be a little bit curious:
what causes a reporter to write a story, and then email you and say, do you have a comment,
and then publish the story one hour later? What is the cause for that?
You know, is that person a bad person? Probably not.
That person is under some kind of deadline or incentive or pressure that is causing this thing.
And I pick on this instance because it's a very obvious one that everybody knows is not good.
You know, that has really bad implications.
They're not giving founders a chance to respond to something.
And it's happened to a lot of my friends; a lot of other people talk about this.
So, you know, I think that's the real issue.
And that's why, you know, you could just as well work at any of these media institutions.
But the fact that you're independent causes your incentives and your desires of what you want to write and what you want to do to be totally different.
And the reporters that used to work at some of these institutions that have struck out on their own like you, you can see it
all get cleaned up, right?
They completely change what they write, what their beats are, and how they work and interact with other people in the world.
So I think it's a lot better, more factual, more interesting reporting.
Yeah, it's interesting.
I mean, I obviously am competing against, like, the broader media ecosystem.
So I do hear you on that front.
Anyway, it's something that we could talk about forever.
It is good to hear your perspective on it.
And once again, I appreciate that
you put something out there in the world.
And then when I was like, all right, let's talk.
You said yes.
So I hope this isn't the last time.
I hope to have you back.
And I'm so thrilled that you were able to come on and join and talk about all the new stuff that you're working on and the broader industry.
It's cool to be able to speak with someone where, like, you read their stuff on Twitter,
and then you have a conversation like this that goes longer than an hour,
and it could easily go two or more.
So the substance is there, and I appreciate you being here.
Thanks again.
Yeah.
Thank you.
Thank you for having me.
All right, everybody.
Thank you for listening.
We'll be back on Friday to break down the news as we do every week,
and we'll see you next time on Big Technology Podcast.