Big Technology Podcast - Meta's New AI Model, AppleGPT's Potential, Is ChatGPT Getting Dumber — With Aaron Levie
Episode Date: July 21, 2023. Aaron Levie is the CEO of Box. He joins us for a special Friday episode to break down a major week of AI news. We cover: 1) Meta's incentives to open source its Llama 2 AI model. 2) Whether people actually want to interact with chatbots, no matter how well they perform. 3) Why enterprise might be the clearest use case for AI. 4) Why Apple is developing LLMs and where the project might go. 5) Whether AI companies can actually build moats around their products. 6) Is ChatGPT getting dumber? 7) Levie's view on AI and jobs. 8) AI's influence on creativity. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
The CEO of Box joins us to break down a wild week of AI news and the state of technology.
With meta, Apple, and Microsoft all making waves, you're going to want to hear about that and more, all coming up after the break.
LinkedIn Presents.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
Joining us today is Aaron Levie. He's the CEO of Box. He is a friend of the show.
I think it's your fourth appearance, Aaron.
And every time you're here, it's always just huge news going on.
So this week is no different.
Welcome to the show.
I think you certainly chose a good time for a technology podcast, given everything going on in the world.
Although these days, probably every week is pretty crazy.
Absolutely.
It's funny, because the last time we spoke was the week that ChatGPT came out, or pretty close after it was launched.
And now we have like AI news every day.
Something big is happening.
And all the big tech companies are jumping in.
And really making some waves.
So why don't we just start with the big news of the week — well, there are multiple stories, but this is the one that really caught my eye — which is that Meta is open sourcing its Llama 2 model.
Now, everyone's looking for an ulterior motive here. Like, this is Mark Zuckerberg really sticking it to the competition.
I don't fully understand how that's happening. But maybe you have a better idea of what's going on.
Well, so I'll give you the consensus theory about maybe sticking it to the competition, and then a more philanthropic interpretation with some self-interest.
So maybe in reverse order: Facebook — Meta now — has, I think, consistently released open source technology and frameworks for the industry to leverage throughout its history. We use a bunch of technology that came out of Facebook. I think it's been a core philosophy of that organization to always contribute software, or data center design, or infrastructure design, back to the community. And the tech industry, I think, has always benefited from their open source approach.
Now, that's great for all technology companies. It's also great for them, because you tend to see a flywheel where the best talent wants to work at the companies where you can do leading research and open up your technology. In AI in particular, you tend to attract the best AI talent if there's an element of a more research-oriented, open-oriented approach. And so I think that's super attractive. They can get great talent. They can also make sure they're sharing more technology and practices across the industry. So that's all great.
Now, on the more pure competitive dynamics: I think there's an element which is, on the margin, if you can reduce the proprietary nature of your competition's technology stack — this idea of commoditizing it for the market — then, on the margin again, you might be able to reduce some of the value proposition or some of the competitiveness that your competition might have.
And I think that's probably a little bit of a stretch in AI right now for Meta. To your exact point, they're not going to be monetizing the AI directly. They just want, ultimately, more usage of their platforms, and AI can be a contributor to that. So that's probably the one take: if they can commoditize something that Google would otherwise make proprietary or charge for, there's some kind of game theory in that.
But I think for the most part, it's probably way more of the Facebook and Meta ethos of open source technology. And then, if you look at Yann LeCun as an example, they've always been at the forefront of AI research. So I think it's more consistent with their research and open source orientation as an organization. That's sort of my best take. It brings in great talent. It puts Meta at the top of the food chain in terms of leading technology companies. It has everybody studying their technology and their models. They become more of a standard-bearer for the industry. I think that all generally accrues good things to the companies that do it.
Right. So if you're Google, for instance — let's just talk quickly about the bad thing, or the dark use of this. If you're Google, you're trying to sell your version of Bard or whatever to people, and if Meta releases its model as open source, there goes a business line.
Yeah, I would guess that they've spent less than an hour even thinking through that at Meta, just because the high order bit for them is the talent and just being a leading AI company.
But if you draw it out and say, okay, Google makes their money on advertising, Facebook makes their money on advertising, you're competing for ad dollars and attention on YouTube and Google Search and whatnot — if Google's AI technology is superior, or delivers better results for the market, they'll make more money. If you find a way to make that an open technology, you can maybe bring down their competitiveness just marginally.
But again, I'm really not convinced that this is an affront to any other major tech company. I think it's just net positive for Meta as an organization.
So yes, let's think a little bit more about the constructive uses. I mean, is it that Meta releases this and people will then, I don't know, build a chatbot onto Messenger? I know that's the most rudimentary way of thinking about it.
But what value accrues to Meta's products in particular if everybody's all of a sudden building on their large language model?
Yeah.
Well, if you roll back the clock, it was six or seven years ago that David Marcus, Zuck, these folks were on stage saying Messenger was the future of how you interact with businesses and commerce. You would say, I want flowers delivered to me from this shop, and you would do that through a chat session. And it really didn't take off, if we're being honest, in terms of actual usage. But they were really early to this idea of messaging platforms being a way that you would interact with vendors and potentially people. And so I think this wave of large language models probably gives them another chance to get back into that side of the business.
Right now the conversation is probably around chatbots and, you know, AI friends, as it were — I couldn't possibly guess how much Meta would care about or do that. But certainly for interacting with businesses, and becoming a leading interface for interacting with AI models, WhatsApp, Facebook Messenger, and Instagram seem like natural platforms where one might do that, if this is a modality that continues to persist in terms of how we communicate with these agents.
Right. Yeah. And it could be a crucial way. I mean, they still have WhatsApp highly undermonetized, because they don't really do ads there.
So it could potentially — I mean, I remember being at their F8 developer conference. I was a reporter for BuzzFeed, and I saw them talking about messaging as the new platform, and I really believed in it and wrote about it for BuzzFeed. And now I kind of feel a little snakebitten, because, wow, it turned out that it wasn't just the technology that held people back. You could full well book a flight on the Kayak bot — people just hated the mode of interaction. They'd much rather tap a few things on Kayak's app and be good with it.
So even if these bots get so much better, what leads you to believe that this could possibly be different this time?
Yeah, well, on the snakebitten — to be fair, I think it was probably not a malicious snakebite.
No, no, I don't think it was malicious. But they sold the vision. And I, like, told readers, hey, this is going to happen. And I also had access — I think you must have also — to Facebook M, their Messenger assistant that actually had contractors in the WhatsApp building on the other end of it. And that was so mind-blowing. It was people. But it was assistive and very cool.
So, unfortunately, I'm like the worst person to ask on this, because I have always been convinced that chatbots are just one iteration away from happening. In 2003 or 2004, I started developing chatbots for AOL Instant Messenger. There was this wave of chatbots — one was called SmarterChild.
Oh, yes.
You remember that? And you would ask them questions, get movie theater times and whatnot. So I've always been, you know, red-pilled on chatbots.
But to your exact point, the challenge is you're always competing with a powerful graphical user interface, where in just two clicks, without any verbose typing, you can get exactly that same answer.
And so I do actually think the jury's out, to be totally honest, on what the form factor will ultimately be for a lot of these AI interactions. You do see some examples where there's a viral video of somebody interacting with some software via a chatbot, and you're like, holy crap, that's the most amazing thing ever. But then you flip it and say, well, actually, how many clicks would that have been on just their normal website? And it's like, okay, that could have just been two clicks, and you could have done the same thing.
So I think the one challenge we're going to run into with the chatbot craze is often this element of: you don't know what the AI bot is able to do. And that makes it sometimes a complicated paradigm, because at least with a graphical user interface, it's the responsibility of the software provider to say, here are all the options available to you. Here are all the buttons you can click. Here are all the things in the menu. Those are the functions we can provide you. With a chatbot, you don't really know in advance what all the functions are that the service offers you. And so you can have a lot of dead ends, or a lot of incomplete experiences. And I think that will be the fatigue consumers end up having in some cases: when they go and try to interact with one of these bots and it doesn't produce the thing they wanted, are you willing to go back to it again and again if you know there are a lot of these kinds of dead ends? So that tends to be the problem with these chatbot assistants.
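The affordance gap described here — a GUI enumerates everything it can do up front, while a chatbot leaves you to discover its limits by hitting them — can be made concrete with a toy intent-matching bot. All the intents and replies below are invented purely for illustration, not any real product's behavior:

```python
# A minimal intent-matching bot. The intent table is the bot's entire
# capability surface, but a user typing into a chat box can't see it.
INTENTS = {
    "movie times": "Showings today: 4:30, 7:00, 9:45.",
    "buy ticket": "Okay, which showing?",
}

def bot_reply(message: str) -> str:
    for phrase, reply in INTENTS.items():
        if phrase in message.lower():
            return reply
    # The chatbot dead end: the user had no way to know this request
    # was out of scope before typing it.
    return "Sorry, I can't help with that."

def gui_menu() -> list[str]:
    # The GUI equivalent: every supported action is visible up front.
    return sorted(INTENTS)

print(bot_reply("What are the movie times tonight?"))  # in scope
print(bot_reply("Can you refund my ticket?"))          # dead end
print(gui_menu())
```

The dead-end branch is exactly the fatigue point in the discussion: the bot's scope is invisible until a request falls outside it, whereas `gui_menu()` makes the same scope explicit before the user acts.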
I think that's probably what Messenger and WhatsApp and Facebook have run into in the past. Large language models, I think, open up the surface area of what they can ultimately solve. But I would say we're just in the very, very starting period of this. It's been fun to watch Pi and Inflection as at least one potential outcome of this. And if you listen to Mustafa Suleyman talk, there's this vision of just being able to book a flight, and this assistant would know exactly how to do that for you and what to do. I think that's an incredibly exciting, audacious, ambitious vision, and it would be amazing for that to happen. Of course, there are going to be some questions, which is: a lot of people have very idiosyncratic preferences, or things where you're going to have to go back and forth with the bot enough times that, again, just a classic user interface might be a better way to do it. But I'm excited to see all the exploration. I think it's great that we're pushing the boundaries of software and user experience, and it's only a net positive for all of us that there's going to be innovation going in lots of directions.
Maybe we look back in five years and we say, okay, we still don't really like doing chat for most of our interaction. But nevertheless, these large language models are solving a lot of problems behind the scenes because of the kind of reasoning skills and logic they have in them. And that's another outcome this technology could lead to.
Yeah, it could be that it just kind of hangs in the background. And then you're going to a certain page and it pops out with an English-language suggestion — a natural language suggestion. And then all of a sudden that makes it work.
It's kind of interesting.
I feel like we're both — and I think many people watching this are — in the same place, where it's like, oh, wow, this is unbelievable technology. It's going to do something, and the question is: what's it going to do?
Yeah.
Well, I think there's a continuum. On the enterprise side, we already have a very strong sense of what it can do that we could never have done previously. In a lot of the enterprise use cases, you're not competing with a graphical user interface — you're solving a problem that quite literally could not be solved previously. In our business, you could not go to a data set of information made up of, let's say, documents, ask that data set a question, and have an AI model actually answer that question from a large set of documents. It just was not possible. This was not something that search could even do. And now, with these large language models combined with this idea of a vector database, you can solve that problem. So in that case — unlike Kayak inside of Facebook Messenger, where you had an alternative, you could just go to kayak.com and get that same experience — there's literally no alternative to many of these enterprise applications of AI. We're 100% confident that's a breakthrough in terms of what we can now do with software. On the consumer side,
obviously, if you're really just replacing something you already would be doing today, but now via a chat interface, there are a lot of other elements that have to go right around sociology and human-computer interaction and all of that. So that one will certainly remain to be seen.
So let's just talk quickly about Llama. I keep going to call it LaMDA, but that's clearly not the one. So, Llama — have you heard anything? Are your developers interested in using it? Have they used it? What do you think it can do? Okay, it's a big large language model, it's open source — not really open source, we're going to get to that in a second — but it's open source to a large extent, right? So what are companies going to do with this thing?
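The enterprise pattern described a moment ago — pairing a large language model with a vector database so you can ask questions of a large set of documents — is commonly called retrieval-augmented generation. Here is a rough, self-contained sketch of the retrieval half; the bag-of-words "embedding" and the sample documents are toy assumptions standing in for a real embedding model and a real corpus:

```python
import math
from collections import Counter

def embed(text: str) -> dict[str, float]:
    """Toy 'embedding': a normalized bag-of-words vector.
    A real system would call an embedding model here instead."""
    for ch in ",.?:":
        text = text.replace(ch, " ")
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {w: c / norm for w, c in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(weight * b.get(word, 0.0) for word, weight in a.items())

# "Index" the document set: store (chunk, vector) pairs.
# A vector database does this at scale with approximate search.
docs = [
    "Q3 revenue grew 12% year over year, driven by enterprise seats.",
    "The security review found no critical issues in the new API.",
    "Hiring plan: add 20 engineers to the platform team next year.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the question."""
    qv = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# The retrieved chunks get stuffed into the LLM prompt, and the model
# answers from that context rather than from memory.
question = "How much did revenue grow?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\nQ: {question}"
```

This is why search alone couldn't do it: retrieval finds the relevant chunk, but it takes the language model reading that chunk to actually answer the question.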
Yeah, so first of all, I think it's incredibly exciting that we are at this moment in technology where you have so many different at-scale, highly capitalized, very intelligent organizations — and individuals in those organizations — all advancing the state of the art of AI.
Nine months ago, the common comparison — and I used this comparison as well, because I lived through it — was that ChatGPT was sort of this iPhone moment of AI, because it finally made interacting with an AI model a consumer, at-scale value proposition. The thing I missed at the time, that is now very clear, is that it's like an iPhone moment, but on steroids — because not only do you have iPhone and Android, you have ten other operating systems and ten other phone manufacturers all vying and competing for whatever the state of the art is, on a daily and weekly basis. So we're seeing a level and scale of innovation, in a platform shift, that I've never seen. It doesn't even compare to cloud computing or the early web. This is a different kind of scale.
Now, the impact — obviously, we still have to see how this plays out.
But the fact that you've got Meta, Google, Amazon, Microsoft, IBM, Anthropic, OpenAI — just an incredible number of organizations — all building breakthrough large language models or diffusion models is, I think, a pretty incredible moment.
And with Llama, I've only played with it for like five minutes, so I'm not fully up to speed on how it would benchmark against GPT-3.5 or 4. But it feels fast, and the quality of the answers I've seen in my very rudimentary tests has been pretty high.
And I think the play, and the value proposition, of a commercially usable open source large language model is that I, as a developer, could go take it, change its weights, fine-tune it the way I want for my particular set of use cases, and run it on my own infrastructure. That's a pretty substantial value proposition for a large number of developers and a large number of use cases. But it's pretty akin to how we always think about open source: if you have the prerogative and the skill set and the need, open source gives you some really great benefits.
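That take-the-weights-and-adapt-them workflow can be illustrated with a deliberately tiny stand-in: a count-based bigram model whose "pretrained" counts get updated by continued training on domain text. The model, the corpora, and the update rule here are all toy assumptions for illustration — real Llama 2 fine-tuning would start from the released checkpoint and use a deep-learning framework:

```python
from collections import defaultdict

class BigramLM:
    """A toy 'language model': predicts the next word from bigram counts."""

    def __init__(self):
        # The counts are this model's "weights".
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, corpus):
        for sentence in corpus:
            words = sentence.lower().split()
            for a, b in zip(words, words[1:]):
                self.counts[a][b] += 1

    def next_word(self, word):
        """Greedy decode: the most frequent continuation seen in training."""
        followers = self.counts.get(word.lower())
        if not followers:
            return None  # a continuation the model has never seen
        return max(followers, key=followers.get)

# 1. The "pretrained" base model, standing in for released open weights.
base = BigramLM()
base.train([
    "the model answers general questions",
    "the weather is nice today",
])

# 2. "Fine-tune": keep the existing weights and continue training on
#    your own domain data -- here, imaginary contract-review snippets.
finetuned = base
finetuned.train([
    "the contract expires in march",
    "the contract renewal needs legal review",
    "the contract renewal needs legal review",
])

# Domain text now dominates the model's predictions: after fine-tuning,
# "the" is most often followed by "contract" (3 counts vs 1 each before).
```

The point of the sketch is the shape of the workflow: you own the weights, so you can keep training them on your own data and serve the result on your own infrastructure — exactly the lever a hosted commercial API doesn't give you.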
And then there are a lot of people who say, you know what, I actually don't need that level of customization, and I don't really need to run my own infrastructure, so I'm fine with a commercial approach — I could just stay with what I'm doing with OpenAI or another commercial vendor, Anthropic, et cetera. So I think it's just great to have choice in this market. And it's going to push the industry forward, because you're going to see this constant leapfrogging between providers, and that, again, dramatically advances the state of the art very quickly.
So it's going to have companies building their own applications on top of this stuff. But it's interesting, because the way that you describe it, it seems like if you're smaller, it might have less appeal — you'll just go with something off the shelf. If you're bigger, then all of a sudden it becomes interesting. But this is where things become kind of complicated, because — this is from Insider — it talks about how it's not exactly so open source, right? The story says: in Meta's terms and conditions for those requesting access to Llama 2, the company states that larger users won't be granted access to the model in the same way smaller companies and individual developers will. Any company hoping to use Llama 2 with a user base of 700 million active users a month or more is required to request use of the model.
I guess that's just protecting against direct competitors using it. You're smiling. What do you think about that?
I mean, yeah, that filters out four companies on the planet. I think that's basically just a nod to saying, hey, Google, please call us. Give us a ring if you're going to use the language model. But I don't think that's going to be an issue for 99.999% of the world.
Apple is in the game also. This is coming from Bloomberg: Apple Inc. is quietly working on artificial intelligence tools that could challenge those of OpenAI, Alphabet Inc.'s Google, and others, but the company has yet to devise a clear strategy for releasing the technology to consumers. It's built its own framework to create large language models, and some people inside the company are already calling it Apple GPT.
It's so strange, because Apple, you know, doesn't have any really successful consumer products of this kind. I mean, of course it has Calendar and Maps and whatever, and it has Messages — okay, now maybe I'm talking my argument away. But it just doesn't feel like it has any real consumery products, like a social network or chatbots or anything like that. And I can't really imagine it building these models and then licensing them, like an OpenAI would. What do you think is happening inside Apple right now?
Yeah, well, I would probably counter the core supposition.
I mean, I basically rebutted myself as I was asking the question. Between Maps and Apple TV and iMessage and Photos and the camera, I probably use their software more as a consumer than anything else at the moment. But you know what I mean — it's this type of app they don't do.
Utilities, yes.
Yes.
Well, that's actually an interesting question: is this a utility? I would actually argue that their play here should be purely utilitarian — it shouldn't be a social thing, which, to your exact point, is what they're maybe less apt to go and pursue. So to me, you step back and you say, okay, should you be able to grab your phone and just say, hey, can you order me a pizza? And then a little dialogue comes up and says, hey, here's where we're going to order the pizza from, click yes if you want it. That seems like the most obvious thing on the planet that Apple should just do as a phone. Or, hey, please write down a note for me to go do this thing and remind me later — and it just gets it right every single time. Obviously, that's just Siri plus plus. But I think Apple has both the device and the software prowess for it. And maybe they were late to large language models of this current ilk by nine months or something, but the good news is that everybody's still buying their phones. And I think people would switch overnight if there was an integrated application right on the device that could solve that kind of use case. So if they put their mind to it, and the use case was a smart assistant on your phone that could do everything, I would be very optimistic about their hit rate and their success in driving a strategy like that.
I went right to Hacker News after the news broke, and it was amazing to see the conversation. It's kind of what you'd expect: half of the engineers are dreaming up these amazing products that Apple could build, and the other half is like, how about fixing Siri? I mean, it is embarrassing what they've done in terms of the assistant. They were first. It seems like we're still using Siri 1.0, maybe 2.0. So the question is, are they even capable of doing this?
I think they definitely are. And I think this is just one of those things — Apple is willing to take their time, study a market, and decide when is the right time to enter it. And I would actually say: if in three months or six months or nine months or whatever, they launch Siri 2.0 based on an all-new large language model or whatnot, I think within a month of that happening, we will have forgotten whether we thought Apple was late or not. It just won't have mattered. Apple takes the time to get the thing right.
And I think we are in such an avalanche of innovation and change right now that, in a weird way, there's a premium on leaning in and understanding the market early, so you understand all the different fractal ways this is going to take off. But there's not a huge premium on just being right out there with a product, no matter what that product is. The premium is actually making the right decision on the right architecture and the right user experience. And Apple is not one to do iteration and beta testing in public. So I think by the time we see a product from them, it'll be one that has learned a lot of the lessons we're already talking about — about where these things get it wrong. Even just take things like hallucinations: it's not really on brand for Apple to give you a product that's just going to hallucinate a ton. So they probably want to figure out, what is the right approach to take so it's not just making up answers all the time for all of your questions? What is that architecture? How should it plug into search? Those kinds of things, I think, make sense for them to take their time to get right.
One of the things that was really interesting in the story is it talked about how Apple views this: it's getting into this because it doesn't want to miss a potential platform change. And I think that says so much about where this is heading. I still don't think it's necessarily settled — we have to see this happen, right? There are still open questions, like we talked about in the beginning, about whether people want to use the chat function to begin with. I think they'll probably gravitate toward it, but it's just kind of interesting to see how seriously Apple views this.
And my hunch is, what's going on in Cupertino right now is they're looking around — they've got some Vision Pros lying on their desks — and they're like, God damn, we just spent five or seven years trying to build for the next computing platform, which was the metaverse or whatever, AR/VR, and this chat thing surprised us, like it surprised a lot of people, right? Maybe it surprised Meta the same way. And now they're saying, all right, let's get going on that front.
Yeah.
Yeah.
I think, you know, the good news about Apple is, again, while they take the time to get things right, when they decide to turn on the engine, they see it through and commit to it. And I'm pretty confident they've got the right kind of prowess to do that if they decide they want to.
But they haven't really been able to show that they can excel in a paradigm shift of technology. I'll just take voice computing, which is the most recent, right? Amazon and Google — obviously it didn't turn out to be what a lot of people thought it was going to be, but Amazon and Google took the lead. Apple comes out with the HomePod and it flops. So I don't know if they're really going to be able to navigate this change. Maybe this is an archaic argument, but it seems like they're very good at refining the iPhone and less good at thinking about brand new areas of computing. Although the Vision Pro was impressive, I still don't feel like they can do it.
Well, I would say this actually benefits dramatically from a lot of the iPhone's innate characteristics. So I wouldn't think about it as out of left field or orthogonal to the iPhone — I think it builds on it. Literally, voice might even be the right way to interact with this for a lot of the use cases, versus an actual chat interface. And if you think about what the iPhone is built for, we are all a second away from being able to talk to our phones, ask our phone a question, and get an answer back. And at this point, building an actual large language model is extremely well understood. One of the interesting lines that I didn't fully appreciate a year ago, but am now a lot more steeped in — from one of my top AI friends — is that there are no secrets in AI. That's because everything in this space takes such a research-forward, research-first approach. And if you have a space where there really are no secrets, the skill set that seems to be required right now is capital, and then 10, 20, 50 AI engineers — but not a thousand. That's something Apple is extremely capable of putting together, and then designing an incredible user interface around. So I'm going to take the other side on this one: I would be bullish on their approach, whatever they decide to do here.
We're going to go to one of my favorite segments when we have you on the show, which is reading Aaron's tweets — or his threads, because this is applicable. This one is talking about moats. You say: I've probably spent more time debating where AI moats will be created than perhaps any other tech trend to date. There's something quite fascinating about AI where the deeper you are in the stack, like training models, you have risk that there's a technical breakthrough and that leapfrogs your approach. And the higher up in the stack, like thin wrappers, you're at the mercy of platforms not competing with you. There will be trillions in value generated, but it's too early to tell where.
It's very interesting. I mean, you basically have all these big tech companies and smaller startups — everything we've talked about up until this point: Facebook's model, Apple's, Google's. The question is really where the points of differentiation are. You've had these debates about where the moats are. Who's really going to come out on top and profit from it? What are you leaning toward now? I know we don't know yet.
Well, I'll give you the two obvious ones, and I hate to bring them up because they're so obvious. The first one, really quick: at the NVIDIA level, there's an obvious winner, because at the end of the day, all this stuff has to run on — at least today — GPUs, and there's a very small number of relevant players there. So you kind of know the infrastructure winners automatically.
On the opposite end, you have, you have the existing software that already has data. It already
has users. It already has workflows. And AI becomes this kind of booster into the software.
that makes that software more useful, more valuable, more intelligent, more functional.
And, and, you know, for all intents and purposes, I mean, I'm sort of, you know,
I'm talking my book, but it's going to be, you know, SaaS products, probably in the
enterprise space. It's going to be the ServiceNows of the world, the Atlassians of the world,
you know, we believe box and other players, because we can plug in AI into our software
and just make that a much better experience, be able to solve problems for customers
that they couldn't solve before. So, so those ones are,
are pretty obvious. The reason for the thread was, on one hand, and this is actually, I mean,
again, like, it's funny how fast these things age. Like, this was before Llama 2 that I wrote that.
You know, imagine you're going down a particular architecture paradigm and you're an AI model
trainer. So you're at that level of the stack. Llama 2 comes out. The entire world all of a sudden
puts all of their energy, maybe temporarily, but they put all their energy on
Llama 2, and they're like, oh, this is now, you know, God's gift to open-source, you know,
AI models. Very quickly, if you were training an inferior, you know, open-source, you know,
LLM, you might already be on the wrong architecture. And now you have to kind of move over to Llama
as the new, you know, model that we're all going to kind
of be building on. And so the speed at which that happens in this industry, I've never seen
before in terms of how quickly everybody will kind of just shift their lens on the underlying
technology. Conversely, all the way up the stack, I think we've already seen examples where a
company that maybe was building a very lightweight interface on top of the LLM of like,
okay, we're going to give you an interface to produce text for some use case. We're already
seeing that, unfortunately, the actual underlying product providers are kind of incorporating
many of those features into their products, whether that's ChatGPT or Bing or whatever.
And so then it's very, very hard, again, to survive as kind of one of these quote-unquote
thin wrappers. So I think that was sort of the reason I wrote that out. It
seems pretty tough to be an AI model trainer unless you are Meta, Google, Microsoft,
OpenAI, Anthropic. And you definitely don't want to be one of these just pure thin interfaces on
top. You've got to really establish a high degree of value, and probably a workflow and
kind of data moat associated with your product.
Yeah, and that thin wrapper thing,
I mean, it's really unbelievably competitive.
And the thin wrappers, I mean, when OpenAI came out with ChatGPT plugins,
it was just obvious that like you're going to have companies that will build
and maybe they'll have some proprietary, like relationships that they'll be able to build
into the AI.
Like if you're, for instance, I don't even know, like a Kayak, potentially.
Like you have the relationship with the airline so you could have this one bot.
But even that just like seems like it might be subsumed by AI.
But the things that really seem like in jeopardy are like the Character AIs, where, like, you can, you know, chat with George Washington or whatever historical figure. Like, you can really just go into ChatGPT and say, you're George Washington.
And next thing you know, you might have something on par.
Or like you're a Jasper, which allows people to like write better with AI.
And you could just go to ChatGPT and say, help my writing.
Yeah.
So, you know, I might separate those two slightly.
I think, I think on the Character AI...
Well, actually, I'd say both of them have a path to differentiation, but I'm not close
enough to either to understand where that value would be. But, you know, on the Character AI thing,
you know, I think there's probably, to the user of Character AI, there's probably something
psychologically different about going to ChatGPT and saying, pretend that you're this person,
and then interacting, versus, you know, sort of the modality of Character AI, where you're
jumping right into that. And there's a little bit of a community or network effect around
those characters. But I'm not close enough. So I'm kind of freelancing on that answer.
But on the Jasper thing, that was sort of one of the, you know, obvious
current case studies. They had this kind of, you know, lock on a market,
let's say a year ago, which is: I want to write an SEO blog post, you know, on some topic. And
they built like the best workflow interface for doing that. Then OpenAI comes out, or ChatGPT
comes out, and it's basically completely free.
You just don't have the workflow on top of it.
And so how do you kind of provide enough differentiation?
And so I think there's going to be some really interesting kind of competitive game theory lessons that these companies will have to figure out as we witness the dynamics in this market.
Aaron Levie's here with us.
He's the CEO of Box.
We're talking AI.
We've touched on Apple and Meta.
To begin with, on the other side of this break: Microsoft, Google, and then plenty
more. We really have a lot to talk about. Let's talk about jobs on the other side of this break. Back right after this.
Hey, everyone. Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending. More than two million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast with Aaron Levie. He's the CEO of Box.
Let's do some more news, shall we?
So Microsoft this week, shares rose 5.8% on Tuesday, just in a day.
This is from CNBC.
After the company announced new artificial intelligence subscription services for Microsoft 365,
the company will charge users an additional $30 per month to use generative AI tools
in Teams, Excel, Word, and other places in their product suite.
I don't know.
First of all, I think there's levels of insanity all the way down on this one.
The stock market sending Microsoft stock 5.8% to the north.
I mean, it just seems like you say AI, and I'm actually kind of curious how you feel about
this as a public company CEO.
You've got to be careful, right?
You got to keep the market's expectations.
Like, you know, now Satya Nadella is like, dang it,
we did this one product now at my company,
now I have to justify 6% more in expectations.
So that, to me, is bananas.
And then the other side of it is $30 a month to do this.
I mean, you can go over to the Google Office Suite.
And I'm part of a test group now where, in Labs, you can just hit, like, write it for me,
and Google Docs and Gmail, you give them a prompt,
and there's your email or there's your document.
So it's like, wait
a second. That's a lot of money for something, again, going back to our themes,
you know, no moat. What do you think? Well, so I would separate the pieces out. So on the
market's reaction, you know, if everything goes according to plan for Microsoft,
actually the market's reaction weirdly makes, I mean, it just mathematically makes sense. So
let's say they added, I'm going to just make this up, you know, they added a hundred
billion dollars of market cap or something, whatever the number is, you know, from that or 150 billion
in market cap. Yeah. No big deal. But if you just kind of multiply sort of the expected, you know,
revenue generation from, from that uplift and price on their Microsoft 365 subscriptions,
you kind of have a certain expected profit margin. You multiply that profit margin by what they're
trading at, at a P/E ratio. And, like, weirdly, it actually probably just checks out that that is
sort of what they actually did add in terms of market cap, if they can actually generate the
revenue and profit that's tied to this increase. So that aside for a second, in terms of,
you know, will the market bear that price? That is like, you know, we'll find out pretty quickly
in the next, you know, six or 12 months as customers start to go through these upgrade cycles.
I think it's a pretty steep price, but also Microsoft is, you know, a company that has a very
strong command on the customer base and we'll see, you know, sort of what ends up happening.
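For readers following along, the back-of-envelope math Aaron sketches here, price uplift times margin times P/E multiple equals implied market-cap add, can be written out like this. Every number below is a made-up illustration, not Microsoft's actual figure:

```python
# Back-of-envelope sketch of the valuation logic described above.
# All inputs are illustrative assumptions, not real Microsoft numbers.

def implied_market_cap_add(seats, uplift_per_month, profit_margin, pe_ratio):
    """Market cap implied by a recurring per-seat price increase."""
    added_annual_revenue = seats * uplift_per_month * 12  # annualize the uplift
    added_annual_profit = added_annual_revenue * profit_margin
    return added_annual_profit * pe_ratio  # capitalize at the P/E multiple

# Hypothetical: 50M eligible seats, $30/month uplift, 40% margin, 30x P/E.
added_cap = implied_market_cap_add(50e6, 30, 0.40, 30)
print(f"Implied market-cap add: ${added_cap / 1e9:.0f}B")
```

Run with those assumed inputs, the sketch implies roughly a couple hundred billion dollars of added market cap, which is the rough order of magnitude Aaron is gesturing at.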
I think right now there is a moment where, where IT leaders, CIOs have to continue to show that
they're solving AI problems. That's sort of why Copilot, I think, has gotten a lot of
support from the market. You know, this is why certainly our number one conversation with customers
right now is on AI and customers trying to figure out, okay, how can I use AI on my content
in a secure and safe and privacy-oriented way. And I think Microsoft will benefit
from a similar set of conversations across the productivity stack.
But, yeah, I mean, they are certainly taking advantage of this moment in the market.
Yeah, and what about the fact that you could go across the street to Google and get those
similar services for free?
Yeah, you know, in reality, any mid-sized enterprise on up will not make the switch
on the basis of AI pricing.
So if you're, you know, a 5,000-person company and you're already on Exchange and you're
already on Office and you're already on Teams, AI will not be the determining factor as to, you know,
kind of switching out your entire infrastructure. And I think Microsoft knows that. And they're in a good
position to kind of benefit from that effective, you know, kind of stickiness
of the platform. There's another story that I wanted to lead with that I had to push down
because, I mean, every big tech company is making news this week. But let's talk about it now,
which is, it seems like people are saying that ChatGPT,
or the GPT models, are starting to degrade.
I mean, this is from Insider.
It's very interesting because there's so many different ways to read this story.
I'm very curious what you think.
So it says in recent weeks, users of OpenAI's GPT-4 have been complaining about degraded
performance with some calling the model lazier and dumber compared with its previous
reasoning capabilities and other output.
It's so interesting.
Like I have this theory that these chatbots, you know, they were spending their
time talking to OpenAI engineers and they were freaking smart.
And they learn from the people they speak with,
and then they got released to the general public
and yeah, they're dumber.
So that's one possibility.
The other side is that maybe they don't seem as amazing to us
the more that we chat with them.
What do you think?
Well, I don't remember where I saw this.
I'm pretty sure it was from an OpenAI employee tweet.
But if I'm wrong on that, then obviously don't go off this.
But I'm pretty sure an OpenAI employee claimed
the second explanation, which is, you know, we're now used to this. And so, you know,
if you imagine going from no ChatGPT to ChatGPT, it's like that
level of jump. And now this is our new baseline and we're kind of
watching it over time. And our brains are not having that same instant kind of reaction of,
like, holy crap, this is the craziest thing of all time. We're now in this sort of expectation
mode of, like, yeah, you're going to produce text for me with my question. And so I think
now we are a lot more critical of the information we're getting back,
because the core novelty is now over. And now we're fundamentally in the,
you know, are you telling me useful, you know, reliable information mode. And so I think
our level of criticality has just gone up, because there's no kind of novelty factor that's sort
of outweighing the critical, you know, kind of component that
we wouldn't have had nine months ago when the technology was new. So I'm actually
more probably in the camp of the latter, but, you know, who knows? Maybe they did some kind of
training update or model weights update that made it dumber. But it seems like it might just
be our expectation level is now off relative to, you know, where these models are. Yeah,
and there has been like a decline of interest that you see in Google search trends. And there's
been a decline in use, at least according to the slowing of growth, according to some third-party
reports. We have a professor from Wharton coming on on Wednesday, Ethan Mollick, who is going to
talk about the impact of this on education.
I'm just going to plug that.
But it is, there have been some people saying it's just, oh, it's the kids.
They're not using it to cheat anymore.
I'm curious what you think we should make of these reports.
And what has your personal use been like?
Have you been using these bots less in the past few months?
I know I have.
Yeah.
I mean, so I'm probably, my usage pattern has actually been pretty constant other than
the first week where I was just testing everything.
And I was just typing in the dumbest stuff.
I mean, I would say I am a regular user for business, like some category of like business
brainstorming.
So if it's like 10:30 at night and there's nobody, you know, in the company that I can kind of
quickly run an idea off of or ask a question, I might go to ChatGPT and, you know,
brainstorm a product category or a name or, you know, help me kind of synthesize some of my
thoughts. And so I've been pretty much commonly using that for the entire existence of the
product. We had what we thought was a gopher problem in our yard,
like, two months ago. And I went down like a gopher rabbit hole.
Oh my gosh. You thought it was a gopher problem, and it was, like, a hedgehog
problem instead? Well, we never validated actually what it was. But I learned a lot about
gopher holes, um, yeah, with ChatGPT.
So I really hope it was not hallucinating. You went down a gopher-hole rabbit hole.
No, exactly, exactly. So I think, you know, what's interesting is,
it very clearly is different than search, right? And so, like, I still, you know, in July of 2023, I still don't know that
we've got a perfect term for what category these things are in. It really is
this new emergent, you know, kind of, you know, technology use case where you have this
intelligent, you know, thing that you can go and ask questions of, and that can help you kind of work
through your own thoughts, or help you sort of figure out what things you
want to then go and dive into. I've done a lot of things where I'll go to ChatGPT, ask the set
of questions, learn about a set of things, and then go to Google to dive really deep into
the research on whatever that topic is. And so it
clearly is not a one-to-one replacement as much as a new complement to sort of working through
information. And I think what we're probably finding is, actually, that's not
a use case you maybe do five times a day, in the same way that you're Googling,
like, what time is this restaurant open till? That's just a different
behavior. We don't do these broad information, you know,
sort of understanding-type scenarios, you know, five times a day
in our personal lives, at least. There are a lot of companies that I think are trying to
pull a fast one on everyone, in terms of saying, you know, we're in a moment of, you know, companies
trimming down and doing layoffs, and obviously another moment where if you say
AI, the market will reward you. And they'll reward, they have been rewarding companies for layoffs also.
And some companies are trying to, like, blend the two into one. I mean, you had IBM, for instance,
that said, like, oh, they're not going to hire anymore because they're going to give these tasks over
to AI. And I had people from IBM, or who had been at IBM, say, LOL, like, that's not going to
happen. This company is not able to pull that off. And even one of your competitors, Dropbox, right,
they had to do a layoff. And they said that they're going to pivot to AI. And this is from
Drew Houston, the CEO there: Our next stage of growth requires a different mix of skill sets,
particularly in AI and early-stage product development. So, okay, it's people building AI. But,
you know, there's been this meme: ChatGPT is going to take your job. It hasn't really
yet. Why not? Yeah, until proven otherwise by some breakthrough I haven't seen
yet, I'm firmly in the camp of: this is all net positive to jobs. I think that, you know, first of all,
we are so early on any form of, kind of, like, multi-operation tasks strung together with any level of
efficacy to be able to replace even, like, 10 minutes of what a real person does in their
job. So right now, I mean, you really got to think about these things as:
they can do, like, one discrete information-oriented task basically at a time before they need a
human to kind of review what they've done and then move to the next thing. And again,
I don't really see an architecture paradigm that would change that anytime soon. And so basically,
they can do tasks where they take some information, they go and look across other information
and they can produce something. And that thing that they produce can go somewhere, certainly
automatically. But usually you very quickly need a human in the loop to review or kind of
help streamline whatever that was. And so there's just not that many jobs that are relegated to only
that kind of thing. Even in the scenarios where we've kind of, like, tried to say, okay, well,
paralegal job, you know, could be automated or whatnot. It's just simply not true. There's
just too many things that these job functions are pulling together across, across email and
another communication tool and a manager and going into some other system and then reviewing
something and then writing something. And the AI is just not, we're just nowhere close for the AI
successfully to wrap all those scenarios together. And I'm just not seeing anything on the horizon
that would change that, because the cost of an error is so high that no one is willing
to put, you know, the liability on the line that would be required.
I mean, look at just even that one example of the lawyer that, you know, used ChatGPT,
it hallucinated cases.
And now they're like, you know, they've been sanctioned or whatever the, you know,
consequence was, you know, you can't have paralegals running around everywhere being replaced by
AI, just producing, you know, all this stuff that doesn't get reviewed.
and then we just put it into the legal system.
It just simply won't happen.
So that's why
the doomer scenario I just don't buy right now.
And now, actually, you flip it to the optimistic scenario.
And I actually think more of our economy is constrained by either talent, or the cost of
that talent, not by the demand for the talent.
And so I think in most areas for the, of the economy, if you could make something 20%
cheaper or, conversely, 20% faster, you would probably use more of that type of resource or
service, instead of kind of capturing the savings. And that is, you know,
there's sort of two complementary economic, you know, rules or fallacies
or principles that are at play. You've probably, you know, written or talked about both.
You have this idea of the lump of labor fallacy, and you have Jevons
paradox, and both of them basically amount to: we tend to think that
if we made something more efficient, jobs will go away. And actually, by making that thing
more efficient, demand rises because we've been able to actually get more output out of that
thing. And actually, the cost of that thing was the main reason why we didn't use it
previously. So, you know, even in the example of, like, an engineer, let's say, if I
could ship 20 or 30 percent more code, then we're not going to have 20 or 30 percent fewer
engineers. We'd probably hire more engineers because our main rate limiter right now is how
fast we can ship software, to generate more revenue, to hire more engineers. Right. So, like, this is
where we've got it all backwards: in most areas of our businesses, we would actually accelerate
growth. We'd accelerate our business if we could make the underlying thing that we do faster or more
efficient. And so I'm much more in the optimistic camp. I basically think the pessimistic scenario
I'm not seeing, and I'm, and I just, I don't think it's going to happen within this architecture
paradigm. 20 years from now, sure, let's do a podcast and find out where we ended up. But right now,
this is not something that I feel we have to worry about. Okay. I'm just going to ask you two questions
to wrap up. These are two I definitely wanted to get to, so I appreciate you taking them. The first is
about where Amazon sits in all this. They've been very quiet on the consumer front. They're actually
building GPUs, which no one talks about. And their sort of position is that they want to
make this something where they can work with their customers to help them build like those
custom off-the-shelf models. So I'm curious, I'm just going to ask you both of them.
And then we can get to them. I'm curious what you think there. Second thing is you had this
thread that said, if Sarah Silverman ends AI, that's going to be absolutely wild. She's suing
over the use of her content. And I'm working with an editor. And he said that I should ask you,
aren't AI companies just vampires sucking value out of human creativity, putting people out
of work and returning very little in terms of value to humanity? Why should we tolerate that and
who will benefit from it? So it goes back to, I think, like the intellectual property thing.
Okay, that was a lot. And feel free to answer as succinctly as you can. I want to make sure
to be respectful of your time. Sure. Well, on the Amazon piece, I would say I probably can't
fully describe where they want their role to be. I can describe
where the role is now, which is they're clearly going to be one of the at-scale,
you know, hyperscaler infrastructure providers hosting models from a variety of vendors and
providers. And I think that's a great spot to be in. They could host Llama. They can host
Anthropic. They can help customers fine-tune Llama. You know, there will certainly be, you know,
a thousand times more AI models that could run on Amazon that are open source
than sort of ones that are commercial and proprietary to one particular vendor.
And so their strategy is just to say, hey, you know, bring on all of the AI innovation in the
world, let 1,000 flowers bloom.
We'll host all of that.
And that's, I think, a very practical, you know, strategic move.
You know, do they go deeper in their own AI models?
Do they help customers train models with additional services, you know, probably?
But I don't know beyond kind of them as an infrastructure player where they want to sit.
But even in that position, I think they're in a really
good spot. And I think they'll continue to kind of profit and build a very large business on
just that role alone. Great point. Sarah Silverman.
Sarah Silverman. Well, so I guess a couple things. One, you know, most people that are at least
in the commercial AI side, I think are at least contemplating or trying to figure out a way
for the Sarah Silvermans of the world, or the content producers, to get paid something.
At least that's the high-level commentary.
I don't quite know technically how anybody accomplishes that.
But I think that there, you know, I don't think the intent is to be a vampire as it were.
Now, I think the other thing is, you know, it's unfortunately just a repeat of the prior conversation.
I just don't think they take the job of the screenwriter or the joke writer or the animator.
I think they act as a way to boost or amplify or accelerate the work that one of those individuals is doing.
You know, I was chatting with someone in the AI space doing kind of video AI.
And, you know, you're a director or a cinematographer,
and you have to go kind of figure out what's the shot you want to go do for some film.
And, you know, you spend X amount of time thinking that through
and, you know, maybe storyboarding it. The capability you now have is you could
look at simulations of that, and you can look at a thousand of them or a hundred of them, and then
pick out exactly the kind of shot you want to go with. And now your creativity combined with
the automation benefit of seeing lots of options and testing different scenarios, we will actually
just get better content from that amazing cinematographer or director, not because of AI,
because AI made them more efficient in something that would have taken, you know, months or
quarters or whatever of time now is sort of shrunk into a much shorter period.
So I think we sort of imagine AI is going to go replace those types of jobs.
And it won't.
It will be another tool in their arsenal just as, just as every new technology kind of
breakthrough has become another tool for the Jon Favreaus and the Steven Spielbergs.
This just becomes yet another way for them to continue to make better art and better content
than they did previously.
So I'm pretty optimistic on all of the kind of creative flourishing that we're going
to go see from all of this.
Not to mention just, I think we'll see this as a multiplier of content.
And anybody who argues that, you know, it wouldn't be good to just have more
people be able to produce good content, you're kind of just gatekeeping.
You're sort of just trying to constrain who should be able to make a film, who should
be able to be creative, which obviously, I don't think,
is practical or reasonable.
Aaron, every time we talk, it seems there's just this avalanche of interesting AI news.
So let's keep it up.
I mean, great speaking with you as always.
Thank you for being here.
Good to see you.
Appreciate it.
Awesome.
Thank you, Aaron.
Thanks everybody for listening.
We'll be back on Wednesday with my conversation with Ethan Mollick, professor at the
Wharton School of the University of Pennsylvania.
Thanks again for listening.
And we'll see you next time on Big Technology Podcast.
Thank you.