Big Technology Podcast - OpenAI's Business Model Problem, Apple's Mixed Reality Vision, Ed Sheeran's Copyright Win — Techmeme Ride Home Crossover
Episode Date: May 5, 2023. Brian McCullough of the Techmeme Ride Home Podcast joins Ranjan Roy and Alex Kantrowitz for our weekly news recap show. We cover: 1) OpenAI's massive losses 2) Whether generative AI has a business model problem 3) The battle between open source AI and proprietary research 4) AI fears go mainstream 5) The White House hosts AI leaders amid worries 6) The AI PR industrial complex 7) The WGA strike and AI's potential to replace writers 8) The Fed pause 9) Apple earnings 10) Apple's mixed reality device 11) Ed Sheeran's copyright victory 12) How Sheeran's win might influence AI 13) Welcome to Brooklyn, Ed Sheeran. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Welcome to Big Technology Podcast Friday edition where we break down the news in our traditional
cool-headed and nuanced format.
We are running this show as a crossover episode with the Techmeme Ride Home podcast, so we're
going to run it on the Big Technology Podcast feed,
and we are going to run it on the Techmeme Ride Home feed.
I'm Alex Kantrowitz.
We have a great series of story topics to cover with you today.
We're going to talk about the Open AI business model.
The fact that it lost a lot of money last year.
and whether its business is sustainable.
We're also going to talk about the proprietary versus open source battle
and whether big tech is going to have a chance to compete with the open source community
when it comes to artificial intelligence.
It's actually a more competitive fight than you might imagine.
We'll also cover some of the concerns about AI that came up this week, a lot of AI this week.
But hey, I guess that's the big story and also the White House and other government initiatives
to try to rein in some of the potential risks of that technology.
And then stay with us, the second half.
We're going to cover the economy.
We're going to cover Apple earnings.
And we're going to cover, of course, Ed Sheeran.
Yes, Ed Sheeran has been exonerated.
He won his copyright trial.
I actually think this is a big deal,
not only for the entertainment and music industry
but also for the precedent that it sets for generative AI
and all the different types of content
that's going to come out of this new technology.
Let me introduce our guests.
Ranjan Roy is here with us, as always on Fridays.
Welcome, Ranjan.
And we also have Brian McCullough, who is the host of the TechMeme Ride Home podcast.
Yes, every weekday, I do the Tech News in 15 minutes.
So Alex and I always like to say that it's a good complement.
We sort of, I'll give you the background every day and then on the weekends or whenever,
Alex will give you the in-depth hard-hitting analysis.
So, yeah, thanks, Techmeme Ride Home.
Yeah, thanks for being here.
Yeah, I agree.
Like, you can do Techmeme Ride Home Monday to Friday,
and, you know, if you need a deeper or longer podcast, maybe go on the Big Technology Podcast feed on the weekends, or on Wednesday when I drop the flagship interview. And potentially, you know, you could really round out your media diet with those two shows. Let's touch on our first story of the week. OpenAI just reported, or The Information reported, that OpenAI lost $540 million last year. That's double what it lost the year before, and this was before it became a household name like it is today. I mean, that is a lot of money.
And it just leads me to believe that maybe this revolution that we're seeing with ChatGPT and DALL-E and image generators, I mean, is it actually even going to be a sustainable business any time soon?
I mean, it just seems like they're spending a lot of money. They're not making a lot. And the story points out that even as revenue picked up, reaching an annual pace of hundreds of millions of dollars weeks after OpenAI launched a paid version of ChatGPT in February, those costs are likely going to keep rising as more customers use
artificial intelligence technology, and the company trains future versions of its technology.
So, like, here's the real question about this tech, right?
Let's say it replaces search.
Is it so expensive to operate that it actually becomes a bad business that won't make money?
And, you know, we might not even see it reach its potential because it's that costly to run.
I mean, if that's all of last year, you've got to figure the last six months would be,
what, six times that number? Like, ChatGPT was only released November 30, right? For one month
of last year the entire public was using it, right? So what, maybe they've blown through the
entire $10 billion? Yeah, although remember, that's the point: the investment was not
necessarily money, it was kind of like cloud credits. Yeah. So, I mean, that's an open question.
I mean, one of the things in that piece is that Sam Altman also says he
thinks they might have to raise a hundred billion dollars eventually. He told somebody this
week that this might be the most capital-intensive tech startup of all time. But remember, what he
is doing is going after, you know, computers being sentient, right? So he has a different sort
of... if what the money they're talking about is for training models to get to
the artificial intelligence of everyone's dreams, I can see that that would be super expensive.
What you're talking about now is, call it a product, call it a gimmick: ChatGPT right now.
Is that sustainable? And yeah, I think this is the biggest issue I had with the
Microsoft investment: you know, it started out sounding like it was one billion dollars, and it's
ten billion dollars, and then you realize they're essentially just paying for the compute for every
ridiculous ChatGPT query you make. You know, let's say, make us an article in the style of Ed
Sheeran lyrics. That would cost some money, and OpenAI is paying for it, and maybe Microsoft's
paying for it on the back end. So I think we are so far from understanding the actual unit
economics of any of these queries, any of these searches. And I think this is one of those things
that, even with Google right now, when they're going after this: what is it going to look like? Is this
going to be like ride sharing, where no one ever figures out the economics of any individual actual ride?
No, we do, but we do have some numbers, right?
So this is from Sam Altman himself.
He was talking, I think, to Elon Musk, and he said the average is probably single-digit cents per chat.
They're trying to figure it out more precisely, and how they can optimize it.
Or per query.
But you were saying per chat, but even if it's like just a conversation, I mean, it's a lot of money.
It's a lot of money. You multiply that times 1,000, and you think about like what people might pay for an ad rate.
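To put rough numbers on that reasoning, here's a back-of-envelope sketch. The per-chat cost is the single-digit-cents figure mentioned above; the ad revenue per query and the daily query volume are purely illustrative assumptions, not reported figures.

```python
# Back-of-envelope unit economics for a chat query.
# COST_PER_CHAT uses the "single digit cents" figure discussed above;
# AD_REVENUE_PER_QUERY and DAILY_QUERIES are hypothetical, for illustration.

def margin_per_query(cost_per_chat: float, revenue_per_query: float) -> float:
    """Per-query profit in dollars; negative means the query is subsidized."""
    return revenue_per_query - cost_per_chat

COST_PER_CHAT = 0.05         # assume ~5 cents per chat
AD_REVENUE_PER_QUERY = 0.03  # hypothetical search-style ad revenue per query
DAILY_QUERIES = 10_000_000   # hypothetical daily volume

margin = margin_per_query(COST_PER_CHAT, AD_REVENUE_PER_QUERY)
print(f"Per-query margin: ${margin:+.2f}")
print(f"Daily subsidy at {DAILY_QUERIES:,} queries: ${-margin * DAILY_QUERIES:,.0f}")
```

At these assumed numbers the service loses about two cents per query, roughly $200,000 a day at ten million queries. That's the shape of the "bad business" worry being discussed here, not a claim about OpenAI's actual figures.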
By the way, the business model here, we've talked about this in the past, completely not figured out, right?
So if you don't have a business model, and you're spending more than it costs to deliver this
to people versus search, you know, it does start to... And by the way, I still think there's a lot of
potential here, but people have made it out to be that this is the inevitable future. And I'm
just saying, maybe the companies that are running this don't have enough money to support these
type of actions in perpetuity. And it might end up being that they will only be able to do,
for instance, paid, which actually limits the amount of scale they can ever reach. Well, you got to
figure that the unit economics on this are going to come down. We always assume that in technology.
However, there are other costs that are going to be coming in the near future.
We also know that to run, to train these models could be like $50 million a model, right?
So that's also a huge expense that's in the background.
But there's another one coming because they've trained all these models on data sets that they were able to get without paying for them.
And as we've seen, people are aware of that now and they're going to start charging.
So if you want to do the next GPT-5, 6, 7, or whatever, and you need data sets that aren't Reddit,
that aren't the freely available internet,
you're going to have to pay up for the people to give you that data.
So that side of the equation is also going up in terms of that.
Well, yeah, I believe Reddit now made their API paid for this exact reason,
that they don't want people scraping it. And I actually do believe Reddit
is probably one of the most valuable text data sets in the world.
Now you have to pay for it.
I mean, you commented on it earlier.
When Sam Altman is saying we might need to raise $100 billion,
proudly saying we're the most capital-intensive startup in history, that's a red flag
for me. I mean, those are the statements, I think, you'll look back on years from now and be like,
what were we all thinking? Which is amazing given the moment that we're seeing right now with
every tech company, every startup is actually trying to go the opposite way, which is to say we're
not capital intensive and we can actually run a business with a profit. I mean, that's what we're
looking at right now. And if you can't do that, then where does this lead to down the line?
And I think this is sort of like the place where we should wrap this segment, but it's definitely
a question that we need to ask. And that is, if this becomes so expensive that these companies
can't run the businesses anymore, do we end up seeing this rollback from ChatGPT being what was
the fastest growing consumer application in history to something that simply cannot continue
to be supported by these companies and actually like that would be the biggest argument for us
being in a bubble right now is that these companies are giving away or subsidizing
these chats to the extent that simply they will have to roll back or they're going to go out
of business. So this might be a segue into the next segment, hopefully to play a little bit of
host here. But I was talking to someone deep in the AI space this week and they were saying
it's amazing how ChatGPT has quickly become irrelevant, at least in their eyes, to the space.
And one of the reasons for that is: who needs to pay when you have all of these free models and
open source models running around. And there was a big piece that I did on my show today about
there's a leaked memo from a Google employee from April where he's essentially saying internally
to other folks at Google, we have been looking over our shoulder at ChatGPT. Meanwhile, what we
really need to worry about is this open stuff. There is no moat here if all of the stuff is out
there and can be run on people's laptops. Yeah. So this is a memo from a Google senior engineer
named Luke Sernau, and I think that this is one of the most important memos that we've seen
come out of Google, maybe in its history. And I know that sounds like big terminology and hyperbole,
but as I read it, I was like thinking like, oh my goodness, this is actually...
I was reminded of the Microsoft memos from the early 90s about, you know, getting the Internet
religion and stuff like that. Exactly. So, I mean, it's definitely what it sounds like.
And let me read a little bit for listeners and viewers so they can get a sense as to what Luke was saying.
So he says, while our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly.
Open source models are faster, more customizable, more private, and pound for pound more capable.
They're doing things with $100 and 13B params that we struggle with at $10 million and 540B.
And they're doing so in weeks, not months.
This has profound implications for us.
And so he goes on to say, we have no secret sauce.
People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality.
And giant models are slowing us down.
In the long run, the best models are the ones which can be iterated upon quickly.
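One way to see why the memo's parameter counts matter is simple weight-size arithmetic: a model's weights take roughly parameters times bytes-per-parameter of memory. The bytes-per-parameter figures below are standard precision sizes; the rest is just arithmetic, not anything from the memo itself.

```python
# Rough memory footprint of model weights alone, by precision.
# Activations and the KV cache add more on top of this.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(n_params: float, precision: str = "fp16") -> float:
    """Approximate gigabytes needed just to hold the weights."""
    return n_params * BYTES_PER_PARAM[precision] / 1e9

for n, name in [(13e9, "13B"), (540e9, "540B")]:
    for prec in ("fp16", "int4"):
        print(f"{name} @ {prec}: ~{weights_gb(n, prec):,.1f} GB")
```

A 13B model squeezed to 4 bits is roughly 6 to 7 GB, which fits in an ordinary laptop's RAM; a 540B model needs hundreds of gigabytes even quantized. That's the "no moat" point: the useful stuff is small enough to run anywhere.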
So if I'm getting it right, what Luke is saying to Google is that so much of the technology behind this current wave of AI is already out there, open sourced, and there's a community that's building upon it.
And Google might think that it has a research edge because it's really helped push forward the state of the art
so much. And maybe that's something that you can look at retrospectively, look at the history and say
should Google have open-sourced the transformer model? But basically what he's saying is the game is
over right now. This is open source. You cannot win if you're going to hold things back because what
you need to do is collaborate with that community to end up building the products. Yeah, I think this
is the single most important thing in the industry right now because what we're going to see is
business problems that actually you need to solve. You don't need what is 175 billion parameters
and the largest models, the Davinci models that OpenAI has, necessarily.
Even in work I've done, fine-tuning smaller models,
you end up with similar results when they're fairly straightforward business challenges.
So I think what the actual overall monetization model looks like,
so much of the work that will be done will be able to be done by smaller open-source models,
just go on the Hugging Face community and find models that have already been trained
for very specific use cases.
And I think this is exciting.
I think this is maybe the dream of technological innovation actually being dispersed throughout the business world in an equitable way.
Maybe this is actually going to be it.
So I also run the Ride Home Fund, and starting in December, basically everything inbound was like 90% AI stuff.
And you would talk to people and they'd be like either the big guys are going to own this so this is not worth investing in.
Or you see a flood of "take this image and put it into the AI and you can make a web page coded for you, or, you know, architecture plans, or whatever." And that feels like investing in a website in 1996 or something, right? But what I've been seeing lately is, like, this is where the energy is at. Like, these are the companies that I'm interested in investing in, because it is sort of like this thousand flowers blooming, where it's like, if you...
because then it becomes like varietals,
I have to credit Chris Messina for this,
but the idea that like,
there's these large language models out there in the wild
that people are tweaking,
the way that varietals for wine,
you know, wine tastes different based on the valley
it's grown in, and, like, the soil, you know, et cetera, et cetera.
And so like, what if that is like,
if that's the case,
if this stuff has escaped the lab
and people are just jamming on it,
then this could be,
like the greatest like explosion of software and tech creativity of our lifetimes,
certainly since web 2.0 because it's like if it's not going to be owned by the big folks
and it's not just adding a chat bot to an Excel spreadsheet,
then it is like it's figmas all the way down where it's like people can just tweak existing
software models or blow new ones up.
And he says, and this is an important quote, and I'll let you talk in a second, Ranjan, but he says: we need them more than they need us. And he says: we can try
to hold on tightly to our secrets while outside innovation dilutes their value, or we can try to learn
from each other. When was the last time you heard someone from inside of a Google or a Facebook or a Microsoft
or an Apple say, we need them more than they need us? Maybe they've been saying it the whole time
and those memos never got leaked.
No, I think the tide has definitely shifted, and I think that's exactly the point: that right now, OpenAI is losing
$540 million and making bigger and bigger models internally, and until people are actually
building on those and using those in their day-to-day lives, you know, for actually
really specific use cases, it's not going to realize its value.
So exactly, they need the entire developer community working on top of it.
On the varietal wine thing, I kind of love this.
I'm definitely, if I was a thread boy, I think I would have to make which model would be
which wine because that's amazing.
But get in touch with me because Chris and I need to write this up and do it like that.
I think maybe we'll do it if you do guest posting.
Let me give you the counter argument because I did give a counter argument on my show today.
I couldn't bring up the tweet, so I can't give credit to the tweeter.
But the counter argument is this: that what OpenAI is doing is what AWS has done and what Microsoft did in the 90s, which is own the developer ecosystem, right?
And so that's your moat.
Yeah. If you can get people in your tooling ecosystem, then it kind of functionally doesn't matter.
Because if they're all creating, they're creating within your moat, right? So the thousand flowers
blooming are still something that you, you know, can monetize in theory if you can make the
economics work. But so, again, I'm coming around to this idea that it might
be the greatest wide-open blue skies that we've seen in years. But there's a counter argument here
that there are playbooks where people could still lock down their advantage.
The counter-counter argument would be that developers could just use ChatGPT and
displace the functions that maybe OpenAI or Microsoft would play.
No, I'm being a little bit facetious here, but I mean, this idea that you can use
some of these tools to end up coding some of the backends or whatever you need.
Of course, people will opt for convenience, but it'll be very interesting to watch this
software end up participating in the creation process itself.
And it'll be very interesting to see how that impacts the developer ecosystems and the
infrastructure that they need to build on top of these tools.
Yeah, one more point on that: this is even more where, the bigger the model that OpenAI is trying to train and build, and the
more expensive it is, if people don't need that or even know how to use it, it's just going to become
more and more of a waste. And he says in his piece, like, that's not the point. Bigger doesn't mean
better if you can have a 20-billion-parameter thing that runs on your laptop. Yeah, like, you could take it to an
office and you could, you know, train it on the drawings of an architecture office, and you don't need
the biggest model in the world.
So isn't this... this is a very interesting moment, because what this also
does is it puts into relief some of the big questions that we've had about AI safety, right?
Like, how many discussions have there been? I remember when DALL-E first came out, for instance, people
were just like, well, OpenAI is being very careful about the type of images that you can make
with DALL-E. And then all of a sudden, what happens? And this is one of the examples that
the Google guy, that Luke, gives in his piece: the underlying technology to make those
images, to do Stable Diffusion, becomes available widely. The next thing you know, you're in a Discord
with Midjourney, and you can create faces of Donald Trump in all different sorts of situations.
And it is kind of interesting to see this, and you put it in contrast with what happened
this week, where Geoff Hinton, who was at Google, has left, and his stated reason is that he's a little
bit nervous about what the AI is leading to, and he wants to speak out about it. So it's very
interesting, because you have these two very interesting stories coming out of Google: one, we really
can't control it; two, I'm worried what happens if we can't control it. Like, all of the discussions
about how you do good governance around this, how you set boundaries around it...
All those discussions seem to go out the window if it becomes something that is so openly available that anybody can get access to image creation, video creation, audio manipulation, and text.
What do you guys think?
This is probably a dangerous analogy, but when people say that AI could potentially end human life as we know it, they make the analogy to the atom bomb,
to nuclear energy, and things like that. But you don't have the ability... like, you need a nation state,
you need an entire apparatus, trillions of dollars, to create a nuclear bomb. A person, a 19-year-old, can't
do it in their mom's basement, right? The problem with this stuff is it's out there, and anyone can use
it. And so, like, we need to be thinking about this, I think, in different ways, which is: it's not
just that the horses are out of the barn, it's that this is trivially easy for
any 13-year-old hacker to use or exploit,
or create a great tool or business out of.
So it's like, again, it would be great
if it's not controlled by the gatekeepers,
but also be prepared for if it's controlled
by 13 year olds as well.
I think the problem I have in the danger conversation
is... I mean, I think Sam Altman kind of represents
the epitome of this: walking around saying
they fear AI taking over the world while trying to push
and build the biggest most capital intensive startup
and invest hundreds of billions of dollars
into the development of it.
I still feel that, with the leaders that try to push the narrative
that AGI is going to kill us all,
given they're the same ones controlling it,
implicit in that is we are the only ones that can save you.
Because otherwise, why would you be working on it?
Why would you be building a business on it?
Jeff Hinton leaving, I think is interesting
because at least it's someone who is saying,
I will no longer take part in this
and I want to be able to speak openly about it.
But at least reading the Cade Metz piece in the Times,
like a lot of his concerns are almost more on the mundane side.
Yes, disinformation will likely increase significantly.
Yes, the inherent biases in these models
are gonna continue existing, but I mean,
we can address those issues,
but the idea that artificial intelligence
will become more powerful than humanity
and kill us all, I think, I don't know.
I'm still struggling with this one.
I'm pretty cynical on a lot of things.
There's a weird cultural thing to this where probably the three of us have been in these forums since the early 2000s,
you know, where you're debating, like, the Fermi paradox, and, like, what happens when AI takes over, and Nick Bostrom's books, and things like that.
Like this has been the favorite parlor game, sorry, circle jerk of technology circles.
All right.
Now we're putting the explicit tag on the podcast.
For 20 years. And so I'm with you in the sense that I get annoyed that this is the thing,
because people have been talking about this in forums for 20 years as, like, an
"oh my God, let's get high and imagine the worst case scenarios" thing. And now it's almost like
it's the savior complex, where it's like, okay, now that it's here: I've been telling you
for 20 years it's coming, and so I was right. And I feel like it fits
into some of the worst cultural, not even stereotypes, but, like, frameworks of tech folks and
how they think of the world. But don't worry because the White House and our politicians are here
to save us. This week, the White House, according to the New York Times, gathered, and basically
every other publication that covered this, gathered Silicon Valley chief executives like Sam Altman
and Sundar Pichai, to push them to limit the risks of artificial intelligence.
And they called it the administration's most visible effort to confront rising questions and calls to regulate the rapidly advancing technology.
And Joe Biden had some choice words for the group.
Ranjan picked them out on Twitter.
He says, I hope you can educate us on what you think is most needed to protect society.
I mean, obviously it's going to be regulation that benefits the companies in the room.
But it is interesting to see this start to take hold in Washington.
and governments around the world.
And in one way, I'm just like, okay, well, maybe there's something there.
I mean, there's $140 million in new research devoted to AI
that the National Science Foundation is about to get.
And then I also wonder, like I have, with most of the conversations around big tech,
whether the government is actually doing anything.
And, you know, it did seem to me in this case to definitely be more of a photo op
than something that might actually help us.
Because whether or not these, like, you know,
getting high in the dorm room conversations
are actually going to manifest.
Like, there's obviously stuff that we need to worry about
when it comes to this new technology.
And the question is,
is the government up to the task?
I think while I was triggered by Biden,
you know, going to a room full of CEOs
and saying, you guys are going to have to direct us,
to protect us from the dangers of AI.
And in a way, maybe that shows
that their narrative of AGI killing us all is working,
when the president of the U.S. is asking you to help save us.
But Lina Khan's op-ed in the New York Times, I was very happy with.
I think her very explicitly laying out, here's what happened with Web 2.0,
here's what happened with social media.
Because we were not ahead of the game, we fell behind, and then things got out of control.
And then we need to be proactive on this.
And I think it has to be civil society, researchers, academics, leading the conversation
just as much as the CEO of Open AI or Microsoft.
The UK regulators, I think it's the CMA,
came out with the same thing the same day.
So this is not a case, I think I said on the show,
that the regulators are sleeping on AI.
They see the revolution is coming like the rest of us.
Let me put on the history hat for you again, though.
And I'm going to use the atomic bomb analogy again.
One of the reasons that we have the atomic energy agency
is when all this stuff was happening,
World War II was happening.
So the government had extraordinary
wartime powers, you know, the entire economy was on a war footing, and, like, you know,
Ford was creating tanks. They weren't making cars, right? So if people really believe that this
is an atomic bomb level moment, there's certainly not the appetite for
that high level of regulation, like no one can do AI stuff or atomic stuff but the government.
And I also don't think that there's any sort of laws in place.
Like, this will be led by academia and corporate America.
So the government kind of doesn't have a place right now.
I mean, copyright, I think, is going to be the first battleground in this.
And it will be decided soon.
And there will be definitely substantive legislation around it because it's huge.
And we said maybe that becomes one of the biggest changes in the unit economics.
Once OpenAI cannot scrape the entire internet, or take every
image and then recreate it in the style of artist X, like, I think this is where it's such an actual
realistic thing to deal with and go after so I think that's going to be the first place
people look at obviously understanding the bias within models and trying to come up with
some frameworks around this, I think, is going to be important too, because,
like, that stuff is so clear and obvious, and we've been talking about this for years now.
so when this kind of technology becomes embedded in more and more
day to day parts of our lives, I think the government's going to have to get involved. And I think
they are. And obviously, having a bunch of CEOs sitting around and asking them, what are your
thoughts on how to save us, is not necessarily the right idea, but at least they're doing something.
Yeah, I just hope they can do it without playing politics. And, you know, they very noticeably did
not invite Meta, which is pushing the standard forward. In fact, one of its models is part of this
big open source movement that we're talking about. And when they asked the White House why Meta
wasn't there, the White House said Thursday's meeting was focused on companies currently leading in the
space, especially on the consumer-facing products side. I don't think that's political. I think that's
just savage. Well, maybe both. But, you know, a little both there. So it brings me up to this one
question that I have, which is that we're in this moment that I call the AI PR industrial complex,
which I wrote about on Big Technology this week. And it is quite interesting, where, like, we are seeing
substantive technological development, but we've also seen this flood of, whether
it is corporations or threadboys or thought-leader grifters or politicians and
regulators, people who are using this moment to say, okay, we are about to start to tackle
these issues or get involved in these issues. And they make it seem like, you know...
It seems like all you need to do is throw AI in your announcement, and much of
the press just gobbles it up, completely without asking, like, what the
heck is going on here. For instance, there was a
bill from Senator Ed Markey and a handful of other members of Congress, including Ken Buck
on the Republican side. They introduced legislation to prevent AI from launching a nuclear weapon.
That was the headline of their press release. And it's just like, you know, first of all,
it's completely implausible that, you know, that any government would allow AI to make that
decision. Actually, it will never happen. Like, if you know anything about government, and I think
these guys do, because they're in the government, you know that that's not going to happen. And then,
if you're trying to get, like, a press release out to say, hey, listen, we don't want AI itself, we're
trying to instruct the AI not to launch the nuclear codes... I mean, I'm sorry, guys, but if you're relying on
this bill to prevent AI from taking nuclear weapons and wiping out the world, you're not going
to get anywhere.
Well, I mean, separate from nuclear weapons, you covered this too. I think the more
hilarious, let's call it a press release, was the report about IBM. Their CEO,
Arvind Krishna, said that they would pause hiring for back office roles, 7,800 roles,
because those are the type of roles that AI might replace.
Right.
Now, everyone kind of approached this as, oh, look, here's one of the first signs that AI is going
to kill jobs.
I am looking at this as, where is IBM in the AI race?
IBM should have won it years ago with the amount of marketing they did around Watson.
I honestly...
Well, marketing doesn't win technological battles.
My call is this is almost a marketing announcement, because by saying we're pausing hiring in the future on these roles, you're trying to imply: we are so good at AI that we already know exactly which jobs it's going to replace, and we can already afford replacing them.
So I have had a little problem with this announcement.
I think the point of this was to get that headline and put IBM's name back in the running alongside.
Right.
It's like, all right.
congratulations. You got some more headlines about AI. But like what was astonishing to me is that on
IBM's Watson page, by the way, Watson is still a thing. The company says that, let's see,
it says that Watson helps free up employees to focus on higher value work. And now it's trying to
tell people actually what we're doing is going to eliminate the need to have employees. So do you
want employees to do the higher value work? Or do you want to eliminate employees? It's completely
confused, which is just like total evidence, in my opinion, that what this is is just
part of this AI PR industrial complex, which is that companies, politicians,
influencers, et cetera, will just say anything to get their name in the headlines and potentially
a little bit of that AI shine. And I think that there is a latent understanding that this
is going on. But I also think that it's worth taking a moment to pause to think about and identify
just how insidious this is going to be and how prevalent it will be across the tech
industry and government. Same as it ever was, but worse. A few months ago, it was Web3.
Yeah, but I think this, yeah. And 25 years ago, though, when the internet, again, putting the
history hat on, when the internet came about, there was a good four or five years where, like, you could
excuse any decision, any layoffs, any new division that you're launching or whatever, with,
well, we need an internet strategy. And because most people, including journalists, including
lawmakers, but also the general public and shareholders, didn't really quite know what the internet
thing was yet. No one, none of us in this room quite know what the AI thing is yet and how it's
going to change our life. So you get a lot of cover for using buzzwords right now. Yeah, but I, for me,
the biggest problem I have with this, and I'm very glad you wrote this piece, Alex, is because
I am incredibly bullish. And I've been doing a lot of work with generative AI. And I can be cynical
about a lot of things, crypto. But I think, for me, in the generative AI
space, the most important part is there is so much potential to do real things, and this entire
hype cycle might completely destroy that. And I've written about this in the past: can
innovation actually be prevented by hype? And I worry, because when everyone's expectations
get so high that no, I don't need this to just do some simple task for me. I want to completely
change my entire business. And that doesn't happen six months from now. You get a little
disillusioned. When OpenAI cannot come up with a simple business model and will lose tons and
tons of money, and Google realizes that we can't embed LLMs into our search, then it makes people
even more disillusioned. It makes investors more disillusioned. So I do worry that the PR, AI PR industrial
complex could... uh, rolls right off the tongue, doesn't it? AI PR. I tried to find a better term for it.
But, I'm sorry, what were you worried about? Well, no, I mean, it's that it's going to hold us back. Like, I've actually, I've been
laughed at by friends when I brought up the idea. Like, they argue that technological innovation
is inevitable if the technology is the right one and a good one. I genuinely believe that if the
hype cycle overtakes the technology itself, it can prevent it from ever
getting off the ground. And AI is in that moment right now. We are going to... but it's very interesting, because,
like, every now and again, the theory gets tested with reality. And we'll just do one more story to end
this segment, which is that the writers are on strike in Hollywood. And, you know, we had Sharon Waxman
on a little while ago, you know, saying, could this stuff eventually replace the writers? And obviously,
it's just not up to the task yet. Like, Hollywood is saying, we're going to replace you with
ChatGPT, and all the writers are like, come on. And it's obviously clear that AI isn't going to... even the
best generative AI tools, you can put ChatGPT or GPT-4 up against the writers, it's just not going to do
as good of a job, even if you have studio executives bashing their head against the wall,
trying to figure it out.
I don't know about that.
Both of us are looking at it.
I don't know about that.
Think of this analogy.
I did a story this week about how one writer was saying, what I fear is someone's got a
rom-com script, and they're like, we need to make this more Nora Ephron.
So you put it into the Nora Ephron bot, and you get that snappy, great, you know, '30s screwball comedy
dialogue or whatever.
And also, like, so, like, what if...
what if it's doing passes on, like, let's make this more exciting, let's make this funnier, or whatever,
and then the writer's job is to just come in and clean it up and, like, just do edits on top of
it? Like, so it's, it's almost like this comes back to what we, we were just saying: the writers
are using this buzzword as a fear tactic, as a negotiating tactic. Um, in theory. I also think that
they have legitimate complaints in terms of the business model in the streaming era. But there, I,
I would argue that there would be a huge disintermediation in the creative process
if all of a sudden the studios are like, well, here, you do this, put it in the machine,
and then your job is to polish what the machine does.
Yeah, but I think the more important point you just made was the business model.
And I think trying to distinguish the, or look at this as purely a creative process
or a content output versus the overall picture.
Because what's happened is, think of how every Marvel movie that comes out is derivative.
It's just, I mean, in terms of what AI should be able to do, it's create the script for the next Marvel movie: a couple of witticisms here and there, some, some, like, you know, recitations of character references, whatever else.
You can make a good living just polishing scripts.
Like, a Jason Mantzoukas makes a good living going into these Marvel movies and adding one joke that makes it into the movie, but he gets paid six figures.
Yeah, yeah, exactly.
Go ahead.
But that's why, to me, that's more indicative of the larger problem: derivative content has
become the center of a lot of the media business.
And that has put writers trying to create original content in a worse position.
And this is the thing is that, again, if you're creating truly original, genuine content,
AI is not going to replace that anytime soon.
And that's where, again, the bullish case, you can use AI in your creative process to make you better
and make you do more, which I think, again, I think things should move in that direction.
But if all the studios are looking for is just the same thing over and over again, which is kind of where the movie industry has gone, yes, AI is going to be right in the middle there.
And in the meantime, it's going to be a bargaining chip. And that, I think, is important, because the studios and everyone negotiating with, you know, a content union, for instance, will always say, well, we could just use AI. And Mike Isaac from the New York Times wrote about this yesterday on Twitter, actually. He said: after talking to folks smarter than me, I wonder how much of the fixation on AI is a head fake
from studios, using it as a bargaining chip (that is to say, they don't care about it) while chipping
away at the bottom line, which is the basics of the pay scale for the streaming era. So I think
that that might be the case. Well, definitely will be the case in negotiations from now on.
We're here on the Big Technology Podcast and the Techmeme Ride Home podcast feeds doing this as a
crossover show. Really glad that you're here with us. We're going to go to break. And when
we come back we're going to talk about
the latest in the economy looking at the Fed and jobs
we're going to look at Apple earnings we're going to see if we can touch on this
verge story about Andreessen Horowitz and we're going to wrap
up with the big news that Mr. Ed Sheeran has been
found not liable.
The accusations said that he was copying music; he was not,
at least according to the courts. Back right after this.
hey everyone let me tell you about the Hustle Daily show
a podcast filled with business tech news
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app,
like the one you're using right now.
And we're back here on Big Technology Podcast with Ranjan Roy,
who writes Margins on Substack, and Brian McCullough, GP of the Ride Home Fund
and also the host of the Techmeme Ride Home podcast.
Let's talk about the economy.
Why don't we?
So it looks like finally the Fed is going to pause its relentless campaign of interest rate hikes.
They raised another 0.25%, or 25 basis points, if you like talking in Fed speak.
And this might be it.
Jerome Powell said in the recent meeting that people did talk about pausing,
but not so much at this meeting,
but he says, we feel like we're getting closer,
or maybe even there.
And of course, this has a huge impact on tech valuations
and the business world as a whole.
Ranjan, is it time for the tech industry to celebrate?
Have we done enough?
I think this is where I don't know if I'll ever know
what a good, stable economy looks like again
because right now we should be excited:
the idea that the Fed will finally pause
should be the best news ever.
But the Fed will likely pause because the banking crisis is at the center of all of this.
The idea that the economy is still relatively unstable.
So part of the reason to pause is to maintain economic growth.
It's not because inflation has come down so much that we can just very safely say, we're done.
And I think that's the thing: they're not looking to pause because everything's great.
They're looking to pause, in fact, because everything is not great.
They almost have no choice here, because we've already seen Silicon
Valley Bank, and now First Republic is gone. I mean, First Republic is a big bank.
And this was the quietest, I mean, eerily quietest bank failure I have seen at least. Because,
I mean, if you can remember SVB when we were down at South by Southwest, that was just overtaking
everything. First Republic just went down. And I mean, I guess when it's telegraphed and you've
already seen one, it's a lot, it's a lot smoother. But still, to me, this is a reminder. And I mean,
the Fed gets it, that things are a lot more unstable than, you know, the headline numbers show,
like the jobs number today.
And the question is how that cascades.
And you, like, look at it.
You're like, okay, maybe it's, we're going in a good direction, right?
Like, even some of the big tech companies are, like, now, now, you know, they're showing growth again.
Meta, for instance, showing growth again for the first time in, you know, a handful of quarters,
Apple actually not having the sales decline in the way that it was
expected, which we're going to get to. But you also have bank failures, the need to pause rates.
I mean, the jobs number was good.
Actually, the unemployment rate is now 3.4%;
it was expected to be at 3.6%.
But yeah, there's just dealing with all these factors at once.
I think you put it perfectly.
Like you sort of throw your hands up and you say,
well, I hope this is fine because there's so much,
it does feel like they're balancing those plates on a stick.
Do you buy that narrative?
Because I've seen that as well,
that one of the things coming out of this last earnings week
for the big tech companies is that
it's not that all of a sudden they turned around,
but maybe people are like, well, it's bottomed.
We know where the bottom is,
and we should see things turning around right now.
Do you guys buy that narrative,
that maybe it just took 18 months
after the COVID hangover,
and now cycles are returning?
You know, people are starting to buy laptops again
and things like that.
Not necessarily for me,
because I actually think
what the last earnings roundup
showed us
is that things are getting concentrated again. Because, I mean, the big five, big six did very well,
but other companies are continuing to get slaughtered in the market. You know, like the medium-size
to small-size, the small-cap tech firms are still at the lowest that they've ever been.
What about, like, the leaders of the last thing, like all the SaaS companies, like the, the Shopifys?
Yeah, so, so Shopify is an interesting one. Like, I think, I mean, they're still well down
from their high, but the stock was up 24% yesterday. And the thing is, in a good way, Shopify
announced they're going to be cutting their logistics business completely. They're laying
off, I think, 20% of their people. So they're making huge moves, and the market rewarded them
for it. But it's a reminder that the companies that were able to actually adjust themselves out of
the COVID-related overinvestment could be positioning themselves well. And that's, I think,
the winner, the big winners are going to continue winning, and that's what the last few weeks
have shown us. Whereas if you're, again, medium and small cap, I think you're in even more
trouble than before. And the AI conversation fits into that too, that investors are still
leaning towards a Microsoft or a Google or a Meta because they think that they have potential
in this whole massive economy that could happen. So yeah, I think concentration, as opposed to
like everyone is doing well again. So yeah, let's talk about Apple. I mean, it's sort of a perfect
segue. So their results were not as bad as feared. They were the last big tech company to report
this week. They were expected to show a 5% decline in revenue; they only showed 3%. But lots of interesting
details in that earnings report. And they have inventory of $7.48 billion at the end of the
quarter. I mean, most companies on the stock market would be happy to have a valuation that high.
That's unsold devices, yeah.
And so this is from Alexei Oreskovic, who's an editor at Fortune.
He says it's the highest level in at least five years.
And he wonders whether it's the new VR and AR headset waiting in warehouses or unsold Macs piling up.
I mean, there's definitely some unsold Macs.
But it is a great mystery.
The headset is $3,000.
You wouldn't need a lot of them to add up.
Yeah.
So yeah, I'm just curious, like, so let's say it is the headset.
I mean, obviously, that would anticipate that they're going to sell a lot of these things.
Brian, what do you think is actually going to happen?
Because we're really close to seeing, hopefully, Apple, give some more information about what these headsets are all going to be.
What are you anticipating?
So I put this in the notes because I haven't been able to use this anywhere yet, but I have a new theory about what they're going to do.
And believe me, I'm not a Gruber.
I'm not one of these Apple people that knows Apple very well.
But it occurred to me recently that, you know, everyone's like, well, eventually Apple
wants this to replace the iPhone. This is their 20-year play, right? However, it's a $3,000
product. And some of the things that we've been reading from the rumor mongers lately are about
how you're going to be able to use it for work. Like I could sit down here, put on the headset
at work and have an immersive, like, full screen. What else is a $3,000 device? A laptop or a
computer. So I've been thinking lately: what if, next month... at least for right now.
Obviously, they want to bring the price down eventually.
They want it to be more mainstream.
But what if that's what they sell it as right now for the launch is you're willing to pay $3,000 for a laptop.
What if you could have a laptop that you could take with you all the time and do all your computing and sit on a park bench and have a completely immersive screen?
Oh, and by the way, also you can do games and exercising and things like that.
What if that's what they lean into is a laptop on steroids?
That is quite a theory.
But, you know, I wrote about this, I think, in like 2018, when I first used the Magic Leap.
It was one of the coolest technology experiences I had. Like, I remember using the Magic Leap
headset and looking around me and just kind of being... I mean, it's funny, because all the talk
about the metaverse and the idea of us sitting around a conference table with no legs and
with a VR headset on was always ridiculous to me, because I was like, with the technology, I already
experienced something that was so groundbreaking. If you've tried the Nreal, that, like, that's another one too.
And the name, the letter N, Nreal. Um, it's, I think it's a Chinese company. Have you? Similar, yeah? Yeah,
it's, it's almost... I mean, they're thicker than glasses, and it has to be connected to a thing, but it's a
similar thing. I'm, I'm watching TV over here, I'm scrolling, um, Instagram over there, but we're still in
the same room and we're still talking to each other. Yeah. Yeah, to me, I think, and I kind of hope,
And if anyone can do it, it's Apple, that the kind of promise of the metaverse can be realized.
The idea that, like, being in an immersive computing environment, but that's also connected to the real world, that is kind of exciting.
Like, I mean, if you take the Pokemon Go idea and then extend that times a thousand or million, I think that's kind of cool.
Every time I hear Pokemon Go, I just get this image of Hillary Clinton in my mind being like, Pokemon, go to the polls.
It will always be my association with Pokemon Go:
Hillary Clinton trying to get the vote out.
She might have. And by the way, if Apple ends up being the company that does the
metaverse, I mean, how much would that suck for Mark Zuckerberg?
All right, let's wrap this with Ed Sheeran news.
So Ed Sheeran was on trial for copyright violation,
and I don't think it was a criminal offense, but it was certainly civil.
And he was accused of using Marvin Gaye's Let's Get It On, of copying that in his Thinking Out Loud song.
And if you listen to the songs, there are some similarities, but ultimately what the judge ruled was that basically you are copying a stock chord progression and a normal rhythm pattern.
And that is so standard in music that we cannot allow artists
to sue each other over this,
because I would imagine the logic is
that ends up
stopping the progress of music.
And I think it's so interesting
because art of course emulates and improves
and we shouldn't have artists
that go out and steal each other's music.
But the legal question of like
whether this is a copyright violation
is actually rather important
because I mean first from a music standpoint
you have to draw a line legally
of like where
artists are allowed to borrow and improvise and build on top of existing music.
And then, I mean, obviously, tech show, right?
Like, that has obvious implications for the use of AI, which could actually,
generative AI could actually argue it's doing exactly what Ed Sheeran was doing with this
song and with these chords saying that we're taking what's been created in the past
and improvising on it and potentially improving it.
What do you think the implications are, Brian?
I mean, look, it is the million monkeys on typewriters thing.
Like, there's only so many tunes in the world.
There's only so many stories in the world.
I mean, ever since, what, the 70s and 80s, we've had this sort of legal debate
or even cultural debate about, like, what, is there anything new under the sun?
Or is everything just a remix of the things that inspire you and the things that came before you
and things like that.
I'd almost say that
we were talking about the Marvel thing, too.
Like, we're so stuck in this era
where all we're doing is retreading what's familiar.
Like, I'm happy
for Ed because it's like,
it wasn't exactly, like, you would be able to,
it would be an open and shut case if it was,
you could tell that note for note, this is the exact same thing.
But like, he had a different spirit and a different song
from a different time with different inspiration or whatever.
So, like, you know, let's be able to jam and mix on things.
I still love the idea that if Ed Sheeran makes his way into the center of the generative AI copyright annals of history, that will be kind of amazing.
I think that for me, I don't know, going back to just the idea of copyright and generative AI in general and whether everything will be derivative, I don't know, I feel artists in general have to
it's such a tough one because it's like
how do you get new content coming out of this?
How do you get new content that actually sounds different
when everything is derivative of itself?
When anything can be fed in,
you can have something in the style of Ed Sheeran instantly.
Okay, I got one to throw at you.
So one of the many AI clone songs
is Kanye doing Adele's Hello, right?
Doesn't work because Kanye doesn't have the range of Adele.
Now, we know that Whitney Houston took Dolly Parton's song and made it her own. Like, the Whitney
Houston version of I Will Always Love You is probably, we would all agree, the canonical version.
What if, in the future, you find something like that, where it's like, maybe there's somebody, you know,
it could be like, you know who would really do Hello better than even Adele? This person, or whatever.
Like, does that make it worse? It would be worse for the artist if someone comes along, but that
already happens. Dolly Parton made a lot of money from this, so she's not sad about it. But
Whitney Houston owns that song now. This is also where AI law could become very creative, in the
sense of, like, the estates of artists being able to actually license out the voice. It gets very
interesting. I saw a version of it: it was Biggie doing a classic Nas song, and it sounded great.
But imagine if the estate actually got paid for it and was able to easily license
the voice to Adobe or whatever other
platforms are being used. That's kind of awesome for everybody.
Well, Grimes, actually, yeah, Grimes said, if my fans want to use my voice with AI and make a
song, and if it's good, we'll split it 50/50, just like any other guest that comes on
and collaborates with me.
If I was a musician, I'd make that deal right now.
So, I mean, Nate, can I try to play this from my laptop and see if there's a resemblance?
Is that microphone, is it going to destroy?
Let's do it.
We're here live on, we're making Nate shake his head.
He does not look at it.
We're here live on the podcast.
Let's ruin the audio that he's worked so hard to perfect.
So here is Thinking Out Loud from Ed Sheeran.
Darling, I will be loving you 'til 70.
Sorry, Nate.
And here's Let's Get It On from Marvin Gaye.
Do you guys see it?
I hear the math of it.
I hear how the notes are similar, but it's not close enough to me.
not even close. I mean, it's close, but not even close. Right. And I, it'll be very interesting. I mean,
this is kind of like, I think it'll be a very interesting precedent case for what we see with
generative AI as this stuff moves forward. I'm more curious. In the stories, it talked about how Ed Sheeran
showed up in court with a guitar. Tousled red hair and a suit, and he played the guitar. Was this all
just to get a free Ed Sheeran show? I mean, if you're trying to woo a jury over to your case, like, you
definitely... and you're Ed Sheeran, you come in with that guitar, and that's what you do. If I, I would
object. I would, yeah, strenuously object, your honor. On the other side: you cannot play that guitar.
You can't play that guitar. Yeah, it's, it's very, very interesting. And, you know, it sort of
caught me by surprise how big of a case this was. But as I followed it towards the end,
I was like, oh, wow, this is very, very interesting. And, uh, obviously the music industry was watching
it closely, and I think it's something the tech industry should watch as well. And I do want to note that,
after Ed was exonerated (is exonerated even the right word to use for a civil case?),
anyway, after he left the courthouse, someone was, like, shouting Ed Sheeran lyrics at Ed Sheeran.
And I think that it's a great welcome to New York for him, because there was another New York Times story that said Ed Sheeran is leasing an apartment in Brooklyn Heights, near the Brooklyn Bridge, for $36,000 a month.
And I knew that New York real estate was getting out of control.
But, Ed, come on, man, you're making this difficult for the rest of us. You can rent apartments for
way more than that. You could... there's New York City apartments that are six figures. Well, maybe what
Ed was doing was just trying to anticipate losing this trial, and now he, he can upgrade from
his $36,000-a-month Brooklyn Bridge Park apartment. But whatever the case, welcome to Brooklyn,
Ed. It's, it's great to have you here. Um, let us know... I'm sure you're a listener of the podcast,
either Big Technology or Techmeme Ride Home. Let us know if you want to hang out with us. We'll have
you on the show. Just take the, the bike lane that's all along the water here. Exactly, yeah, it's
two miles from here. And, uh, bring your guitar, and we might find in your favor. Yeah, we do need this
demonstration on the podcast. Well, thank you so much, everyone, for listening. Thank you, Trustless Media,
for having us here in your studio. It's great to, um, get a chance to do this in person with Ranjan and
Brian. And great to, if you're watching on LinkedIn or YouTube, it's great to come to you
in high definition, in a great studio, with great folks like Nate Skid here, who's sitting with
us, sitting with us, and hopefully will, will forgive me after I played that music into the microphone
without prior authorization. So thank you for that. Thank you, Brian. This is awesome.
Thank you, Alex. I appreciate you asking me to do this. This is better than when we do my show at my
kitchen table with bad lav mics.
Hey, whenever you want to do
it, we'll do it. And whether it's kitchen or
whether it's down here or you come to
my house and we'll be right
above a pizza place. So there'll
be lunch. Ranjan, great to see you again.
Good to see you. Thanks for doing this. There was one
story that we skipped, which was the
fate of Andreessen Horowitz's
media investments: Clubhouse, Substack,
and betting on Elon Musk at Twitter,
which, if you're
listening still, I would say
we'll cover next week,
when Ben Smith joins Ranjan and me.
You've probably heard Ben Smith on podcasts if you're...
Andreessen invested in BuzzFeed News.
Oh, they did.
Right.
So that would be something that we'll cover.
I think it'll be a great story to cover with Ben.
He's coming on with Ranjan and me next Friday.
So if you're just listening for the first time, hit subscribe.
And if you're a regular listener, make sure to tune into that one because it won't be
one that you want to miss.
Ben's done a lot of interviews, as I was saying.
So we're going to make it different.
We're going to cover the week's news with him.
We're going to cover some of this.
Andreessen Horowitz stuff.
Talk a little bit about his book traffic.
But Ben was also on recently talking about the book before it came out.
So hopefully it'll be a little different.
He's going to be on my show tomorrow Saturday, if you're listening to this.
All right.
Awesome.
That's great.
And my former boss.
So once again, from Trustless Studios here in Brooklyn,
it's been great being with you. For Brian McCullough and Ranjan Roy,
I'm Alex Kantrowitz.
We will see you next time.
on Big Technology Podcast.