Big Technology Podcast - Is Generative AI Plateauing?, Booming Bluesky, Apple’s Smart Glasses Play
Episode Date: November 15, 2024
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Jake Paul vs. Mike Tyson 2) Why researchers believe Generative AI training methods might be plateauing 3)... Is the application really what matters? 4) Will reasoning save the day? 5) Writer raises $200 million 6) ChatGPT defeats Chegg 7) Should ad agencies bill by the hour in the age of AI? 8) Ranjan reflects on our interview with Gustav Soderstrom 9) Gratitude for listeners 10) Yes, we're launching video interviews on Spotify 11) Bluesky's longevity potential 12) Apple's smart glasses move 13) Z-Pain makes us unhappy --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Is generative AI plateauing as training methods top out?
Bluesky is booming as an alternative social network, and Apple looks into smart glasses.
All that and more is coming up on a Big Technology Podcast Friday edition, right after this.
Welcome to Big Technology Podcast Friday edition where we break down the news in our traditional cool-headed and nuanced format.
We have a great show for you today covering everything happening in the world of AI.
Very big news.
There's concern that the training methods that have gotten the generative AI field to here
are not going to continue to scale, and that's really coming to the fore right now.
We're going to talk about that.
We're also going to talk about the rise of Bluesky, whether it will be sustainable,
and Apple's smart glasses play, which is quite interesting.
Joining us, as always, on Fridays to break it all down is Ranjan Roy of Margins.
Ranjan, welcome to the show.
Great to see you.
Scaling laws are here, Alex.
They've finally come for the industry.
I know you're excited about what's going to happen with this,
but I know you're even more excited about the Jake Paul, Mike Tyson fight.
Who do you got? Neither. Neither. I think Netflix is genius in promoting their live programming by just
bringing out two people that no one wants to see win, but I'll still take Tyson if I have to.
All right, I'm taking Paul. We can put it on the prediction markets and see what happens. My friend
group has been saying that this is an exhibition match, basically, and a lot of betting sites are
looking at it the same way. The folks are saying that it's rigged. Do you think this is rigged or
a real fight? I think this is a real fight. I don't think Netflix would be
going into this fully rigged or making it reality TV, but I think it's a good reminder of the blurred
lines between reality TV and actual live programming. But I think it's real. I'm going to go rigged.
Okay, so we've got a lot to cover, but had to cover that at the outset. Market contracts to set up right now.
Absolutely. And now we can talk a little bit about what's happening in the AI world, where, shall we
say, there's another fight going on between the purists that believe that large language models
will continue to scale if you add more data and compute and power to the mix, and those that
say eventually these models are going to hit a wall. There's been a long-brewing battle between
these two factions. And this week, I think, has been the week where both sides have started to
plant their positions in the ground and say, you know what, I've won. And that's really come
on the back of this great report from The Information that talks about how OpenAI has basically found that
it has hit the limit of improvement when it trains with more data, more compute, and more power.
Here's from the story: The number of people using ChatGPT and artificial intelligence products
is soaring, but the rate of improvement for the basic building blocks underpinning these products
is slowing down. The challenge that OpenAI is experiencing with its upcoming flagship model,
codenamed Orion, shows what the company is up against. While Orion's performance ended up
exceeding that of prior models, the increase in quality was smaller compared to the jump
between its last two models, GPT-3 and GPT-4. Some researchers at the company believe Orion isn't reliably
better than its predecessor in handling certain tasks. This could be a problem, as Orion may be more
expensive for OpenAI to run in its data centers compared to other models. Basically, the idea here
is that OpenAI has been training subsequent foundational models with more data, more compute,
and it's reached the point of diminishing returns, to the point that this grand next model that
it's supposed to release, and we don't even have a sniff of GPT-5 yet, what it's calling Orion, is going to be
just a bit better, but more expensive. What do you make of this battle? I am happy about this.
I think I've talked a lot about how I don't need GPT-5 just
yet. I think the amount of opportunity there is around actually productizing the current models
is so massive right now. Again, like, there are so many little magical moments, even with Claude,
with ChatGPT, with any of these tools, where you see so much potential. But then actually
translating that into helping you do your job better or create certain things better, I think
there's just so much work to be done there, that everyone competing to kind of create this next
massive foundational model has never made a ton of sense to me, before they actually just got
GPT-4 or Claude 3.5 Opus working, you know, kind of pushing it to its limits and making it
work as well as possible. So I'm kind of hoping that this actually moves people towards making
tools people use rather than just saying AGI and GPT-5
and whatever else.
Now, look, I hear you, but I also have to disagree.
I mean, the field's potential is so much more if these models continue to improve.
And while they're good today, they're not where they've been promised to be.
And if this is the limit, then it severely diminishes sort of the potential of these models
to change everything we do, as the AI industry has promised.
And, by the way, it's not just this Information report.
Lots of people have been saying this.
So there's Ilya Sutskever, who's talked about it.
He says to Reuters: results from scaling up pre-training, the phase of training an
AI model that uses a vast amount of unlabeled data to understand language patterns
and structures, have plateaued.
Ben Horowitz and Marc Andreessen, talking on their podcast.
Horowitz says: we're increasing the number of graphics processing units used to train AI,
but we're not getting the intelligent improvements at all out of it.
And Andreessen has been saying that lots of smart
people are working on breaking through the asymptote, figuring out how to get to higher levels
of reasoning capability. I mean, shouldn't we just put your concern about building practical
applications aside for a moment? I don't think anyone's going to disagree that it's a time
to build practical applications, but isn't this like fairly concerning for the progress of the
AI industry if, let's say, this is about as smart as they're going to get? No, I think, first of all,
I love Ilya unleashed right now, going to Reuters and being able to say things like
'the vast amounts of unlabeled data to understand language and patterns and structures have plateaued.'
But, yeah, I think the focus on these step changes in
the quality of the models has distracted from practical applications.
Yes, we should be able to have both.
But you even see it in the way that OpenAI is structured as a company and where they invest their resources.
We talked a lot about this with Cory Weinberg from The Information: the cost structure of OpenAI is still weighted much more heavily towards R&D and improving the actual models versus building out a good sales force and a sales enablement and customer success team.
And these things might sound boring,
but if you actually want these technologies to be adopted by corporations and companies and just
everyday people, they have to be easier to use and more practical.
Like I get the idea that there's a lot of times that if you're using one of these tools,
it doesn't work perfectly the first time.
And everyone, the kind of natural reaction is, okay, I guess it's not good enough.
But then you learn: the better you prompt it, and the more you restructure your workflow, the more you can get it to do what you want it to do.
But instead, I think a lot of these companies are promising the model will get so smart in the next iteration that you don't even have to do that work around prompting and workflow building that it'll just figure it out and it'll be okay and AGI will be here, et cetera.
But isn't that what they're trying to do?
I mean, aren't you ignoring the business story here? OpenAI just raised
the largest VC round in history, $6 billion.
Microsoft and Amazon are hooking up to nuclear power plants.
Anthropic is out in the market, trying to raise billions of its own.
Just from a business standpoint, if these companies cannot advance this anymore,
isn't all that money going to come due and sort of crumble the industry?
I'm not ignoring the business story at all.
That is the business story to me.
I think like overraising for the R&D side of things,
rather than the actual like operationalization
and building out businesses on top of the existing technology.
I mean, again, we've debated this plenty.
I think that is a huge mistake
in that it actually, you know,
potentially hampers the long-term development of the industry.
So if this slowdown actually, you know,
puts a little cold water on the promises of GPT-5
and whatever else,
and people just get back to work
in terms of actually building things that solve problems.
I'm happy about that.
I'm trying to pin you down here a little bit on the technology question.
And you keep wiggling your way out, which I respect.
But I have to ask, like, isn't there just a tad of disappointment on your end
if this is sort of the end here, the end of the road in terms of where this is?
Not at all.
I mean, the things I've already been able to do.
I just made a little game in Claude the other day.
I saw a video of, like, kind of a
Space Invaders type of game.
I coded a Space Invaders type game with these, like, custom images myself in an hour,
and then hit the Claude limit, which a lot of listeners probably do, and it's kind of annoying
even as a paying customer.
even as a paying customer.
But like that was magical to me and that exists on the existing technology.
It's possible and there's so many other applications I can imagine if I'm able to do that
for fun in an hour that are not being properly explored because all the attention
and hype is on the much, much bigger thing.
So if the Claude business that gets built is, you know, actually teaching people how to use
the existing technology well, I think that has, again, much better longer-term potential
than the entire bet of the entire industry being that the technology will get step-change better
in the next year or two.
Yeah, I don't know about that. I mean, that has been the bet, though.
It has. It has. And I don't think it's the right one. And I think something that
pushes us away from that kind of strategy is going to be good. It'll shake things up. It'll
definitely shake things up. But I think it's healthy for the longer-term world of AI.
You know, you've really not played into my game today, where I wanted to evoke one single feeling of
disappointment or sadness from you, and you'd say, yeah, it would be tough
if this is how I'm feeling: gosh, like, if this promised, uh, you know, AI revolution ends here, then
I don't know how far we get. And then I come in and say, well, actually, maybe we're not done, you
know, sort of like that Walter White vibe: we're done when I say we're done. All right, well, I'll give you one.
Video generation is the one area where I do think we are severely behind. We're not even close to
anything interesting, and we've been promised things that are interesting, i.e., Sora, but we're
very, very far away. Even the Runway MLs and other tools that I've tried... that's one
area where I see a huge need for technological improvement. But for any content generation, any
coding, even data analysis right now, I think the models are pretty damn good at doing what
most people need them to do. Most people just don't know how to use them
correctly. Well, Ranjan, thank you for playing along. And I have to inform you: we're not done.
We're done when I say we're done. And that is because, that is because, yes, these research houses
might have hit some sort of wall. And the reason why they're hitting the wall is obvious:
they're using synthetic data because they've run out of data, and it's offering less good results.
And this has sort of been the issue with training these new models.
However, in recent times, there has been a development of a new discipline here, which is reasoning.
And we talked about it back in the day.
And that's what sort of freaked Ilya out,
and he left OpenAI.
And that really might be the near future of this field, where the models,
such as OpenAI's o1, are prompted and they think.
And the more they think, the better they get.
And this is, again, from The Information.
In OpenAI's case, researchers have developed a type of reasoning model,
o1, that takes more time to think about the data the LLM was trained on before spitting out an answer.
This means the quality of o1's responses can continue to improve
when the model is provided with additional computing resources while it's answering user questions,
even without making changes to the underlying model.
And Casey Newton from Platformer cited one example from an OpenAI researcher talking about it.
This OpenAI researcher says, and this was at a TED AI talk:
it turns out that having the bot think for just 20 seconds in a hand of poker, step by step,
got the same boost in performance as scaling up the model by 100,000 times
and training it for 100,000 times longer.
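That poker anecdote points at a general trade: compute spent while answering can substitute for compute spent training. o1's internals aren't public, so here is only a minimal self-consistency sketch under a toy assumption (a solver that is right 60% of the time), showing how sampling more reasoning paths and majority-voting the answers improves accuracy with no change to the underlying model:

```python
import random
from collections import Counter

def noisy_solve(rng):
    # Stand-in for one reasoning rollout: a hypothetical solver that
    # reaches the correct answer (42) 60% of the time, otherwise errs.
    return 42 if rng.random() < 0.6 else rng.randint(0, 41)

def self_consistency(n_samples, seed=0):
    # Spend more inference-time compute: sample n independent reasoning
    # paths and majority-vote the final answers. The "model" is
    # unchanged; only the compute per question grows.
    rng = random.Random(seed)
    votes = Counter(noisy_solve(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Accuracy over 200 questions: 1 sample vs. 25 samples per question.
one_shot = sum(self_consistency(1, seed=s) == 42 for s in range(200))
voted = sum(self_consistency(25, seed=s) == 42 for s in range(200))
```

With one sample the toy solver is right roughly 60% of the time; with 25 votes per question it is nearly always right. That is the flavor of the "think longer instead of train bigger" trade being described, though the real gains come from better reasoning chains, not just voting.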
So I think what we're about to see is a pivot
in the AI research field, where, yes, they might be applying practically some of the models
that exist today, but it seems to me like everybody is going to go completely in on this
reasoning format. And that is going to be where we see the improvements. And that's why I want
to highlight this post from Dan Shipper that I saw this week. He says the message that The
Information headline conveys is at odds with what people inside the big labs are actually
feeling and saying. It is technically correct,
but the takeaway for the casual reader, that AI progress is slowing, is the exact opposite of what I'm hearing.
So this might be a combination of spin and reality, but I'm curious how much stake you're putting into reasoning when it comes to being able to advance the status quo.
Yeah, no, I think both reasoning and how synthetic data is used matter,
and I think they're actually an almost more promising direction for the industry than just raw processing
power and size. I think, first, on the synthetic data: like, we're going to be talking about a
company, Writer.com, in just a little bit. But one of the things they did was, like, create their
own foundation models, and they apparently trained them for $700,000 total by using really
targeted synthetic data to create different models for different kinds of problems. And I think
in the coming months and years, we're going to start to see some awkward headlines around
size matters, because smaller will be better in terms of the actual models being used. And, like,
again, one model should not be relied on to solve every problem for everyone at all times, versus
maybe there is a model focused on financial data analysis. And it's actually much better at
solving problems around that versus writing poetry or generating images. So I think using really
targeted synthetic data for more targeted models is actually a really interesting space.
In terms of the actual reasoning side, I think that's really interesting.
Like, coming up with new ways of actually generating the answers using the
existing information, that could be incredibly promising and solve so many
of the challenges facing the industry, like cost for any kind of new foundation model,
like just, you know, the viability of these things actually succeeding. So I think, again, today,
today I'm positive. Today, these are all good things for me. Right. And the cost really matters
because if you're using a reasoning model, a lot of that can happen at inference versus in
training, which is, I think, less expensive. Before we move on from this, I just want to talk
quickly about this AGI thing that we talk about so often, but rarely define and rarely talk about
in context, right? That all these labs are trying to push toward artificial
general intelligence, or human-level intelligence.
And it seems like some inside these organizations are like full-fledged trying to get there.
And others, I don't know, probably like see it as useful marketing so they can sell products
today.
Actually, I think that a lot of the productization that's happened has kind of been an accident
at places like OpenAI as they've pushed, you know, the research forward.
But why don't we just take this point in time to just talk a little bit about AGI?
Do you think, A, it all along has just been this marketing term, and do you think that if we're not going to get there through these current methods, that the magic of that marketing falls away a little bit, making it harder to sell into companies, making it harder to fundraise if all these companies are doing are just sort of productizing what they have today?
And I guess B, do you think we'll get there?
I think I'm going to go with A, and it's because, I guess, how would you define AGI or artificial general intelligence?
I think Yann LeCun's definition is really good, which is basically that it's human-level intelligence.
It can handle a variety of things just the way that a human can.
What is human-level intelligence, though?
Because there's a... I don't know.
I think ChatGPT can already do
a lot of things better than plenty of people. It's almost like, yes, it can answer, you know, questions about
philosophy the way a philosophy professor could, but it's almost the more nimble things
that it really struggles at. You can't tell ChatGPT to, like, you know, go, uh, you know,
write a bunch of emails to people you need to communicate with, and it does it for you. Well, it's not really
able to do that. It's not really able to switch very well between tasks. It's never
really able to, you know, learn in context and get things right the next time.
These are all things that I think make human intelligence special as sort of the adaptability
and the ability to be, as we say, general.
And I don't think AI is there yet.
Okay.
That's a fair definition.
And using that definition, I actually think it's fine for the industry to not be on the path to getting there.
Because even what you said, writing a bunch of emails to different people:
I think that problem could and should be solved soon in really targeted manners. Like, you know, take your entire existing email history, train something on that, use that to generate new emails, and build, like, a process or workflow where you actually validate them. Like, I mean, really practically, I think solving that problem could be possible pretty soon. And it's just not getting solved because we're still all trying to chase the dreams of AGI.
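The generate-then-validate email workflow just described can be sketched in a few lines. This is purely illustrative: `generate_draft` is a hypothetical stand-in for a model call (here just a template fill so the example runs), and the names are made up.

```python
def generate_draft(recipient, topic, history):
    # Hypothetical stand-in for an LLM prompted/trained on your own
    # email history; a real version would call a model API here.
    greeting = "Thanks again" if recipient in history else "Hello"
    return f"{greeting} {recipient},\n\nFollowing up on {topic}.\n\nBest,\nMe"

def draft_queue(requests, history):
    # Generate drafts in bulk but hold them for human review instead of
    # sending -- the validation step in the workflow described above.
    return [
        {"to": r, "body": generate_draft(r, t, history), "approved": False}
        for r, t in requests
    ]

queue = draft_queue([("Ava", "the Q3 report"), ("Sam", "lunch")], history={"Ava"})
```

Nothing goes out until a human flips `approved`; that review loop is what makes a targeted tool like this shippable today, well before anything AGI-like exists.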
And I think, for me, on what exactly it is, the human-level reasoning makes some sense.
But it's amazing to me that it's always brought up, but there's not one like clear accepted definition or one clear vision that's communicated by the biggest people in the field, the Sam Altman's and everything else.
So it remains this murky kind of thing: dystopian robots taking over, who knows what it is, will be a line item in
a contract with Microsoft's investment in order to, like, change the profit structure.
I mean, it's such a nebulous term that that's why I think it does represent a distraction
from progress.
I don't know what it says about my life, that you're like, imagine the strongest form
of artificial intelligence possible.
What does it do?
And I'm like, yeah, just writes a bunch of emails.
Like, oh, my God.
Imagine a world where I'm worried about robot
takeovers, and you're just trying to get to inbox zero here. Honestly, if an AI could get me to
inbox zero, it would be a true, a true miracle. I would really believe in the power of science. It would
have to get through 12,000 unreads. But it is interesting how they have built up AGI. It initially
was, like, sort of what I was describing: human-level intelligence, adept, able to generalize.
And now I think it's talked about really in a way that's akin to superintelligence,
something that's smarter than humans in almost all fields and can perform things that
humans can't. And that's when you hear, like, the messaging coming out of OpenAI, that it can
lead scientific discoveries and these types of things. And it's like, okay, that's not really
general. That's superintelligence. And, you know, I think that that has led a lot of the
investment and a lot of the hype around this, that we'll eventually get there. But it just
doesn't seem like it's going to be through the traditional scaling of LLMs. I guess that's
my point here. Yeah, I agree on that. And you're like, I don't care. That's good.
I don't care. That's good. That's my new philosophy on scaling laws with LLMs.
It's a good tagline. Okay, last thing about this: did you hear about this from Google? I think this is worth
watching. They have a new experimental Gemini model. It's called Gemini-Exp-1114. And do you know about
Chatbot Arena, where they test which are the best LLMs? It's currently sitting at the top of
Chatbot Arena, kicking the butt of ChatGPT-4o, o1-preview, o1-mini, previous Geminis,
all the Claudes. Maybe Google's got it figured out; whatever they're doing there seems to be
working. And this, by the way, folks, this arena is where people compare responses
from different models and pick the best one. It's been voted on by 6,000 folks at this point,
and this model is at the very top. So it's quite a moment for Google that I don't want to gloss over, and
I think we'll probably be coming back to it when we talk about Google's prowess in the field.
Yeah. And for listeners, I mean, check out Chatbot Arena.
It's honestly a fun thing.
And it's a blind comparison test.
So you don't know, you're given two answers.
You select which one you think is better.
And then you find out what's the actual model behind it.
So it is essentially an unbiased test.
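Under the hood, leaderboards like Chatbot Arena turn those blind pairwise votes into a ranking with Elo-style ratings (the arena also publishes Bradley-Terry scores; this is a simplified sketch with made-up vote data, not their actual pipeline):

```python
def elo_update(r_winner, r_loser, k=32.0):
    # Standard Elo: compute the winner's expected score from the current
    # rating gap, then move both ratings toward the observed result.
    expected = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta

def rate(votes, start=1000.0):
    # votes: (winner, loser) pairs from blind side-by-side comparisons.
    ratings = {}
    for winner, loser in votes:
        rw = ratings.get(winner, start)
        rl = ratings.get(loser, start)
        ratings[winner], ratings[loser] = elo_update(rw, rl)
    return ratings

# Hypothetical votes: model A wins 10 comparisons, model B wins 2.
leaderboard = rate([("A", "B")] * 10 + [("B", "A")] * 2)
```

An upset against a higher-rated model moves the ratings more than a win by the favorite, which is why a model has to beat strong opponents consistently to sit at the top the way the new Gemini does.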
Gemini... I have a question.
Are you using Gemini in
day-to-day life? No. I'm still all in on Claude, and I'm looking at Chatbot Arena and I'm like,
I am not doing it right. I mean, it's interesting because maybe these models can give a better
response, but it's also just like it matters. Like UI matters. I mean, this is sort of me agreeing
with your argument. Are you agreeing with me? Personality matters. Usefulness matters, even if the model
is smarter. But this is making me think it's time to give Gemini another shot. How about you? I don't
use it very often. I have it bookmarked. But, I mean, still: Claude, Perplexity, ChatGPT,
steady rotation, all for different use cases. I think Perplexity is almost one of the best
examples of, like, UI completely transforming how nice it is to use. And by this point, I really
thought Gemini should be my entire travel booking, given it's connected to Google Travel,
Google Flights and everything, Google Maps.
Like, it should be my starting point, and it still isn't.
And I still think the answers in the existing form just are not very good.
It gets a lot of stuff wrong.
It doesn't answer a lot of things as well.
And I get trying to be a little bit more conservative and risk averse.
I still think Google is incredibly well positioned just given their ecosystem.
but it still has not gotten there yet.
And, I mean, maybe the next experimental model,
once it becomes reality, will kind of cross that chasm,
but we're not there yet.
Yes.
And I have to say I have become a bit of a perplexity guy.
I'll admit it.
Perplexity is pretty good.
It's so good.
It's become my kind of, like, companion for other things.
Like, I think ChatGPT is more for when I'm sitting and doing something focused.
Claude is for a lot of work, a lot of, like, more on the coding side, and kind of, like, really
using Artifacts.
Perplexity is for when I'm watching a movie, a sports game of any sort. Like, it just is so good
at giving you quick information in a really nice format, with additional links and questions to
keep exploring. Which makes me think it's
almost, and it still is, the biggest competitor to real search.
Yeah, I find it to be really good for research.
Imagine trying to sort through a bunch of programs and figuring out what they offer.
Yeah, yeah. I was just making the decision on the Epic Pass versus the Ikon Pass for winter skiing,
if listeners are making the same decision, and I did it all in Perplexity, asking, like, specific questions: which resorts are in Vermont, which resorts are in Colorado.
And, like, it was so, so good at doing that.
I do hope that perplexity finds a way to get me into China this winter, which I'm hoping to
stop in on the way back from Australia.
So fingers crossed.
Where are you looking to go?
Beijing.
I want to see that wall.
See that wall?
I was there in 2009.
I went to the Great Wall.
It was definitely, it was a good time.
It lived up to the billing.
Nice.
So, okay, let's just take a minute and go through three stories that the two of us
found this week that speak to what you really are interested in, Ranjan, which is
the practical application of AI at this stage, and how, even if we stop right now, which I don't
think we will, we're going to have an extremely powerful technology that's going to disrupt
industries and really be practically useful. So why don't you kick it off with this Writer
fundraising that you talked about right before. Yes. So Writer is a generative AI startup. They just
raised $200 million at a nearly $2 billion valuation. What's
interesting about them is they built their own foundation models. We had just talked a little
earlier about how they build more kind of targeted models that are really focused on
solving enterprise business problems. And the entire kind of differentiation that they're focused on
is kind of exactly what I've been talking about. Like, they have a lot of big-name enterprise
customers, and going in there... remember, these companies have messy data. They, they
have, like, lots of really heavy processes, where you're not going to just make an OpenAI
API call and solve it. Like, there's so much other work that needs to be done, and I think
it represents more the Salesforce, ServiceNow world of enterprise software, versus
OpenAI being, I don't know, just, like, a more pure tech company. And I think
starting to see more companies like that, that represent the actual utilization and application
layer of generative AI, is going to be a good thing. It's going to be a very good thing.
When Writer-ly does its job well... talk about, like, what you could see it helping a company
with, with Writer. Yeah. I think, yeah, I'm adding the '-ly' at the end of a startup name, like
it's 2013 again. What a time, what a time. Um,
I think what it would look like is going in and actually taking one large enterprise
and then recreating hundreds of existing processes and just making them better.
Automating some stuff, adding a generative AI layer to other stuff,
like maybe keeping some stuff manual, like really rethinking every existing process at a large
enterprise and then like actually asking how does generative AI fit into this?
and then making that happen and actually creating the kind of frameworks and software that allow you to do that.
I think if any company is going to be able to win on that, that's where the, I mean, the value that's going to be accrued there is going to be massive. Versus, again, I was shocked when I saw that OpenAI's kind of, like, vision around its business is still ChatGPT Plus subscriptions,
because I still believe the companies that crack enterprise
are going to be the ones that really accrue value in this.
That's pretty cool.
Okay, so mine is a little bit different,
and that is Chegg, what AI has done to Chegg.
This is just an example, again,
of how current AI is going to change things no matter what.
And for those folks who don't know, Chegg is an online education business,
and they actually started with textbooks
and then built up, like, a pretty serious online education business.
And when kids would want to research stories or problems, they would say
that they were Chegging it.
And this is a Wall Street Journal story, 'How ChatGPT Brought Down an Online Education Giant,'
basically saying that instead of Chegging, kids are using ChatGPT now.
And Chegg stock is down 99% from early 2021, erasing some $14.5 billion of market value.
And there are bond traders
who have doubts that the company will continue bringing in enough cash to pay its debts.
So even today, we're already seeing this stuff start to really change education.
And I think that's like the most obvious place, but we've already had a few years to see this run its course and look at what it's already done with Chegg.
And I think that's like a sign of where things might go with the rest of industry.
Not that every other incumbent company is going to lose 99% of their value, but it does kind of show how standard
it's become already in education.
Yeah. I think also, seeing that 99% drawdown brought me back: Chegg was definitely
one of those 2021 stocks, extrapolating the pandemic into the future, where, like, anything with the words
'online education' in it just exploded in value.
So I think on one hand, some of it is related to just that not being the case anymore.
But also, I think this is actually a really good example.
If you think about it, like, students
are the ones who will take whatever existing technology there is and make it work for them.
They will do that work, and they will figure out how to answer their homework questions
or maybe write a paper in some cases or whatever else it is.
And I think, like, this is a perfect example of, you know, a space where the user is actually
driving the innovation themselves because students like free things or cheap things that
help them do better more quickly. So that's a good, that's a good use case.
User-driven innovation. I like that. And it's not just students. It's ad agencies that are
starting to use it as well. Like, talk about, like, trying to find the answers to the test.
The ad agency, the ad industry, is a place where this happens. And this is the last one of the
three stories that we're going to talk about in terms of the practical impact today.
But again, from the Wall Street Journal: 'AI Saves Ad Agencies a Lot of Time. Should They Still
Charge by the Hour?' And this is a story basically saying that ad agencies have been charging by the
hour to clients, and now, all of a sudden, they have ChatGPT, which has made them far more
efficient. For instance, you know, if your job was to write headlines for a brand, now instead
of having to come up with 50 unique ones, maybe you can write five unique ones and ask ChatGPT
to extrapolate out, or do the same thing with creative. Like, creative resizing is now becoming much
easier with generative AI. And all those hours that ad agencies spent doing that work, which was
really repetitive and not value-add, now have become pretty automatable with artificial intelligence,
and they're trying to find a new way to charge. And some are going to charge now based off
of specific results versus hours. And maybe we see that in places like law, right, and other
disciplines. So Ranjan, I'm curious, do you think that ad agency should still charge by the hour?
given that they were probably not charging for, you know, such valuable work a lot of the time.
And now that chat GPT has all of a sudden made them efficient.
They realized that, like, a lot of the things they were doing weren't really additive to the client.
I mean, what do you think the best solution is?
And what do you think the story tells us?
I think, first, it's incredible that you just associated innovation and ad agencies. I think every ad agency out there would be ecstatic that someone made that association.
We have advertising listeners. Shout out to the ad listeners. Yeah. I think this story is much, much bigger than
just ad agencies. And I loved this one because I think the pricing of everything that we got used to
could change. You just said it, whether it's ad agencies, law firms will be very similar, like outcome-based
pricing. And in healthcare, this has been a conversation for years, the idea of outcome-based pricing
where like the actual results are where you bill rather than the treatment itself is a much,
much better way to potentially approach this. So I think for so many of these industries,
the entire pricing structure is going to change. And I even think in SaaS that's going to be the case. And there's been a lot of talk around this, with even Salesforce's AI agents and many others, that seat-based pricing doesn't make sense in a lot of ways.
Like, if you're automating a bunch of workflows, how many people are using it is totally irrelevant. So there's definitely going to be some new pricing structure, something around outcomes, around the amount of compute that is consumed. It's actually kind of exciting again, on the productization side of this. It's going to completely change the way different industries price, and it'll be better, I think. Okay, so after me, like, picking a whole fight about the
practical applications of this technology at the beginning of the show, I'm starting to see it
your way. I do think that like there's a lot of room ahead in terms of whatever we have today
to apply it practically. And I think maybe we should have flipped this. Like that's actually
the big story, and where the models go is sort of... now that I'm talking about it out loud, I still care more about where the models go. Actually, no. Are you saying it's time to build?
It's time to build, Ranjan. It's time to build some products. It's time to break through that asymptote, man, and just get going.
Just break the asymptote, man.
So on Monday, I spoke with Gustav Soderstrom of Spotify and managed to fit in a question about parent mode.
But we also spoke a lot about whether generative AI will replace music and whether that is something that can touch someone's heart, whether it was developed by a human or a machine.
And, you know, I'm looking through our doc, and I see that you have inserted that story back into the conversation, and I'm ready to hear
your reaction to what happened on Monday. All right. So first of all, I had asked Alex to ask the Spotify CTO, CPO, about parent mode. And my problem is, since I've had kids, my Discover Weekly has
been destroyed. I had screenshotted my most recent Discover Weekly when I opened it. And the first song is the Poop Poop Poop Poop song. Yes. And basically everything in there is, uh, just something like that.
Number one hit. It's kind of a banger. But, uh, basically, like, not being able to separate out what my kid is listening to versus what I'm listening to just destroys the algorithm. And there's no, like, I want to hit a refresh button. He had made an interesting comment that, like, well, making different profiles is actually really bulky, and switching is kind of a pain.
I could make playlists for my kid and then say,
do not add these to the algorithm.
But I think it's, like, a reminder that even the most complex, advanced recommendation system in the world does not work when it has basic UI problems. And I think that's another good example of that. Like, I've read stuff over the years about Spotify, how they populate Discover Weekly, and they were very early to machine learning recommendation, but a simple UI problem makes it so I end up with the Poop Poop Poop song as number one. Yeah. And I do think it's interesting that they're going
to look at these signals and try to like get better at figuring out where your listening doesn't
match what you usually listen to and try to exclude that. But it seems like a problem that's
going to take some time for sure. So enjoy the Poop Poop Poop Poop song and Wheels on the Bus.
We were talking about it on LinkedIn.
I was like, oh, enjoy Wheels on the Bus.
You're like, no, it's much worse than that.
It's much worse.
But actually, maybe what could solve it: did you try the new AI-generated playlist feature?
No, but that's pretty cool.
So talk a little bit about that.
Because Gustav and I were talking about that,
then it came out this week.
So it's basically you enter a prompt and you get a playlist
and you get a bunch of recommended songs
and you can kind of like plus plus plus
and choose a bunch of the songs you'd want.
And so I literally was like, one of mine was, you are a frat boy in the year 2002 in Atlanta who wants some party songs.
And it literally recreated my early college, my college experience.
And it was so good.
It got the most cheesy, but actually correct and beautiful stuff.
So then I made a running playlist and I gave it a couple of examples.
And again, it nailed it.
So it had me start thinking, like, imagine if you can really get it to where, depending on the moment, you are in a particular mood, a really, really specific mood, and you just tell the system that, and it creates this playlist for you.
And I think this is going to be big for them.
Because I think not everyone is the kind of music listener I am, who spends time making playlists.
So this could really solve this problem for a lot of people.
Yeah, and this is what I was trying to speak with Gustav about.
It's like, what if you write your prompt and you actually get AI-generated music
that will speak to you more than the human-generated music?
And you actually also dropped this in our document.
I was like, what am I looking at here?
And it's a bunch of drone video with this really lovely song in the background.
And the song, you later let on, was totally AI-generated.
Yeah, I got a drone recently and have been having some fun making videos with it. I was up in the Hudson Valley, in a town called Cold Spring, New York, and literally, just with Suno, made a prompt. It was like, write a song in a folksy acoustic style about a town named Cold Spring in the Hudson Valley, and talk about the foliage. So I made this video, put this song as the backing music, and shared it with my family in an Apple Photos shared album that we use. And my uncle was like, this is a beautiful singer, who is she? And then there was that moment of, I'm like, do I divulge? And then I did. I was like, yeah, it's AI. Which blew some minds. I think the song was genuinely good. Yes, it was, really. I enjoyed it very much. All right. And also, now that
we're talking about Spotify, I'll just note that we are now doing our Wednesday shows via video on Spotify. So if you've recently found the show, this is how it goes: we do Wednesday interviews with folks in the tech industry, or outsiders trying to change it, and then on Fridays, Ranjan and I talk through the news. The Friday shows will be audio only across all platforms, the Wednesday shows on video on Spotify. And if you're new here, we appreciate you coming aboard and giving the show a shot. We're definitely seeing a bunch of new subscribers come in, and we appreciate you all. Before we go to the
break, I just want to share some gratitude for a couple of our listeners. First of all, Context 1930 shared a comment on Trump in the reviews of the podcast. And it was a critical review, but it was five stars. We're taking it into account, and we appreciate the way that you shared that feedback. It helps us and it helps the podcast. And I think that's the best way to do it. So thank
you, Context 1930. Also, Luke Squire made a comment about our discussion of polling versus prediction markets on LinkedIn, basically in favor of polling over the prediction
markets. We've got a couple of those. And that's another great way to share feedback and thoughts
on the show that's shared on LinkedIn. And critical or not, we love to hear what you think about
the show. And it obviously gets the word out to others. So we appreciate that. Thank you, Luke.
And then Graham High emailed me on our email address for feedback, which you can find in the show notes, and made a very interesting point. We talked a couple of weeks ago about how the government should build its own Starlink. And Graham pointed out, and I'm embarrassed to admit that I didn't know this, that the Department of Defense has actually already started work on its own satellite internet communication system, working with SpaceX. It's called Starshield. Ranjan, did you know about it? Here's from one story about it:
It's a militarized version of SpaceX's Starlink internet satellites with enhanced encryption and other security features. And unlike Starlink, which is a commercial service, the Starshield satellites would be owned and controlled by the U.S. government. So the government is actually building this. I did not know that, but we need to do more space coverage. That's right. I think space for 2025 is going to be a good topic. All right, Bezos, put us in a spaceship. We'll take our mics and we'll do it, and everybody will be happy. Podcasting live from Blue Origin. That's right. Life in space. We know you listen. So just do it. All right, let's take a break. We're going to talk about Bluesky, and if we have time, we're going to talk about Apple smart glasses right
after this. Hey, everyone. Let me tell you about The Hustle Daily Show, a podcast filled with business,
tech news, and original stories to keep you in the loop on what's trending. More than two million
professionals read The Hustle's daily email for its irreverent and informative takes on business
and tech news. Now, they have a daily podcast called The Hustle Daily Show, where their team of
writers break down the biggest business headlines in 15 minutes or less and explain why you should
care about them. So search for The Hustle Daily Show in your favorite podcast app, like the one
you're using right now. And we're back here on Big Technology Podcast Friday edition. Just a few
minutes left, but I definitely want to talk quickly about this Bluesky surge.
So Bluesky is now up to 15 million users, and it's really soaring in the wake of the election.
I don't know about you, but I've definitely noticed, and lots of other folks have talked about how they've seen, mass amounts of followers delete their Twitter accounts.
And I think Bluesky and Threads, which has added 15 million users just since the start of November, have definitely benefited from this.
So do you think that this just has staying power or is it a flash in the pan?
I think it does have staying power this time.
So I went back to the Bluesky account I'd created, like, a year and a half ago, maybe.
And it was interesting.
I actually saw people who I would engage with on Twitter all the time who I hadn't really processed had left but just kind of hadn't thought about or noticed in a while.
And suddenly it was like, oh, wait, they're alive and kicking and just.
having those same conversations, especially around a lot of like economics topics,
finance topics, even in tech as well. I found a lot of tweeters from my past in there. Because, again, from a product standpoint, before, even creating an account, signing in, and following was kind of a pain. And now, when I went back, it's pretty much on par with Twitter slash X. And so I think there's staying power here. Because, again, the actual technology behind any of these apps is not that complicated. It's purely about the content and the people involved. So I think it does represent a risk this time. But we've said this a few times now. So, yeah.
And I'm about to pour some cold water on this.
Max Read, who writes Read Max on Substack, says: from what I can tell, the users who've been joining Bluesky en masse recently are members of the big blob of liberal-to-left-wing journalists, academics, lawyers, and tech workers, politically engaged email-job types, who were the early Twitter adopters and whose compulsive use of the site over the years was an important force in shaping its culture and norms. But, he says, Bluesky is really acting more like a large Discord server, a place to socialize, bullshit, banter, and kill time, than a proper Twitter replacement. So basically what he's saying is it's inhabited only by those people, and it feels a lot like the old Twitter, but it just doesn't have the user numbers that it used to have, and therefore the Bluesky boom might be an illusion. What do you think about
that? No. So I think when I had gone on it way back, it was like the extreme version of the anti-Elon Musk, anti-Twitter types. This time, there's a lot of sports highlights on there, which could be my... I mean, there's more kind of normie content on it this time around. And I pretty quickly was able to find a lot of good follows. So maybe that's still kind of how things are in his specific feed, but I think it's different this time. Let's see. I don't
think that it's going to work. One thing we can say for sure is it doesn't look like Threads is working. I mean, Threads added 15 million people since the start of the month. And Michael Learmonth, who I work with as an editor, pointed out to me, he's like, does it feel like that? No, it feels like the same thing, just people complaining about Threads.
I can't with threads.
I opened it up again.
And, yeah, I mean, it's so odd. And I've tried to follow a bunch of people on it. But I don't know. It just does not deliver that kind of more real-time, interesting conversation.
I will admit, like, Bluesky, I actually moved it to my home screen on my iPhone and moved X off of it.
And then in New York this week, on Thursday, we were looking out our window and saw smoke coming out of a building in Midtown. I don't know, did you even hear about it? Yes, of course. So there was a fire in Hudson Yards. Apparently it was like a mechanical room that blew up or something like that. And no one was injured or anything. But I actually tested it. I went to Bluesky and searched NYC fire: nothing. I went to Threads: nothing. I went to Twitter and got all the info I needed right
away. Yeah. That's why I think Twitter is going to be the one with staying power. It's the network effects. It might be the most difficult social network to replace. Like, we've seen big blue Facebook, at least in the U.S., start to lose a lot of interest. People are on Instagram now. I just don't see it happening with Twitter, because it is just the group of sickos that have been on that platform, and the network effects there are very difficult to displace.
Okay, last story of the day: Apple is thinking about smart glasses. This was in our doc for, like, the last week, but there's been a lot of politics to talk about.
This is from Bloomberg.
Apple was exploring a push into smart glasses with an internal study of products currently
on the market.
The initiative code-named Atlas got underway last week and involves gathering feedback from
Apple employees on smart glasses.
And it's been led by Apple's product systems quality team, part of the hardware engineering division.
So it's very interesting to me that smart glasses are already becoming a thing.
Meta has a great pair out with the Ray-Bans.
And Apple has been beaten to the punch here.
And I think it's not going to be too long until we see Apple build a product like this of their own, if not one with an enhanced Siri, to hit on one of your favorite things. What do you think?
Yeah, I think I told you a couple of weeks ago that I'm testing the Snap Spectacles, which are their new augmented reality glasses that you can get as a developer, versus Orion from Facebook, the actual AR glasses, which are not available for any kind of general release.
But after using the Spectacles, they're amazing.
And I talked about it, like, even my son can use them instantly, my mother, anyone of all ages and any kind of technological proclivity can just pick them up and use them. And I think this is the form factor of the future. Like, we are all going to own some kind of glasses, and Apple's got to get on there quickly. And the Vision Pro was not that, and VR...
What do you think it says about Apple that they haven't been able to do this?
It's not a good sign. No, it's not a good sign. I think, like, Apple Intelligence,
I mean, if you think about it, we have a bunch of misses in a row. Apple Intelligence, maybe it'll come around, but it is so, so far from anything we have seen even remotely close to useful. The Vision Pro flop. I mean, and I'm still upgrading my Macs and AirPods and iPhone and all that. But it's just... I mean, again, at their scale, they need to find that next big winner. Yes, everyone knows it, and it does not feel like it. Maybe Siri will work in a few months. I think
Apple's best chance is that Mark Zuckerberg gets so ahead of himself on his rebranding campaign, where now he's, like, you know, cool MMA Zuck with the chain and the big t-shirt and the long hair, that he distracts from the mission and then gives Apple an opening. And, uh, you know, I'm a fan of a lot of Zuckerberg's side projects, but there was one this week that I just didn't think hit, and it raised some red flags for me. And that was a collaboration that he did with T-Pain, uh, to sing a song, Low. It's called Low. It's, uh, one of the hit songs back in the day. I believe it's Get Low.
Alex. It might have made it into my Spotify AI playlist from 2000s college party music.
And he worked with T-Pain to record a version of this song. It's quite X-rated. And T-Pain wasn't even involved in Get Low back in the day, but this is from Business Insider.
The duo, which calls itself Z-Pain, released the slowed-down, not-safe-for-work track on Spotify. And it features a heavily auto-tuned Zuckerberg singing original lyrics about going to a club and getting confronted by a security guard. It features Zuckerberg singing some lyrics that I really never wanted to hear him sing. Oh, this was awful. I mean,
what I kind of love is thinking about, like, trillions of dollars of market capitalization potentially swinging on Mark Zuckerberg sitting down with T-Pain with an acoustic guitar. I can't remember, does he actually play it in the video? But singing.
I'm trauma-wiping it from my head.
Yeah, and singing about sweat in the nether regions.
And to me, the most ridiculous part is that, as the Business Insider article said, T-Pain didn't even sing Get Low. It was Lil Jon and the East Side Boyz back in the day.
So, like, just how this came to be, and what this could mean. Like, you're going around laying off people, telling them this is the year of efficiency, and then you're trying to call yourself Z-Pain and come up with some weird alter ego. Oh, man, I can't with this one.
This one was too much.
Ranjan, I think there's only one thing that's left to do at this point.
What's that?
That is to cue up the song and play about as much of it as we can get away with without being kicked off of the podcast platforms.
So thank you for coming on the show, Ranjan.
Thank you everybody for listening.
And now to play us out, Z-Pain, Mark Zuckerberg,
and T-Pain.
We'll see you next time on Big Technology Podcast.
3-69.
Damn, you're fine.
Hoping you can sock it to me, baby,
one more time.
Get low, get low, get low, get low, get on, get on, get on, get low, get low, get low, get low, get low, get low.
To the windows, to the walls, till sweat drops down my balls, till all these bitches crawl.
Oh, skeet, skit, motherfucker, oh, skeeep, god damn.
Oh, skeets, skit, motherfucker.
Oh, skit, skid, god damn.