Big Technology Podcast - NVIDIA Takes The Crown, Return of Ilya, Vision Pro Stalls
Episode Date: June 21, 2024. Reed Albergotti is the tech editor at Semafor. He joins Big Technology Podcast to break down the week's news. We cover: 1) NVIDIA temporarily becoming the most valuable publicly traded company 2) Is NVIDIA a bubble? 3) What might disrupt NVIDIA? 4) Ilya Sutskever founds Safe Superintelligence Inc. 5) Who's funding Ilya? 6) Will SSI amount to anything? 7) OpenAI might become a public benefit company 8) The state of DeepMind's merger with Google Brain 9) Mustafa Suleyman's entry to Microsoft and how it impacts the relationship with OpenAI 10) Apple Vision Pro hits a speed bump 11) Vision Pro & Apple Intelligence similarities and differences ---- You can subscribe to Big Technology Premium for 25% off at https://bit.ly/bigtechnology Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
NVIDIA passes Microsoft momentarily to become the world's most valuable publicly traded company.
Ex-OpenAI chief scientist Ilya Sutskever has a new AI company focused on superintelligence.
Apple is managing its ambitions, shall we say, on the Vision Pro, and plenty more coming up right after this.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format.
Big week of news. Looking at the rundown, we have so much AI to talk about, also an update on the Vision
Pro, and we've got a great guest for you.
Returning to the show is Reed Albergotti, the technology editor at Semafor.
Reed, welcome to the show.
Thank you.
It's good to be back.
It's great to have you back.
Another historic week, shall we say, here on the show, because for a minute, really, on Tuesday.
And right now, the market is not helping it sustain this moment.
But who knows, by the time this goes to air, could be back in pole position.
But Nvidia became the most valuable publicly traded company in the world.
We had talked about when this might happen, whether it was going to happen this year, somewhere down the road.
whether it was even possible. And now we have it, right? So a company whose product effectively is
the chips, the software, and the networking required to train and run AI models became
the world's most valuable publicly traded company, for a moment. It's significant, looking at it.
This is not a company that people had on their bingo card, so to speak, to take the
pole position on the public markets two years ago, a year ago, even in January. And yet here
it is. What do you think this all means? And do you think that it's a bubble? Because that's obviously
the question that comes up. Yeah. I mean, I don't think AI is a bubble. I think, you know,
Nvidia has been an AI company for, like, a decade now, and
they've timed this perfectly. And they are now kind of the infrastructure layer, both in
hardware and software for AI today. And I think the question for investors is just, you know,
can they be disrupted? Every single company, it seems, that's in this, in this field is trying to
disrupt Nvidia, even as they are also trying to sort of kiss up to
Nvidia and get as many of their chips as they possibly can today.
They're trying to figure out how to build their own silicon, and they want to
replace the software layer, the, you know, CUDA.
So I think Nvidia has everybody gunning for them.
And, you know, I think for a while now, the smart money has been saying,
okay, this is, Nvidia is a great company, but it's not quite sustainable.
And it's, I think at some point they'll be right, but who knows how far this can go. I saw a great
meme this week where it was like, you know, someone said that Nvidia buyers will
eventually become fatigued, and then it was like "Nvidia buyers," and it's just, like, a thousand
pictures of David Goggins, just like, it will not be fatigued. But I think that what you
brought up is interesting, that there are companies gunning for it. And
To me, it's like, I don't think anytime soon we're going to see a company like AMD, which is also making chips and software, I believe, to run this stuff.
I don't think that they are going to be the ones that are going to potentially cut into Nvidia sales or even slow them down.
But what you mentioned, when you said, everyone's trying to do this, my head immediately went to, like, are the big tech giants going to be able to make this work on their own?
They're the ones that are the biggest buyers. I believe 45% of Nvidia's data center revenue, which includes
the chips and the software and the networking for AI, is coming from the big cloud providers,
and probably Facebook as well. Talking about Amazon, Microsoft, Facebook, and, you know,
Google. If you want to work on AI today, you need to use Nvidia. But they're all working, I think
to a T, on their own chips. And if they haven't announced yet, there's probably
interest inside the buildings to do it, because these chips are expensive. It goes like, what, $30,000 to
$40,000 each. There's a new generation coming out.
It's not just the chips, though, right? Yeah, they're working on their chips,
software, networking. But they're all trying to figure out ways to essentially
make AI development hardware agnostic. And that's a software problem, right? Right now,
everything around, you know, building these AI things, it all works better on
these Nvidia chips. There's so much work going on now
that is honestly beyond my, you know, it's super, super complex stuff and like, you know,
would bore your listeners, I think. Not to make that mistake.
I probably shouldn't use that term.
But like, we're not going to get into all the details.
But basically, if you can, if you can figure out how to allow AI models to be trained
and run on basically any piece of hardware.
then you don't necessarily need the most powerful chips.
You could just buy more of less powerful chips, right?
And right now you can't.
Right now, it makes economic sense to buy the absolute most powerful chip
that you can possibly buy, and Nvidia makes that.
But that may not be true a year or two, five years from now.
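[As an aside for readers: the tradeoff Reed is getting at here, a few powerful chips versus many weaker ones wired together, can be sketched with a toy model. All the numbers below are made up purely for illustration; this is not real hardware data.]

```python
# Toy model of the chip-count tradeoff: splitting a fixed training
# workload across more chips cuts compute time per chip, but adds
# synchronization/communication overhead that grows with chip count.
# Both constants are illustrative, not real hardware figures.

def step_time(n_chips, compute_work=1000.0, comm_per_chip=2.0):
    """Time per training step with the work split across n_chips."""
    compute = compute_work / n_chips          # perfect compute scaling
    communication = comm_per_chip * n_chips   # interconnect "loss" grows
    return compute + communication

# Many weak chips eventually lose to fewer, more powerful ones:
for n in (1, 4, 16, 64, 256):
    print(n, "chips ->", round(step_time(n), 2))
```

[In this toy model the sweet spot sits in the middle: past it, adding chips makes each step slower, which is the "interconnect problem" the conversation describes. Until software shrinks that communication cost, buying the most powerful single chip keeps winning.]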
It may be that you could take, you know,
it may be that you could take 10 billion Raspberry Pis,
string them together, hook them up to a nuclear plant, and train AI models.
Because if you solve this sort of interconnect problem, right, the problem of the loss
between connections of chips. Right now you want as few chips as possible,
because you lose so much by connecting them all together, and so then you want the most powerful
chip. Does that make sense? I mean, yeah. And that's just one example. It's so interesting talking
about how this is a software problem. One of the most interesting tidbits I had after spending
a full day at NVIDIA on Thursday of last week at the headquarters, which was very interesting,
was that they definitely have more software engineers than they have
hardware engineers at the company, which I think would sort of boggle people's minds.
Like, people think the surprising thing about them is that they design the chips, they don't
manufacture them. I think the real surprising thing is that the true edge they have,
I mean, of course their chips are better, but the software and the networking as well, talking about
that loss problem, is world class. And that's why it's been so difficult for companies to come up
and grab that lead from them. Yeah. I mean, I think that's the other point. Yeah, for sure.
They were smart. I mean, they started working on that and building essentially that moat because they
knew that the hardware, you know, could become commoditized. And they started building that, you know,
years and years ago. So I have a lot of respect for them. I think it's a, it's a, yeah.
It's a great company.
They are a very, very impressive company.
And here's like a few stats talking about, like, people have talked about this
NVIDIA FOMO, right, where it's like, obviously, it has elements of being a meme stock
and the rewards that some investors have gotten for sticking with it are just unbelievable.
So this is from a Bloomberg report.
By the way, the market cap today, or right now, looking at it, is $3.29 trillion.
Not bad.
It's up 170-something percent this year.
Its data center business
rose 427% from a year earlier, which is nuts, to $22.6 billion. And that's 86% of
its sales. That's all the AI stuff. So it really is, I mean, it does a lot. Like you see that
there's a real breadth to what the company does, but AI is really pushing it forward, and it has 80% of
the AI chip market. But here's what I want to talk about with the investors that have stuck
with it since the beginning. This is from Bloomberg. If you bought Nvidia at the IPO, your return
is 591,078%, including reinvested dividends.
That's nuts.
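[As a quick back-of-the-envelope check for readers: that total return implies a compound annual growth rate in the low 40s. A minimal sketch, assuming a 1999 IPO and a roughly 25-year holding period; the dates are my assumption, not from the episode.]

```python
# Rough sanity check on the Bloomberg figure quoted above: a 591,078%
# total return since Nvidia's IPO, dividends reinvested. The 1999 IPO
# year and ~25-year holding period are assumptions for this sketch.
total_return_pct = 591_078
years = 25  # 1999 IPO to 2024, roughly

growth_multiple = 1 + total_return_pct / 100   # ~5,912x your money
cagr = growth_multiple ** (1 / years) - 1      # roughly 41-42% a year

print(f"{growth_multiple:,.0f}x over {years} years -> ~{cagr:.1%} per year")
```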
And this is how difficult it's been to stick with them.
So this is from the Bloomberg story.
Longtime investors in NVIDIA have had to stomach three annual collapses of 50% or more in the stock.
Sustaining the current rally will require customers to keep spending billions
a quarter on AI equipment, whose returns on investment are so far relatively small.
So first of all, if you bought Nvidia at the IPO and stuck with it, kudos to you.
You are a steel-stomached investor.
But it also is interesting, because who says... You know, it's almost like the past is prologue, or whatever they say.
And maybe this is the case again where, you know, we don't have huge ROIs on AI spending right now.
And that could eventually, I mean, I'm not thinking, I don't think it will happen this year or next year or the year after that.
but that might eventually be what cools this off and not competition.
What do you think?
Yeah, it could be.
I mean, to the chip market, it's just always cyclical.
It's just always a boom and bust.
So you know there will be a bust.
I mean, it's just there's no way around it.
I mean, I think the question is, the smart question is more like, what happens after that?
Is there another, you know, do they just keep soaring after these inevitable dips, right?
And one piece I thought was smart just looked back and compared it to, like, when Cisco
was at one point the most valuable company.
That's a big comparison.
Yeah, I mean, that's sort of an interesting one
because they're more on like the infrastructure layer
and when the internet was relatively new,
you know, they were building all this networking equipment
and it was the hot, you know, the hot commodity.
And then eventually this stuff becomes commoditized, right?
I mean, it's just, it's the way it goes.
Yeah, but I think that kind of building off what you're saying,
it seems like a pretty good bet that
what we're seeing in AI is not going to slow down in the next decade at least. And so if you're
Nvidia, you're really sitting pretty right now. Totally. And Cisco's still around, doing fine. Yeah, we love
Webex. No, I'm just kidding. I hate that thing. But yes, Cisco is still alive. They, so, they are an
advertiser with Semafor, so, you know, I guess, yeah, shout out Webex.
Okay, I won't make you shit-talk your advertisers. No, I'm totally joking.
There's a wall there, just to be totally clear. Yeah.
But before we move on to what Ilya Sutskever, the former OpenAI chief scientist, is doing:
for listeners, this is obviously a huge story. Like the Nvidia story, there's almost so much
more to talk about, and we will be doing that next Wednesday, when Michael Batnick from Ritholtz Wealth, who
has been a longtime holder of Nvidia and is one half of the Compound and Friends podcast, along with Josh Brown...
Batnick and I are going to get on the line Tuesday to talk about what underpins the Nvidia trade and the risks there.
And we're going to have an entire Nvidia stock-themed show for you on Wednesday.
So stay tuned for that.
I know there's a good chunk of you that are going to like it and a bunch of you that are going to really want the hardcore technology episode.
So don't worry.
I'm also interviewing Nick Bostrom, the AI doomsday
philosopher who now believes we might see AI utopia, just an hour later, and that will be on
the feed within the next couple weeks as well. So a little bit for everyone. Okay, speaking of a
little bit for everyone: the guy who basically helped build OpenAI, Ilya Sutskever, is launching
one of the weirdest, and I'm sure this means he's never going to come on the show, but I have
to say what I believe. And that is that it's one of the weirdest companies I've seen
launched in a long time. So it's called Safe Super Intelligence Inc.
It's a startup. This is from the Verge.
It's a startup with one product, and that is creating a safe and powerful AI system.
I think that the Verge is really underselling it.
The only product they will ever release, according to Ilya, is superintelligence,
which, you know, for just to give the definition, it's AI that's smarter than humans
and not just smarter, but way smarter because it just multiplies its intelligence once it hits
human level intelligence.
I'm just going to say straight off, here's why I think it's weird.
This is from an interview that he gave in Bloomberg.
Sutskever declines to name Safe Superintelligence's financial backers or disclose how much he's raised.
I got to say, I got to be strong here.
I think that's total bullshit.
You cannot come out to the public and say you're creating a company that has one product, safe superintelligence and not tell us who's given the money.
I think it's a completely unacceptable way to roll a company out.
It undermines credibility in the project itself.
This notion of dark money is, I think, terrible.
We should know who stands to make money from the building of safe superintelligence
or superintelligence itself.
And if you're not going to say that, sorry, I can't take your project seriously.
What do you think, Reed?
So you're not an investor?
I take it.
Oh, no, definitely not.
We obviously passed on the deal.
I'm kidding.
I think you're, no, it's an interesting point; it's not the direction I thought you were going to go with it. I, you know, think you're making kind of a moral argument here, which is: this is such important technology that the public deserves to know, right? And more than that, I think that if you're promising that you're going to create safe superintelligence, and you're going to go out there being high-minded, talking about how, like, you're not going to have the pressure to ship products because you only have one product and you'll release it when it's ready, and you won't when it's not. And by the way, like, this is the thing that you have to be safe
about. That you have to make sure that there are guardrails, that it doesn't turn us all into paper
clips, you know, wink wink. You cannot get out there and not tell us who's funding this. I just think that's...
No, it's an interesting point. I mean, I read that and was
kind of like, well, why? Like, why are you not? Is it because you don't actually have any investors? Is it
because it's Dustin Moskovitz, and, you know, people are trying to avoid the
controversy around OpenAI? Maybe, you know, maybe the investors don't want to seem like
they're, you know, taking on OpenAI or something like that. I mean, there's probably
some interesting reasons around that. I think the market will probably take care of this
problem, right? I think the other thing he said to Bloomberg, or maybe
it was in his statement, was he doesn't want to get into this competitive rat race. I think that was
the quote. I mean, I just, I don't see what investors... This is the quote:
he says it will be fully insulated from outside pressures of having to deal with a large
and complicated product and having to be stuck in a competitive rat race.
Sorry, go ahead.
Right, right.
So it's like, yeah, let me invest in this one super genius AI researcher with maybe not a lot
of business experience.
And let me, you know, he doesn't want to be competitive.
It's only probably the most competitive industry in the world right now.
I just, the idea of making a return on this investment seems very low to me. And I think if he wants to develop superintelligence, he's going to need to recruit the very smartest researchers in the world, and take them away from companies that are guaranteed to make them very, very rich, to join this company that doesn't want to be part of the competitive rat race. I think it's a tall order. So I think your moral
point is a good one, but it sort of will be taken care of by the market
here. Wait, hold on. No, I really, let's dig into this. What do you mean the market is going to
take care of it? I think, no, no, he's not going to be able to recruit the talent or the investors, and
the whole thing is going to fizzle. Or it will just be a forever passion project of his, and he'll get
enough funding to kind of just, like, get by, from people like, you know, maybe, I'm
just totally guessing. I haven't done the reporting on this, but like, Dustin Moskovitz,
Jaan Tallinn, the Skype founder, maybe Eric Schmidt, one of his foundations, are just
going to give him a little bit of money, and he can sort of work on superintelligence for the
next 25 years. You know, and meanwhile, so is OpenAI and Anthropic and, you know,
Google, every other big tech company. Or he builds the AIs that help him do the, you know,
do the research, and he just disintermediates the
need to recruit. I mean, I think that's unlikely. I think you're probably right. But I guess where
I'm going, I basically, it's so interesting, like you're starting from the beginning and sort of
charting like the path and I'm starting from the end, which is like, all right, let's say they do
it. Who stands to benefit from that financially? Because if you do develop superintelligence
and you have investors, right, it's not self-funded. It's like, okay. So like now we're going to
learn who's going to, like, who's going to make money off of this? And who's going to, like, have
priority or something? Or, I mean, no, no, I just, I guess I question the veracity of the claim that
you're going to create safe superintelligence if you're also doing it on behalf of investors.
And you can tell us that it's safe, but it's also like, it's not. I think we've got to deal
with this, like, "I'm saving humanity" rhetoric. And to me, the rhetoric is more honest if you're like,
"I'm doing this on behalf of some people who are trying to make a financial return." Yeah.
And then we can start to have a discussion about what your aims are in the real world versus this make-believe world that it seems like he's in.
Well, now you're getting into this other issue, which is like, you know, what's this safe?
I mean, every company is saying they want to build safe superintelligence, you know, Anthropic, OpenAI, Google, who else?
I mean, those are the big three probably right now.
I mean, they're all saying we're going to build safe super intelligence, right?
And ultimately, they all have commercial motives, right?
And I think at some point, you know, safety becomes very subjective.
We don't really even know what safety means at this point.
I mean, I think you basically said that.
So I think I would just sort of put them in the same bucket as everyone
else that's just sort of, you know, saying, yeah, we want to build
this stuff safe.
But at the same time, we're going to go as fast as possible and
try to make everybody rich.
It's just, you know.
I trust you.
It just doesn't, you know, it doesn't seem plausible to me.
I think at some point, like I was talking with someone the other day about this.
They were saying they were at this, it wasn't public,
there was a meeting recently with a bunch of people from all the big foundation model
companies, and they're all talking about AI safety, and these employees are sort of,
you know, like, talking about how, like, they even criticized
the companies that they work for, you know, saying, well, I don't think they should have
done this and that. It's like, yeah, but ultimately, like, those types of people are not
going to be the ones with power at these companies. I'm not, you know, not to, like, pick on
any one company, but they're all kind of the same. Like, ultimately, those companies are
meant to make a profit. And when, if they develop something that is incredibly valuable, like,
quote, unquote, superintelligence, it's coming out, whether it's safe or not. I mean, they'll
try to make it safe, I'm sure, because that's not good for business either. But like, that stuff is,
you know, it's, we're, we're going to see it, right? Right. And there's like a whole slew of
news effectively along this line this week. So here's another story. So this is from the
information: OpenAI CEO Sam Altman says the company could become a benefit corporation akin to
rivals Anthropic and xAI. So basically, the story says Sam Altman recently told some shareholders that
the AI developer is considering changing its governance structure to a for-profit business
that the nonprofit board doesn't control.
And this is interesting about benefit corporations.
I think this is like a pretty good structure.
So benefit corporations are legally protected from minority shareholders who might
otherwise sue the company for making decisions that don't prioritize shareholder returns.
I don't know.
That seems like a more healthy way to do a company, but maybe that's a question for another
day. But it does seem like, on the theme of going from the world of make-believe to the
world of real life, actually this move from OpenAI would be more of an acknowledgment of
reality: we have billions of dollars from Microsoft, we are product focused even though we have
research, we do want to grow. And maybe this is the beginning of an acknowledgment of that. Yeah, maybe.
You know, I think there's still a strong profit motive. I mean, even Anthropic, by the way,
their sales pitch right now when they're going out and, you know, selling to companies, right?
They're all competing to sell their API, you know, API access to, like, corporations.
Their pitch is like, we're the safe option.
We're the safe version of this.
But also, like, it's a for-profit company as well.
I mean, yeah, it's a benefit corporation.
But, you know, they're raising billions of dollars and they're racing just as fast as anyone else.
and I think, you know, yeah, it protects you from like the minority shareholders, but I, I don't see that as like changing your overall motive to make money.
Like you're, you've got a lot of employees there who were there because they know they're going to get rich and, you know, that's kind of like, I think that's the major thrust still, even with those.
I agree.
I think that it's an admission, effectively. Even though it's this public benefit company,
if OpenAI moves from this kind of weird nonprofit, where, like, the CEO can get tossed
if the board members don't like, you know, the way that he speaks to them, to, you know,
effectively a more mainstream corporate structure, I think that's good for OpenAI and probably
good for everybody else. Because it's like, all right, we're sort of dealing with an entity that
knows what it is, versus deluding itself into what it should be. Right. I also don't think when
we talk about super intelligence, I mean, it's really kind of this sci-fi concept right now.
And I don't think it's going to look the way people think.
You know, it's not like one thing that becomes this all-knowing super-intelligent, you know, AI.
I think there's going to be lots.
AI is just going to infiltrate everything we own.
It's going to be lots of different types of AI models that are specific for different use cases.
And, you know, it's not, I don't think, I don't think it's like a winner takes all.
Like, I came up with super intelligence first and therefore, you know, I rule the world kind of thing.
I don't see going that direction.
Yeah, I mean, probably not,
but this is the way that a lot of people are building
is that they want to build,
I mean, this is sort of the line of thinking, right?
They want to build AI that builds more AI
and accelerates the progress.
So, I mean, there's amazing stuff coming out.
I'm not sure if you're seeing this.
I mean, it's mostly like secondhand that I'm hearing it,
but just the ability of this technology to code itself
is getting way better.
And are we going to get to
the point where, like, most code is going to be written by machines versus humans? Seems
likely, if not imminent. But you're right, that's not superintelligence. Like,
superintelligence, you're right, is this more sci-fi kind of thing. I mean, if we live in a world with
superintelligence, that will be a little bit insane. Yeah, and I don't even, I just, I still don't think
people, like, really... I mean, they use the term superintelligence, they use the term AGI; everybody kind of
defines this stuff in different ways, you know. I think it's going to get a lot better
and a lot smarter, but I don't view it as this, like, imminent threat
to humanity. No, same here. In an existential way. I mean, I think it's a net good for humanity.
I think we should be excited about the way this technology is developing generally. And we should
pay attention to safety and all that stuff. I mean, it should be important, but I wouldn't expect
any for-profit company, whether it's a benefit corporation or not to be the one to sort of
decide what's safe or not. That's exactly it. Like, the more companies are aware of what
they actually are, the better. And here's another one. Actually, this is an interesting
story. It's less coming to terms with what it does and more just, like, this pragmatic move that's
happening within Google and how it's being registered. So this is from Bloomberg: Google DeepMind
shifts from a research lab to an AI product factory. And this is from the story, according to
people familiar with the lab, some researchers are frustrated with having to follow roadmaps
that they feel have been imposed from above. Some of its biggest advances before the change
when they actually brought DeepMind and Google Brain together, came from small teams that
banded together informally. And there's a feeling that this effort leaves less room for
experimentation and the pressure is already leading to a sense of fatigue.
It hasn't helped that Gemini has suffered one snafu after the other.
So do you think this is a wise, let's take this story at face value?
Do you think this is a wise strategic move from Google to sort of take its like world-class
AI research house and make it more focused on product?
Because, I mean, it seems like there are some rumblings from inside that like, if we're
not like kind of wasting time and experimenting, I mean, I'm putting wasting time in quotes, right?
But if we're not wasting time in experimenting, then there's less of a chance that we'll be able to develop the next breakthroughs that will put Google the next leap ahead. So where do you stand on this one?
Well, I mean, first of all, to take the story at face value, I think, I'm not criticizing the story. I mean, John Victor wrote this story for the Information a few months ago as well. It makes sense. And I think there's probably a bit of confirmation bias when you go out to report
a story like this. I mean, you and I have both done this. Like, you know, you want to do
that story about how the company feels. And it's like, well, you have no way of
surveying every employee at DeepMind. And I think you could probably
write this story, um, you know, based on whatever, maybe six, 12, 20 employees, and
still not really capture how everybody thinks. I've met people at
DeepMind who were working on things like, you know,
AlphaFold and AlphaGo, and those "wasting time" sort of projects that you talked about that
turned out to be amazing. And they're working on product roadmaps now. And they're super excited
about it. And I think there's a, is it wise to do that? I mean, A, does, you know, does Sundar
even have a choice, right? I mean, this is now, like, he's going to get fired if he doesn't
go full steam ahead. And, like, you obviously have to take your smartest people in AI
and make them work on these AI products now.
But there's there's something interesting about like you have all these people who probably
would would otherwise be in academia working for this deep mind company within Google.
And it's kind of this like ivory tower within Google and it's great.
They're doing great stuff.
And now you're like actually now I want you to make products.
Some of them are probably going to be really good at it.
And you're going to end up with some really
interesting stuff by basically, like, letting these people loose on something that they've never
really done before. And some of them are probably going to be annoyed about it. And maybe they should
go back to academia, you know, and make less money. That's a hard thing
to tell your family, right? Like, I'm going to go make less money back in academia
to pursue my passion. But, you know, I mean, academia is traditionally where that kind of research
happens, and has happened, and probably, you know, probably should happen. So in a way, it's kind of,
to me, it's kind of like restoring the order in the universe, where these companies are like, okay,
we've been just, like, vacuuming up all this academic talent and having them work on these projects,
and, you know, now we're going to actually be companies and develop this stuff. Was that even
coherent at all? I don't know. Absolutely. But I have a counterpoint for you. Okay. So I think
that it sounds logical to say go to academia to do this work, but academia is so far behind
in terms of bringing our conversation full circle in terms of having the hardware software and
networking to be able to do this. This was a stat that kind of astounded me a while back,
but I feel kind of cruel doing this, but I'm going to make you guess. How many GPUs do you
think Stanford has? Stanford University. How many do they actually have? Yes. Oh, the
Stanford NLP group.
The NLP group.
Like,
their natural language machine learning group.
I don't know.
I mean,
I know they probably get a lot of help from,
from the private sector there.
Like,
I know Stanford has access to some of these clusters.
So I guess it's probably like,
how do you really define it?
But I don't know.
They probably don't own that many.
I mean,
I don't even know if they have their own cluster,
to be totally honest.
Yeah.
So they have 64.
64.
And that's like nothing.
Compared to 650,000 that meta is going to have by the end of year.
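[For scale, the two numbers just mentioned are roughly four orders of magnitude apart. A quick arithmetic sketch, using the figures exactly as cited in the conversation:]

```python
# GPU gap cited above: the Stanford NLP group's reported 64 GPUs vs.
# the ~650,000 H100-equivalents Meta said it would have by end of 2024.
stanford_gpus = 64
meta_gpus = 650_000

ratio = meta_gpus / stanford_gpus
print(f"Meta's fleet is ~{ratio:,.0f}x the size of Stanford's")
```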
Yeah.
So, which makes sense.
And that's why, you know, I'm sure you've probably talked about it
on the podcast, this, you know, national research cloud, right?
No, talk about it a little bit.
Actually, it'd be good for listeners to hear.
Well, there's a pilot.
There's a pilot program to build essentially resources for academia and startups,
essentially like a gigantic cluster, supercomputer that you can, you know, and it's nowhere near
what it needs to be.
It needs to be funded by Congress.
It needs to be much, much bigger than it is.
And this is happening in the UK.
It's happening in Europe.
There are these national labs.
And hopefully we'll get there, right?
I mean, I think it should be, like,
government should be on par with the private sector
or even surpassing it when it comes to building these things, right?
If you look at how all technology has been developed,
going back to, like, you know, nuclear weapons. It's just, it's been massive government projects. And so that's, I mean, that is an absolute real issue that you're talking about, and a problem that I think people are working on and hopefully will be solved. Yeah, if the government goes on par with Meta in terms of, uh, NVIDIA GPUs, man, we're going to hit four trillion pretty quickly. Yeah, I mean, that's a good argument for NVIDIA. I mean, I think it should, I mean, if you look at, like, if you think of this as, like, a really important advancement in our, you know, in our society, our understanding of the universe. Like, I mean, the space program, at one point I think the Apollo mission was taking up, like, over a percent of GDP alone, if you go back and look at the numbers. And I think that turned into a good ROI. I mean, that was worth it. Yeah. Like, we didn't make a bunch of money on the moon landing, but we made a bunch of money on the technology that was developed, um, around that. And America became the leader and, you know, developed the internet and, you know, basically, like, the modern computing industry, right, came out of those space missions. So I think, you know, it would make sense. Like, I think, you know, every country should want to be a leader, and it makes sense for the U.S. to be that leader. Yep. Okay. Yeah, and so just to wrap up the DeepMind piece. Yeah, sorry I took it off course. No, no, that was great.
No, it was good.
We allowed that to breathe, and I think it was the right decision, just like going to space.
But I do think that, yeah, it might take some time for researchers to be able to do this in the academic setting.
But in the meantime, I think there is some credence to, like, the argument that Demis makes, the head of DeepMind, Google DeepMind, which is that, like, the cutting edge of research is also the cutting edge of product at this point in some areas, right?
And if Google takes the sort of GPUs that it was going to give to DeepMind and the GPUs it was going to give to Google Brain, combines all that computing power and the best researchers, and says, go develop a cutting-edge model, and they do it, that does push the status quo forward.
Maybe not in, like, the strange, you know, ways that might pay off in 10 years. But in the moment, it's still very advanced research that's going to have a serious impact, most likely bigger
than any of the other side projects. Yeah, we've definitely hit a point in the development of this technology where the private sector is pushing things forward very, very quickly and developing amazing products, amazing technology, that was built on the back of decades of academic research. And so I think, if you think that, like, the private sector will develop superintelligence and that's the end, like, that's the last thing that we ever need to develop, because then the superintelligence can do it all for us, okay, well, then that makes sense. But if you think, like I think, that there's probably a lot of things that need to be developed, um, you know, technologically, to get us to superintelligence still, that it's going to take longer than people think, then, yeah, you still need to be investing in government-funded academic research for things that will come out of it, will be products 10, 20, 30 years from now. That's kind of how I view it.
Yep.
Okay.
So, speaking of DeepMind, you also have a very interesting story in Semafor about how Mustafa Suleyman, who was a DeepMind founder, is now the head of consumer AI products at Microsoft and getting a chance to look at the OpenAI code. So let's dig into that right after the break.
Hey, everyone.
Let me tell you about the Hustle Daily show, a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now they have a daily podcast called The Hustle Daily Show
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast with Reed Albergotti, the technology editor at Semafor. You can find it at the URL semafor.com slash tech.
Is it Semafor?
Yeah, I think it is semafor.com.
No. Hold on. Let me double-check that. Just Google Semafor tech and you can find, no, it's not. It's slash vertical slash tech.
Yes, slash vertical. I just looked it up.
Get on the line with the Semafor webmaster, man. How do we not have that sorted out?
Yes. Just Google Semafor tech. It will be the first
results. Yeah. It's the good stuff. And so tell us a little bit about this recent story that you had, talking about this really interesting moment where Mustafa Suleyman, who used to be a DeepMind founder, who's now inside Microsoft, and is basically getting to peek into the OpenAI code, and that's making people get the willies. Like, what's going on there? Yeah. And this is another story where,
like, you know, similar to the other DeepMind story you were talking about, I mean, this is not everybody, right? And I think, I think people at the top of Microsoft, the top of OpenAI, I mean, they'll tell you they've got a great working relationship and all this stuff. But I think there's something sort of, I think there's an irony in this, right? OpenAI was created literally to compete with DeepMind, and now the co-founder of DeepMind is, like, sending his people in there to go, hey, let's see what you guys are working on here, what's going on in there. And it's just, there's an irony there that's sort of worth pointing out.
And yeah, it makes some people a little bit uncomfortable who see it happening.
But what's going on really is Microsoft is developing, you know, Copilot products.
And this stuff is going through all of their products now.
And Suleyman is being brought on to kind of put his spin on those products, which is, if you followed his Inflection AI Pi chatbot, it's all about sort of empathy
and relating to people.
And his vision is we're going to have this empathetic chat bot
and it's essentially going to become your operating system
for everything you do.
It's going to do all your computing,
all your internet shopping and reservations for you eventually.
And so that's probably what Microsoft sees.
There's like a value there in looking at that in that way.
That stuff is a long way off.
Another thing that's probably a long way off is that Microsoft, you know, they're not, contrary to sort of how this gets reported, they don't own OpenAI, and that relationship, you know, may have an expiration date at some point. They need to keep those muscles working, the AI development muscles working. Now they have Suleyman, and I think he'll probably be good at helping to recruit researchers and people like that. And one day, not today, but
one day, I think Microsoft will want to probably be able to, if they need to, develop their own huge foundation models that would compete with a, you know, a cutting-edge company like OpenAI. And OpenAI would want to do the same thing.
They want to one day not have to rely on Microsoft's data centers to train and run their AI.
So they're kind of like frenemies in that sense.
And, you know, people have pointed that out.
But I think my take on it is, like, they're not at odds right now, but there's this part of their brain, if you think of these companies as, like, a big brain, part of the company has to be thinking ahead, down the road, when they might have to compete with one another, if that makes sense. But yeah, working together is really working out for them. Yeah. You guys have in your stories, like, your own, you know, you write the news and you write what you think about it, and I love scrolling down to that. So this was kind
of an interesting perspective that you shared in your Reed's View. So you said, if the relationship does start to cool, OpenAI could hold back on cutting-edge research. Microsoft has the right to OpenAI products, but only when they're officially launched as products. And it can't look at experimental research far from commercialization. That's interesting. I didn't realize that
that was the case. And then, and you knew I was going to talk about this because there's a connection
here. With the Tour de France coming up, here's a bike analogy: OpenAI and Microsoft are a two-person breakaway far ahead of the peloton. By working together, they might be able to keep the lead. If either goes solo, they may both fall behind. And I think it's a perfect description
of the way that the companies work together. And I'm not only saying that because I was watching
the 30 for 30 documentary on Netflix about Lance Armstrong this week. And up pops a Wall Street
Journal article, and the first byline is yours, Reed. And I was just like, I was watching with friends.
And I was like, holy shit, he's coming on the podcast this week.
Anyway, that was pretty cool.
So obviously, you reported a lot on Lance in your previous life and lots of Tour de France stuff, but it was a good description.
But it's interesting that the two of them, they can keep this lead if they work together.
But if they do go it alone, there's weakness on both ends that will not necessarily translate to strength.
Yeah, it would be too early for them to say, screw you, I'm going for the win.
And that would not be good for either of them.
So I think, yeah, I mean, I think for now it works.
And I think there will be just like an awkwardness there.
And there's certainly tension like on a micro level.
Like, if you talk to employees at both these companies, like, lower-level employees at Microsoft and OpenAI, like, they'll bitch about each other.
I mean, you know, it's like the Microsoft people, some of them think the OpenAI people are pushing them around.
The engineers have disagreements.
I mean, but it's like, it's not, I mean, you could play that up and you could try to say,
well, these companies are at odds with each other.
But I think that's just only natural, right?
Like, that's, you know, that would happen even internally between different product teams
or different parts of the company.
And so I don't think that necessarily is a sign of anything.
I think it's just more, the Suleyman thing is more interesting to me, because it's, like, this, I think, long-term strategic question for both of these companies, if that makes sense.
Yeah, and so, definitely.
And so is superintelligence one of these models, like, hanging out in the back of the bus with enriched blood being injected into its veins so it can finish first?
Okay, just for the record, I'm not the one who took the cycling analogy too far.
You know, I knew I was going to kill the joke and it was probably one of the worst ones I was going to make on the show ever, but I had to go with it.
No, I appreciate it.
I think that...
I have no regrets, listeners.
I think that, yeah, I mean, at some point, like, yeah, superintelligence is looming there as the, like, I don't think doping is the right analogy for it, but...
Yeah, there's no way to fit doping in here, but I still, I still try it.
Maybe part of Microsoft thinks, maybe we'll develop the superintelligence. Maybe, you know, maybe someday it's us, not OpenAI.
As long as it's right, super safe, right?
Wink, wink. It'll be safe, just like blood transfusions. Safe. Yeah, yeah, no one was ever harmed. This is not a medical advice podcast. Okay, so before we move on, um, we have one more story to cover
this week, which is interesting. So this may have been something that, like, I don't even know how to title this, the Vision Pro stalling, because it's something that has been reported and then kind of mulled over. And there's obviously been a lot of buzz about how the Vision Pro hasn't met expectations in the short run, or at least hasn't, you know, flown off the shelves. You can go into the Apple Store and get an appointment to demo it, like, at any time, and don't have to wait. At least that's been my experience. This is what I do on the weekends, folks. But this is from The
Information. It says, Apple suspends work on the next Vision Pro, focused on releasing cheaper models in late 2025. And so the story: Apple has told at least one supplier that it has
suspended work on its next high-end vision headset. The pullback comes as analysts and supply
chain partners have flagged slowing sales of the $3,500 device, and the company is still working on
releasing a more affordable vision product with fewer features before the end of 2025.
What do you think the implications are here? I mean, this was launched, you know, a little more
than a year ago with great fanfare. Is this a scaleback of ambition or is the bigger picture just
that this stuff is hard? I just, I feel very vindicated because I think I said on another podcast or show or
something that I think Apple should probably just not release this thing at all, you know,
maybe, like, a year before it came out, because it's just, like, you know, why? I mean, okay, if Apple had some breakthrough technology, sure, okay, come out with this thing. But the
tech is not there yet. What they really want to build is AR glasses, right? Like glasses that look
like normal glasses. You can't tell that there's anything going on there, but actually it's projecting
images onto the real world and it's very sci-fi. The only problem with it is it's impossible.
You can't build it. So they kind of like, they're like, yeah, we're good, let's just do like a
VR headset. But we'll make it the nicest VR headset out there. It'll be $3,500. And it will be
maybe marginally better than other headsets out there. It's, like, a little bit. I mean, have you tried this thing? Oh, yeah, multiple times. I mean, I think the problem with it is, like, the field of view. They spent all this money on all these features, and the field of view is smaller than the Meta thing.
And field of view, to me, is, like, the only metric that really matters at this point. It's like, just make the thing wider.
And so there's so many things that are not perfect and great about this product in an industry that's, like, crowded.
I mean, there's lots of VR headsets out there on the market already.
And like, people are not that into them.
They buy these things and they use them for a couple weeks.
and then they sit in the desk drawer because they're just not all that useful right now
until the technology gets better.
And so I think the problem with Apple is they have this giant ship that's really hard for
them to turn, you know, in short notice.
They've been working on this thing for a decade.
And all of a sudden, it becomes very clear, you know, in November of 2022 that this is not
the technology they should be working on.
They should have been working on AI the whole time.
And they need to reverse course.
but they came out with this product anyway,
and I think they're probably just hoping everybody forgets that it exists, honestly.
But I, you know, but who knows?
I mean, maybe they'll just plod on with the cheaper version and,
or maybe people will buy the cheaper version or whatever,
and then eventually the technology gets there and catches up.
But right now, and this is another area where, like, academic research needs to kick in,
like, the optics technology is just not there yet to make this stuff good. Yeah, we really have to wait till this stuff can be beamed into glasses, and then it has a chance to work. I mean, I have the Ray-Ban Metas, uh, the sort of voice-slash-camera glasses from Meta, and they're pretty cool. But, you know, I think they're just much more applicable, because the form factor is normal versus, like, the headset. Let me ask you one last question, you know, to build off of this. Um, obviously, you watched the
WWDC presentation last week from Apple. And I've talked about this on the show a couple
times, but I'm curious to get your take. Last year, like, obviously we knew it was going to
be tough and it was going to take a long time, but people were enthralled with the Vision
Pro presentation and guys were wearing them on the street and we were talking about spatial
computing. Is there a chance that the enthusiasm over the AI announcement has the same sort
of life cycle as the Vision Pro? I think Apple's presentation on its AI products was, like, too good. Like, everybody loved it. Basically, that the demos won't work, or, I don't know, I don't even know if they won't work. But, I mean, I think they should have set expectations a lot lower, because I think even if it does work, it's going to take a while. And everybody, everyone loved it. I mean, I heard very few people say, I'm not that excited about Apple's, you know, AI presentation. It was just universally hailed as amazing.
And I think consumers are now going to expect amazing from them.
And if you look at this kind of stuff that Apple's tried to do, you know, the problem with
Apple is their ecosystem has become so complicated now.
It's just, it's sprawling.
It's this huge thing.
It's not quite at Microsoft Windows and Office level, but it's getting there.
And it's starting to not like work all that well.
You know, there's cool features that they have that work sometimes, and then sometimes don't work for some odd reason. And that's the kind of stuff that people got annoyed about
with Microsoft, you know, back in the day. So I'm just, I think that like this is going to be
really hard for them to do to get like all of these AI products to work across all these
different apps that they have. And I think it's also a little bit too focused on inside their
walled garden. Whereas I don't think that's how most people operate. Like they're not just using
Apple products. They're using like a combination of products. So they're going to need to
allow the APIs to work really well with other developers, which they don't always do very well at
either. So I think there's a lot of challenges that they have to overcome with this. I don't think
it's impossible. I just would have set expectations lower if I were them. Right. So not to put words in your mouth, but you're basically saying that, yeah, there could be a Vision Pro-like life cycle, maybe not as sluggish, but the sale might have been better than the outcome. Well, the difference is that this isn't a product people will have to buy. They've already bought the product. It'll just be on their phone, so everyone will have it, and everyone will either use it or not.
Um, so the question is, like, I think the big question is, like, does it actually give them a competitive advantage, right, over people? Um, at the same time, they're facing all this antitrust scrutiny. Europe and others are probably forcing them to kind of open up their ecosystems a little bit. So right now, I think Apple's walled garden really gives it, um, the ability to, like, lock in customers, right? The green bubble situation is probably the biggest one. And they, I think, will have less ability, they're going to have to compete on the merits more in the future, like, on the merits of how well the product works. And then I think you're talking about, like, okay, well, are consumers actually going to start, like, looking at, you know, Android devices, right? Because they have better AI. And so Apple's going to have to, I think of it more in that way. Like, is this going to get people to buy more iPhones? And, as the ecosystems open up, will they have to compete, right? Will iOS AI have to be better than Google's Android AI, if that makes sense?
Yeah, for sure.
And one thing I'll note, I mean, we have Meta AI at the top of every Instagram and WhatsApp app. We have AI Overviews at the top of many Google searches right now. We're going to have ChatGPT in the iPhone, and we're going to have some of these AI features in the iPhone. I don't think any of those AI features yet, those AI consumer features, have people running to the experience.
They're there, and in some cases, they're annoying. Yeah, when they're not telling you to put glue on your pizza. You know, those are the outlier examples. But there's also, like, when it's working as usual, sometimes you just want to scroll right past. And maybe this is just, like, muscle memory, or maybe because I'm old, but, like, you just want to use the product the way that you've been using it for the past 15 years or whatever. Yeah, I think
that's true. Well, it's not there yet. I mean, this stuff is really in its infancy. So the question is, does it get good enough? I think it will. I think it's going to get pretty good over the next couple years, and it will start being really useful. But that only happens when, like, you can basically tell your AI what you want to do with your computer and it just does it reliably. Yeah, exactly. We're not there yet. No, not even, we're not even at basic intelligence, let alone superintelligence, right? I don't know, maybe we're at basic intelligence, but we're not much past that. So, yeah. Basically, it's like, it's like this intelligent person, but, like, every 10th interaction you have with them, they just do something completely unexpected, and you're like, this person is really weird, and I cannot trust them with anything important. Very well put. Just kind of how my wife describes me sometimes. But still. That's what I was about to say, yeah, that's my reputation in life. So that makes sense why we're here and talking about this stuff.
Reed, great having you on. Thanks so much for coming on. People can find your stuff at Semafor.com, and click the technology section, sign up for the newsletter, all that stuff. Yeah, and the newsletter is free, too. All right. Well, we hope to have you back again soon, and thanks so much for being here. Thanks, everybody, for listening. Again, Michael Batnick is here from The Compound and Friends and from Ritholtz Wealth, talking about the NVIDIA trade on Wednesday. Thanks so much, and we'll see you next time on Big Technology Podcast.