Moonshots with Peter Diamandis - The Man Who Invented Prompt Engineering on AI, AGI & The Future of Humanoids w/ Richard Socher & Salim Ismail | EP #152
Episode Date: February 25, 2025

In this episode of WTF is Happening in Tech, Richard, Salim, and Peter discuss the latest news in tech and AI, including the LLM war, Grok's update, and more. Recorded on Feb 24th, 2025. Views are my own thoughts, not financial, medical, or legal advice.

Richard Socher is the founder and CEO of you.com and co-founder and managing partner of AIX Ventures. Richard previously served as the Chief Scientist and EVP at Salesforce. Before that, Richard was the CEO/CTO of AI startup MetaMind, acquired by Salesforce in 2016. Richard received his Ph.D. in computer science at Stanford. He is widely recognized for having brought neural networks into the field of natural language processing, inventing the most widely used word vectors, contextual vectors, and prompt engineering. He has over 205,000 citations and served as an adjunct professor in the computer science department at Stanford.

Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in exponential organizations. He is the Founding Executive Director of Singularity University and the founder and chairman of ExO Works and OpenExO.

Get one year free of you.com Pro: https://you.com/moonshots
Join Salim's ExO Community: https://openexo.com
Twitter: https://twitter.com/salimismail

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:
Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/
AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter
Get 15% off OneSkin with the code PETER at https://www.oneskin.co/ #oneskinpod

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now.

Connect With Peter: Twitter | Instagram | YouTube | Moonshots
Transcript
If you were given a couple of billion dollars, you'd be able to build a digital superintelligence.
How quickly?
I think probably like a year and a half to two years.
Richard Socher.
Richard Socher, often called the father of prompt engineering.
He's one of the top five most cited researchers in AI.
Former chief scientist at Salesforce.
The co-founder of the AI-powered search engine you.com.
We're too late to explore the oceans and the world.
We're too early to explore maybe different galaxies.
We're right on time to explore super intelligence.
Why haven't we yet seen a kind of agentic version of a Jarvis that just watches your
tasks?
Programming, science, research, that's where the next frontier is for a lot of these amazing
models.
I cannot believe that we're alive right now.
It's like people should realize
how extraordinarily lucky we are.
Undeniable.
Now that's the moonshot, ladies and gentlemen.
Everybody, welcome to Moonshots,
another episode of WTF Just Happened in Tech this week,
here with Salim Ismail, Peter Diamandis, and we have AI royalty with us today. Richard Socher is the fourth most cited individual across AI. And Richard, what's the proper way to phrase your dominance in citations?
I have over 200,000 citations,
invented one of the most popular word vectors, got
neural networks into the field of natural language processing.
Invented prompt engineering.
That's right.
Incredible. And Richard is the founder and CEO of you.com.
We'll get into that a little bit later.
His company MetaMind was acquired by Salesforce,
and he was the chief scientist and EVP at Salesforce,
and a lot more.
Salim, welcome as well, buddy.
Good to be here.
Yeah, so a lot happening this week in the field of AI,
and I wanna dive into that, Richard,
get your extraordinary point of view here.
I wanna start with the launch of Grok 3. If I had to sort of like
tier all of the activities that have just occurred, and I want to contextualize it on
the notion that it wasn't very long ago that Elon raised $6 billion. I was, full disclosure, an early investor in xAI, and he announces he's going to create the largest GPU cluster on the planet, make it coherent, and he does that in 122 days and blows people away.
Were you shocked, Richard, on how fast he built what he did?
Elon executes, and with $6 billion, you can do a lot of damage in AI. I mean, we've seen companies like DeepSeek and that hedge fund build amazing models with much less, so in some sense it's amazing. And it is surprising how quickly they got that far.
But in some ways, you can expect some of these with exponential
technologies like AI, enough resources, you can go hard
pretty fast.
Yeah, my standard phrase is, don't bet against Elon.
I just saw him last week in Miami.
I was there for the FII summit.
And the guy does execute.
He's got an incredible team.
So I'm curious about how you're benchmarking Grok 3.
Apparently, it's outscoring, you know,
ChatGPT, Gemini, DeepSeek. How do you rank it against them as an AI engine?
So we actually have Grok 2 already within you.com too, and it's a popular model, though there are
others that are even more often chosen by our users. I think what's interesting is, you know,
Sam Altman also talked about how the next generation of models are going to be almost the level of a PhD student. But what we
notice is that not many people are PhDs and have PhD level
questions in their lives. For more and more people, I think
we've reached a level of informational needs and
knowledge needs that is good enough for them. So now you kind
of push harder on really hard tasks like programming.
We've seen some exciting announcements today: Anthropic's 3.7
model. I think programming, science, research, that's where the next frontier
is for a lot of these amazing models. Salim, what have you been hearing on the
ground? I'm hearing Grok 3 is incredible, but the claim that it's outperforming all other AI models seems
to be a little bit more hype than reality. I think it's coming in as far as I can see when I scan
Twitter or X a little bit lower than them but still unbelievable that he's been able to achieve
this in such a short period of time. I'm fascinated, Richard, because you guys do federated AI, because you have access to many models, right?
So I'm really interested in hearing more about your model and what you guys are doing.
But just on the Grok 3 thing, for me the biggest thing is the ability to achieve coherence across such a large cluster.
That part blew my mind, because as far as I could see, every AI expert said you can't do it.
And I, Richard, I'd love to get your kind of take on that piece of it.
Yeah, I think not many people would have been able to set up
a big cluster that quickly.
I think in many ways, that is a combination of hardware and software.
And a lot of folks like me are more software people.
And a lot of AI folks have been spending most of their time in software.
And so I think it kind of speaks to his ability
to like work in both hardware, just sort of where he comes from
much more of Tesla and SpaceX and such. But now moving into
like scaling that up and actually getting all the software
components. At the same time, of course, there are companies like Anyscale and others that are making it easier and easier to deal with massive clusters. Anyscale allows you to scale from five GPUs to 5,000 GPUs within a few lines of code.
And so the layers of abstraction are going higher and higher.
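For readers who want a concrete picture of what that looks like, here is a minimal sketch using Ray, the open-source framework that Anyscale builds on; the cluster address and GPU counts are illustrative assumptions, not details from the episode.

```python
# A minimal sketch (not from the episode) of scaling a GPU task with Ray,
# the open-source framework behind Anyscale. The same code runs on a laptop
# or on a large cluster; only the cluster you connect to changes.
import ray

ray.init()  # on a managed cluster you might pass address="auto" (illustrative)

@ray.remote(num_gpus=1)          # each task reserves one GPU; drop num_gpus to run on CPU
def run_inference(shard_id: int) -> str:
    # Placeholder for real model inference on one data shard.
    return f"shard {shard_id} done"

# Scale from 5 tasks to 5,000 tasks by changing one number.
futures = [run_inference.remote(i) for i in range(5)]
print(ray.get(futures))
```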
And thanks to AI, we're all partially operating at higher levels of abstraction. I'm curious about how people can evaluate these against each other.
At the end of the day, I think about human IQ tests
as an interesting metric to evaluate them.
I was fascinated when Claude 3 came out with an IQ of 101, and then, was it o1 or o3, came in at an IQ of 120? And I've been wondering about when we'll see something coming out at an IQ of 150.
Is that a relevant measure?
You know, IQ has like a lot of different dimensions.
I think intelligence overall
has a lot of different dimensions,
which we briefly talked about in our FII conference conversation.
I don't know if it makes sense to just boil it down to this one number. I think even the
Turing test is essentially broken in the sense that the best way to fail the Turing test is to
answer questions so much better than a human could. Like, write me an app in 30 seconds, and then if
it can do it, it's an AI. If it can't do it, it's human, right? So,
it's like there are many ways that we measured intelligence that are broken and I'm working on
helping the world kind of structure that measurement a little bit better by understanding
sort of what the dimensions are of intelligence and if there are upper bounds to some or if it
can just keep on growing. So, you know, it's interesting, right? So, you're providing access
to large corporations across most of the AI models.
How many AI models do you have on you.com?
Like 40 plus.
40 plus, amazing.
If you were going to,
just for people to get a sense of the largest
and most powerful models out there,
what's your list of the top five thereabouts?
You know, you can't ignore OpenAI still.
A lot of folks want to use OpenAI,
and especially o1 and o3 are quite popular.
We have a lot of fraud too,
people trying to trade accounts
and then just make us into a free API
and then make 10,000 calls in one hour,
and you're like, no one can read that.
This is clearly a bot attack that happens all the time.
Sonnet is still very, very popular, too.
Sonnet 3.5 and I'm sure 3.7 now.
This is Anthropic, yeah.
Yeah, Anthropic's model is probably
one of the best models for programming still.
So we actually have our own models, which are just fine-tuned open source models.
And then we also federate and ask different models,
depending on where people give the most positive feedback given the intent that they have.
So we classify the intent.
Is it a programming intent?
Is this a history or medical intent?
And then we route to different models and it changes.
Actually, the most surprising thing maybe is how often it changes, and how much mindshare DeepSeek also got in such a short time with not much of a marketing budget. Right. So that was a very popular model for quite some time.
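To make the federated routing idea concrete, here is a minimal sketch of intent-based model routing; the intents, model names, and feedback scores are made-up placeholders, not you.com's actual system.

```python
# Illustrative sketch (not you.com's implementation) of routing a query to
# different models based on a classified intent and on which model has earned
# the best user feedback for that intent.

# Running average of user feedback per (intent, model); values are made up.
feedback_scores = {
    ("programming", "model-a"): 0.91,
    ("programming", "model-b"): 0.84,
    ("medical",     "model-a"): 0.78,
    ("medical",     "model-b"): 0.88,
}

def classify_intent(query: str) -> str:
    # Stand-in for a real intent classifier.
    if any(word in query.lower() for word in ("code", "bug", "function")):
        return "programming"
    return "medical"

def route(query: str) -> str:
    intent = classify_intent(query)
    candidates = {m: s for (i, m), s in feedback_scores.items() if i == intent}
    return max(candidates, key=candidates.get)  # pick the best-rated model

print(route("Why does my function throw a TypeError?"))  # -> model-a
```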
Everybody, Peter here. If you're enjoying this episode, please help me get the message of
abundance out to the world. We're truly living during the most extraordinary time ever in human
history. And I want to get this mindset out to everyone. Please subscribe and follow wherever you get your podcasts and turn on notifications so
we can let you know when the next episode is being dropped.
All right, back to our episode.
Let me head to the next slide here.
So, Grok 3 benchmarks versus the competition.
And here are the numbers.
So these benchmarks, are they relevant and valuable?
I'm curious because everyone wants
to know how fast they're progressing.
This is on reasoning and test time compute.
Richard, how do you view this?
Yeah, I think there are two interesting insights here.
Indeed, most normal people don't have crazy hardcore coding, science, and math questions every day in their lives. So this is where we push science forward,
like I just mentioned earlier.
And that's where that frontier is really exciting.
The other really interesting bit here
is that we're looking at test time compute.
And so it doesn't even make sense anymore
to think about a single model's intelligence
because it turns out there's some fun research
that came out where you just say,
wait before you answer this and give it some more thought.
The same model actually does better and gives you more accurate answers.
Speed is becoming a dimension of intelligence, obviously overlapping with a lot of other kinds of intelligence. The faster you have to be, the less intelligent your answers are from these models.
What that also means is we may not have to worry as much about AI running away in open source, because you're going to have to have a lot of compute even at test time if you want to get the smartest possible answers from these models. So lots of interesting insights.
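As a toy illustration of the test-time compute point, here is a sketch that asks the same model a question twice, once directly and once with an instruction to think longer before answering; it assumes the OpenAI Python SDK and an illustrative model name, and is not the research Richard refers to.

```python
# Toy illustration of spending more test-time compute on the same model:
# ask once directly, then again with an instruction to think before answering.
# Assumes OPENAI_API_KEY is set; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

QUESTION = ("A bat and a ball cost $1.10 total. The bat costs $1 more than "
            "the ball. How much is the ball?")

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed example model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

fast_answer = ask(QUESTION)
slow_answer = ask("Think step by step before answering, then state the final answer.\n\n" + QUESTION)
print("direct:", fast_answer)
print("with more thought:", slow_answer)
```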
Salim, any questions for Richard?
I've got a big one, which is, you know, as we move towards AGI, I struggle massively
when people say AGI and what the hell does that mean even?
And so I'd love you're one of the few people that I think could give a cogent answer on
how do you define AGI?
And if we achieve it, how will we even know?
And you just put out a tweet, Richard, that I found interesting that said something like,
if you were given a couple of billion dollars, you'd be able to build a digital superintelligence.
How quickly?
I think probably like a year and a half to two years.
Was that a call for funding?
Everybody, listen, give me two billion dollars and I'll give you your digital superintelligence.
Yeah.
I mean, you know, I miss going on the research side, going hard,
you know, when you build the products and you, you know, make revenue, that's amazing. It's very
meaningful. But I think there's still a couple ways that the community is stuck on in terms of
research where we can really push it forward. I think in terms of AGI, indeed, the definitions
are so broad, right? Some folks say, well, it's when 80% of work can be automated.
And that's a very pragmatic way of just like, you know,
sort of financially defining intelligence.
Of course, I would say that maybe 80% of all digitized work
can be automated.
And then, you know, maybe 80% of all those workflows,
and that's already a huge amount of GDP,
and that could be a reasonable financial definition of intelligence.
But of course, if you're more academically inclined,
you have to acknowledge that there are certain kinds of intelligence and types
where you want to really get faster at learning too.
Like humans are able to just with one or two examples,
learn something.
So we call this sample efficiency, right?
And if you're really that intelligent,
you should be able to learn with much less data
along certain dimensions.
And so I think as we wanna define it really properly,
we're gonna have to go into the different types
of intelligence, visual intelligence, language,
reasoning, mathematical reasoning.
There's some type of social intelligence too,
even among AI is like,
what actions could I take to modify your internal state
in order to influence your actions, right?
So there are these different dimensions of intelligence.
Knowledge is a good dimension too,
which is quite unbounded, right?
We can learn more and more about the universe.
You end up hitting sort of physics based boundaries
of how much knowledge you can accumulate, based on the light cone around the different sensors that you may have. So the full definition probably takes too much time
here. But a financial pragmatic one of just like we automate a
lot of digitized work seems reasonable.
And what's your view of going into the physical realm? For example, Wozniak's test is, can you make me a cup of coffee? And now you're getting into robotics. Or the other one I've heard is, can you take an IKEA box and put the piece of furniture together? Right? Now you're getting into physical manipulation, which really is one of the core rationales for intelligence. Do you go into that world, or do you stay on the digital side because you can boundary it more easily?
I actually think that physical manipulation is another dimension of intelligence, or group of dimensions. And at the same time, you know, a deaf person can be very intelligent, a blind person can be very intelligent. A person that's paraplegic can be very intelligent, even though they can't manipulate matter.
So I think we have to accept the fact that these are not
necessary capabilities to have a super intelligence, right?
You can have a super intelligence that I think is purely digital, and it's just
different to our intelligence.
And I think people who insist on saying, oh, you've got to have a bunch of fingers and move around, maybe just haven't read enough sci-fi, or aren't creative enough in their definitions of intelligence.
At the same time, I'm loving the humanoid robots.
The tricky bit is that oftentimes we use robots
when we can do certain things,
when we want to do certain things many, many times,
very efficiently and very quickly.
Like wash the dishes or vacuum the carpet.
Exactly, for which we then have a simple robot, a Roomba or a dishwasher, right? And then we call it a dishwasher and we call it a vacuum.
We give it a specialized name.
I am, you know, Salim, you and I have had this debate a bunch and I'm curious about
your opinion still and Richard's, which is the whole open versus closed AI debate.
And do you feel like open is gaining on closed
and is that a definitive future?
Undeniable, undeniably open source is gaining.
When you have this much excitement around something
and it is a product and experience
that any normal person can appreciate, there's
so much energy that goes into open source, that it is very
hard to compete with that in the long term. The more niche you
are, the more technical it is, the fewer people can appreciate
using that technology. Like, let's say you do ion thrusters for, you know, satellites; no one's going to build an open source model for that and pour millions and millions of dollars of that excitement into it.
It's undeniable with DeepSeek that open source has been catching up, and I'm hoping we can eventually build one system where, almost like Wikipedia, people can contribute to it. No one does that yet. I'm going to have to do that at some point.
I have the same view. We saw this in the software world when you had Microsoft
running its internet server,
and then you had open source web servers,
and the open source web servers just absolutely took over.
It's 99.9% of all web servers are now open source.
And therefore, over time, that will always win.
So my question then is, OK, we're
going to be heading towards open source.
Got it.
We have still a number of closed source companies.
Are they eventually going to go open source?
Is there a winner-take-all scenario here?
I think there's a good chance that if you're a purely foundational model company, you're going to look more and more like a telco.
Like, huge CapEx, very expensive
to build, creates a ton of infrastructure that creates value, but it's unclear you can
capture all of that value yourself.
Thank you for using that analogy.
And I think that's the perfect analogy here.
So we're commoditizing and demonetizing all of this stuff.
I mean, if you look at the demonetization curves in terms of the cost per transaction.
It's like just this rapid de-escalation.
So how do you rationalize?
So in the telco space, folks need to realize we had massive amount of bandwidth being built
out in terms of fiber, in terms of cable, in terms of 3G, 4G, 5G, and all of the value was captured not there,
but captured on YouTube, captured on Netflix,
captured on apps on top of that.
And so how do you think about that, Richard?
Yeah, you can't build an Uber
without internet everywhere, you know,
but Verizon doesn't get a cut of Uber, yeah.
So I think that is why at you.com,
we haven't spent a ton of money
on training models from scratch.
And we've built a trust layer on top
that professionalizes this so that companies can really
use that technology.
And I do think more and more, thanks to DeepSeek,
of our existing and now new customers are realizing,
oh, yeah, we should partner with someone like you.
Because if a new model comes out in two months
and I'm stuck on a one-year contract with one of the closed-source companies, now I can't benefit from that.
Makes a ton of sense, because there's continuous competition, everybody's in a race down to the bottom, and if you become stuck with a particular model, you have no guarantee that you're going to be using the most efficient, lowest-cost model.
Yeah, we call it future proofing.
So what does a trust layer mean for you.com?
A trust layer is highly connected to data and helping people actually train on how to use the
technology. So we do certifications so everyone can become a manager of their AIs and all their
agents. And we incorporate not just public data better than anyone else, because we've been doing
it longer than anyone else. But we're also incorporating company internal data.
And so then you can actually start to trust it.
And then when you click on citations on you.com, especially in our more advanced
research modes, you will actually get sent directly to the quote and the browser
will scroll down and highlight, oh, this is where I found this fact.
So you can very quickly build that trust with them.
We taught our models to say, I don't know.
A lot of models, if they don't find the information
somewhere on the web, they'll just make up something.
We're like, don't do that.
So these are a lot of different moving pieces
to making it more accurate and building that trust.
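Here is a minimal sketch of the two behaviors Richard describes, grounding an answer in retrieved sources with citations and abstaining when the sources don't contain the answer; it is an illustration, not you.com's actual pipeline, and call_llm is a hypothetical stand-in for whatever model the query is routed to.

```python
# Illustrative "trust layer" style prompting: ground the answer in retrieved
# documents, cite them, and say "I don't know" when the context lacks the answer.
from typing import List, Dict, Callable

def build_grounded_prompt(question: str, docs: List[Dict[str, str]]) -> str:
    sources = "\n".join(f"[{i+1}] {d['title']}: {d['text']}" for i, d in enumerate(docs))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources like [1]. If the sources do not contain the answer, "
        "reply exactly: I don't know.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

def answer(question: str, docs: List[Dict[str, str]], call_llm: Callable[[str], str]) -> str:
    # call_llm is a hypothetical stand-in for the routed model.
    return call_llm(build_grounded_prompt(question, docs))

# Example usage with a fake retrieval result and a dummy model function.
docs = [{"title": "Company wiki", "text": "Our refund window is 30 days."}]
print(answer("What is the refund window?", docs, call_llm=lambda p: "30 days [1]"))
```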
You know, Salim, you and I have talked about this when we're advising companies and investors about investing in AI. It's about investing in companies that have a great connection with their end customers and with data, and then assuming that the layer in between is just going to constantly flip over: you replace it, get the latest, lowest-cost model, but keep the relationship with the customer base and the data sets.
Yeah, I think this is going to be key to success in AI platforms, right? And I think, Richard, it sounds like you at you.com have done an amazing job of creating that layer of
abstraction that protects people from the underlying thing. Because otherwise, one of the huge questions everybody has, and both Peter and I, and you yourself, talk to CEOs around the world, is when do you place your chips?
Because the minute you put your chips down on a particular model, it's out of date in
three months.
And so therefore, you really need platforms like U.com to help with that.
And I think it's fascinating to see what you've done there.
Here's another article from the New York Times,
for those listening to the podcast, not watching it,
says, OpenAI Uncovers Evidence of AI-Powered Chinese Surveillance Tool.
So, of course, we've had this entire, you know,
incredible back and forth with TikTok,
and now we potentially have it as well with DeepSeek. What are your views here, gentlemen?
I'm not surprised. How would it not be the case, would be my question. And then all these companies have downloaded, you know, DeepSeek and put it into their systems. But if you download the model and are utilizing it in isolation, is it still reporting back information that it's gathered?
You can take the open-source model and still force it to take stuff
from a prompt and from a search engine backend.
That is possible. You can actually also fine tune
the model to get rid of all the CCP alignment.
Fascinating.
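As a sketch of what running a downloaded open-weights model in isolation looks like, here is a minimal example using the Hugging Face transformers library; the model ID is an assumed example checkpoint, and once the weights are on disk, inference involves no calls back to the model's creator.

```python
# Minimal sketch of running a downloaded open-weights model entirely locally.
# After the weights are cached on disk, generation makes no network calls,
# so nothing is "reported back". The model ID is an assumed example checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # illustrative checkpoint
)

out = generator(
    "Briefly explain what a search-engine backend adds to an LLM:",
    max_new_tokens=80,
)
print(out[0]["generated_text"])
```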
All right. Our next story here is, and I love this, accelerating scientific breakthroughs with an AI co-scientist.
I love the fact that we saw the Nobel Prize going to Demis and John Jumper for the creation
of an AI model able to predict the folding of a protein.
My expectation, Richard, and you're both a deep scientist and a deep programmer, is that
almost all breakthroughs are going to come from AI in the not too distant future.
And we'll attach it to a human so a human
can get the Nobel Prize.
But it's going to be fundamentally
in materials and mathematics and science and medicine.
Am I wrong there?
100%, yeah.
I'm writing a book on this in my nights and weekends on AI for science and it's called the Eureka Machine.
It's sort of the working title and I'm a big believer.
Interestingly enough, also, when you ask a lot of folks all over the world about areas where they're scared of AI, most folks are scared that it takes their jobs. But in terms of science and medicine, no one wants more jobs. They just want more breakthroughs and cool discoveries. So everyone is aligned. Everyone worldwide is aligned: let's just have AI do a lot of science. So there's a lot of positive momentum behind it.
And I think we'll see more and more discoveries. First, with
the help of AI, and eventually, you might be mostly guided,
right, you need to kind of tell the AI, this is what we care
about the most, and then it can go off and do more and more in an automated fashion.
This is the area that I'm most interested in, because I think there are just so many possibilities; if you provide it with data sets and say, go formulate 5,000 hypotheses and start testing them, it can
do virtual testing of all sorts of things. And I'm incredibly excited as to what's going to come from
this. I love this last bullet here.
It says replicated 10 years
of antibiotic resistance studies in just 48 hours.
Dario was at Davos, Dario, the CEO of Anthropic,
and he said something which I clipped, which I love.
He said, listen, we're gonna see a century worth
of biomedical research in the next five to 10 years. And one can imagine that during that century of biomedical research, we would potentially double the human lifespan.
And so it's not unlikely we could double the human lifespan within the next decade.
So I'm always listening for those signals because, you know, I'm in it to win it on doubling the human lifespan.
And then we'll negotiate where we go from
there. We saw Larry Ellison, when he was on stage on Stargate,
announcing the idea that we're going to have, you know,
personalized mRNA vaccines against your cancer should you
have it. And so, for me, this is like one of the most
extraordinary areas of reinventing medicine, curing cancer, curing viral
infections, curing death, perhaps, who knows?
Yeah, a lot of people now say, oh, Brian Johnson and the longevity folks, that's a bad idea. I think, one, most of those people are healthy and aren't currently battling anything. And two, they're just like people before the birth control pill came out, right? They're like, oh, that's not natural. And yeah, there's a lot of bad stuff that's natural; murder is natural, and no laws are natural. That's just animal kingdom stuff, right? And so there are all kinds of bad natural things.
And humanity has been pretty good at improving
from that natural state. And I think it lacks a certain creativity when people think we can't
ever solve aging and health spans and things like that. So, you know, we in 2018 started the
largest project for a large language model for proteins. And we actually published that paper
when I was still at Salesforce. And we've had incredible success.
In fact, we believed in so much,
we worked with wet labs and actually synthesized
those proteins.
And they were 40% different to naturally occurring proteins.
And just to put that into perspective,
Frances Arnold, about four years ago, won a Nobel Prize for what you call directed evolution, which was random permutations with a lot of experimental science in the loop, and then saying, oh, this random permutation improved this particular property, so let's keep it and keep iterating. By the end of her very long process, those proteins were 3% different to naturally occurring proteins. And ours were 40%. And what
taught us that we actually captured the syntax, the grammar of these
proteins was that they folded properly and they had the properties we predicted them
to have and we wanted them to have.
And so there's a lot more work that comes from this bunch of startups have already started
and once you understand the language of proteins, all the medicine will follow.
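As a small illustration of what "40% different to naturally occurring proteins" means at the sequence level, here is a sketch that computes the fraction of differing positions between two aligned sequences; the sequences are made up for illustration.

```python
# Illustration of percent sequence difference between a generated protein and
# its closest natural counterpart, measured over aligned, equal-length sequences.
# Both sequences below are invented for the example.
def percent_difference(seq_a: str, seq_b: str) -> float:
    """Hamming-style difference between two aligned, equal-length sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100.0 * mismatches / len(seq_a)

natural   = "MKTAYIAKQRQISFVKSHFSRQ"
generated = "MKSAYLAKQRELSFVRSHFGRQ"   # hypothetical model output

print(f"{percent_difference(natural, generated):.0f}% different")
```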
This goes back to, Salim, your point about AI interfacing with the physical universe, right? So another friend, Alex Zhavoronkov, the CEO of Insilico Medicine, one of the things that
he's done, and he was very early in generative AI and drug
discovery, but he's built a massive robotic laboratory where he can basically have the AI come up with
experiments and run those experiments 100 times faster than humans, get the data, iterate
the experiment, run the experiment.
And so you literally create a theoretical world and a physical world.
I find that extraordinary.
I think we're going to see hundreds of examples like this where people now, the only limit
is our imagination and how fast we can apply some of these because the speed of the technology
is now at a level where we can pretty much go down any avenue we want.
Me personally, I'm looking for how you reconcile quantum mechanics with relativity. As a physics major, that's my thing, and I think, yeah, we'll be able to figure it out.
Yeah, I cannot believe that we're alive right now. It's like people should realize how extraordinarily lucky we are.
I don't think this is just that, you know, every generation feels like they're alive during the most extraordinary time, whether it was, you know, at the beginning of
flight and electricity and the internet and so forth, but I think we're...
We're too late to explore the oceans and the world. We're too early to explore maybe
different galaxies, but we're right on time to explore super intelligence.
Yeah, for sure. You know, the other area besides medicine is material sciences. So we just
saw MatterGen out of Microsoft, right? Talk about prompt engineering, my friend, your
prompt engineering has now gone to a completely different level: please design me a new material that is superconducting, that includes these elements, at this cost, that can be manufactured. You know, it's like crazy.
If we get like a room temperature, normal pressure
superconductor from that, it would be world changing.
And I'm very, very excited for that.
Yeah.
The nice thing about chemistry is that unlike biology,
you can iterate even faster, right?
There's no living tissue or you don't have to run FDA trials and so on.
You can just iterate even quicker in that loop.
Yeah, and you know, Salim, you and I have always said material science is at the foundation of everything else, and we consider material scientists heroes in our world. All right, Salim, what do you think about this one?
Satya Nadella on quantum breakthroughs, quote, we believe this breakthrough will allow us to create a truly meaningful quantum computer, not in decades, but in years.
I think this is beyond huge. As we get to this, we have to keep in mind the limitation that quantum computers are only good for certain classes of problems, so there's that limitation. But the fact that you can create stable environments is really something huge. I go back to Hartmut's comment that the existence of a quantum computer...
Hartmut? Hartmut, yeah.
His comment, and it gets very metaphysical very quickly, because he said the existence of a quantum computer may be proof of a multiverse, and your head kind of just breaks right then. So, Richard,
I'd love to get your take on this because you're crossing both these areas. He goes a step further, right?
He says the only way quantum computers can do all of the calculations as rapidly as they do is that they're borrowing resources from a near-infinite number of adjacent universes. They're doing the computation in parallel universes and bringing the answer back.
I love it.
At which point they're gonna be pissed when they find out we're stealing their resources.
So there's that to think about as well.
Hey, Richard, what's your view on all of this?
I'm super excited.
I think any domain you can simulate, AI can solve pretty much every problem in that domain; it's just a matter of time and whether humans want to put that effort in. You can simulate Go, you can simulate chess, so chess is obviously solvable by an AI, because AI can learn in two ways, right: either imitation or exploration, aka supervised fine-tuning or reinforcement learning. And so when you can allow a simulation to just train and try billions and billions of things, it can get smarter over time. What quantum computers will enable us to do, once we scale them up, is to simulate much more of physical reality.
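As a toy illustration of the "exploration" route Richard mentions, here is a minimal tabular Q-learning loop on a made-up corridor game, where the agent improves purely by trying actions in a simulator; it is a sketch of the general technique, not any specific system discussed.

```python
# Toy example of learning by exploration (reinforcement learning) in a
# simulator: tabular Q-learning on an invented corridor game where the agent
# must walk right to reach a goal state.
import random

N_STATES, ACTIONS = 6, [0, 1]            # states 0..5; actions: 0 = left, 1 = right
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0    # reward only at the goal
    return nxt, reward, nxt == N_STATES - 1

for episode in range(300):                # real systems try billions of rollouts
    s, done = 0, False
    for _ in range(100):                  # cap episode length for safety
        if random.random() < eps:
            a = random.choice(ACTIONS)    # explore
        else:                             # exploit, breaking ties randomly
            a = max(ACTIONS, key=lambda x: (q[s][x], random.random()))
        s2, r, done = step(s, a)
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
        if done:
            break

print("learned policy:", ["right" if q[s][1] >= q[s][0] else "left" for s in range(N_STATES)])
```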
One of my favorite science influencers, Sabine Hossenfelder, put a little bit of a damper on this particular announcement, saying, oh, you know, we'll see if they really can scale it.
But I'm very excited. I'm excited that there are different ways of approaching that, you know, like the trapped ions, the neutral atoms.
It's interesting. You hear a lot of quantum scientists kind of diss the other approaches and think their approach is the best.
And then comes this total left-field one, these topological qubits, that no one else had been working on. And I just love the fact that there's this energy, and that, you know, in some ways we have companies with such a massive monopoly in their space that they have all these extra resources to do 17 years of research before something comes out of it.
Amazing.
And honestly, thank you to Google and Microsoft
for investing in this direction, because there
was no immediate return.
We saw Hartmut Neven's latest, remind me what his breakthrough was a few months ago.
He announced that the larger the number of qubits, the more stable it became.
And Majorana, is that how you pronounce it? Majorana one?
Majorana, yeah.
Yeah. Incredible.
It was about 13 years ago, I had my two kids, my two boys.
And I remember at that moment in
time, I made a decision to double down on my health.
Without question, I wanted to see their kids, their grandkids, and really, during this extraordinary
time where the space frontier and AI and crypto is all exploding, it was like the most exciting
time ever to be alive.
I made a decision to double down on my health. And I've done that in three key areas. The first is
going every year for a fountain upload. You know, fountain is one of the most advanced
diagnostics and therapeutics companies. I go there, upload myself, digitize myself, about 200 gigabytes of data
that the AI system is able to look
at to catch disease at inception. You know, look for any cardiovascular, any
cancer, neurodegenerative disease, any metabolic disease. These things are all
going on all the time and you can prevent them if you can find them at
inception. So super important. So fountain is one of my keys. I make that
available to the CEOs of all my companies, my family members, because
health is the new wealth. But beyond that, we are a collection of 40 trillion human cells and about another hundred trillion bacterial cells, fungi, viruses, and we
don't understand how that impacts us. And so I use a company and a product called Viome.
And Viome has a technology called Metatranscriptomics.
It was actually developed in New Mexico,
the same place where the nuclear bomb was developed,
as a bio-defense weapon.
And their technology is able to help you understand what's going on in
your body to understand which bacteria are producing which proteins and as a consequence
of that, what foods are your superfoods that are best for you to eat?
Or what foods should you avoid?
What's going on in your oral microbiome?
So I use their testing to understand my foods, understand my medicines,
understand my supplements, and Viome really helps me understand from a biological and data standpoint
what's best for me. And then finally, you know, feeling good, being intelligent, moving well is
critical, but looking good. When you look yourself in the mirror saying, you know, I feel great about life is so important, right?
And so a product I use every day twice a day
is called One Skin, developed by four incredible PhD women
that found this 10 amino acid peptide
that's able to zap senescent cells in your skin
and really help you stay youthful
in your look and appearance.
So for me, these are three technologies I love and I use all the time.
I'll have my team link to those in the show notes down below. Please check them out.
Anyway, I hope you enjoyed that. Now back to the episode.
All right, let's go on to our next topic here. So Microsoft dropped some AI data center leases.
The cancellation of US data center leases raised concerns about AI infrastructure overcapacity and shifting partnerships, and the move sparked industry reactions, including in European energy stocks. So there's been a lot of build out. You know, this ties directly to energy as well.
I keep on hearing, and Richard, I'm
curious, and Salim, your point of view,
that there's an open checkbook for building out capacity
and building out energy.
We're seeing small modular reactors, SMRs.
This is fourth-generation nuclear setting up next to these data centers. And, I mean, you know, I don't get into politics here, but Trump is like drill, baby, drill.
You know, it's like we need as much energy as we can in the US to support this industry.
Are we overbuilding or are we not even close?
I believe we're overbuilding.
You think we're overbuilding?
Yeah, I believe we're, and I'll tell you why.
Because, you know, you look at, say, DeepSeek and the massive breakthrough for a much smaller cost, right?
The incremental effort to create the next generation is dropping 10x every time we go through this and therefore we should get to a point where
training can be done very inexpensively and then you've spent a lot more time on
inference, and therefore the amount of build out is exaggerated, because it's aiming for a model, or a size of model, that was there six months ago when you started the building, and that will not be the case when you finish the building.
So that demonetization aspect of it I don't think is being taken into account.
They're building for the capacity they think they'll need given the projections, without realizing that those projections will be wrong. That's my general complaint. Richard, you may have a more specific...
I mostly disagree. I've been talking for over a year, and a lot of other folks have recently picked it up, about Jevons paradox, right? When we make things more efficient, we actually use more of that resource. And I think we're seeing that play out with intelligence.
And so we'll just use intelligence in more and more
places, everyone will have a personal assistant, a personal
health team, a personal tutor, and we'll just use all of that.
On top of that, there are so many things on the AI side, and I
can talk about that forever. But a lot of human problems are
related to not enough energy. So even like when people say, Oh,
there's a shortage of water,
there's obviously no shortage of water, just happens to have too much salt in it,
which is an energy problem.
Right. So all these water fights that are going on, it's like, well, if you have more energy, just desalinate ocean water and problem solved. There are all these deserts that you can't live in right now because there's not enough water. Well, with enough energy, those problems go away too.
So my hunch is we're going to find a lot of uses for that energy.
Now, where I do agree with you on that one small bit is when you build a lot of
data centers, you also need to have data that actually goes into those data
centers and you don't want to have like a real estate crisis where you build a
lot of buildings, but people don't move into them.
And so I do think, you know, I have some ideas on how to fix that, but my hunch
is data will increase, energy needs will increase, and
intelligence will get cheaper and cheaper, but we'll just use more of it everywhere.
So let me distinguish between energy needs, of which I think we need lots of energy, right? And specifically data centers, which apply that energy in a particular way; I think we'll need less of that than people think. But we definitely will use all the energy we can for desalination of things.
So yeah, we're kind of generally coming to agreement there.
Before we get into Thinking Machines here, in this article from TechCrunch about Mira's new startup, I am curious.
Over the last year, we've seen this constant flow of the leadership of OpenAI out of OpenAI,
which is concerning. I mean, I'm not an investor in OpenAI. If I were,
I'd be very concerned. What do you think's going on there? I'm curious. Open question, either of you
guys. I think the doors are very open. I think that the basic general thing is if you get to
that level and you're suddenly the hottest property executives or deep researchers in
OpenAI, you can essentially go follow your passion and go find your MTP and go
build something with the murals doing what she's doing or any of the other
rafter people. Some may be interested in health care and the specific
application there and they can now have the currency to go do that. I think a lot of it has to do with that and a secondary layer of the speed and move fast and break things a person
Sam has for how to build stuff that is concerning a lot of people. Then you've got the third class
of people kind of really nervous that we're moving this quickly without adequate wisdom and thought
as to what we're building here. And I'd be curious, Richard, as to where your reaction is towards the emphasis on those two or three different areas.
I think at a very high level, zooming out a little bit, the fact that California doesn't have enforceable non-competes, and the rest of the US is actually moving towards that, is tough for companies. Very often research costs a lot of money. But once you show the world that something
is possible at all, it's much, much cheaper to copy it. And it's also much easier, knowing how you've done it in one place, to go and take that knowledge without taking any code, because it's now stuff in your head, and then go do it cheaper somewhere else. And honestly, overall for the ecosystem, it's a positive thing, where we're
just going to see cheaper, better, faster models.
So let's talk about Thinking Machines. Any clue about what Mira is going to focus on?
So I guess lots of smart people joined her,
John Schulman, who led the ChatGPT application of the LLMs, you know, which had been available as APIs before; we had already incorporated them, before ChatGPT came out, inside you.com in a search-engine-like context. And so having some amazing folks who really understand the technology and also have ideas for building products is probably a very positive thing. And I mean, she describes, they describe a lot on their website.
My hunch is they're gonna try to explore.
I hope they don't just build another LLM.
I think there's so much more stuff out there,
but yeah, we'll see.
Yeah, what I find fascinating and Salim,
I'm curious about your point of view here is
I think her starting valuation is $30 billion.
I mean, it's crazy.
Everything's gone up. Yeah, it used to be millions of dollars and now it's billions of dollars. I don't know how quickly you can justify or monetize this stuff. I think we're headed for a pretty big bubble as we get to the application side of this.
Because when you get to the end user, it's demonetizing so quickly that where's the revenue
will be the big question over time?
I think putting my investor head at AIX Ventures on for a little bit, the way we think of this
is that it's essentially seed stage risk combined with late stage returns.
And so as an investor, that effective value just doesn't quite work out.
But it doesn't mean that no one will succeed, right?
It's just seed stage risk.
Once in a while, every, you know, five, 10 percent of seed stage companies
actually do something amazing.
And like one or two of those in the power law as an investor
really blow out and return the entire fund multiple times.
And so there are a few such possibilities,
but man, it's really tough and the bar is so high to be able to get enough revenue to eventually
be able to justify these high valuations.
So can I just riff off that for a second? Richard, when you're trying to invest in AI startups, you've got to figure out, A, does the founder or the team have something really magical? And B, can they get to market and find product-market fit? And that's a big, big challenge today. How do you guys assess that? Do you look for proof points, can they get revenue? Or do you invest in stuff that has a massive breakthrough and has the potential, and you hope that the potential yields? Where do you put your chips on that?
Yeah, so we've been doing really well; fund one is already at like 5x TVPI, and it's only like four years old or so. And so we're looking, there are sort of two ways to slice and dice it. One is there's a horizontal new infrastructure layer, right?
And in that you have companies like Hugging Face.
I was very fortunate.
They were my students when I was a professor at Stanford; I invested at a $5 million valuation round. They're at $4.5 billion now.
So there are a few of those that can break out
and really become part of this new stack
of building software that is fundamentally different with
AI. And Cursor is a similar one; the Cursor CEO was actually an intern of mine. I'm really bummed I didn't get to invest in that one. And so then there are thousands of application companies, vertically, that are sitting on top of this new stack. And so there, we look for deep industry insights
and deep AI expertise,
like teams that actually understand,
my buyer will want this feature
and they don't just sort of go off
and try a bunch of different things
and spend a lot of money.
Are proprietary data sets something that you look for
or find exciting in all of this?
The best companies will have
what I call virtuous data cycles, at least.
If they don't have a direct data access already, they are building a product that as you use the product, you collect more data.
Now, one of the reasons why Tesla is much better suited, and why we've seen a lot of self-driving car startups die, is that they have to pay for every mile driven by a human to collect data, versus with Tesla, we all drive the car, we give the data for free, and we actually pay to drive the car to collect that data. Right. And so that is a perfect example of a virtuous data cycle. And you see that in various SaaS software like you.com: we get people giving us feedback, like this was a good answer, this wasn't a good answer, I didn't like this part, and so on. And so
those are sort of ways where you can build some kind of
advantage over time.
So I get two things out of this: one, Elon owes us money. And number two, to be really successful in AI, be Richard's intern at some point.
All right, guys, that was fun. And I think the other side of AI
is one of my favorite topics.
It's humanoid robots.
I was building robots when I was in junior high school, but they didn't do what the robots
today do.
So, I'm going to share a short video here.
This is a robot called Clone.
I contacted the CEO and he's going to be bringing his robots to the
Abundance Summit next year, but let's check out a little bit of a video here.
So what Clone is doing is basically creating, what's that, Westworld. So these are muscles, they're hydraulic systems, and that video is under-representing what it can do in terms of moving the hands. They hope to have it walking in the next few months. They're based in Eastern Europe, where they're doing a lot of the work. But talk about an interesting future of robots where, I mean, a lot of the robots today out of the US and China are clunky walkers; they do walk, but they don't have that human, emotional, fluidic movement. But these might.
It's interesting that they chose to work in that way, you know, in the sense that brushless motors have kind of helped us get an amazing amount of cheaper prices and incredible capabilities in robotics. That's my first thought. The second
one is, I think the dark horse here, similar to DeepSeek, is Unitree. Unitree has some insane videos. They look like CGI, where you have four-legged robots that also have wheels, which I think is a clever idea, and they're super fast, but they also jump and climb up stuff and spin the wheels at the same time. That's the second thought. And the third one is, yeah, I'm excited. And now the question is always, what's the really most amazing use case for humanoid robots versus, you know, like a tractor or a factory where you just have a bunch of little lasers and thousands of arms and things like that.
You wouldn't want a bunch of humanoid robots walking over a field, similar to the dishwasher stuff we talked about earlier.
At the same time, it's not a zero sum game. There's a ton of cool stuff.
I would totally buy a humanoid robot to have like stuff be done in my house and just kind of clean and they can do it at night.
Right. So they don't have to be super fast.
And now the fourth comment is, I feel like everyone works on the AI version of robotics,
like the original Terminator.
No one works on the T1000.
And one of my many ideas is actually to build a T1000 like robot.
And I have a bunch of ideas.
I recently jammed on it with a really brilliant hardware hacker, and he's like, you know, this could actually work and make sense. So, project number five, if I have something.
Uh-oh.
You heard it here folks.
Jim Cameron was right.
And it's all going to be due to Richard Socher.
I got to say something here.
You know, if you want a musculoskeletal humanoid robot, you get a man and a woman and you have a baby, and you grow the baby. I mean, I really struggle with this. Like, you know, we talked earlier, right? If you want a dishwasher, you have a machine that sprays water in a particular way, and it looks like a box, and you have trays to put dishes in, whatever. Same with the vacuum cleaner. The point that Richard just made, it's so much more powerful to have wheels on the legs, etc., etc. Why are we constantly going back to the human form? We've had this argument, frankly.
This, I think, drives me nuts. You're just wrong.
I'm just wrong? The argument we've had is, I kind of say, if you're gonna build a robot, have one with seven arms. It can do many more things. Why make it look like a human?
Well, because it's cool.
So I am an investor in Machina Labs, too. They built these massive arms that can form sheet metal. And they were with SpaceX and a bunch of folks. Whenever you don't want to build an entire factory to make that same large piece of metal millions of times, but you need it like 200 times, they're perfect; they can literally ship a factory that creates any spare part into the field somewhere, and then you have, almost like a blacksmith, but massive and AI. And they're also like, oh, anti-humanoid. Now, again, it's not a zero-sum game, right? I think some
people want like a beautiful humanoid-like robot in their house,
but we can still have dishwashers and factory robots and so on that are very custom purpose
and look crazy funky with 20 arms, and you know, the excitement for robotics doesn't have to be zero-sum.
All right, we have a lot of robot announcements this week, so let me continue on here. Next up is Neo Gamma.
So listen, I think this looks pretty damn cool.
I mean this is in terms of its motions.
Now how staged this is and how practiced.
We don't see the 37,000 shots that went wrong, but that looks like a pretty friendly home robot.
You know, one of the questions I ask everybody is, how many will you own?
You know, when I interviewed Elon and Brett Adcock, Brett's the CEO, we'll see him in a minute, CEO of Figure.
And of course, Elon oversees Tesla and TeslaBot, now called Optimus.
The projection is as many as 10 billion robots by 2040.
And I have no problem imagining I would own, you know, two or three,
maybe 10.
So, Salim, no, not you.
You know, my struggle with this, I mean, one robot moving very quickly is the same as seven of them. And again, why does it have to look like a human being? It would be much better with wheels and seven arms. You could have it making coffee at the same time.
So I think I feel more comfortable having, you know, a humanoid robot walking around the house than some strange-looking contraption.
I think we're going to end up with the problem in the same way with virtual reality with
the uncanny valley, where it's very disconcerting.
I think we're going to have the same thing with humanoid robots.
For sure.
And sci-fi is kind of underrated in showing us sometimes also the positive ways.
Like, you know, people will fall in love with their robots and they'll have these androids.
Now, I think short term, we're going to see a lot of folks just remote controlling a robot, collecting training data that way. And so part of the uncanny valley is you
may have someone in India or somewhere sitting, looking into your entire home, being able to
navigate everything, seeing your kids, opening your doors and everything. And you kind of have to be okay with that invasion of privacy potentially, right?
And then once they get good enough, then you're right, like they could be faster.
I mean, they could put on like wheels and like, you know, shoes with wheels on and then, you know,
and attach another arm if we really want them to. You know, they can be more modular that way.
So I'm excited for it.
All right. So that was Neo Gamma from 1X Technologies. Let's go to the next robot here. And this is Figure AI. So just for disclosure, I'm an investor in Figure; I don't know if you are, Richard. This is Brett Adcock's company, and they just
announced their software. Interestingly enough,
Figure used to have a software relationship or a gen AI relationship with OpenAI
and they shut that down and they decided to build their own AI team internally
and to build Helix. And I think the logic there is in the same way
that Tesla got so much data from autopilot
as we were driving it around that allowed them to create
these incredible models, that Figure's AI, and I really hope they come up with a separate name for it, because calling the company Figure and the robot Figure,
it gets a little bit confusing,
but they're gonna get a lot of data
and that's gonna train the AI in the physical universe.
Let's take a look at their video. So, Salim, instead of having four arms, you have two robots, and they collaborate.
It's called collaboration.
I think this is going to take a much longer time for people to work out than people realize.
But you know, it's fantastic to see the speed at which it's moving forward.
Because if 10 years ago when we were first looking at robots, it was really hard to imagine
they would get this level of dexterity.
The DARPA Grand Challenge?
Yeah.
Remember the DARPA Grand Challenge?
Yeah, it was so clunky, so it's fantastic to see that. But the use cases and the application areas are where I think it'll be, you know. My Roomba still cannot clean a room without me moving all the furniture around for it. So we've still got a ways to go.
Yeah, and I think robotics has done a phenomenal job where we can constrain the environment a little bit more. That's why self-driving is also a fairly constrained environment; it is standardized in a lot of places. The highways all look the same, road signs are standardized, things like that. Houses have very little standardization. And you're right, it will be very, very hard. And the companies that are actually able to get through and get one use case so nailed, one that is big enough and important enough for folks, will have a huge advantage. But it is harder than most people think, and it'll be very capital intensive. And then the question is, can you be a fast follower out of China and just say, oh, this is how they do it, now we reverse engineer it, and then you can leapfrog and skip all the expensive research stage.
And I'll go to my favorite use case: it's gonna be a
while before you get one of these humanoid robots and say, go change the baby's diaper. There's just so many things that can go
wrong with that. Yes, I still love the image of walking into the room and the robot is holding the baby by one
foot. The funniest comment I saw on this Figure video was, uh, this reminds me of two of my buddies being really stoned and trying to...
The doorbell is about to ring, it's my Figure robot coming over there to give you a hug.
So answer it, be nice.
All right, I can't help but do an episode without Bitcoin.
Let me begin with a question to you, Richard.
Are you a believer in Bitcoin?
There's a faith component here when I say a believer.
Are you a holder of Bitcoin?
I have just a tiny bit here and there.
I'm invested in a fund that does a lot of crypto things just to have a little bit of exposure.
But I mostly want to focus on AI and find it a bit of a distraction.
So I'm not really deep in it. Well, when focusing on AI,
I mean, listen, AIs, and agents, are going to need to have mechanisms for transacting financially. So,
you know, let's take it slightly sideways to
cryptocurrencies for AI agents to do business amongst each other.
What do you think about that?
I mean, it makes sense, but they can also do that with credit cards, right?
Like, we'll have AI make credit card purchases fairly quickly.
I was a little bit dismayed when I actually tried to play around with the technology.
The gas fees and so on were also pretty high.
And I'm like, wait, this is like a credit card fee almost.
This already costs a lot of money.
I'm like, this doesn't seem right.
So I don't know.
I feel like they need to really lower the prices so that the transactions
themselves are insanely cheap.
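To put rough numbers on that comparison, here's a back-of-the-envelope sketch; the gas price, ETH price, and card-fee schedule below are illustrative assumptions, not figures from the conversation:

```python
# Illustrative numbers only: gas prices, ETH price, and card-fee schedules
# all move around; this is a back-of-the-envelope comparison, not a quote.
gas_used = 21_000                    # gas for a plain ETH transfer
gas_price_gwei = 30                  # assumed network gas price
eth_price_usd = 3_000                # assumed ETH price

# fee in USD = gas used * gas price (converted to ETH) * ETH price
onchain_fee = gas_used * gas_price_gwei * 1e-9 * eth_price_usd   # ~$1.89

purchase_usd = 20.0
card_fee = 0.029 * purchase_usd + 0.30                           # typical ~2.9% + $0.30 -> ~$0.88

print(f"on-chain transfer fee:       ${onchain_fee:.2f}")
print(f"card fee on a ${purchase_usd:.0f} purchase:   ${card_fee:.2f}")
# For agent-to-agent micropayments, both need to be orders of magnitude cheaper.
```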
Yeah, there's a whole stack here. You've got Bitcoin
with very expensive transaction fees, you've got proof of work moving to proof of stake,
and as you get closer to the end use, you need less security. If I'm storing jewelry in a bank vault,
you have a lot of security, but you don't do that many transactions. When it comes to
a debit card, you can have much less security; the transactions are limited to like $50 each.
And therefore, you can lower the security in exchange for the volume.
I think that's the kind of thing we're going to see in the crypto world as well.
How nervous do you get, Salim, when you see the price now, at this very moment?
I'm actually, yeah, so I'm really encouraged by what's happened here.
So two things happened over the last few days.
One was the Bybit hack, which was the biggest hack ever. In previous years, this would have caused a
massive collapse in the crypto world, and the market barely even noticed. And the second was the
response from the exchange and its CEO: we're going to get everybody made whole again very
quickly, et cetera. It gives me encouragement that there's robustness being built into the ecosystem, which gives
people a lot of confidence going forward. So I'm pretty excited about where this will
go. The Trump meme coin did not help the crypto world at all, and that's really unfortunate.
But that's life. You get what you ask for.
You didn't buy it, did you?
No, no, no, no, no, because you can see it's only going one direction.
So if you don't mind, you mentioned the Bybit billion-dollar hack.
Can you unpack it?
Yes.
Yeah.
So what happened was one cold wallet which
stored a lot of Ethereum got hacked and suffered a massive withdrawal. Now the
challenge here, if you're the hacker, is that you want to
move this into anonymous places and kind of wash the transactions, because crypto
is fairly traceable. There are appeals to Ethereum, right up to Vitalik, to say, can
we just roll back the chain to before the hack, and it'll just undo the hack, basically. So there's a
call for that, but trying to wash all the currency out is going to be very, very
tricky to do, and everybody's watching all these wallets, where the funds are going,
very, very carefully to find out who it is. I don't know how you sustain this.
I'll just repeat: I'm really encouraged by the response from
Ben Zhou and the Bybit folks saying, we're gonna just navigate all
this, we're gonna keep everybody whole, and by the fact that they had enough backup
to do this. In general, what we've found in the crypto world is you want to not keep
major wealth on a centralized exchange, for this exact reason.
Mt. Gox, a lot of people lost a lot of money on that early on.
So you keep it offline and you do trading on these exchanges, but not storage of value.
Yeah, I know.
But every time I use a Trezor or a Ledger, a thumb-drive wallet, I pucker up when I go and plug it into my computer.
It's non-trivial. It's very tricky. This goes to that whole usability idea, right? I remember your comment about when a technology goes from deceptive to disruptive, the usability becomes 10x, 100x better.
So Steve Jobs made the smartphone usable and boom, it took off. Coinbase made the purchasing of
Bitcoin usable and very user-friendly and that took off, but the rest of crypto is still a hot
mess. Anybody that tries to buy or trade an NFT, or execute a smart contract, knows how sticky it is.
You have to be like geek level 14 to be able to even touch that stuff.
Yeah, I'm using Abra for my major holdings, but I still have some on Coinbase and a number of
different places. But it turns out to be a significant amount of capital and you've got
to be careful about it.
That's right. I think the tricky bit is that the reason credit cards work is that you're kind of insured.
If someone steals your credit card and you see a bunch of purchases, you can just tell them, that wasn't me,
and then the bank will give you your money back.
And part of the problem with decentralization here is you also decentralize the risk, the security that you have to have, and the
liability that each user has for their own wallet. And people are often just not sophisticated enough to be able to deal
with all the cybersecurity threats.
You know, switching here to MicroStrategy, which is now called Strategy.
Richard, Michael Saylor was my roommate
and fraternity brother at MIT.
So we go way back. He is
extraordinarily brilliant. And I was just with him in El Salvador. I was there
speaking with Carlos Slim, Mike Saylor, Marc Andreessen, and Ben Horowitz. And Mike
gave a massively compelling 90-minute presentation to this room full of
billionaire family offices.
And every time I hear him, I'm like, okay, mortgage my house, sell everything, buy Bitcoin.
The guy is, you know...
It's very dangerous to listen to Michael for any length of time.
It's compelling. You know, it's interesting, though. I want to just point one thing out for those of you who are nervous about this fall:
the standard advice is, you know, HODL and buy on the dips. But I have to verify this, Salim,
I wonder if you know: if you try to trade in and out of Bitcoin,
that's problematic, because, and this is from memory, I wonder if it's
true, most of the gains last year were made on like five trading days.
Yeah, this is historically accurate.
In any given year, Bitcoin accelerates at some point, and it's very, very
few trading days that make up 80% of the upside.
The problem is you don't know which those five days are, right? And
I've managed to spectacularly miss four out of the five of those. And then you
buy on the other side of it and it goes horribly wrong. So it's a very tricky thing. What
I tell people is just buy as much of it as you can and close your eyes for 10 years.
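A toy sketch of that concentration effect, using made-up daily returns rather than real market data, just to show how much of a year's outcome can ride on a handful of days:

```python
# Made-up daily returns, not real market data: the point is only that when a
# handful of days carry the year, missing them changes the outcome dramatically.
import numpy as np

rng = np.random.default_rng(0)
daily = rng.normal(0.0, 0.01, 365)                 # mostly flat, noisy days
daily[rng.choice(365, 5, replace=False)] += 0.15   # five big up-days carry the upside

def total_return(returns):
    """Compound a series of daily returns into one total return."""
    return np.prod(1 + returns) - 1

best5 = np.argsort(daily)[-5:]                     # indices of the five best days
print(f"held all year:          {total_return(daily):+.0%}")
print(f"missed the best 5 days: {total_return(np.delete(daily, best5)):+.0%}")
```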
If you can. Well, this is a streak, you know. Michael made
another move. He acquired another 20,000 Bitcoin for about
$2 billion. It's a pretty extraordinary move.
I mean, yeah, he has a lot of incentive to give 90-minute
presentations on Bitcoin. Yeah, he does, for sure.
I think there's one other way I look at it: if you wanted to have somebody be the prime evangelist
for a technology, the articulation he brings to the table is hard to beat, and you could spend a lot
of time trying to find a better one. It's incredible. He is amazing.
Richard, open forum here.
What have been the most amazing events, breakthroughs,
technologies, companies that you've
seen in the last few months?
Oh, boy.
We just covered quite a few.
And I saw Agentforce.
I did a podcast with your buddy and mine, Marc Benioff. Marc's
amazing Agentforce 2 is coming on strong.
What do you think about the whole agentic world?
I'm a huge fan.
I think, when you think about it, essentially large language models can be thought
of as neural sequence models, right?
They're very large neural networks.
They can be trained on any kind of sequence of things.
And you can train them both with imitation
and with exploration.
And so when you think about what other interesting sequences there are,
in 2018, 2019, we
started on these large language models for protein sequences.
So boom, you've got biology.
But then the very obvious sequence
is a sequence of actions too.
And so I'm very excited.
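As a rough illustration of that point, and not code from you.com: the same next-token architecture can be trained on words, amino acids, or discrete actions, with only the vocabulary changing. A minimal PyTorch sketch:

```python
# A rough sketch: one next-token architecture, three possible vocabularies
# (words, amino acids, or actions). Not production code from anyone's stack.
import torch
import torch.nn as nn

class TinySequenceModel(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)   # scores for the next token

    def forward(self, tokens):                        # tokens: (batch, seq_len) ids
        x = self.embed(tokens)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.encoder(x, mask=causal))

# The token ids could index words, amino acids, or actions such as
# ["open_pdf", "summarize", "draft_email", "post_linkedin", ...].
actions = torch.randint(0, 16, (1, 8))                # a toy run of 8 actions from a 16-action vocab
logits = TinySequenceModel(vocab_size=16)(actions)
print(logits.shape)                                   # torch.Size([1, 8, 16]): next-action scores per step
```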
We already have over 50,000 custom agents built
on the you.com platform by our users.
You can select which LLMs you use.
Give us examples of the agents that people would use.
What are the top?
So for example, you're in marketing,
and you say, oh, every two or three weeks
I get a huge PDF file with a bunch of new features, and some website that describes
the new features that product engineering has shipped.
And then I'm tasked to write two email marketing campaigns for specific industries, and tasked
to write three LinkedIn messages.
I have to go out on the web and compare these new features to the competition so I don't say this is super novel, no one has it, even though other people
have it, and so on. And what we've done is, we talk to these marketers and they say, oh, well,
just describe that, explain that very well to an agent on you.com, and then next week when a new
thing comes in, you just drag and drop that PDF and it just goes through all those steps.
It writes the LinkedIn messages for you. It writes the email campaigns for you.
And you're just done. And then we have journalists who say, well, I need to research a new thing.
I'm supposed to write an article about advances in prostate cancer.
Then I go to these 50 different sources. I read a bunch of research papers and then I put it together.
Perfect use case. You describe the kinds of sources, like: use medical journals only.
You can just say that in your prompt.
You don't need a special feature or switch in the UI.
You actually just prompt it differently.
You explain that, and then it writes more and more of that for you.
And then you just need to start comparing.
So we have journalists and chief editors and writers who told us that tasks that
used to take them multiple days now take them like two, three hours and they're done.
Maybe the last fun one that's relevant for you is we have venture capital firms
that say, well, if I get a new data room, I go through 10 steps.
I look at net dollar retention.
I do CAC-to-LTV ratios, blah, blah, blah.
And then you just describe that again, and you drag and drop the whole data room into
you.com and it just goes through those steps.
So whenever it's knowledge work, you can already automate a ton of it.
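A minimal sketch of what such a describe-it-once, rerun-it-weekly workflow could look like; the step prompts and the llm() helper are hypothetical placeholders for whatever model endpoint you use, not you.com's actual API:

```python
# A minimal sketch of a reusable, multi-step marketing agent. The llm() helper
# and the prompts are hypothetical stand-ins, not you.com's actual API.
from pypdf import PdfReader

def llm(prompt: str) -> str:
    """Placeholder: swap in your model provider's completion call."""
    raise NotImplementedError

def marketing_agent(pdf_path: str, industries=("healthcare", "fintech")) -> dict:
    # Step 1: pull the release notes out of the PDF the user drags in.
    notes = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    # Step 2: extract the new features.
    features = llm(f"List the new features described here:\n{notes}")
    # Step 3: check them against the competition so the copy doesn't overclaim.
    comparison = llm(f"For each feature, note whether competitors already offer it:\n{features}")
    # Steps 4-5: produce the deliverables the marketer described once, up front.
    return {
        "email_campaigns": [
            llm(f"Write an email campaign for the {ind} industry about:\n{features}\n"
                f"Do not claim novelty where this says otherwise:\n{comparison}")
            for ind in industries
        ],
        "linkedin_posts": [llm(f"Write LinkedIn post #{i + 1} about:\n{features}") for i in range(3)],
    }
```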
Can you create an agent that says, go out there and raise me a billion dollars of venture
capital, and go find the companies that are going to be unicorns and invest in those, and
then just send me the bank account information?
That's step two. My description of agentic AI is: a white-collar job description.
Yeah, that'll be epic. And then I think the next level will be they actually start taking actions for you. They start booking flights and things like that.
Now, the interesting bit is that we're going to have an uncanny
valley, or a trough of disillusionment, potentially.
Because when I saw this Rabbit R1, for instance, in the demo they said,
oh, I want to book a flight with my four kids to London on these dates.
And then boom, boom, boom, now it's done. And I'm like, no way that was real.
And I'm like, no way that was real.
Because you have so many details, right?
Like this hotel, I wanted to be close to
these kinds of sites I want to see. And then over time, you
change, right? When I was a poor graduate student at Stanford,
like on like less than minimum wage, I would have been willing
to wait 10 hours for a layover in order to save $200. Now I
spend thousands of dollars extra just to have a one stop like or
zero stop flight and have a direct flight, right?
And so you need to know all these subtleties of like, when are you willing to wait for how long, how much extra do you pay?
And then you need to like have much more personalization still to make those agents work too.
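A toy illustration of that preference problem, with invented numbers rather than anything from the episode: the "best" flight flips depending on how much an hour of layover is worth to the traveler.

```python
# Toy numbers of my own, just to show why the agent needs your preferences:
# which flight wins depends entirely on the traveler's value of time.
from dataclasses import dataclass

@dataclass
class Flight:
    price_usd: float
    layover_hours: float

def pick(flights, value_of_time_per_hour: float) -> Flight:
    # Total cost = ticket price + waiting time priced at the traveler's own rate.
    return min(flights, key=lambda f: f.price_usd + value_of_time_per_hour * f.layover_hours)

options = [Flight(price_usd=400, layover_hours=10), Flight(price_usd=600, layover_hours=0)]
print(pick(options, value_of_time_per_hour=5))    # grad-student weighting -> the $400 one-stop
print(pick(options, value_of_time_per_hour=100))  # today's weighting -> the $600 direct flight
```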
But for knowledge work, you can already automate a lot.
Richard, why haven't we yet seen an agentic version
of a Jarvis that just watches your tasks and says,
hey, last time you booked these, you always did this,
so are you sure you don't want to do that again?
One that tracks you and learns from your patterns,
and therefore can represent you more easily.
I would have hoped to have seen that by now.
Have you seen anything like that? Give it permission to listen to your phone calls, read your emails, watch you, all of that.
There are two or three problems, sort of blockers, for why we haven't seen it yet.
Nothing impossible to fix.
Number one is you're not allowed to record other people without their
consent. So that puts a damper on a lot of things.
In a lot of places you'll get sued, like in California, in Europe, and so on.
So that's why you can't have it.
The second thing is Microsoft actually tried to launch this, where it just
watches everything you do on Windows, and people just went crazy.
They're like, no way you're going to send a screenshot of every one of my things.
People do private things sometimes in their browser.
They don't want to share all of that with the world.
So that will be a privacy thing. You need to build an insane amount of trust with those
companies. Then you have a lot of AI companies, the AI-forward, AI-first kind of novel
startups, that don't have all the users' trust yet, and that ability to collect all of the data and
so on. But then, you know, I think we will eventually get to it.
I think someone will be able to. Apple is very good;
they care about privacy, and you're probably more likely to trust Apple
with everything you might do on your phone.
And then the fourth thing is that eventually we're going to have more
AI agents surfing the web than people.
And that is a massive change for how the internet monetizes.
Because there are basically a few companies
that make money actually selling physical goods, like Amazon.
But even those companies are getting more and more
into the second main bucket, which is advertisement.
It turns out your AI assistant doesn't get distracted
by ads for a Bahamas vacation when it just has to book
a quick work flight to Utah. And so Expedia, even Amazon, makes a lot of money with ads. If agents start
ignoring all of those, it changes how the internet monetizes. So those companies will try to block
all these operators, all these AI agents, from just being able to get the work done.
And so, you know, you can have the intelligence, but the infrastructure
around it will slow things down for adoption.
Amazing. Richard, who are your main customers at you.com? Who should check
out your site, and tell us how to check it out. Yeah, so you can just go to you.com,
y-o-u dot com. Our biggest customers are cybersecurity companies like Mimecast.
We have publishers, a lot of publishers, that basically improve
internal efficiencies for journalists, or allow you to just ask questions on your
website and then get citations only to articles from your own network, so you
can keep users longer. I want every journalistic outlet eventually to have
their own GPT version where it just answers questions about an article. You
can eventually even think of these
articles having very personalized follow-up questions. Let's say you never understood
why the Hutus and Tutsis were fighting each other, and you read a new article, and the
outlet knows this is the first time you've read about this particular human conflict. Maybe they show
you some more explanations and background stories. We are building that for
media and publishing companies. We have universities with like 30,000 students going live on you.com, where all
the students can use it, and the professors. Which I think will push those
universities and all their professors to realize, wait, my students can just drag
and drop this assignment in here and it just gives them the perfect answer; I
need to rethink my assignments, rethink all of that. So we're excited about those.
And then there's a whole host of consumer companies that want both the search APIs
that power the plumbing of the LLMs, as well as the answers done for
them, and we have some API customers that are ramping up massively, with revenue
increasing a lot. It's been really great. Amazing. It's been a pleasure to get to
know you and build our friendship. Salim, as always, thank you for making time.
I used to feel like I had a grip on what just happened.
Now it's moving at an insane rate.
I can't imagine next year.
But, yeah, incredible week in technology this week.
Richard, Salim, thank you guys.
Thanks for having me.