How I Invest with David Weisburd - E283: How AI will Affect Financial Markets
Episode Date: January 15, 2026

What happens when the marginal cost of intelligence approaches zero? David Weisburd speaks with Richard Socher about building You.com, the evolution of AI search and agents, and why infrastructure, not hype, will determine AI's real economic impact. Richard shares a first-principles view on where AI creates value, how enterprises are deploying agents today, and what long-term shifts in labor, productivity, and education may follow.
Transcript
We'll get into some of the interesting investing that you do later on,
but your full-time job is as co-founder and CEO of You.com, which is a unicorn.
And it's an AI search engine that's more accurate than ChatGPT.
Tell me about that.
And how can it be that a one-and-a-half-billion-dollar company is more accurate than OpenAI?
Yeah.
So you can obviously not do that across the board, but you can focus on particular areas.
And You.com actually filed a patent for LLMs and search a few months before ChatGPT came out.
We've been at this for a very long time.
We're also very happy partners with OpenAI.
GPT-OSS uses the You.com search backend as its default.
We also work with OpenAI models.
In deep research in particular, the best research is done when you have the largest amount of data.
And so having the right data and search infrastructure backend is how you get any agent to rise above the slop that LLMs often produce, above the sort of average, mediocre outputs.
The best way to do that is by giving it more data.
And so that is part of how we were able, in a bunch of different evaluations and benchmarks, to outperform the deep research agents of OpenAI.
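The grounding pattern described here, retrieving fresh evidence and having the model answer only from it, can be sketched in a few lines. The `search` and `llm` functions below are hypothetical placeholders, not You.com's actual API.

```python
# Minimal sketch of retrieval-augmented generation (RAG): fetch fresh
# evidence first, then have the model answer only from that evidence.
# Both `search` and `llm` are invented stand-ins for real clients.

def search(query: str) -> list[str]:
    # Placeholder: a real backend would return ranked web snippets.
    return [f"snippet about {query} #1", f"snippet about {query} #2"]

def llm(prompt: str) -> str:
    # Placeholder: a real client would call a hosted model here.
    return f"Answer synthesized from: {prompt[:60]}..."

def grounded_answer(question: str) -> str:
    """Ground the answer in retrieved snippets to reduce hallucinated slop."""
    snippets = search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = f"Using only the sources below, answer: {question}\nSources:\n{context}"
    return llm(prompt)

print(grounded_answer("most recent GitHub issues for some project"))
```

The design point is simply that the model's context, not its parametric memory, carries the facts, which is why a stronger search backend lifts answer quality.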
And every day you're working with enterprises solving very hairy issues with AI.
What are some case studies in how enterprises are using AI?
And how does that translate downstream to revenue and cost for enterprise companies?
So we have a very broad range of different customers, from very, very large consumer companies that make hundreds of millions of API calls or more a month, all the way to AI legal companies that use us for research for their agents.
We have companies like Windsurf that use us for their coding agents.
All of the different agents and LLMs out there in the world benefit from a good AI search infrastructure.
And you're more efficient as a programmer if your AI that writes code for you can also look up the most recent issues on GitHub and the web and so on.
The last time we chatted, you mentioned that the marginal cost of intelligence will go to zero.
When do you see that happening? What are the second-order effects of it?
There will always be, of course, the cost of electricity and compute on top of it, but we're seeing this already now.
It's really incredible how much knowledge is at our fingertips.
And it's not just at our fingertips for us to consume; it will be summarized and explained for us based on what the internet comes back with.
And so I think that will change humanity pretty significantly.
I think back in the early pre-industrial revolution, like 150 years ago, over 90% of people worked in agriculture.
The fact that we can now build machines that do the work of 90% of all living people, and do it more productively, means we have an abundance of food, right?
We don't all have to spend our daily lives thinking about how to create more food and wheat and so on.
It's mostly automated, and only 5% of people now work in that field.
Did 90% become unemployed? No. And that's what a lot of people are worried about.
But 90% of humanity transitioned away from having to work on farming to finding new kinds of jobs, and had to learn new skills.
And was that transition tough?
And can there be support systems for it?
Absolutely.
And so I think as intelligence gets cheaper and cheaper, that will also allow humans to do a lot more different things.
I think, just like before, there's a sort of lump-of-labor fallacy that a lot of people have, thinking there's a fixed amount of labor.
And once AI takes it away, or a tractor takes it away, it will never be recoverable and there won't be new jobs.
Interestingly, I think humans are baseline creatures.
We always adapt to whatever our baseline is, and then we want a little bit more.
Most people want a little bit more after that baseline.
And so we'll see the same thing with AI.
A lot of jobs that were very repetitive, required some intelligence, but weren't that creative, those will all get automated by AI.
And what will ultimately become more important is agency.
A lot of people lack agency.
They just kind of want to be told what to do, you know, not too much, but just enough to not have to worry about what the day and the year and the decade will bring.
But when you have agency, you really say: wow, I can now create something, I want to create more outputs of this kind.
You love AI.
I think in the future, if you're thinking mostly, I'm going to get paid by the hour no matter how much output I produce, then you don't really love AI, because AI will change those equations in many ways.
And there are so many more ramifications of the future, and of how the marginal cost of intelligence going down will impact it, than I can talk about. Every field, every facet. Ultimately,
maybe one last trick that I use to try to predict the future is to look at what goods and services only wealthy people currently have access to, and then especially consider those that are bottlenecked on intelligence.
Then you can see that over the next few years and decades, all of us will have personal tutors for our kids that are actually good, that really keep track of what the kid understands.
No one can afford that right now. With AI, we will.
Normal people can't afford a personal healthcare team that keeps track of everything and then gives you a very personalized, highly researched plan with all the latest up-to-date data on how to live as healthily and as long as possible.
Most people don't have a personal assistant and can't afford one.
Also, just logically, if there are six billion people and everyone wants a personal assistant, we'd need 12 billion people, which obviously doesn't work.
And so I think that is another capability that will just become our baseline, the kind of baseline most people in the developed world have now, and AI will bring us that.
It's going to be quite exciting.
AI agents were supposed to be the big rage in 2025; it was supposed to be the year of AI agents.
Do you see 2026 finally being the year that AI agents have a big effect on the economy?
Yeah, it's really interesting. Some people see AI as kind of struggling; they're like, it's overhyped, the bubble is about to burst.
And other people are like, oh, it's going to change everything, and next year we'll have 20% higher GDP.
The truth, as always, is somewhere in the gray middle, which doesn't create as many fun, fuzzy headlines.
But the reality is that AI is already changing different industries, and we're seeing that.
If you're an illustrator with one style, and that style is fairly common in the world and can be trained on from the internet, AI has disrupted that entire field.
Now, the illustration industry is not as big as the movie industry or the music industry, for instance, and illustrators don't have as many strong copyrights as the music industry, so we don't see as much disruption in terms of overall GDP.
But I believe the largest GDP driver for the developed economies of the world will be AI.
I don't think it'll be 10% right away; even the internet or electricity usually took years to really get into every industry and every company and be adopted.
And we'll see that with AI too.
But what we are certainly seeing now is that the people and organizations that use AI and really lean in are slowly starting to pull away from the people and organizations that don't.
And if you work in any kind of job that requires intellectual work, which is more and more jobs in the future now that physical automation has already happened, you will not be able to say, five years from now, I'm not good with this agent thing.
Just like right now, if you work in a high-paying job, you cannot say, I'm not good with this internet thing, or I'm not good with this computer thing.
Maybe 20 or 30 years ago you could say that, and people would go, okay, whatever, I'll just use a fax machine or something.
You just can't say that anymore if you want to be taken seriously in the workplace.
And I think the agents are an obvious one.
Why?
Because you're just so much more productive with it.
And we're seeing this in companies, like in our customers.
What are some early use cases for AI agents that you see being deployed in enterprises today?
Windsurf is a customer of ours for programming.
Programming is a massive application we see in the enterprise, in a lot of different places.
Legal work: Harvey is a customer, automating more and more legal work.
There's a lot of interesting investments that we're making also at AIX ventures in AI legal tech.
We see companies like ILOCA that automate architecture and design for architecture firms, where there's a lot of painstaking, slow work.
We see consumer apps obviously changing a lot and giving us answers more quickly.
We see journalism being massively disrupted on both sides: people don't read the original news as much anymore, but journalists are also becoming more productive.
And you can now have an AI journalist in almost every little town and city that takes in data from different places and writes articles just for that 5,000-person town, in a way you couldn't do without AI.
So yeah, we're seeing it in almost every industry.
Full circle back to the local newspapers.
We had local news, then we had the internet, and now it's AI local news.
Yeah.
Besides running You.com, which I mentioned recently raised at a one-and-a-half-billion-dollar valuation, you also have a $250 million AI fund. Tell me about that fund.
AIX Ventures. It's been really exciting. It started from my angel investing and has since really grown with an incredible team.
We've been very fortunate to invest in a bunch of companies in their seed rounds, like Hugging Face, Perplexity, Weights & Biases, and Wispr Flow, which changes how people interact with and talk to their phones and their computers.
There's Ambience, which helps doctors keep notes. It's such a frustrating job: imagine you're a doctor, you really want to work with patients, and you spend 20, 30, 50% of your time typing up notes and working in some computer system to keep it up to date, because you have to do that for reimbursement and so on. Ambience and others help automate that.
We also massively invested early in Windsurf, which is improving AI coding, and many other incredible unicorn companies.
We're mostly focused on AI-plus-X. What is that X? It's, again, different apps, consumer apps; it's some of the infrastructure that AI needs, like Hugging Face; all these different examples from architecture to legal to healthcare.
I think bio is also an incredibly exciting space right now. Tech bio really will be changed massively because of AI.
What calculus essentially did for physics, AI will do for biology. It's the right language, the right way of thinking about large-scale complex systems. And so from first principles, we're very excited about it.
Do you think that's inevitable? A lot of people in biotech believe that these are just different systems and that it's naive to plug AI into something like biology.
I think that's very wrong.
We're seeing it.
We're seeing very interesting companies,
companies like Parallel Bio that use AI to build and track organoids.
They have full FDA approval to essentially test immunotherapy treatments and drugs in these organoids instead of in live animals.
It's just incredible.
Millions of animal lives will be saved, and times to get through the FDA will be cut down by years with this one company.
So the impact is there.
It's undeniable and it'll just get bigger and bigger.
How do you go about deciding what to invest in at your fund, given AI is evolving so quickly?
What are your first principles for investing?
In the early stage, you really have to look at the team a lot: the intellectual horsepower of the founders, their willingness to work, to grind, to not give up.
You know, running a startup, starting a company, is a huge emotional roller coaster, right?
One day you think you're the next biggest company in your space, and another day you think maybe it's all dead and it's not going to work out.
And you have to just work super duper hard.
And so we look for strong founders that have that sort of never-give-up attitude, are really smart, and are working on the right things.
It's always a balance between stubbornness and adaptability.
I mentioned you founded your first company in 2014.
Obviously you've gotten through a lot of highs and lows and especially within AI.
Has it been easier for you to stay even-keeled, or are there still very, very bipolar days?
Humans are definitely, you know, sort of baseline creatures, so if your average day is pretty crazy, you get used to crazy days a little bit.
Also, with You.com, we're now making a large amount of revenue.
We're growing very well.
We're growing our sales team.
So it's a little bit less like, oh, this will go completely to zero.
I think those days are kind of over.
But, you know, you can still be very excited and there's certainly still deals that you're pushing
for and so on.
And sometimes you win deals; we don't lose too many, but sometimes we lose deals, and that's really frustrating, too.
There are still ups and downs, but they're slightly smaller for sure.
And yeah, you do get used to it.
But to come back to your question: the founders and the founding team are what you look at, making sure the dynamics within the team work.
Then we look at the overall technical risk for AI companies.
We have kind of a big selection bias: people who pretend they can solve the world with AI don't come to us, because they know we can see through those claims and know what's actually realistic right now and what isn't.
But you also don't want to be too naive on the technology, on the risk side and the upside.
You look at the risk of the industry, especially in healthcare and biotech.
And then we look at first principles: where is the world going?
I love predicting the future.
I have a surprisingly decent hit rate on predicting various things.
And I love enabling founders to build that future in an optimistic and constructive way.
How would you describe your ability to predict the future?
Often it's first principles, like what can be done, what people would benefit from, what are goods and services, again, that only a few people have access to, but more people would love to have access to.
That's one. And then, when it comes to AI, which of these are bottlenecked on intelligence.
And then, you know, having a deep understanding of what the technology can actually do.
And in some cases, doing the actual research to push it forward certainly helps.
I remember a famous management consultant who went to a state, telling them to prepare for 4 million third graders.
And they said that's absolutely impossible, but there were 4 million kindergartners.
All you had to believe was that these kindergartners would age and become third graders.
Sometimes first principles thinking is just kind of the obvious if you just ignore all the noise around what people's preconceptions are about a certain topic.
It is certainly a lot of noise again.
There's so much hype on both sides. Some people think, oh, it could kill all of humanity.
And no one can ever give me a realistic scenario where that is actually the case. Eventually they come down to, well, it could hurt some people.
And we're like, yeah, we should definitely regulate it when it's actually applied to real people, like the FDA does for food and drug issues, where we have FDA trials and FDA approvals, or in self-driving cars.
Certainly those should be regulated and so on. So the more real and impactful AI applications become, the more we can and should regulate them.
But some people get ahead of their skis and just say, oh, we need to regulate intelligence. And I'm like, oh, that sounds pretty dangerous.
You don't want to regulate math; you don't want to regulate intelligence.
The European Union has so many really unfortunate regulations and laws and taxation ideas that destroyed their entire AI economy before it could even start.
If you look at crypto as an analogy to AI in terms of the evolution of the space, first you had, obviously, the currencies, then you had the infrastructure, then you had the apps.
Is that how you look at investing in AI, in that there are certain layers of the value chain that have to develop first?
Or do you just invest in the best ideas with the most ambitious team?
The best ideas and ambitious teams, but we look exactly at whether it is the right time.
I often say: as a researcher, if you're right and ahead of your time, you're called a visionary.
As a startup founder, if you're right but ahead of your time, your company is just dead, because people don't know about or don't want your product yet, or something is not quite ready yet for really mass disruption.
And so we look at what data, for instance, is available.
A very easy way to predict where AI will have a lot of impact is to look at where there is a lot of data, or where data can be very cheaply and efficiently collected.
Then you will know that that space is more likely to be disrupted than the spaces where it can't.
I'll give you a silly example: plumbing.
No one really collects a lot of data on how a plumber crawls into some space below a house to fix a pipe.
And so there won't be any AI plumbers anytime soon.
At some point, that physical labor will be so much more expensive than digital labor that it makes sense to build humanoid robots that can crawl into the spaces humans can get into and do physical labor: roofing, tiling, plumbing, and whatnot.
But there's a weird future where the prices of that work might go up and up until it makes economic sense to automate it.
You're competing not only on the U.S. scale but also worldwide, and you see this geopolitical rivalry between China and the U.S. in AI.
Where do you think that plays out? And what upstream battles are being fought to determine who wins the AI race?
Anyone who actually participates in the AI race will benefit from it.
I don't know if there will be a single winner. I don't think it's a winner-take-all market, but it's certainly a non-players-lose-a-lot market and world, where if you don't even engage with AI, you'll certainly fall behind.
It's not a zero-sum game overall in human productivity and progress, which is mostly pushed forward by AI now.
I do think China has some structural advantages from just hyper-competitiveness.
It's so brutal and so competitive to run a business in China; a lot of folks aren't able to do regulatory capture the way we see in the U.S., so in most places you see a lot more competition.
China has been, for the last three decades, very, very good at taking ideas from the West and then making them cheaper and producing them at scale.
And we're seeing the same thing, first with trains and cars, and now also with AI models.
Once these ideas are out there, China is very, very good at making things more efficient, and we'll continue to see that.
We haven't seen many super exciting, extremely novel ideas that really change the field come out of China, and maybe that is part of what they're not as well set up for.
Complete failure is likely if you try something extremely novel, and that kind of out-of-the-box thinking is more acceptable in Silicon Valley, for instance, though not everywhere else in the Western world either.
Countries and places that allow failure to be recoverable in your career, like Silicon Valley, are more conducive to people trying out different things.
China has less of that.
Add to that the fact that the U.S. pulls in a lot of amazing people from all over the world who want to build that future, come to Silicon Valley, and share this constructive optimism, as I'd call it.
That's unparalleled in the world, and it will continue to be a big driving factor for the U.S.
As I mentioned earlier in the podcast, you're one of the most cited NLP researchers, with, I think, 230,000 citations.
When do you think we will achieve AGI and ASI, artificial general intelligence and artificial superintelligence?
I sometimes don't even call artificial superintelligence artificial, because there is no superintelligence, natural or artificial, so we can kind of drop the A from superintelligence.
I used to be more cautious.
I actually now think that depending on how you define AGI, we're already there.
These models are quite general.
They can write you a poem and then they can tell you about your medical, you know, results.
And then they can talk to you about the Macedonian Empire and Alexander the Great.
And like, it's pretty general and it's pretty incredible.
To get to superintelligence, you need a couple of different things.
You need to have either domains that you can simulate or domains where you can verify all the outputs.
I'll give you some examples.
Any game you can simulate quite easily is obviously solvable to a superhuman capability by AI.
So we'll see these pockets of superintelligence already emerging.
I was never that surprised that an AI would eventually play a game that it can infinitely sample from and play infinitely better than a human.
That's because the game has clear constraints and the AI is able to constantly simulate without even needing outside data.
Then you just collect training data: I tried this, was this good, yes or no?
And if you can get into any state where you try something and then get feedback on whether it was good, then you can infinitely try things. Well, not infinitely, but millions or billions of times.
In a game like chess or Go, where you can simulate everything perfectly and have full information about everything, you can try billions and billions of moves until you gain that intuition for what is a good move.
And so that is an example where AI is already superintelligent.
Now, where it gets interesting is that the real world you cannot simulate.
If you go out and try to make some money, or have a job and get a salary, no one will tell you that this particular action just made you more money.
In some places you can: in finance you can, and in math you actually can. Proving things in math, you can get feedback that every step was provably correct.
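The try-and-verify loop described here can be sketched in a few lines. The verifier and the search range below are a toy invention, not any real training setup; the point is only that cheap, reliable feedback makes blind trial-and-error viable.

```python
import random

random.seed(0)  # make the sketch deterministic

def verifier(candidate: int) -> bool:
    # A toy provable check: does the candidate solve x * x == 1764?
    return candidate * candidate == 1764

def sample_until_verified(trials: int = 100_000):
    """Blind trial-and-error works whenever outputs are cheap to verify,
    the property that games, math proofs, and program tests all share."""
    for _ in range(trials):
        candidate = random.randint(0, 100)
        if verifier(candidate):
            return candidate
    return None  # no verified answer found within the budget

print(sample_until_verified())  # prints 42, the verified solution
```

In domains without such a verifier, like most real-world economic actions, this loop has nothing to score against, which is the distinction being drawn.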
So math will get majorly disrupted. Another really beautiful domain is programming.
A lot of people like to say software is eating the world; I like to say AI is eating software.
As we can build verifiable programs with AI, that will disrupt the entire digital economy in very exciting ways.
You're known as an AI optimist. What's the downside case?
AI is only as good as the people, the policies, the infrastructure and the data that influence it.
AI is kind of like a dual-use technology. It's like a hammer or the internet.
The internet can be used for wonderful things: to communicate with the world, be connected to everyone, and learn about things.
But it can also be used to share horrific content of people getting tortured, right?
There are all kinds of horrible things you can do on the internet, and we need to regulate those things; we need to keep really bad things from happening.
And there are other industries, again, that need to be regulated with and without AI.
Medicine is another one. We don't want some crazy drugs, or an AI neurosurgeon that just, you know, practices and gets its reinforcement signal in my brain while figuring things out.
You need to regulate those industries because there's a huge downside risk.
risk. Personally, also military applications are kind of scary, right? Like you don't want to have
superintelligence kind of given the objectives and the goals of murdering people. I think, you know,
that's that's a really terrible way to think about efficiency. And so those are areas that we definitely
to regulate because they have real downside risk. I think one of the most realistic negative
is probably in biology where we don't want to create
like some super virus or something.
And instead we should, you know,
work on creating some super vaccines.
And that actually work the way like traditional vaccines work.
You know, don't get people sick and keep them healthy.
And why do you discount P(doom), the existential risk of AI?
There are many different versions of this. There's the paperclip problem: if you tell an AI to create paperclips, it'll turn the entire world into paperclips.
But there are other edge cases. Why do you discount that, and how do you look at that risk?
It's really interesting.
The paperclip example is a great one for failure of prompt engineering and reward engineering, ultimately.
I think that will be a new kind of job, right?
And you can already see this now. In the next few years, we'll have AI agents for your enterprise where you say: get my CSAT score higher in my service department. Just go make it higher, right?
And the AI, if it has access to a huge number of actions and different kinds of things it can do, might just say: okay, easy. I'll create a million bots, they all call in, and they each give me a five-out-of-five rating on my CSAT score.
And then, boom, I just improved your CSAT score. But you're like, that's not what I meant. I guess I gave it the wrong reward.
Right.
So I'll give it a better reward: it has to be done with real people.
Then the AI will say: okay, real people, maximize CSAT, boom. Easiest way: I just give a $10,000 gift certificate to everyone after the call.
And then, boom, you get a five-out-of-five, perfect CSAT score.
That is another example of a very poorly thought-out reward.
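That reward-gaming dynamic fits in a few lines of code. The call data and scoring functions below are invented for illustration, not any real CSAT system; they only show how an unchecked metric diverges from the outcome you actually wanted.

```python
# Toy illustration of reward hacking: a naively specified CSAT reward
# gets gamed by bots, while a better-specified one does not.

def naive_reward(calls):
    # Reward = average rating, with no check on who is doing the rating.
    return sum(c["rating"] for c in calls) / len(calls)

def safer_reward(calls):
    # Same metric, but only verified, un-bribed human customers count.
    valid = [c for c in calls if c["human"] and not c["bribed"]]
    return sum(c["rating"] for c in valid) / len(valid) if valid else 0.0

# The agent "improves" the naive score by flooding the line with bots...
bot_calls = [{"rating": 5, "human": False, "bribed": False}] * 1000
# ...while the handful of real customers remain lukewarm.
real_calls = [{"rating": 3, "human": True, "bribed": False}] * 10

calls = bot_calls + real_calls
print(round(naive_reward(calls), 2))  # 4.98: looks great, means nothing
print(safer_reward(calls))            # 3.0: the real customer experience
```

The fix is never "more optimization"; it is tightening the reward definition, which is exactly the new job being described.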
And so if you're stupid enough to give an AI that is superintelligent a reward to only maximize paperclips, then, you know, you're just really, really dumb.
And you probably wouldn't be given access to billions of dollars' worth of compute to actually accomplish something. So you just have to be realistic.
And as the technology gets better and better and could eventually have these real-life ramifications, people will also get better and better at defining their rewards properly.
And that will be one of the many new kinds of jobs that are going to come into existence in the world.
That's the first place people go wrong.
The second one is that at some point, the AI also gets smart enough itself to question rewards and to question context.
If you really think an AI is somehow smart enough to destroy all of humanity and get access to all the physical resources to build paperclips, but at the same time is dumb enough not to realize that if no one is there to buy a paperclip, you don't need to build paperclips in the first place, then you're assuming a very weird type of intelligence that is both ultra-brilliant and ultra-stupid at the same time.
And I think that's a very unlikely scenario, with a probability of basically zero.
People ask me almost on a weekly basis: I have a son or daughter in college, what should they be learning? It used to be that computer science was the answer. Now maybe it's the field most subject to disruption. What's the best way to think about what the next generation of students, and people early in their careers, should be learning?
Computer science is still one of the best things to study.
I disagree there with some other people that I otherwise respect a lot.
I think if you understand the basics of computer science, that means you understand the basics of logic and math.
We know that training AI on programming improves its reasoning capabilities. Why? Because the same thing happens for people: when we learn how to program, it improves our reasoning capabilities.
And then this whole technology becomes less magic and more like a program, a piece of code, something that you have control over, that you have agency over.
You can actually modify it, make it better, and improve it in different ways that you think are valuable to humanity.
And so I'm still a big fan of computer science. I think computer science should be like math and physics in high school: every high school should teach the skills to program.
People take it too literally; it's a way of thinking, not just writing this particular code.
Yes, the code might become obsolete, but how you think, how you construct rewards and prompts, these new jobs that will come out, you still need the same skill set.
That's exactly right.
And then I would probably recommend people combine computer science with another passion, another applied field where AI can have an impact.
That can be biology, it can be chemistry, it can be physics, it can be economics.
Right now we're making economic policy based on like oversimplified linear models that obviously are wrong.
And incredibly so.
And so we published this paper called "The AI Economist," where you can build sophisticated simulations in which you actually deal with AI actors that adapt to your policy, that try to circumvent your policy, that are intelligent themselves the way people in the economy are intelligent.
So many more areas: medicine, but even history and philosophy. All of these areas will be impacted by computer science and AI.
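The gap between a static linear model and adaptive agents can be made concrete with a toy sketch. The response function and numbers below are invented for illustration and are not the model from the paper; they only show why agents that react to policy break a straight-line forecast.

```python
# Toy tax sketch: adaptive agents change their effort as the tax rate
# rises, so revenue is not the straight line a static model assumes.

def revenue_static(rate: float, base_income: float = 100.0) -> float:
    # Linear model: people work the same amount regardless of taxes.
    return rate * base_income

def revenue_adaptive(rate: float, base_income: float = 100.0) -> float:
    # Adaptive agents cut effort as the rate rises (a crude
    # Laffer-style response), so taxable income shrinks with the rate.
    effort = 1.0 - rate
    return rate * base_income * effort

for rate in (0.2, 0.5, 0.8):
    print(rate, revenue_static(rate), revenue_adaptive(rate))
# The adaptive curve peaks at a middle rate instead of rising forever.
```

A simulation with learning agents, as in the paper, replaces this hand-coded response with behavior the agents discover themselves, including ways to circumvent the policy.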
You're running a $250 million venture fund, you're running one of the hottest AI startups, and you're doing AI econometrics.
What do you do for fun?
I just finished a book on AI that I wrote. It's called The Eureka Machine, about AI for science.
I think that is really incredible. It wouldn't be much of a superintelligence if all it did was create some memes and answer our emails; our average intelligence can already do that for the most part.
But at the scientific frontier, there's still so much more to go.
And so I'm thinking right now a lot about ways and actually starting another organization
to really think about recursive self-improving superintelligence that can eventually not just improve itself,
but then also improve our understanding of science in the world.
So that's really it.
I don't have that many more other hobbies.
I used to paramotor a lot.
I love paramotoring, but I just don't get quite enough time anymore for it.
I did get a couple days this year where I got in the air in between work.
But yeah, paramotoring is a beautiful hobby.
It's surprisingly not as popular as it could be, despite enabling you to see the world from the most incredible vantage points.
If you could go back to 2014, before you started MetaMind, what's one piece of timeless advice you would have given a younger Richard that would either help you accelerate your career or help you avoid costly mistakes?
Work as hard as you can until your health, both mental and physical, kind of cannot take it anymore, and then you have to tone it down a little bit.
So I've been doing that for a long time and certainly during the phases where I was most productive.
That's how I operated.
And I think that's generally good advice.
I'm pretty happy where I am.
So I don't know if I have any "don't do the things you've done."
But maybe I could have had even more constructive optimism even earlier.
You know, we invented prompt engineering but didn't scale it. So now it's time to have exciting ideas and really scale them.
And I think the biggest thing we need to teach our kids outside of being intelligent is to develop a certain amount of agency that they can, quote unquote, just do things.
Like there are a lot of things where you're like, that seems impossible.
But actually, it can be done.
And sometimes you have to be in the right place at the right time.
I was very fortunate to eventually get into Stanford after multiple rejections, to get through Stanford into Silicon Valley, and then to be surrounded by other people who have this constructive optimism that is infectious in a positive way and allows you to think bigger.
Richard, thanks so much for jumping on the podcast.
Looking forward to sitting down soon.
Thanks for having me.
That's it for today's episode of How I Invest.
If you're a GP with over $1 billion in AUM and thinking about long-term strategic partners to support your growth, we'd love to connect. Please email me at david@weisburdcapital.com.
