Prof G Markets - The Case for AI Optimism — ft. Reid Hoffman
Episode Date: May 1, 2026. Reid Hoffman joins Scott and Ed to make the bull case for AI amid rising public skepticism. They cover OpenAI's missed targets, the frontier model landscape, the dangers of Mythos, AI's impact on jobs and wealth inequality, why Inflection merged with Microsoft, and Reid's framework for sensible AI regulation. Reid Hoffman is a co-founder of LinkedIn, Inflection, and Manas AI, and author of Superagency: What Could Possibly Go Right with Our AI Future. Check out his new Substack, Theory of the Game, now. Get your tickets to the Prof G Markets tour. Subscribe to the Prof G Markets YouTube channel. Check out our latest Prof G Markets newsletter. Follow Prof G Markets on Instagram. Follow Ed on Instagram, X, and Substack. Follow Scott on Instagram. Send us your questions or comments by emailing Markets@profgmedia.com. Learn more about your ad choices. Visit podcastchoices.com/adchoices.
Transcript
Hey, I'm Matt Bouchelle, comedian, writer, and floating head you may or may not have seen on your FYP.
And I'm starting a brand new podcast. Wait, don't swipe away. It's called, That Sounds Like a Lot.
You know that feeling when you check your phone, read a few headlines and think, that sounds like a lot. I can't do this.
Well, I can, and I'm going to get into it every Friday. You can watch on YouTube or listen wherever you get your podcast.
I'm going to start by breaking down whatever insanity is happening in the world.
And then I'll sit down with a comedian or actor or writer or, honestly, anyone who responds to my DMs.
This is not the place to get the news, but it is a place to
feel a little bit better about it. That sounds like a lot. Coming May 1st, part of the Vox Media Podcast Network.
Today's number: 0.001. That's the percentage of the deep sea ocean floor that explorers have actually seen.
Ed, how do you identify the blind man at a nudist beach? How? It's not hard, Ed.
Listen to me. Markets are bigger than I. What you have here is a structural change in the world distribution.
Cash is trash. Stocks look pretty attractive. Something's going to break. Forget about it.
Ed, how are you? Um, let's see.
I'm headed to Florence, Italy.
What?
For what?
For a speaking gig.
Hold the phone.
Now we know why my speaking revenue has crashed.
You're taking my gigs.
Let me get this.
Who's bringing you to Florence?
Royal Bank of Canada.
RBC is bringing you to Florence?
That's right.
Florence, baby.
I'll be there for one day.
So I'll go give the talk, do that.
Probably try to have some fun as much as I can,
but we'll be recording at the same time.
Coming back, then going to Texas.
Oh, well, that should be easy. There's a ton of direct flights from Florence to Houston.
Oh, my God, you're literally living the life I had in my 20s and 30s. And let me tell you, it's exhausting. Look at me. You're going to look like this.
I'm anxious about it, for sure. So you are now getting my speaking gigs. And, memo to self: undermine Ed's credibility.
Begin slowly diminishing his professional standing. Kill the prince. Okay, okay. That's great. I can go
Congratulations. Thank you. Yeah. I'm excited. It'll be fun.
That's really exciting. Have you been to Florence before?
No, it's going to be my first time.
I love the tourist places, places that are so touristy, like Venice and Florence.
I love highly touristy destinations. I think they're great. It's a beautiful little city.
Why are you fond of the tourist traps?
I went to Florence right out of college with a backpack and a Eurail Pass.
Oh, nice. Do young people do that anymore, or do they just sit at home on their phones?
Yeah, of course. Interrailing. That's what I did too. Yeah, a little bit. I joined my friends for the latter half, so I was mostly in Eastern Europe. We were in, like, Slovenia, which was not that interesting. But Florence would have been fun. I'll probably have like an hour to myself. If I had to see one thing in Florence, do you have any recommendations?
The honest answer: just get fucked up. There's a lot of good food. I don't have any, like, cultural landmarks I'm in the know about. Florence with Scott Galloway. I couldn't be
less Stanley Tucci. I couldn't be. I'm like, where's the bar? Where's the bar? Yeah, that's not,
that's not my thing, but I'm very happy for you. Should, uh, oh, we should get on our promo.
Ed, how are ticket sales going? What's going on with our tour? I believe they're going
quite well. We're, uh, I think we're close to sold out in San Francisco. To be honest, I haven't
checked, so I'm sort of speaking out of my ass here. New York and San Francisco are almost sold out.
Very soon we're going to be announcing speakers in L.A., Miami, and, I think... are we close to New York?
I think we're close.
Very excited about that.
We're doing kind of business of entertainment or business of nightlife in Miami.
We're going to do business of money or business of finance in New York.
We're going to do business of media slash entertainment in L.A., business of tech in San Francisco, you know, that kind of stuff.
But if you are interested in getting a ticket, please go to Prof...
ProfGMarketsTour.com.
Thank you.
ProfGMarketsTour.com.
And we look forward to seeing you in either L.A., San Francisco, Chicago,
Miami, or New York.
Absolutely.
And if you're watching us on YouTube and you like what you're hearing,
hit subscribe.
And if you're listening on Spotify or Apple Podcasts, hit follow.
And with that, shall we get into a conversation?
Let's do it.
Over the past month, we have focused on the downsides of AI,
from real-world violence
targeting industry leaders to growing political pushback against data center expansion.
So this felt like the right moment to bring on someone who believes that AI can serve the public good.
Our guest has spent years making the case that AI will improve our lives.
He has also created, advised, and invested in some of the largest and most successful technology companies in the world.
Few individuals sit closer to what might turn out to be the most transformative technology of our time.
And so we wanted to find out what is actually happening on the ground.
Here is our conversation with Reid Hoffman, co-founder of LinkedIn, Inflection, and Manas AI, partner at Greylock, and author of Superagency:
What Could Possibly Go Right with Our AI Future.
Reid, great to have you on the show.
Thank you for joining us.
I want to start with some specific news that we saw this week that is making investors a little bit anxious,
and that is this open AI news, specifically that they missed their revenue target last year.
They also missed their user target, and they're obviously set to go public soon.
It should be one of the most important IPOs ever.
And suddenly investors are very anxious about this company and anxious that they're actually not growing
in the way that they had expected.
And you are in an interesting position because, one, you're on the board of Microsoft,
and Microsoft is obviously one of the largest shareholders in
OpenAI, one of the earliest investors, and also you are an early investor in OpenAI through
your VC firm, Greylock. So let's just start with this. What do you make of this news,
and should investors be worried about OpenAI? Well, ultimately, as an investor, I'm not worried.
I mean, part of it, the company had very aggressive targets. And so when you have aggressive
targets and you kind of go in a little below them, that's actually not the kind of thing that I
worry about that much. And classically, public companies, because, you know,
most public investors tend to want reliability quarter to quarter, you know, tend to do their best
to be on target in terms of what they're saying, because there's this kind of reliability thing,
whereas I, as a private investor, tend to be about, just establish the really, really strong
basis. And so, you know, for me, the thing I was tracking wasn't so much, you know, last year's
revenue, last year's user count. Those are great in terms of making progress. Obviously, we'll need to
grow more, too, and a lot more. But the real thing that I was, like, very happy to see was,
you know, the 5.5 release, because, you know, I think the thing that I'm most tracking is
is open AI continuing to deliver some of the world's best, you know, kind of technology,
frontier models and so forth. And, you know, that's, I think, the precursor to everything else. And
That, from every source I've seen, and multiple different benchmarks and multiple different engagements,
has been, you know, kind of the new world leader in quality of model. So that's more of what
I pay attention to, because, you know, with investing, it's downstream effects. It's, you know,
what does next year look like, is the most relevant question. Does that make you concerned? It's an
interesting point. Like early stage investors, venture capitalists, they're more interested in
the technology. They're more interested in what's going to happen 10, 15, 20 years down the line.
public markets investors are more interested in what's happening right now.
I mean, we're literally, we read quarterly earnings, we look at what happened last year,
we look at what happened in the previous quarter, it's much more of a sort of backward-looking practice,
which seems to be like it might be a big problem for OpenAI if they are to go public at the end of this year,
because suddenly they will be scrutinized for their financials.
And so I wonder if that concerns you, that suddenly the company is
leaving an ecosystem of people and investors that are okay with financials that might not make sense right now,
but then suddenly they're going and pitching the company to a group of investors who care a lot about that stuff.
Are you worried that maybe they will get punished or that they will have a hard time in the public markets when that time comes?
Well, if they can establish themselves... there is a small set of companies, and it isn't just Tesla,
which is probably the extreme example of this, which is kind of betting on the future,
but Amazon for a significant number of years, and others, that say, hey, I've got a foundational promise, and investors go, we believe in that future.
And so we're buying into that future, and the fact that the current, you know, PE comparables, other kinds of things, you know, don't make sense on a quarter-by-quarter basis, even factoring in, you know, an interesting CAGR or anything else.
I think it'll be, for all of the AI companies that are, you know, kind of, you know, prepping and considering going public, it'll be important to establish that basis, because I think the first year or two will look a lot like, you know, kind of early dot-com companies, some of which were complete flameouts and some of which were enduring institutions, Amazon being an example.
What do you make of the unit economics right now?
This is something I've been thinking about when we think about these companies like 10 years down the line,
where currently the unit economics don't really make sense.
Like the cost to build these models is so incredibly high,
and that is why OpenAI and Anthropic, and any of the sort of foundation labs,
are burning a lot of money right now.
And it almost seems as if the only way that this works out from the perspective of the business,
model is for these companies to become, like, essentially utility companies, like, almost
have these monopolies over compute. And Sam Altman has said this. He said that eventually he could
see OpenAI becoming something like a utility company down the line, which to me is quite an
interesting perspective, because it's quite different from what we saw in the dot-com boom,
where we've had, you know, a handful of companies, we might call it an oligopoly,
but a handful of companies operating in the same space.
I mean, where do you see the business model trending further down the line?
Obviously, you know, as you know, you have to distinguish between the training costs and the inference costs.
And one of the things when you look at these numbers is the inference costs are actually, in fact, pretty good economics for inference,
but obviously you have an escalating training cost.
And so the question is, you're doing this exponentially increasing training.
And I do think that at some point, the training costs, even though there's a goal to
get as strong as you can there, because then your quality of model, you know, kind of dominates
through all of the different inference units. Essentially, you know, at some level of exponential,
that starts asymptoting, just because it doesn't make sense within, you know, how many gigawatts
do we have, what kind of economic cost do we get there. And I think a little bit of the race is a
race to that asymptote. Now, where exactly that asymptote is, is, I think, somewhat governed by
availability of capital from investors believing, and then the delivery of revenue. Now, part of the
delivery of revenue is not just the current services where you say, okay, you know, OpenAI is
dominating the kind of chatbot and kind of the consumer, kind of interaction with that
service. Anthropic is doing the coding side, you know, followed by OpenAI, you know,
on the coding side in terms of how the API and coding works. But, like, this has got to
be just the beginning. Part of how you're looking at these AI models is not just the provision of
tokens, which obviously is interesting across a number of things. But what is the output that comes
from, now that software engineering can be much more broadly spread, and a whole bunch of areas
which otherwise wouldn't have been able to afford software engineering can now do it? That can
affect the productivity of their areas. What happens there? What happens in services firms
when a bunch of different services, take legal or accounting
or anything else, can be done with intensive, you know, kind of amplification of this,
and it's not just the question of, like, you know, paying for seats on a software basis,
but what is the delivery in terms of the margins of the service?
And where all those economics play out is, I think, very early in TBD,
and you'd say, well, should we wait to go public for those?
I actually think one of the things when you, you know, you two know this better than I do,
But I think one of the things that's been a bit of a, kind of, call it a social problem over the last few decades has been that so much of the growth has been contained only to the private markets, because of, like, let's delay going public until it's, you know, extremely stable, which means that a lot of folks who cannot participate in the private markets only get exposure to the public markets.
And I think that's one of the benefits of having some of these AI companies go public.
But I think it's early relative to what the business model is
going to be. And then what is that revenue going to be? And now, utility, I think, in terms of the fact
that any task that you do with language or information, I think, will have AI participation at some
level of depth, whether it's complete automation, whether it's augmentation, whether it's
assistance in various ways. And so the spread of that is, I think, a good measure of utility.
I think one of the things that's good for society
is the fact that we have multiple providers
of these frontier models
so they're competing with each other
in terms of pricing, in terms of availability
to entrepreneurs, building other kinds of different services
in terms of being able to, you know,
like the best governance mechanism,
you know, relative to the health of society,
for the cost of things, oligopoly, monopoly, et cetera,
is to have competition, which then brings the price down.
And I think that's something that
we are also seeing. So the utility question, kind of around, like, should it run like a
utility, you know, I don't know how well utilities are thought of in the UK;
in the U.S., utilities are kind of a disaster. So you don't want it run like that.
So I'd love to just get your, I mean, granted, you might have a bias because you're on the
board of Microsoft, but you're an investor in several AI platforms and
LLMs. Can you give us the lay of the land in terms of the competitive set? You know, obviously
OpenAI, Anthropic, Google with Gemini, and then the Chinese models, and I'm sure there's a
bunch of others. But give us what you think is the competitive landscape right now.
So I think, and this is to some degree good news for startups and prospective startups,
I think the strongest positions are OpenAI and Anthropic. I think in the traditional big
companies, Gemini is kind of next, and I think we'll have a bunch of different efforts from
Meta, you know, Microsoft on its own, obviously it uses a lot of OpenAI right now, you know,
Amazon, who knows where Apple will play out in all this stuff, since, you know, like, they still
don't seem to realize that Siri is, like, you know, 20 years old, kind of, in tech terms.
Now, for the Chinese models, the interesting question will be, like, there are a lot
of good open source models that pretty clearly have some roots in distillation from
kind of the major Western frontier models. And I think part of what we're going into is an
era where that distillation will get a lot harder. Now, that being said, the Chinese are
building up compute, building up chips, have extraordinary amounts of talent, hardworking,
great tech companies. Matter of fact, the Seedance, you know, multimodal
video model is, I think, amongst the best in the world. So I think it's already, you know,
that you already have stuff that's kind of playing there. But like, and then you get kind of a jagged
edge. Like you go, okay, let's go coding specific. So most developers will say Claude Code.
Part of the reason is because Claude Code has got the best model, I think, the best interaction
thing for iterating through kind of the amplification of a software engineer.
But then the in-depth engineers that I know who play with them go OpenAI Codex, because it's much more useful at, like, call it, you know, 20-, 30-, 50-, 90-minute reasoning, long tasks that play to harder engineering.
And so they prefer that versus Claude Code.
And right now, at the moment, that's it when you get to like people who have exposure to it.
Now, some people then go, okay, I can't afford either of those.
So I'll use Quen, you know, the Chinese open source model, which is quite credible.
But when you think about the fact that, you know, when you're looking for coding, just as one instance,
you're actually looking for really part of the reason why you pay, you know, software engineers and all the rest,
you're really looking for something that's quite good and quite reliable that doesn't introduce bugs.
Obviously, we have Mythos coming with questions around cybersecurity and what does this
all mean there, and that matters too. So, you know, I tend to think a little bit on the coding side,
for your particular coding problem, it's a little bit like what we call in Blitzscaling a
Glengarry Glen Ross market, which is, you know, first prize, a Cadillac;
second prize, steak knives; third prize, you're fired. So roughly. But then it's
different across, you know, like, images. Like, the OpenAI image generator is, I think, now...
like, I've been wanting to create graphic novels for eight years with AI, and now I think I can.
So that's interesting.
Anyway, so we could go, we could spend the entire time on the different capabilities of these.
So it's a good segue into talking about Mythos.
So Anthropic described Mythos.
It said that it turns every computer into a crime scene.
And I'm trying to distinguish between what is responsible
warnings about their own product and catastrophizing as a means of fundraising.
Because everyone's talking about Mythos right now, and there's sort of this, and it's prevalent, I think, across a lot of AI,
that my technology is so amazing that it's going to reshape the world, both good and bad.
Please invest in my Series D at 100 times revenues.
How much of this is responsible warnings from, you know, Dr. Frankenstein that Frankenstein's monster could be dangerous,
and how much of it, quite frankly, is just fundraising?
One, I think the anthropic people are very earnest, principled, and honest.
So I think there's a fundamental basis, which is completely honest.
And here's a simple way of making the case, which is, if you said the minimum is that Mythos is
an ability to have an infinite number of quality cybersecurity engineers who are penetration testers,
and it's not infinite, obviously, because there's a compute cost for running of them,
but you could take a thousand of them and run them in a direction.
You end up, if it's just that, you end up with a reshaping of the cybersecurity landscape,
because we have billions of lines of code that essentially haven't been touched
and partially because there hasn't been an economic model,
either for the number of cybercriminals or the number of people employed by rogue states,
to go after anything other than a certain set of systems.
But it broadens the range by the ability
to just kind of spin up many new penetration testing engineers
with a kind of an AI mechanism.
And that's the minimum way that I think you could look at
the discussion around Mythos
and its cybersecurity issue.
I think that it's actually somewhat better than that.
And I don't mean that, that's amazing, what I just said.
But I don't know yet how much more, right?
Like is it, is it, oh my gosh, you put a classic, you know, call it a, you know, top 10% cybersecurity engineer and put them against, you know, kind of a couple of mythos agents and do they outpace that person?
You know, it's unclear to me.
Does it think in ways that we haven't thought about before and generate new patterns?
That's unclear to me, but you don't need those to have this be a massive, you know, kind of shift in how we think about cybersecurity.
What are the kinds of things we need to do?
And so I actually think that they're doing a responsible thing by saying there's an important change coming and we need to get at least the essential systems ready.
Take more than a minute to answer this, because it's a difficult one, or, you know, we're going to need a bigger boat.
But Andrew Yang, who was on our podcast a few days ago, said, you know, he's having a moment, right?
Because he's sort of in some ways predicted all the fears, the sum of all fears, around capital destroying labor.
And, you know, people would point out there's already evidence around youth unemployment increasing, a lack of hiring, layoffs at some of the early adopters.
We, I won't use it.
I have taken the view that we have been to this movie before, and some of the catastrophizing
around the destruction in the labor market doesn't recognize the new opportunities and the new
jobs that will be created. So I would love to just get your thought and take whatever time
you need to describe what you think the impact on the labor market is going to be of AI.
Every job that uses language or information will have an AI component that will range in certain cases of automation, in certain cases, much more AI work, and in certain cases, kind of AI amplification or augmentation of human labor.
And I think that will be true of all of those things.
And so almost every industry, because if you think about even, like, a steel manufacturer, you still have meetings, financial analysis,
you know, kind of capital, other kinds of things, you know, legal, other kinds of things,
all that play in it, you know, strategic planning, and so forth. So that means that AI will touch
everything. Now, and I, by the way, inherently also agree with you that one of the general
problems when you get new technology, I put this in Superagency, is you go, well, I can see,
if I take jobs as a fixed number that do not grow and do not change, I can see that a bunch
of them will, you know, now the horses and the buggies will go away and we'll have cars,
and, like, all the grooms' jobs will go away and all the horse-carrying jobs will go away and all the
horse-cleanup jobs will go away. And I can tell that and be like, well, but you didn't predict
all the new car jobs, the new jobs in drivers and da-da-da-da, and that particular thing.
You can predict the negative changes and you can't predict the positive ones.
The rub in this is there's a couple of places where AI creates kind of multi-industry, very fast, much cheaper things.
Like, say, for example, if you said, well, you know, like, for example, one of the areas where I think you'll see a lot of automation is where human beings are following a script, the AI will follow a script much better.
So I think, I think, like customer service jobs go completely away.
But by the way, I think like sales jobs transfer a whole bunch too.
And what does the new sales universe look like?
Both of them employ lots of bodies.
And so you get a, you get some pretty big transitions in terms of how this plays.
And that's the thing is, I actually think that the transition points will be, like, more likely than not
challenging to navigate. Now, part of the reason, like, I still tend to be, on AI, like, we just should
accelerate as much as possible and use it. We can go into that, even though we're going to have
these transition difficulties. But the thing we should be doing is saying, well, with AI, how do we
help in these transition difficulties? Namely, how do we say, oh, if you no longer have a
customer service job, which, they haven't really been decreasing yet. And so, therefore, a little bit
of the, you know, kind of college people not being hired and so forth: it's much easier for
companies to say we're doing layoffs because of AI than we mishired in the pandemic.
There's global instability.
We're not investing in the future at the moment because of global instability, et cetera,
so no, no, we're strong because we're using AI.
So there's a lot of discourse on that.
But I actually think that, you know, things like the customer service jobs will be
one of the kind of canaries
in the coal mine of,
we have to have customer service,
and all of a sudden the jobs start going down
because of Sierra or Parloa
or whatever,
then I think that
you'll begin to see some of that.
Now, I think you'll see it.
It's just the question of speed, time, how, etc.
And then we get to the transformation,
the transitions.
And part of the transition is let's deploy AI for that.
Let's deploy AI for,
okay, I no longer have a customer service job.
What other jobs could
work for me? What other jobs could I do,
or are being created? What are the
things that, you know, like,
you could help me learn? What are things you could help
me do? What are the jobs you can help me
find? And when I think about this
from a society point of view, because these
kinds of workforce transformations,
you know, the industrial age
was really painful in terms of
its transitions. Like, we don't
have any of the wonderful things we have
without the Industrial Revolution.
And I think the same thing is true
of the cognitive industrial revolution with AI,
but the transition, it's really about:
How do we make the transition better?
And I think the simplest thing is deploy AI for it.
We'll be right back after the break.
And if you're enjoying the show so far,
tune in on Sunday for our founder series.
We will be speaking with the co-founder
of the hot new legal AI startup, Harvey.
Support for the show comes from LinkedIn.
It's a shame when the best B2B marketing gets wasted on the wrong audience.
Like, imagine running an ad for cataract surgery on Saturday morning cartoons or running a promo for this show on a video about Roblox or something.
No offense to our Gen Alpha listeners, but that would be a waste of anyone's ad budget.
So, when you want to reach the right professionals, you can use LinkedIn ads.
LinkedIn has grown to a network of over one billion professionals and 130 million decision makers, according to their business
data. That's where it stands apart from other ad buys. You can target your buyers by job title,
industry, company, role, seniority, skills, company revenue, so you can stop wasting budget on the
wrong audience. That's why LinkedIn ads boast one of the highest B2B return on ad spend of all
online ad networks. Seriously, all of them. Spend $250 on your first campaign on LinkedIn ads and
get a free $250 credit for the next one. Just go to LinkedIn.com slash Scott. That's
LinkedIn.com slash Scott. Terms and conditions apply.
I'm Maria Sharapova, and I'm hosting a new podcast called Pretty Tough.
Every week, I'm sitting down with trailblazing women at the top of their game to discuss ambition, work ethic, and the ups and downs that come on the path to achieving greatness.
We'll dive into their stories and get valuable insights from top executives, actors, entrepreneurs, and other individuals who have inspired me so much in my own journey.
Follow Pretty Tough wherever you get your podcasts.
I'm Astead Herndon.
America, Actually. We're all talking to each other to see, what did we do wrong? What did we not see?
I'm in Washington, D.C. this week to interview Ruben Gallego. He's a Democratic senator from Arizona,
and he's been thinking openly about running for higher office. But he's recently run into
some hot water because of his connection to Congressman Eric Swalwell. I have to learn from this,
and I will learn from this. But for me, it's not a 2028 question. It's about what it means to be a better
boss, first, in my office, and also a better senator to my constituents.
This week on America, actually, we asked Gallego about predatory behavior in Washington,
his plans for immigration reform and more.
We're back with Prof G Markets.
I agree, and Scott knows this, that eventually new opportunities will come as a result of
AI transforming our economy.
But the question is what happens in the interim, and the Industrial Revolution,
I think, is a great point. We're in a comfortable position now, looking back 100 years,
saying, oh, it worked out. Look how wonderful the world is today. But I'm sure there are thousands,
potentially millions of people who lived through that revolution, who did lose their jobs,
who were not very happy about the situation. And we've talked to people who have talked about
that too. Daron Acemoglu is someone who says, actually, we didn't do a very good job of handling
that transition. And it seems that this is going to be a similar thing, except even
faster because instead of putting vehicles on the ground or building factory equipment,
you're literally just shipping software to companies. And we're already seeing it where
employees are being laid off like thousands at a time in one go. I mean, this is what
happened with Block. This is what happened with Amazon. They just literally send an email.
They say, you're all fired. Sorry. That's a new kind of system. And from a just purely, like,
capitalist perspective here. One, I mean, from a human perspective, that's got to be kind of shitty
for a lot of people. We're already seeing that. But also, from a capitalist perspective, it seems as
though one of the biggest obstacles for the AI buildout, if you are an AI optimist technologist,
one of the biggest obstacles is how unpopular it is now, largely because of this
workers-being-replaced dynamic, where AI is now less popular in the U.S. than ICE.
So that seems to be a big issue, and we're now seeing it in the political sphere.
And so I guess my question is, like, how much of a problem is that?
Is the popularity problem for AI a genuine concern among the technologists who you are close with in Silicon Valley who are actually building this technology?
You know, you'd have to be pretty blind not to be concerned about the popularity problem.
And, you know, it manifests in weird things like, we should ban data centers.
And you're like, okay, you want the data centers built in Canada or somewhere else, you know,
as opposed to here.
You know, and most of the arguments against data centers are pretty spurious.
I think the key question around the local economy is say, hey, make it a good enough trade for us to bring it here.
Like bring in some economic prosperity.
Sure, the construction jobs and the running jobs aren't a lot of jobs.
Maybe there's some other things you can bring in.
Like, you know, maybe you could bring in energy at plus-10% of your usage.
You add 10% that helps us in a structural way.
I mean, there's all kinds of different ways you can do that.
Now, that being said, you know, for example, the UK not only invented the Industrial Revolution but embraced it strongly and had, you know, centuries of global prosperity from it.
And so I do think the transition is challenging, but I think the embrace is valuable.
And I think that the question around, you know, what should we do is like it's important from an economic, it's like it is the driving force of what the future economies are.
And it'll only work for future generations if we embrace the right economic things.
But I think the key thing is to try to figure out what's the way we navigate better.
Like I think, you know, anyone who says, oh, we navigated the Industrial Revolution perfectly is
out of their mind, or ahistorical, or anything else. It's like, look, what are the learnings? What are the
things we can do? Now, the most often thing that people who are not building within the, you know,
kind of just called competitive business world, competitive technology world, say, we'll just slow
everything down, right? And you're like, okay, the problem is competition doesn't slow down.
That's actually not the way that kind of competition works. Like if you just said, hey, everyone,
wait for me to get my economy in order, and then you can start shipping your export things.
I mean, you know, take a look at what's going to be happening within Europe with the spread of BYD cars from Hungary, for example.
It's like, oh, no, no, wait for a decade for us to get our auto industry in place.
That's not going to happen in terms of how this operates.
So you have to be figuring out how to make the adjustments and transformation at the speed that, you know, here we are on a markets podcast, the markets are bringing about.
And that's the thing to do.
So the idea is to give good ideas on the transformation that we can embrace to help make the transformation better.
And that's part of the reason why, as I gesture at, the speed and depth of AI also gives us tools to help with the transformation.
How much genuine thought is being given to those kinds of ideas?
Like, I think the trouble that a lot of people have is, on the one hand, there's the possibility that AI could be good for society and it could be a great thing and bring prosperity to many people, potentially. But also, on the other hand, it could also
make a small handful of individuals incredibly, incredibly wealthy. It could make wealth inequality
even worse. It could concentrate power and wealth into the hands of a small few, which makes it,
I think it makes people feel fairly and justifiably suspicious. And, you know, Dario Amodei could go on a show and say AI's going to, you know, wreck everything, and people might say that that's not a lie, but an exaggeration. He could also say AI's going to be a great thing for everyone, everything's going to be fine, and everyone would say, that's an exaggeration, that's a lie, he's just trying to get rich. So I guess the question is like,
when you interact
with these people, as
someone who's kind of like in the eye of the storm,
to what extent are people
actually concerned
and working on solutions to make this work for everyone
versus just trying to kind of like keep the message intact
as they pursue the ultimate goal of wealth.
I don't think it's wrong to pursue a goal of wealth.
I think every networked system, cities, trade,
everything else creates, you know, economic inequality,
and economic inequality is part of the engine
by which we, you know, fuel capitalism and competition and all the rest.
So it's like, you know, just to be, you know,
clear. I think that's a fine thing. I do think the question is, is how do you bring, you know,
kind of the bulk of society along, e.g., how do you have benefit that spreads, you know,
throughout all society? And I'd say, like, for example, is one instance, you know, Sam Altman's
gotten a bunch of bad press over the last, you know, X months. And yet in the very early days,
he was funding, you know, universal basic income experiments, trying to figure out what kinds of
things would work there and, you know, spending money directly himself in order to do that.
So I'd say that there are people who actually, in fact, pay attention to the issue,
care about the issue.
It's one of the things I don't think Sam gets enough credit for is just one instance.
Now, that being said, the question is, say, all right, are we putting in time and energy
in order to help society?
I think that a number of the different players,
you know, definitely Microsoft, Google, Anthropic, Open AI,
and some others, are interested in willing to do that.
They're not willing to do it if it's fruitless relative to its engagement and everything else,
given that they all have an intense competition clock going.
So if you wanted to say, like one of the things I've been telling, you know,
governments for years is, like, the kind of thing you should be asking for is, I'd like to have
a medical assistant that runs on every smartphone that can help people. I'd like to have a
legal assistant that runs on every smartphone that can help people. I'd like to have a tutor
that runs on every smartphone and essentially make those the equivalent of free as part of the
benefit for helping in the transition. Like, I think you could do that, and I think you could very
easily cut the deals in which they would build it, and then the government could make
sure it's there for everybody and get some benefits in, you know, as we sort out the kind of
melee of what's going on. But like, say, for example, they'd just say, okay, we'll go build the
tutor that helps with, like, all things, economic and jobs. Well, okay, we build it. Are we
going to use it? How is it going to get there? Do we have to spend energy and time on that, too?
So, given that there's this focused competition about where the primary economics are and the establishment of a platform and so forth, this is kind of an area where I think they are legitimately concerned. Part of the reason why, in their own minds, this is, you know, what Scott was gesturing at: the, oh, are you fundraising by saying, white-collar bloodbath in X years, you know, buy my Series D. And I don't think that's a great way to try to sell the Series D. I think it's actually like, hey, shit's coming.
Somebody should be helping do something here, and we're happy to help.
We got our shit that we're working on, but we're happy to put some energy into it.
And I think that's what we need to be doing.
You know, now, unfortunately at the moment, sane, well-run government seems to be, you know, something on the order of dinosaurs or dodo birds or anything else.
And it's difficult to figure out what to do.
You know, I'm always happy whenever any government of any Western-style democracy calls me and says, hey, I'm trying to figure this out. What can I do to help?
Like, I'm like, okay, here's some ideas, here's things. Like, you know, the medical assistance
one that I've been talking about since 2018, right? Because you want to say, hey, this AI transformation
is challenging. Most people do not have immediate access to medical advice, right? The vast majority of the world
does not have that. And so it's like, oh, could I get something that's good? And that would be easy for
governments to set up because it's changing the legal liability and then the mandate for how it's
reached and then having a deal with whatever number of companies you want in order to go into that
in terms of what you do. And then all of a sudden you're providing something. Part of me wonders if,
I mean, you mentioned how, like, Sam Altman has been working on and talking about universal basic
income ideas for a long time. And in fact, OpenAI recently published a proposal, they called it
this New Deal-style proposal, where they gave a bunch of ideas on how we might lessen the
potentially harmful impacts of AI on our economy and things like wealth inequality. They suggested
higher capital gains taxes. They suggested something akin to a UBI. Like, to their credit,
they are thinking about this and proposing solutions and putting it out there.
Part of me wonders, however, that this issue of comms and AI's PR is almost an impossible problem
because of the reality, the fundamental reality of wealth inequality on the ground today.
And this is something Scott and I have talked about, where, you know, we look at the wealth inequality Gini coefficient, for example, which is 0.83 in the
US right now, and during the French Revolution, it was the same number. Things like that,
where Americans hear this, and they get very upset about it. And so anything that suggests that
this could worsen that trend, AI being one of them, and it could, like, there's nothing,
there's no amount of communications, there's no amount of PR or IR that you can do to make them
feel better about that. That's what I've started to wonder about this AI revolution. I wonder
if you have any thoughts on that and if you agree or disagree.
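As a side note on the number Ed cites: the Gini coefficient is a standard summary of how concentrated a distribution is, 0 for perfect equality and approaching 1 when one person holds everything. A minimal sketch of the usual computation (the wealth values below are purely illustrative, not actual U.S. data):

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, -> 1 = total concentration."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Rank-weighted formula over the sorted values.
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Everyone holds the same wealth -> 0.0
print(gini([100] * 10))
# One person in a hundred holds everything -> 0.99
print(gini([0] * 99 + [1_000_000]))
```

A reading of 0.83 therefore sits far closer to the "one person holds everything" end of the scale than to equality, which is why the comparison lands so hard.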
I tend to not focus as much on the inequality question as on, like, if you look at
the bulk, 80% of society, are their circumstances improving in some substantive way?
And I tend to think that if they think, hey, I've got some reasonable opportunity playing the game,
my circumstances can improve, they might be improving, then I think you get more social stability.
And I think it's when it's, like, I don't know how to feed my children,
I think, you know, my prospects over the next decade are just categorically worse, you know, etc., etc.,
that you hit it.
And it's not, you know, like the usual kind of thing that I tend to say is inequality tends to be more of a political topic because it's like, well, the CEO should only be paid 20 times as much as the bottom person.
You're like, well, how do you get to that magic number?
Why isn't it two?
Why isn't it five thousand?
You know, like you let markets sort those things out. And by the way, inequality comes with the economic system that we have built prosperity
in capitalism and in competition. And, you know, cities generate higher wealth because of network
density. Trade generates higher wealth, you know, kind of these kind of things. And it always has
some disparate curve. It's not like it's evenly spread out like peanut butter across the whole
system. Now, that being said, it is absolutely mandatory for healthy societies to be saying,
hey, I as a middle-class family am like viewing myself as having better prospects as I'm going into it.
I might have a rough year or two.
I might have some way that I have to work hard or anything else.
But like that kind of thing is an option.
And I think that that needs a strong solution.
And I think that part of the thing is right now, I think people are like, look, times are hard.
It's part of the reason why, you know,
I think democracies around the world tend to be,
not 100%, but tend to be, you know, kind of saying,
hey, we want to elect someone who's going to take a wrecking ball to everything
because we think maybe that's the only way that things can get better.
That almost always leads to things being worse, you know, cultural revolution,
you know, French terror, et cetera, et cetera.
Renovation is the important thing, but I get the frustration,
the anger, the uncertainty that gets there.
And I think it's beholden on all of us to try to say, hey, let's try to renovate these institutions together and let's try to make it work for the bulk of 80%.
And whatever it takes to get there.
Because like, for example, if you said the simple problem is inequality, it's like, well, then let's just simply, you know, institute a, you know, kind of like a 90% upper-bound tax on stuff.
And that should just simply solve everything.
And I don't think that would actually, in fact, solve everything.
I think, you know, like, I think someone did an estimate that if you took one of the super billionaires, you know, Elon or Bezos or whoever, and then simply redistributed their wealth across the entire U.S., it barely makes a difference.
Everyone's, like, you know, small savings account gets a little bit bigger, but it doesn't fundamentally change anything there.
So you want to be changing the actual economic system of what's happening.
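Reid's one-off redistribution point is easy to sanity-check. The figures below are ballpark assumptions (a roughly $400B fortune divided across roughly 335M U.S. residents), not sourced data:

```python
# Ballpark assumptions, not sourced figures.
fortune = 400e9        # one "super billionaire" net worth, USD
population = 335e6     # approximate U.S. population

one_time_payment = fortune / population
print(f"${one_time_payment:,.0f} per person, one time")
```

Even under generous assumptions, the one-time payment lands on the order of a thousand dollars per person, which is the point: liquidating a single fortune pads savings accounts a little without changing the underlying system.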
I would push back a little bit, Reid.
Even if taxing billionaires or super wealthy people wouldn't solve the problem,
it does feel like once you get to kind of the 0.1 percent, your tax rate plummets.
And just from an equity standpoint, in being responsible, we do need to restore a progressive tax structure,
meaning it should go up for the wealthiest, not down.
And yeah, the defense that it's not going to solve our problem, I don't think is a solid defense.
Oh, just to be clear, I'm for progressive taxation.
I'm just pushing back on the inequality point as being, like, the problem of society and what we need to solve.
You think there should be a floor.
How do we raise as much of the, kind of call it 80 plus percent of people as we can?
And whatever that took in progressive taxation, that's awesome from my point of view.
I just, that progressive taxation itself, I don't think is sufficient to the solution.
Right.
Like, it's, by the way, I completely agree.
The differential, like, there's a reason why I think in, in most of the places where we've had strong growth in societies, we've had progressive taxation.
The fact that the wealthy can hack it in ways that they're paying less tax than a middle class person as a percentage is essentially a dysfunction of society and that that should be fixed.
Right.
But that's fixed for a different reason other than pure inequality questions.
What would be that fix? Because right now, a lot of people are just like, well, wealth tax. I mean, it might be a crude and unnuanced perspective, but that's what is being proposed in California, because they're like, well, we haven't seen anything else. Like, what is the alternative in your view?
Well, so take the California one, which is they go, okay, we have a budget shortfall because of, you know, things that Trump's, you know, quote-unquote big, beautiful bill did. So we're going to propose a one-off wealth tax
for a particular, you know, kind of benefit of a segment of California.
I think all the optics in this are wrong.
It's not fixing the systematic thing.
It kind of, you start with California,
which other than Manhattan has the highest income tax rate in the country
from the way of doing this,
and no real, you know, understanding of that
and no sense of what we're really trying to do
is fix the overall system.
Now, like, for example, I was supportive of the increase
of the state income tax provision and the fact that you say, hey, we now charge this income tax,
we added a new level at, you know, higher levels, like a million dollars of income, and so on.
I think those are good things to do. And by the way, some people against wealth
taxes go, well, we never do wealth taxes, they're terrible, they're unproven.
You're like, well, actually, property taxes are a form of wealth tax. There's ways to do it.
Now, there's a bunch of questions around what happens with private,
you know, stock and so forth. I mean, there's a bunch of unknowns and challenges, and you want it to
compound into growth. I'd say that, you know, the reason why the wealth tax comes up in
California is because it's more politically expedient. The easiest one is repeal Prop 13, but that's a
third rail. It's a fix on property taxes, which is a wealth tax, but it's a third rail
in California because the commercial real estate associations, which benefit massively economically,
would slaughter any candidate who does that.
So it was like, oh, let's go after billionaires
versus, you know, real estate across the entire state.
I'd love to get your take on the case that kicked off yesterday,
and that is Elon Musk's action against OpenAI.
I was around, and basically Elon, I think,
is doing everything possible to throw anything he could possibly invent at OpenAI.
And, you know, among them was, you know, kind of claiming that they misled him. Like, he wanted to identify himself as a co-founder, but then it's, oh, you
misled me into a philanthropic donation with a later plan to turn this into a commercial entity. And having
been around it, I know it's factually incorrect. We'll see what happens. It seems that it's like,
it's kind of the definition of legal harassment, trying anything possible to avoid
saying, no, I didn't make a huge mistake when I basically told OpenAI that it should
become a company that I, Elon, should own north of 80% of, and if you're not going to do it,
I'm leaving. And oh, look, the organization did well. So I'm hoping that truth and justice
will prevail, which is basically that it'll be a speedy trial in which all of this stuff will get kicked out,
and maybe even an OpenAI countersuit for massive defamation and false suits might even work, but we'll see.
What do you think of the idea of creating sort of this space media AI conglomerate?
And I mean, they're talking about the SpaceX IPO.
You're a markets guy.
You're an investor. At 95 times revenue?
And Ed pulled some data.
When Google went public, it was growing 240% a year and trading at 10 times revenues.
SpaceX is growing 20% or 24% a year and going out at 100 times revenues.
And I'll say this about all the kind of upcoming IPOs.
Doesn't it feel like the valuations have gotten way out over their skis here?
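One way to make the comparison Ed pulled concrete is the price paid per percentage point of annual revenue growth, using the approximate figures quoted in the conversation:

```python
# Figures as quoted above (approximate).
google_multiple, google_growth = 10, 240   # Google at its 2004 IPO: ~10x revenue, ~240%/yr growth
spacex_multiple, spacex_growth = 100, 22   # SpaceX mooted IPO: ~100x revenue, ~20-24%/yr growth

# Revenue multiple paid per percentage point of annual growth.
google_cost = google_multiple / google_growth
spacex_cost = spacex_multiple / spacex_growth

print(round(google_cost, 3))               # multiple paid per growth point, Google
print(round(spacex_cost, 2))               # multiple paid per growth point, SpaceX
print(round(spacex_cost / google_cost))    # how many times pricier SpaceX is on this measure
```

On this crude growth-adjusted measure, a buyer of the mooted SpaceX IPO pays roughly two orders of magnitude more per point of growth than a Google IPO buyer did, which is the sense in which the valuations look "out over their skis."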
I certainly wouldn't buy in the SpaceX IPO.
I think most of the people who are buying there are kind of in the, hey, everyone thought Tesla was crazy for a long time,
and I made a bunch of money on Tesla. And they did. I agree, but they did.
Yes, exactly. And so, well, what the hell, I'll give it a shot. That's, I think, roughly what the
investment thesis is there, which, you know, strikes me as like a Bitcoin chart. It's like, well, it went up, so I should buy because it will go up again, which is, again, not
smart investing behavior.
Also, a friend of mine kind of said, hey, and he's heard this from other people, he thinks that SpaceX will go out and then they're going to merge it with Tesla, because, you know, Elon's comp package is apparently based on market cap. So, you know, he gets a whole bunch of money if it goes out and they merge.
Oh, wow.
You know, I haven't cross-checked this, but, you know, it strikes me as a possible set of considerations.
But anyway, so I think an IPO should be made on a prospect of real growth. And so if you're not having that, I think, you know, the markets should be more sane. Whether or not they will be sane is a different question.
We'll be right back.
And if you're enjoying the show,
come join us on tour
and hang out with us live.
You can get your tickets
to the Prof G Markets tour.
The link is in the description.
Wedding season is here and your wallet is already sweating.
Between the bachelorette in Vegas, the destination ceremony, the registry gifts, and the outfits for every single event, being a good friend has never felt more expensive.
I'm Vivian Tu, your Rich BFF.
And on this episode of Net Worth and Chill, we're breaking down exactly how to survive wedding season without going broke.
We're talking hidden costs you forgot to budget for, how much you actually need to spend on a gift, flight and hotel hacks that could save you hundreds.
and my most unhinged but totally legal money tips for stretching every dollar, because celebrating
love shouldn't mean sacrificing your financial future. Listen wherever you get your podcasts or watch
on YouTube.com slash Your Rich BFF. We're back with Prof G Markets. We were talking earlier about
that there is a lot of competition in AI, at least for the moment, and that might be a good thing,
bringing prices down. One thing that I would also add is that it's also being subsidized right now
by essentially the VC community, which is helping with prices, which I feel is a very good thing. But
eventually prices are going to go up. Either way, one thing that we have been a little cautious of
in the AI ecosystem is what we have described as corporate incest, which is where there is a lot
of sharing of revenue that we're seeing. We saw this between Nvidia and OpenAI. And then also,
we saw this trend of these kind of pseudo-acquisitions, or pseudo-investments that turned out
to be acquisitions. And one of the most prominent ones
was a company that you founded, which is Inflection.
And something that really struck me,
I mean, I remember reading the headline,
like Microsoft is partnering with Inflection.
And Inflection was this company that everyone was talking about.
It was one of the hot AI startups.
Microsoft is partnering with Inflection.
They're going to pay them, you know, several hundred million dollars
to partner with them.
And then suddenly we also learned that half of the employee base
has moved over to Microsoft.
And then we also learned that the co-founder,
who you started the company with, Mustafa Suleyman,
also moves over to Microsoft
and is now the CEO of Microsoft AI,
at which point I started to think,
well, this wasn't a partnership,
this was an acquisition.
And then we started to see it with other companies, too.
Meta goes and grabs Alexander Wang,
who started Scale AI, another big AI startup.
We saw it with Google and Character AI.
They kind of create this deal,
and then suddenly all the employees end up working for Google.
It made me think that big tech is sort of taking over
the AI ecosystem, you are sort of at the heart of this.
What happened there with inflection and what do you make of that concept in my head?
Whether it's an acquisition or not, my actual point of view is: does the business continue?
Inflection continues.
It has 70 employees.
It has been doing a B2B business and so forth and pivoted.
So I think the question is, you know, is the business continuing relative to the, is it really an acquisition or not?
So, "partnership" is a very generic word, and it was a huge deal
in which a bunch of employees, a non-exclusive IP right, and, you know, a co-founder went.
And that's happened in even more cases than you've mentioned.
I think with Inflection, we set the stage on that.
And I think that the question is, I think part of what you've seen in the large tech companies is they would prefer to be buying companies
versus doing these kind of, you know, strange deals,
but they feel that the regulatory environment is too hostile for them to do that,
especially earlier.
And so they are looking for other ways to make it work.
And so you say, well, what's a deal that you could make work that isn't an acquisition?
It's like, well, you know, get a, like kind of something that's not quite a BD deal,
but not kind of a corp-dev deal.
Like, it's kind of like halfway in between, and the measurement is, does the business
continue, you know, with vigor and effectiveness afterwards. And I think that the, you know,
I think regulators, you know, from having seen a bunch of commerce questions about this, you know,
aren't fond of it because they're like, well, we want to be, we're trying to regulate the corp dev side
of things. I think part of the challenge is, I think, that the theory, just speaking in the U.S., although I know that the UK Competition and Markets Authority can be a little nutty, you know, is the view that we should not allow something if there's going to be future competition, which is kind of strange.
I think that the question around
acquisition should be
not, like, are we increasing
the amount of capability of competition?
Like, so let's take, for example,
one of the big companies
that has no play in AI right now, which is Apple.
Like, you shouldn't go get in the way of Apple
buying an AI company
because that will actually increase their competitiveness
with the other hyperscalers in terms of what they're doing,
even if it makes a big company bigger.
So you want to be increasing the competitive landscape of this
and looking at where they currently are.
And I think that's what you'd want to do on the corp dev side.
Now, this kind of thing, I think, is a deal innovation that's like BD-plus, in order to kind of navigate the universe.
And, you know, Inflection got a bunch of capital and could pivot its business from doing frontier models to B2B and a bunch of other things in order to execute on this.
I mean, it sounds like you acknowledge like it is a way of getting around regulation, essentially.
And I guess, you know, people can make, can do what they will with that fact, whether it's a good thing or a bad thing, whether it's innovation or like, you know, kind of skirting around the law.
Well, I don't think it's skirting around the law because I think what happens is the corp dev law doesn't cover it.
Right.
So it's like, here's a way to make a deal transaction of a certain sort, a new sort of work.
Right.
Because this doesn't cover it.
So, like, skirting implies, like, well, I'm going 80 miles an hour versus the 60 miles an hour that's posted on the highway.
It's like, well, I'm going down a different road.
Right.
It's almost like tax avoidance versus tax evasion.
Like, it's technically legal, but it's, like, in essence, accomplishing kind of like the same thing.
It's still following the law, but effectively, like, it is an acquisition, like, in a lot of ways.
So I guess the question would be like, is that of concern?
Should startup founders, I mean, just as an observer, I looked at inflection, I looked at
Mustafa Suleiman, I'm kind of thinking, you know, and you co-founded this company, and I'm
thinking, this company could be the next Google, this company could be the next Microsoft.
And that didn't happen because there was this moment where big tech had so much cash.
essentially, that it could just kind of be like, hey, like, let's just partner, we'll pay you
hundreds of millions of dollars, and then that'll be the end of it.
Well, actually, I think you've got it exactly reversed.
Okay.
No, I think you've got it exactly reversed, because the question is, I think these deals
are generally terrible for investors.
They're not, like, what you really want to be able to do is command a really high price
and say, the only way you get anything here is that you buy the whole company, right?
And if that's not available and you believe that your prospects on your current path aren't working, you will accept a suboptimal deal.
So I think these deals are broadly much less good for investors.
Fair enough.
Right. And so investors don't want them.
And by the way, then there's also the function of, you know, usually in these deals, on smaller numbers,
deals are better for investors.
On larger numbers, investors and employees are treated the same,
but that also means that it's then suboptimal for employees as well.
By the way, the reason why people will do this is because they go,
oh, I see that the path we're on currently this business is not going to work.
So, for example, in the inflection cases,
we're not going to be able to establish our agent with building frontier models
in the way that we hope we need to pivot.
So we're going to pivot to B2B.
How do we get the capital?
How do we make that happen?
Well, we do a deal where that enables that to happen.
But that was a pivot deal. It wasn't Microsoft came along and kind of said,
hey, a little bit of cash, you know, you completely upend your business.
It was like, no, no, we've actually already decided that we need to change.
And we're trying to figure out how to fund that change and make that change go through.
But then the question becomes, why is inflection in a position where they feel that they can't compete in this space?
Like, inflection, from my understanding, you had some of the greatest minds assembled in all of technology, yourself included, to make this thing a success.
And the fact that you guys decided that the better option would be to sort of slightly fold and pivot into Microsoft tells me that the landscape was too dominated by big tech, which is something that people have been concerned about, about this generation of technology versus, say, like, the dot-com boom.
So we figured, on the chatbot side, which is the thing we were focused on, that going into scale building frontier models would be too expensive, and a year or two down the road we'd be in this really, really difficult position. Because there's a strategy: you try to pivot before you've driven the bus off the cliff. You try to go down a different road before you're, like, off the cliff. Now, if we had had the idea which Anthropic got into, which is let's go totally coding
and APIs, and just on that, maybe we would have been anthropic, right? We didn't have that idea.
So we had a bunch of talent, but it's always about the engagement with the market, what you're
showing new investors, even in crazy times, et cetera. And we looked at the thing we had and said,
we need to pivot to B2B. If instead we said, hey, we could do coding, maybe things would have
been different, but we didn't, you know, more fool us. We didn't have that idea. And, you know,
hooray for Anthropic's, you know, iteration.
Do you believe that the monopolization or market power of big tech isn't a problem?
Do you think that it isn't stifling innovation or having a negative impact on startups?
Well, the short answer is no, for a couple of reasons.
One is it's a question of, do you have a growing amount of competition?
So, like, for example, if we were five hyperscalers heading to three,
Right. Then I'd say we have a problem. But I think we're five to seven hyperscalers heading to 10 to 15, you know, OpenAI, Anthropic, you know, Nvidia being some of the new ones, all competing with each other. And as a startup founder and as an investor, that surface gives me a lot of opportunity to maneuver. And by the way, the reason why you don't want to block out corp dev is, you say, I want to create a company to compete with hyperscaler X.
well, what happens when the company is not doing that well?
I cannot raise substantial amounts of capital
if I don't have an acquisition opportunity to get out, right?
And so if you say, well, we're going to block all acquisition opportunities for that,
then I will never create the capital.
And then, by the way, me as a Series A VC means that I'm not going to invest in a company
that's trying to create a new kind of search experience
because I can't get an acquisition out as one possibility
if the IPO doesn't work.
And by the way, my incentive is to create, you know, in modern parlance, a trillion-dollar company.
That's what I want.
And I want to go long for that.
So that's the reason why the regulatory structure is somewhat fubar in terms of the way it's currently conceptualized.
And right now, you can say, well, you get pure capital determination about, like, what the markets are.
And every VC is trying to invest as much money in AI companies as they possibly can.
What do you think of some of these tech companies going vertical and acquiring media companies?
Complicated question, actually.
And actually, I'd be very curious to hear your answer to this too.
Like, roughly speaking, I think that people have not quite generated the right interesting generalizations from like Netflix originals and Prime and Apple movies and so forth, like all these things.
I actually think that part of what you should be doing in the modern age of the Internet
is everyone has an ability to have a certain amount of expressiveness to the world at large, society, customers, industry.
And so everyone should be running kind of like almost like some version of content marketing.
And I don't mean that as pure like, buy my product.
But I mean it's kind of like, this is who we are in the world, this is the kind of thing we're doing.
Now, some of it could be more pure like, hey, we're just going to have a cool set of TV series, you know, whether or not we're, you know, Netflix, you know, Amazon or Apple.
But it also could be other things.
And I actually think that's a good way for the companies to process.
Now, that's separate from we desperately need kind of objective civic news for democracy.
And how do we solve that problem, which has been crushed from multiple sides?
And obviously, the commercial side, the tech companies and other people buying these things, may not be helpful, but I don't think it's the source of the harm.
By the way, I'm a huge fan of big tech vastly overpaying for independent podcast networks.
I think it's a great idea.
Yeah, I think it's a great idea.
Well, I mean, it's funny.
I mean, there's definitely a tension, right?
There's a tension between deep-pocketed companies using, uh, you know,
what feels like or cosplaying independent journalism to do anything about independent journalism.
That's the fear.
And they crowd out true independent journalism, which doesn't have the backing of deep pockets.
But at the same time, going vertical and having content or thought leadership,
that's how I built my last company.
I started putting out media on YouTube.
So they should absolutely be allowed to do it.
I'm a big fan.
I'm curious, if you were asked by the administration, well, let's be real, you won't be asked by this administration.
Let's say you were asked by the next administration to help, loosely speaking, with a crude AI framework for regulation. As far as I can tell, the only regulation that has come down so far is meant to stop regulation at the state level.
If you wanted some sort of safe and sane guardrails for AI right out of the gates, what are the two or three things you'd
want to see happen? Well, by the way, I thought what the Biden administration started doing was pretty
smart, which is bring all the tech companies in, push them very hard in a set of things, get them to
make some voluntary commitments, then to up it and kind of start enshrining it. And it's kind of like
having red teaming on safety plans and so forth. I think the first and most important issue is what
happens in, like, bioterrorism, what happens in cybersecurity, et cetera. How does that get provisioned? And I think
there, because you want to stop anything that could do systemic damage to the entire
system, you want to make sure that that's kind of, you know, well guardrailed.
And that would be one set of things you would do.
I think another one you would do is say, I want to generate, like, whatever your list of
concerns are.
Like, say, your concern is job replacement, or your concern is, you know, I don't think
the environmental impacts are actually real, I think they're more
politics than anything else, but let's say you had those concerns. Then it's like, I want to have
you generate these kinds of reports, could be, you know, only for, you know, government
consumption, but validated by your auditors, whatever the particular set of concerns are. And
I want to have a trigger that if some of these numbers are getting worse,
you know, in a time frame that is quick, your auditors
tell us quickly, so we can say, is something really happening with, you know,
jobs, or something really happening with, you know, misinformation on networks.
And we could get some sense of it so that we can start intervening on it.
And then, you know, as part of that, we also want to get previews about what's actually
happening in the engagements with your products, so we know how to steer. And I think that's a really
good place to start versus all of the political melee, which is more about what I care about
versus what I perceive the world to be versus actual data and information.
Reid Hoffman is the co-founder of LinkedIn, Inflection AI, and Manas AI. He's also a partner at
Greylock and sits on the board of many companies, including Microsoft. He authored six bestsellers,
including his latest, Superagency:
What Could Possibly Go Right with Our AI Future.
Reid also co-hosts two podcasts, Possible,
and Masters of Scale.
And for more, you can check out his new Substack, Theory of the Game.
Reid, thank you for your time.
Pleasure.
Ed, what do you think?
I agree with him on some things,
and I disagree with him on other things.
I think what he kind of honestly acknowledged
is that big tech is really dominating the AI
landscape, and I don't think he convinced me otherwise, to be honest. So I think that's one thing
I disagree with him on. I also kind of disagree with him on his views on the wealth inequality
problem. Like, it sounds like he is pretty aggressively against a wealth tax. And that's fine. By the
way, I don't think it would work either. But I think that there could have been more, I think there
should be more attention paid to, okay, what is the alternative? Like, it is getting to that point
where we need to be very seriously advocating for different redistribution methods if you are going
to also say that the one that other people have proposed is not the right path forward. So,
I guess that's where I would sort of be in disagreement with him. You know, he's also an incredible
entrepreneur, incredible power player in the tech community. I mean, you can't deny his accomplishments.
So I appreciate him for sharing his thoughts on all of these subjects with us.
If you talk to anybody with as many fingers in as many pies as he has, they're like, oh, no regulation,
income inequality isn't a problem, let our horses run.
What if China gets out ahead of us?
You know, it's constant.
Excuse after excuse, there shouldn't be any regulation of AI.
There's no problem here.
Move along, nothing to see.
Reid is going to be as focused on preventing a tragedy of the commons as anyone in his position.
You know, it's just he can't, and it's hard for him.
He's on the board of Microsoft.
He has to be very measured about what he says about OpenAI.
He has to be very measured about what he says about Elon Musk.
And he sort of calls it as he sees it.
He said it like it was,
that Elon's having the biggest case of seller's regret in the history of business.
Yeah, a lot of people just wouldn't give their opinion or an answer to many of the questions that we asked him just then, and he did.
Find me a VC or someone hugely invested in AI that has anything resembling the kind of moderate, reasoned views of Reid.
There just aren't very many of them.
This episode was produced by Claire Miller and Alison Weiss and engineered by Benjamin Spencer.
Our video editor is Jorge Carte.
Our research team is Dan Chalon, Isabel Akincel, Chris Nodonoghue, and Mia Silverio.
Jake McPherson is our social producer.
Drew Burrows is our technical director,
and Catherine Dillon is our executive producer.
Thank you for listening to Prof G Markets from Prof G Media.
If you liked what you heard,
give us a follow and join us for a fresh take on markets on Monday.
