Freakonomics Radio - 547. Satya Nadella’s Intelligence Is Not Artificial
Episode Date: June 22, 2023
But as C.E.O. of the resurgent Microsoft, he is firmly at the center of the A.I. revolution. We speak with him about the perils and blessings of A.I., Google vs. Bing, the Microsoft succession plan … and why his favorite use of ChatGPT is translating poetry.
Transcript
Are you having fun in your job?
I'm loving every day of it, Stephen.
Most CEOs of big technology firms are not loving every day right now.
They've been facing all sorts of headwinds and backlash.
But you can see why Satya Nadella might be the exception.
He's worked at Microsoft for more than 30 years, nearly 10 as CEO. At the start of the personal computer era, Bill Gates' Microsoft was a behemoth, eager to win every competition and crush every rival.
But the internet era put the company on its heels.
Newer firms like Google, Facebook, and Amazon were more nimble, more innovative, and maybe hungrier.
Jeff Bezos of Amazon would reportedly refer to Microsoft as a country club.
But under Nadella, Microsoft has come roaring back.
He invested heavily in what turned out to be big growth areas like cloud computing.
Microsoft has always been in the business of acquiring other companies,
more than 250 over its history.
But some of the biggest acquisitions have
been Nadella's: LinkedIn, Nuance Communications, and, if regulators allow, the gaming firm Activision
Blizzard. And there have been many more key acquisitions, like GitHub, where computer
programmers store and share their code. Once again, Microsoft is a behemoth, the second most valuable company in the world, trailing only Apple.
Its stock price is up nearly 50% since the start of 2023.
But that's not even the reason why Microsoft has been all over the news lately.
They're in the news because of their very splashy push into artificial intelligence in the form of ChatGPT, the next-level chatbot created by a firm called
OpenAI. Microsoft has invested $13 billion in OpenAI for a reported 49% stake in the company,
and they quickly integrated OpenAI's tech into many of their products, including the Microsoft
search engine Bing. For years, Bing was thought of as something between footnote and joke, running a very
distant second to Google.
But suddenly, Bing with ChatGPT is on the move, and Google is trying to play catch-up
with its own chatbot called Bard.
So how exactly did Satya Nadella turn the country club into a bleeding edge tech firm with a valuation of more than two and a half trillion dollars?
Our mission, Stephen, is to empower every person and every organization on the planet to achieve more.
And so as the world around us achieves more, we make money.
I like that. I mean, I assume you actually believe that. You're not just saying that, are you?
No, 100%. You have to have a business model that is aligned with the world around you doing well.
Today on Freakonomics Radio, we speak with Satya Nadella about the blessings and perils of AI.
We talk about Google and Heidegger,
about living with pain,
and about Microsoft's succession plan.
No, it'll be nothing like that.
Nadella promises.
We will take succession seriously. This is Freakonomics Radio, the podcast that explores the hidden side of everything, with your host, Stephen Dubner.
I spoke with Satya Nadella one afternoon earlier this month.
I was in New York and he was in his office at Microsoft's headquarters near Seattle.
It's fantastic to have a conversation again. We first interviewed Nadella in 2017 for a series called The Secret Life of a CEO.
Even then, he was extremely excited about AI.
At the time, Microsoft was high on a virtual reality headset called the HoloLens.
Think about it. Your field of view, right?
What you see is a blend of the analog and digital.
The ability to blend analog and digital is what we describe as mixed reality.
There are times when it'll be fully immersive.
That's called virtual reality.
Sometimes when you can see both the real world and the artificial world, that's what is augmented reality.
But to me, that's just a dial that you set.
I mean, just imagine if your hologram was right here interviewing me as opposed to just on the phone.
Back then, Nadella cautioned that there was still a lot of work to do.
Ultimately, I believe in order to bring about some of these magical experiences and AI capability,
we will have to break free of some of the limits we're hitting of physics, really.
The limits of physics haven't been broken yet,
and the HoloLens has not been the hit that Microsoft was hoping for.
But Nadella's devotion to AI is paying off big time in the form of ChatGPT, which quickly captured the imagination of millions.
GPT stands for Generative Pre-Trained Transformer, and ChatGPT is what is known as a large language model or LLM. It takes in vast
amounts of data from all over the internet so it can learn how to read and answer questions very
much like a human, but a really, really smart human or perhaps a million smart humans. And the
more we ask ChatGPT to answer questions or summarize arguments or plan itineraries,
the more finely tuned it gets, which proves at the very least that we humans are still
good for something.
The current iteration is called GPT-4.
And what's the relationship between ChatGPT and Bing?
Basically, Bing is part of ChatGPT and chat is part of Bing.
So in either way, it doesn't matter which entry point you come to, you will have Bing.
So Satya, I asked ChatGPT for some help in this interview.
I said, I'm a journalist interviewing Satya Nadella and I want to get candid and forthright
answers.
You know, I just didn't want corporate boilerplate.
And what ChatGPT told me was to do my homework, which, you know, I did, I usually do that; to
ask open-ended questions, which I typically try to do. But one that hung me up a
little bit was I need to build rapport. Now we have a relatively short time together today.
Are there any shortcuts to building rapport? Yeah. What's your knowledge of cricket?
Oh, I blew it.
I knew that you're a big cricketer.
You played as a kid.
I knew you cared more about cricket than schoolwork as a kid.
But no, I blew it.
That's too bad.
Because there's a world test championship starting tomorrow.
I was going to ask you about it.
But hey, look, your love for economics builds me an instant rapport.
I'd like you to walk us through Microsoft's decision to bet big on OpenAI, the firm behind ChatGPT. There was an early investment of a
billion dollars, but then much, much more since then. I've read that you were pretty upset when
the Microsoft research team came to you with their findings about OpenAI's LLM, or large language model. They said that they were blown away
at how good it was and that it had surpassed Microsoft's internal AI research project
with a much smaller research team in much less time. Let's start there. I'd like you to describe
that meeting. Tell me if what I've read, first of all, is true. Were you surprised and upset
with your internal AI development? Yeah, I think that this was all very recent. This was after GPT-4 was very much there. And then that
was just mostly me pushing some of our teams as to, hey, what did we miss? You got to learn.
You know, there were a lot of people at Microsoft who got it and did a great job of, for example,
betting on OpenAI and partnering with OpenAI. And to me, four years ago,
that was the idea. And then as we went down that journey, I started saying, okay, let's apply these
models for product building, right? Models are not products. Models can be part of products.
The first real product effort which we started was GitHub Copilot. And quite frankly, the first
attempts on GitHub Copilot were hard
because, you know, the model was not that capable.
But it was only once we got to GPT-3,
when it started to learn to code,
that we said, oh, wow, these emergent phenomena,
the scaling effects of these transformer models
are really showing promise.
Nadella may be underplaying the tension
between Microsoft and OpenAI, at least according
to a recent Wall Street Journal article called The Awkward Partnership Leading the AI Boom.
It describes, quote, conflict and confusion behind the scenes. And because the OpenAI deal
is a partnership and not an acquisition, the journal piece makes the argument that Microsoft has influence without control as OpenAI is allowed to partner with Microsoft rivals.
Still, you get the sense that Nadella is excited about the competitive momentum ChatGPT
has given Microsoft, as you can tell from this next part of our conversation. Google still handles about 90% of online global search activity.
An AI search-enabled model is a different kind of search, plainly, than what Google's been doing.
Google's trying to catch up to you now. How do you see market share in search playing out via Bing,
via ChatGPT in the next five and 10 years? And I'm curious to know
how significant that might be to the Microsoft business plan overall. This is a very general
purpose technology, right? So beyond the specific use cases of Bing Chat or ChatGPT, what we have
are reasoning engines that will be part of every product. In our case, they're part of Bing and ChatGPT.
They're part of Microsoft 365.
They're part of Dynamics 365.
And so in that context, I'm very excited about what it means for search.
After all, Google, as you said, rightfully, they're dominant in search by a country mile. And we've hung in there over the decade.
We've been at it to sort of say, hey, look, our time will come where there will be a real inflection point
in how search will change.
We welcome Bing versus Bard as competition.
It'll be like anything else, you know,
which is so dominant in terms of share
and also so dominant in terms of user habit, right?
We also know that defaults matter
and obviously Google controls the default on Android,
default on iOS, default on Chrome. And so they have a great structural position. But at the same
time, whenever there is a change in the game, it is all up for grabs again, to some degree. And I
know it'll come down to users and user choice. We finally have a competitive angle here, and so we're going to push it super hard.
What are some of your favorite uses, personal or professional, for ChatGPT?
The thing that I've talked about, which I love, is the cross-lingual understanding. That's kind
of my term for it. You can go from Hindi to English or English to Arabic or what have you, and they've done a good job.
If you take any poetry in any one language
and translate it into another language,
in fact, if you even do multiple languages,
so my favorite query was,
I said I always, as a kid growing up in Hyderabad, India,
said, I want to read Rumi,
translate it into Urdu and translate it into English.
And one shot, it does it. But the most interesting thing about that is it captures the depth of poetry. So it finds somehow in that
latent space, meaning that's beyond just the words and their translation. That I find is just
phenomenal.
This amazes me. You, the CEO of a big tech firm,
are saying that one of the highest callings of ChatGPT or a large language model is the
translation of poetry. I love it. I mean, I know you love poetry, but what excites you more about
that than more typical business, societal, political, economic applications?
I mean, I love a lot of things.
I remember my father trying to read Heidegger in his 40s and struggling with it.
And I've attempted it a thousand times and failed.
And, you know, he's written this essay somebody pointed me to. They said, oh, you've got to read that because, after all,
there's a lot of talk about AI and what it means to humanity.
And I said, let me read it.
But I must say, you know, going and asking ChatGPT or Bing Chat to summarize Heidegger is the best way to read Heidegger.
According to ChatGPT, Heidegger himself would not have been a fan of AI.
In Heidegger's view, Chat tells us, technology, including AI, can contribute to what he called the forgetting of being.
And Heidegger is hardly alone.
After all, philosophy and poetry will likely not be the main use cases for AI.
So after the break,
we talk about potential downsides of an AI revolution
and the degree to which Microsoft cares.
I want all 200,000 people at Microsoft
working on products to think of AI safety.
I'm Stephen Dubner.
This is Freakonomics Radio.
We'll be right back. Last month, a group of leaders from across the tech industry issued a terse
one-sentence warning. Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks,
such as pandemics and nuclear war. The extinction they're talking about is human extinction. Among
the signatories were Sam Altman, the CEO of OpenAI, and two senior Microsoft executives.
Altman, Satya Nadella, and other executives from firms working on AI
recently met with President Biden to talk about how the new technology should be regulated. I
asked Nadella where he stands on that issue. I think the fact that we are having the conversation
simultaneously about both the potential good that can come from this technology
in terms of economic growth that is more equitable and what have you. And at the same time,
that we're having the conversation on all the risks, both here and now and the future risks,
I think it's a super healthy thing, right? Somebody gave me this analogy, which I love,
right? Just imagine when the steam engine first came out,
if we had a conversation both about all the things
that the steam engine can do for the world
and the industrial production and the industrial revolution
and how it'll change livelihoods.
And at the same time, we were talking about pollution
and factory filth and child labor,
we would have avoided more than a hundred years
of terrible history.
So then it's best to be grounded on what does the risk framework look like, right?
If AI is used to create more disinformation, that's a problem for our democracy and democratic
institutions. Second, if AI is being used to create cyber attacks or bioterrorism attacks, that's a risk.
If there is real-world harms around bias, that's a risk.
Or employment displacement, that's a risk.
So let's just take those four.
In fact, those were the four even the White House was upfront on and saying, hey, look,
how do we really then have real answers to all these four risks?
So in terms of, for example, take disinformation. Can we have
techniques around watermarking that help verify where did the content come from? When it comes
to cyber, what can we do to ensure that there is some regime around how these frontier models are
being developed? Maybe there is licensing. I don't know. This is for regulators to decide. Microsoft itself has been working on provisions to best govern AI.
For instance, safety brakes for AI systems that control infrastructure like electricity or transportation.
Also, a certain level of transparency so that academic researchers can study AI systems.
But what about the big question? What about the doomsday scenario wherein an AI
system gets beyond the control of its human inventors? Essentially, the biggest unsolved
problem is how do you ensure both at sort of a scientific understanding level and then the
practical engineering level that you can make sure that the AI never goes
out of control. And that's where I think there needs to be a CERN-like project where both the
academics along with corporations and governments all come together to perhaps solve that alignment
problem and accelerate the solution to the alignment problem.
But even a CERN-like project after the fact, once it's been made available to the world, especially without watermarks and so on,
does it seem a little backwards? Do you ever think that your excitement over the technology led you and others to release it publicly too early? No, I actually think, first of all,
we're in very early days and there has been a lot of work. See, there's no way you can do all of
this just as a research project. And we spent a lot of time, right? In fact, if anything, that,
for example, all the work we did in launching Bing Chat and the lessons learned in launching
Bing Chat is now all available as a safety service, which, by the way, can be used with
any open source model. So that's, I think, how the industry and the
ecosystem gets better at AI safety. But at any point in time, anyone who's a responsible actor
does need to think about everything that they can do for safety. In fact, my sort of mantra
internally is the best feature of AI is AI safety. I did read, though, Satya, that as part of a
broader, a much broader layoff earlier this year,
that Microsoft laid off its entire ethics and society team, which presumably would help build
these various guardrails for AI. From the outside, that doesn't look good. Can you explain that?
Yeah, I saw that article, too. At the same time, I saw all the headcount that was increasing at
Microsoft. It's kind of like saying, hey, should we have a test organization that is somewhere on the side? I think the point is that work that AI safety teams
are doing has now become such a mainstream, critical part of all product making that we have actually,
if anything, doubled down on it. So I'm sure there was some amount of reorganization and any
reorganization nowadays
seems to get written about. And that's fantastic. We love that. But to me, AI safety is like saying
performance or quality of any software project. You can't separate it out. I want all 200,000 people
at Microsoft working on products to think of AI safety. One particular concern about the future of AI
is how intensely concentrated the technology is
within the walls of a relatively few firms and institutions.
The economists Daron Acemoglu and Simon Johnson
recently published a book on this theme
called Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.
And here's what they wrote in a recent New York Times op-ed. Tech giants Microsoft and Alphabet/Google have seized a large lead in
shaping our potentially AI-dominated future. This is not good news. History has shown us that when
the distribution of information is left in the hands of a few, the result is political and economic oppression.
Without intervention, this history will repeat itself.
Their piece was called Big Tech Is Bad. Big AI Will Be Worse.
You could argue we are fortunate to have a CEO as measured as Satya Nadella leading the way at Microsoft.
But of course, he won't be there forever.
After the break, what does a Microsoft succession look like?
I'm Stephen Dubner.
This is Freakonomics Radio.
We'll be right back.
Satya Nadella grew up in India, where his father was a Marxist civil servant.
Satya really did want to be a professional cricketer, and he wasn't a great student,
but he did go on to study electrical engineering at university, then came to the States for a
master's degree in computer science. He worked for a couple years at Sun Microsystems,
then joined Microsoft in 1992. He also got an MBA from the University of Chicago.
In his early years at Microsoft, he worked on the operating system Windows NT.
Windows products were how Microsoft made its big money, and the majority of the world's
desktop computers still run on Windows.
But Microsoft famously missed the shift to mobile computing.
And from 2000 to 2013, under CEO Steve Ballmer,
they saw their stock price fall by more than 40%.
Satya Nadella inherited a lot of problems.
That is Jeffrey Sonnenfeld.
Officially, he is a professor of management studies at Yale, and he runs the Yale Chief Executive Leadership Institute.
Unofficially, he is known as one of the world's leading authorities on CEOs. Last year,
he published a list of the best American CEOs. Satya Nadella was number one. Rather than try to come up with long lists of ways of
vilifying predecessors, what Nadella did is he was able to be on a frontier at this exact same
moment as the early investors in OpenAI, as well as in reinventing their own artificial
intelligence opportunities
so that Bing, surprise to all, might soar past everybody.
He got people excited about building a new future, investing $25 billion in R&D each year.
That's perhaps twice as much as the average pharma company invests,
and that's amazing for an IT company to do that.
A big part of Nadella's success came from expanding Microsoft's footprint in cloud computing with their Azure platform.
Their footprint across the board in enterprise software was flourishing; he knew how to invest in Azure and a commercial cloud business whose revenues grew 42% over the past year.
I asked Nadella himself if he had been surprised by how
valuable cloud computing has become for Microsoft. Both surprised and not surprised in the following
sense. We were leaders in client-server. But while we were leaders in client-server, you know,
Oracle did well, IBM did well. And so, in fact, it shaped even my thinking of how the cloud may sort of emerge,
which is that it'll have a similar structure. There will be at least two to three players
who will be at scale, and there will still be many other smaller niche players, perhaps.
So in that sense, it is not that surprising. What has been surprising is how big and expansive
the market is, right?
You know, let's think about it.
Like, yeah, we sold a few servers in India, but oh my God, did I think that cloud computing
in India would be this big?
No.
The market is much bigger than I originally thought.
I have a fairly long and pretentious question to ask you.
There are economists and philosophers and psychologists who argue that
most of us still operate under a scarcity mindset that might have been appropriate
on the savannah a million years ago, but now we live in an era of abundance. So, you know,
rather than competing for scarce resources, we should collaborate more to grow the overall
resource pool. From what I know about your time as CEO at Microsoft,
it seems you have embraced the collaborative model over the competitive model. One example
being how nicely Microsoft now plays with Apple devices, whereas the previous administration
didn't even want Microsoft employees owning Apple devices. So I'd like to hear your thoughts generally on this idea of collaboration versus competition and scarcity versus abundance.
The best technique humanity has come up with to create, I would say, economic growth and growth
in our well-being as humanity is through cooperation. So let's start there, right?
So the more countries cooperate with countries, people cooperate with people, corporations
cooperate with other corporations, the better off we are. And then at a micro level, I think you
want to be very careful in how you think about zero-sum games, right? I think we overstate
the number of zero-sum games that we play. In many cases, I think growing your overall
share of the pie is probably even more possible when the pie itself is becoming
bigger. So I've always approached it that way. That's kind of how I grew up actually at Microsoft.
And so, you know, all of what we have done in the last, whatever, close to 10 years has been to look
at the opportunity set first as something that expands the opportunity for all players and in there being competitive.
Were there people within the firm, though, who said or felt, wait a minute, I know you're the
new CEO and I know you have a new way of doing things, but Google is our enemy. Apple is our
enemy. We can't do that. Did you have pushback? Yeah, I mean, look, it's a very fierce competitive
industry. And even if we didn't think of them as our competitors, our competitors probably think
of us as competitors.
But I think at the end of the day, I think it helps to step back and say, you know, it
doesn't mean that you back away from some real zero-sum competitive battles, because
after all, that's kind of what fosters innovation and that's what creates consumer surplus and
opportunity.
And so that's all fine.
But at the same time, leaders in positions like mine have to also be questioning what's the way
to create economic opportunity. And sometimes, you know, construing it as zero-sum is probably
the right approach, but sometimes it's not. So Microsoft is a huge company and huge companies get bigger by acquisition typically. Let's go
through a couple. I know you tried a few times to buy Zoom. You haven't succeeded yet. You're
still in the middle of trying to acquire Activision. That's tied up in the US at least in
an FTC lawsuit. A few years ago, I read you tried to buy TikTok. You called those negotiations
the strangest thing I've ever worked on. What was so strange about that?
Look, at least let me talk to all the acquisitions that we did that actually have succeeded and we
feel thrilled about it, right? Whether it's LinkedIn or GitHub or Nuance or ZeniMax or
Minecraft, these are all things that we bought. I feel that these properties
are better off after we acquired them because we were able to innovate and then make sure that we
stayed true to the core mission of those products and those customers who depended on those products.
What about TikTok, though? What was so strange about that negotiation or those conversations? Everything. First of all, I mean, just to be, you know, straight about it,
TikTok came to us because they at that time sort of said, hey, we need some help in thinking about
our structure. And given what at that time, at least, was perceived by them as some kind of a
restructuring that the United States government was asking for.
They needed a U.S. partner, in other words, yes?
Yeah.
So at that point, we said, look, if that is the case that you want to separate out your
U.S. operations or worldwide operations, we would be interested in being engaged in a
dialogue.
And it was just, let's just say, an interesting summer that I spent on it.
Okay. So not long ago, Satya, you became the chair of the Microsoft board in addition to CEO.
Now, a lot of corporate governance people hate the idea of one person having both jobs. I asked
ChatGPT about it. What's the downside? One potential conflict of interest, ChatGPT told me,
is the roles of CEO and board chair can sometimes be at odds.
The CEO is typically focused on the day-to-day, yada, yada, but there can be potential conflicts
of interest.
Can you give an example of one conflict that you've had?
Or maybe you haven't, which would give the corporate governance people even more of a headache.
The reality is we have a lead independent director, a fantastic lead
independent director in Sandy Peterson. She has the ultimate responsibility of hiring and firing
me. That said, I think the chair role as I see it is more about me being able to sort of, you know,
having been close to 10 years in my role, to use my knowledge of what it is that Microsoft's getting
done in the short and the long run to be able to coordinate the board agendas and make sure that the topics that we're discussing are most
helpful for both the board and the management team. And so it's kind of as much about, you know,
program managing the board versus being responsible for the governance of the board. And the governance
of the board is definitely with the independent directors. Can you name a time when the board voted down a big idea of yours?
I don't know if there is a particular vote that they voted me down, but I take all of the board
feedback on any idea that I or my management team has. We have a good format where every time we get
together, we kind of do a left to right, I'll call it, overview of our business. And we
have a written doc, which basically is a living document, which captures our strategy and
performance. And having that rich discussion where you can benefit from the perspective of the board
and then change course based on that perspective is something that I look forward to and I welcome.
Now, the last time we spoke, which was several years ago, you talked about how the birth of your
son, Zain, changed you a great deal. He was born with cerebral palsy. And you said that empathy
didn't come naturally to you, certainly not compared to your wife, but that over time,
being a parent to a child with a severe handicap was a powerful experience for you on
many levels. I was so sorry to read that Zain died not long ago, in just his mid-20s. So, my
deepest condolences on that, Satya. I'm also curious to know if or how his death has changed you as well. No, I appreciate that, Stephen.
It's probably, it's hard, Stephen,
for me to even reflect on it that much.
It's been, for both my wife and me,
in some sense, he was the one sort of constant
that gave us a lot of purpose, I would say,
in his short life.
And so, you know, I think we're
still getting through it. And it'll, I think, take time. But I'd just say, the thing that I
perhaps have been most struck by is what an unbelievable support system that got built
around us in even the local community around Seattle. At his memorial,
I look back at it, all the people who came, right, all the therapists, the doctors, the friends,
the family, the colleagues at work. I even was thinking about it, right? After all, Zain was
born when I was working at Microsoft and he passed when I was working at Microsoft. And everything,
even from the benefits
programs of Microsoft to the managers who gave me the flexibility. I think that sort of was a big
reminder to me that all of us have, you know, things happen in our lives. Sometimes things
like pandemics or the passing of a loved one or the health issues of elderly parents. And we get
by because of the kindness of people around us and the support of communities around
us.
And so if anything, both my wife and I have been super, super thankful to all the, you
know, the people and the institutions that were very much part of his life and thereby
part of our lives.
You are a young man, still 55 years old, but you've been at Microsoft a long time now, been CEO almost 10 years.
I'm curious about a succession plan, especially, I don't know if you watch the HBO show Succession.
Do you watch Succession, Satya, or no?
I watched, I think, the first season a bit and I was never able to get back to it.
Okay, so I'll give you a small spoiler.
It doesn't go well. And their succession plan turns out to be, I think the technical term is total show.
Okay. So I am curious if your succession plan will be somewhat more orderly than the succession plan
on succession. Obviously, the next CEO of Microsoft is going to be appointed by the
lead independent directors of Microsoft and not by me.
But to your point, it's a board topic where we have a real update on it every year, as it should be.
And I take that as a serious job of mine.
Like one of the things that I always say is long after I'm gone from Microsoft, if Microsoft's doing well, then maybe I did a decent job because I always think about the strength of the institution long after the person is gone is the only way to measure the leader.
I'm very, very suspicious of people who come in and say before me, it was horrible.
And during my time, it was great.
And after me, it is horrible.
I mean, that first of all means you didn't do anything to build institutional strength. So yes, I take that job that I have in terms of surfacing
the talent and having the conversation with the board of directors seriously. And you know, when
the time comes, I'm pretty positive that they will have a lot of candidates internally and,
you know, they'll look outside as well. And so, yes, we will take succession seriously.
That was Satya Nadella, CEO of Microsoft.
His intelligence, I think you will agree,
doesn't feel artificial at all.
Coming up next time on the show.
Most people, when they think about marriage, they think about it in terms of preferences and in terms of love.
But economists aren't most people.
So this idea is what encapsulates the idea of the marriage market.
Is marriage really a market?
I think people truly misunderstand these dating services. Why did you marry that
person? That's next time on the show. Until then, take care of yourself and if you can, someone else
too. Freakonomics Radio is produced by Stitcher and Renbud Radio. You can find our entire archive
on any podcast app or at Freakonomics.com, where we also publish transcripts and show notes.
This episode was produced by Zach Lipinski with research help from Daniel Moritz-Rabson.
It was mixed by Greg Rippin with help from Jeremy Johnston. Our staff also includes Alina Cullman,
Daria Klenert, Eleanor Osborne, Elsa Hernandez, Emma Terrell, Gabriel Roth, Jasmine Klinger,
Julie Kanfer, Catherine Moncure, Lyric Bowditch, Morgan Levy, Neil Carruth, Rebecca Lee Douglas,
I blew an opportunity here.
I need to ask ChatGPT how to get over intense disappointment at myself.
The Freakonomics Radio
Network. The hidden side of
everything.
Stitcher.