The Prof G Pod with Scott Galloway - Google’s Anti-Competitive Behavior, Facebook’s Inhumanity, and Understanding AI’s Limits — with Meredith Broussard
Episode Date: October 28, 2021. Meredith Broussard, a data journalist, associate professor at NYU, and the author of "Artificial Unintelligence: How Computers Misunderstand the World," joins Scott to discuss the state of play regarding AI, AI bias, and ethical uses of AI. Scott opens with his thoughts on Google removing the YouTube app from Roku and how Facebook strips people of their autonomy over information. Algebra of Happiness: embrace the chaos. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform
offers all the automation, integration, and reporting tools
that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca
Support for Prop G comes from NerdWallet. Starting your search? Head to nerdwallet.com slash learn more to compare over 400 credit cards.
Head over to nerdwallet.com forward slash learn more to find smarter credit cards, savings accounts, mortgage rates, and more.
NerdWallet. Finance smarter.
NerdWallet Compare Incorporated.
NMLS 1617539.
Episode 111. The atomic number of roentgenium. I'm pretty sure that's the brand of testosterone I'm on. I have more hormones right now than a cow owned by Monsanto. The Prop G Pod. Strong like bull. Go! Go! Go! Welcome to the 111th episode of The Prop G Pod.
111. 1-1-1.
There's got to be something weird about that, right?
There's got to be something, I don't know.
Does that mean Lucifer is going to show up in the middle of our break
and we have an ad from ZipRecruiter?
The Dark Knight is going to show up?
Anyways, in today's episode, we speak with Meredith Broussard, a data journalist, associate
professor at New York University.
Colleague, a colleague.
Everyone thinks we hang out and give each other pens and we're on a lawn and joking
and hanging out in our cardigans.
I know almost nobody at NYU.
Literally, I have two or three friends that I go out with. Essentially, the other profs that drink a lot.
That's kind of my posse. Anyways,
Professor Broussard is the author of Artificial Unintelligence, How Computers
Misunderstand the World. I think that's a little cynical. I think she should be more optimistic.
The glass is half full, Professor. We discussed with Meredith the state of play regarding AI,
AI bias, and ethical uses of AI. All right, all right. What is happening? We're seeing yet
another example of big tech wielding its power to play hardball with the little guys. Google is
removing YouTube from Roku after the two companies were unable to reach an agreement on how the app
would continue to operate on the platform. So what happened? According to Roku, Google insists that
when Roku users search for something on the platform,
YouTube results must be featured first, even if they are not the most relevant.
Google used to take you to the right place. Now they take you to a place they can further monetize.
A little ironic, Google, the search company, trying to manipulate someone else's search
results. Think about how strange this is. 93% of our queries, where we go for information,
and by the way, you trust Google more than any rabbi, priest, scholar, or any god you pray to, in my view. And one company gets to decide, via algorithms which are very opaque, what those answers should be 93% of the time. What could go wrong? Roku
also claims that Google demands that Roku provide it with data about its customers,
data that Google doesn't require from other partners.
Well, Google, smell you.
The two companies have been negotiating since April,
but it sounds like those negotiations have fallen apart.
Google's statement is,
Roku has once again chosen to make unproductive
and baseless claims
rather than try to work constructively with us.
Oh, poor fucking you, Google.
Oh, you're the victim here.
You're the victim. 93% share of $150
billion market. No one has ever sustained that before. No one has ever had that sort of monopoly
power, which ultimately leads to abuse. What is some of that abuse? What is the sunlight,
which is piercing through Facebook's bullshit, like a four-year-old or an eight-year-old,
I should say, with a magnifying glass burning ants. That's what's going on now at Facebook.
Literally, all of the stuff coming about Facebook
is creating so much heat, so much sunshine,
that eventually some of that light
is going to start to illuminate the other platform
that is doing a ton of damage here.
Talk about anti-competitive damage,
you're talking about Google's ad network
with both a buyer, seller, and a market maker,
which makes no fucking sense.
But in terms of actual harm to our society,
you know who's next?
YouTube.
Extremist groups being suggested to young boys,
and Susan Wojcicki is the equivalent
of Sheryl Sandberg, but better.
She has been more effective.
She creates a likability shield,
and I think that the next kind of,
if you will, dime to drop is going to be on
YouTube. And some reporting has shown that in YouTube meetings, and this is going to sound
eerily reminiscent of Facebook, they always opt for free speech. Yeah, because they're just such
warriors for the First Amendment, aren't they, those people at YouTube? And they come up with
a committee, they come up with a task force, they do almost nothing, pretend to do something,
and delay and obfuscate under the
auspices of Section 230 or First Amendment some of the damage or a lot of the damage they have done.
Anyways, back to Roku and Google. Google claims it's baseless. Baseless? Really, Google? According
to a 2019 email that was released to CNBC, Google did in fact require special treatment. The email from a Google executive to
Roku reads, open quote, YouTube position, a dedicated shelf for YouTube search results is a
must, close quote. And since Roku won't surrender to Google's clear abuse of power, no new Roku
customers will have access to YouTube or YouTube TV beginning on December 9th. That's new Roku
customers; current users with access to the YouTube apps won't be affected. So, does it have monopoly power? The question is, does it have so much power that it can abuse that power? The Brandeisian view of antitrust is that if you have market power and you can abuse it to the point of creating non-market or non-competitive conditions, that's abuse of power. And I believe YouTube and definitely Google crossed that threshold a long time ago.
A couple other things in the news I found really interesting.
Yesterday, I spoke to Roger McNamee, and he pointed me to the work of Shoshana Zuboff
and to a concept of hers I found fascinating.
I was going on Anderson Cooper last night to talk about Facebook,
specifically Francis Haugen's testimony, the whistleblower case and all of
these sort of drip, drip, drip things that keep coming out about Facebook that are both shocking
but not surprising. A couple of things struck me. One, essentially, Ms. Haugen has gone big tech on big tech. And that is, the rollout here has been thoughtful, meticulous, planned, coordinated.
And that is you have a branded series of investigative
journalist pieces all called the Facebook papers. You have a national TV rollout. You have
coordination among different press groups. And Facebook seems to be absolutely horrified that
someone would bring the same discipline and resources to this battle that they've been bringing. For a long time, people who have
been highlighting the ills of social media have been fighting Panzer tanks on horseback. And all
of a sudden, it feels like we're finally starting to fight on Panzer tanks. That's also what was fascinating about Shoshana Zuboff's work that Roger pointed out to me: that at some point,
when an individual loses
their autonomy, that's when the law needs to step in. So we have child labor laws because
it's generally accepted that children don't have their own autonomy over when and where they can
work and for how much. And so we move in and we enforce laws and create laws. And it got me
thinking about the folks in the insurrection, people who are vaccine hesitant, people who are trafficked. Do these people still have their own autonomy? And when I look at those tapes of the insurrection, I wonder how many of those people genuinely thought that they were doing the right thing. And then think about the people who have not taken a vaccine. We're about 58% in the US. We should be 78% like some other advanced
nations. We have the supply chain. We have access to these vaccines. I mean, this is a gift from God,
and yet we are 20% lower than some other nations. Take 20% times 350 million people, that's 70
million, assuming everybody gets it, which a lot of smart people think will happen at some point. 1% mortality rate.
And then what people don't talk about a lot is that for every person that dies,
there are three or four people that maintain long-term detrimental health conditions,
whether it's cardiological or neurological conditions. So you're talking about 75 million people that probably should have got the vaccine, but I would argue have no autonomy around
information. You're saying, well, Scott, you're infantilizing them. I don't know.
I think when essentially two-thirds of the world's social media platforms are controlled by one company, and Americans now get a third to two-thirds of their news from this one source, and these algorithms build a digital corpus of you and then start penetrating your autonomy with misinformation to the extent where
it doesn't feel like misinformation. And the result is vaccine hesitancy. I mean, at some point,
does your reckless business model begin to breach into the category of criminal intent?
And what you have to show in a criminal case is intent. And I wonder, with the massive delay and obfuscation, the lying, the ignoring of the research, and this business model, which is clearly creating just massive misinformation around some really damaging things, if at some point this breaches into criminal
intent. This is a long-winded way of saying that I believe there's going to be antitrust. I believe
there's going to be regulation. I now believe there is going to be a perp walk at one or more
big tech companies because this is beginning to smell a lot like Christmas if Christmas were criminal. Specifically,
the antitrust case in Texas could be federalized. Why is that important? Because the remedies
for antitrust can be criminal. Two, human trafficking on a global basis and ignoring it
is in fact a crime. And aiding and abetting an
insurrection. And when you know an insurrection is being planned on your platform and you continue
to serve up algorithms to convince people it's a good idea, that in fact is a crime as well.
That's DOJ time, I think. And by the way, everyone, this is where you are. You're like,
oh, Scott, you're such an alarmist. You know what? Four years ago when I wrote The Four,
get ready for the virtue signaling
here. And I said, there's something wrong in Mudville here. These companies present a clear
and present danger. My publisher and people said, well, it's provocative, but is it really true? Do
you really want to say these things? And then three years ago, when I said Mark Zuckerberg was
the most dangerous person on the planet, people called me reactionary. People called me, or said I was engaging in
clickbait. And now let me say it again. I think there's crimes here. I think we're going to see
a perp walk finally. And I think that will be the beginning of the end of big tech as we know it,
or the end of the beginning where these companies ultimately morph to a more productive tech
ecosystem. We're net gainers from tech. Big tech provides a lot of wonderful things, as does fossil fuels, as does pesticides, but that doesn't mean
we shouldn't regulate them. It is time for regulation and also the algebra of deterrence
to kick in. And the algebra of deterrence is not in full force here because nobody's gone to jail.
No one's even been cuffed. I think that's coming. I think that's coming. Okay, so let's circle this
back. Let's come back. Where are you, Scott? Come back to us. Let's circle this back to Roku and Google.
All of this news went down about the same time that Senators Amy Klobuchar and Chuck Grassley
introduced a bipartisan bill that aims to rein in companies, including Amazon, Apple, and Google,
and prevent them from abusing their market power and squashing competition.
Following the news that Roku called out Google for its unfair business practices, Senator Klobuchar, who has been an absolute champion,
gangster in leading antitrust actions, issued a statement explaining how Roku's claim is exactly
why we need new laws to prevent this gatekeeper behavior and wrote the following, open quote,
for too long, the big tech platforms have leveraged the power to preference their
products and services over those of thousands of smaller online businesses. They have said,
just trust us, but experience has shown that we can't rely on these companies to act fairly
in the marketplace. I'll go further than that. We can't trust these companies to have
any regard for the Commonwealth. They are not concerned with the condition of our soul. They
are not there to protect our teenagers.
They are there to make money at any cost.
And the costs have become too great.
We're totally obsessed, around antitrust, with economic costs, specifically whether prices go up. The primary litmus test is the consumer test, and that is, do consumer prices go up. It's difficult to apply that test in a marketplace where the products are free, but the non-economic costs we are paying have skyrocketed. What are the non-economic
costs that we pay as parents? I was with a friend of mine this weekend. I went to a concert.
I went and saw Rufus Du Sol, which made me feel 56 again. Jesus Christ, I am over these music festivals. I'm just too fucking old, let's be honest. It doesn't matter how much T I inject
into my system. I just can't deal with big crowds of young people. Although some people did recognize me and I lit up like
a fucking Christmas tree. When people come up to me and interrupt me in the middle of me having a
good time to say hi, you know how it makes me feel? You know how it makes me feel, honestly?
It's wonderful. Thank you. Thank you for saying hi. And if you see me and you recognize me,
please come up and say hi. I'm nice. I'm nice. Charming and nice.
Mostly nice.
Anyways, at the concert, I went with a friend, this guy named Brent, who is a really thoughtful
guy.
And he said something that really struck me.
He said, he has two kids.
His are younger than mine.
His are, I think, eight and 11.
Mine are 11 and 14.
He said, I would rather give my daughter, when she's 16, a bottle of Jack, car keys, and a shit ton of marijuana than have her on Instagram and Snap.
And he said, imagine seeing your full self at 16. And that really struck me. Imagine seeing your
full self at the age of 16, 24 by seven. I remember
the stupid things I said in class, but I could go home and retreat to the safe place of my cartoons
and my friends who didn't see me say something stupid, or say hi to my mom, or think about how tomorrow was a new day. Every time, you're in the way of someone who sees an opportunity to shame you, because of whatever tribal instincts kick in, especially among young girls. Boys bully physically and verbally; girls bully relationally.
Do we really want,
do we really want our children
to be confronted with their full selves 24 by seven
before the ages of 16, 17, and 18? We've age-gated porn. We've age-gated
alcohol. We've age-gated tobacco. Why on earth are we not age-gating the products from a group
of mendacious fucks who have figured out a way to program at scale algorithms that elevate
terrible content, divisive content that makes us feel bad about ourselves, that encourages us to go to extreme dieting sites, even though we might be 5'4 and 95 pounds,
it encourages us to engage in groups that are hateful. Why would we want to trust
these organizations to put up the mirror to our full selves? They're the ones constructing the mirror that our children get to see 24 by seven.
Stay with us.
We'll be right back for our conversation
with Meredith Broussard to talk about AI,
AI bias, and ethical uses of AI.
Hey, it's Scott Galloway.
And on our podcast, Pivot,
we are bringing you a special series
about the basics of artificial intelligence.
We're answering all your questions.
What should you use it for?
What tools are right for you?
And what privacy issues should you ultimately watch out for?
And to help us out, we are joined by Kylie Robison, the senior AI reporter for The Verge, to give you a primer on how to integrate AI into your life.
So, tune into AI Basics, How and When to Use AI, a special series from Pivot
sponsored by AWS, wherever you get your podcasts. Think about those businesses that grew their sales
beyond their forecasts. Companies like Momofuku or Feastables by Mr. Beast, or even a legacy
business like Mattel. When you think about them, sure, you think about a product with demand,
a focused brand, and influence-driven marketing.
But part of their secret is actually the business behind the scenes.
As in the business that makes selling and buying simple.
And for millions of companies, that business is Shopify.
Nobody does selling better than Shopify,
home of the number one checkout on the planet.
With their
Shop Pay feature, they can boost conversions up to 50%, meaning way fewer carts going abandoned
and way more sales going. So if you're into growing your business, you want a commerce
platform that's ready to sell wherever your customers are scrolling or strolling, whether
that's on the web, in your store, and everywhere in between.
Because businesses that sell more sell on Shopify.
Sign up for your $1 per month trial period at shopify.com slash voxbusiness, all lowercase.
Go to shopify.com slash voxbusiness to upgrade your selling today.
shopify.com slash voxbusiness. Welcome back. Here's our conversation with
data journalist Meredith Broussard. Professor Broussard, where does this podcast find you?
I am in New York City right now.
Okay, so let's start with the basics.
Can you provide a definition of AI?
Ooh, this is a good question.
And my best definition is that AI is math.
But most people are kind of unsatisfied with that definition, because they expect AI to be magic. What's real about AI is that it's very complicated and beautiful math.
And what's imaginary about AI is all of the robot apocalypse stuff.
But is AI when Netflix says, you know, episode two is starting in three, two, one? Give us an example. AI is math, but it kind of becomes decisions, right? And I'm sincere about this question. I feel like AI has become this catchphrase for technology meets math meets decisions that are made without adult supervision.
When does intelligence become artificial, so to speak?
And what is real intelligence and what's artificial intelligence?
Well, let's unpack the term AI a little bit.
People imagine that AI is something very kind of special, but actually we're using AI all the time.
So there's something like 250 different machine learning models that get activated every time you do a Google search.
And in my book, Artificial Unintelligence,
one of the examples that I give of machine learning is I give an example that involves the Titanic.
So you can take data on the Titanic passengers
and you can feed it into the computer
and have the computer build a model
that predicts
who lives and who dies based on the Titanic passenger data, right? And if I were an insurance
company, I might want to use this model to determine pricing for insurance policies for
people who are going on boat trips. Because if you're going
on a boat trip and you buy insurance from me and you die, then I have to pay out. But if you're going on a boat trip and you buy insurance from me and you live, then I don't have to pay out.
So as an insurance company, I would rather give a low-priced policy to somebody who's more likely to live and a high-priced policy to somebody who's more likely to die.
It makes total sense, right, from a capitalist perspective. There is bias embedded in it because the determining factor mathematically as far as who lives and who dies is passenger class.
The first class passengers survived at a higher rate because they got into the lifeboats first.
And the second and third class passengers were locked in their corridors and prevented from getting into the lifeboats. So if we were to use a machine learning model that's based on the Titanic passenger data,
we would be replicating a really terrible offense of the past.
And we would be charging first-class passengers less for insurance,
charging the richer people less for insurance, and charging the less wealthy people, the second and third class passengers, more for insurance. So AI doesn't necessarily make the best decisions all the time.
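To make the mechanics concrete, here is a minimal sketch of the kind of model she's describing, assuming the classic public Titanic passenger data saved locally as titanic.csv (a hypothetical file name) and scikit-learn's logistic regression. It is an illustration, not Broussard's actual code.

```python
# A minimal sketch of the Titanic example, assuming a local titanic.csv with
# columns Pclass, Sex, Age, Fare, and Survived (hypothetical file name).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("titanic.csv").dropna(subset=["Age"])
X = pd.get_dummies(df[["Pclass", "Sex", "Age", "Fare"]], columns=["Sex"])
y = df["Survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Inspecting the coefficients exposes the bias described above: passenger
# class, a proxy for wealth, is among the strongest predictors of survival,
# so insurance pricing built on this model would replicate that injustice.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
print("test accuracy:", round(model.score(X_test, y_test), 3))
```

Or the most ethical, right? I mean,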
this kind of leads to my next question, and that is, I've always thought that at the end of the day, behind every algorithm or vehicle of AI, there's an individual who's programming these things.
And I'm sure there's a world of unintended consequences.
But people say Facebook's gone beyond that, the algorithms have taken over, gone beyond their control.
And I know there's a lot of fear around AI,
you know, the kind of the Star Net moment, or whatever they call it. Skynet, excuse me.
I've always thought at the end of the day,
it's an individual somewhere
telling the algorithms what to do.
Do you, are you on sort of the Elon Musk side of things
where AI presents an existential threat to humanity
or kind of we're in charge?
And like you said, any tool can be used for negative or positive. Where do you fall on that argument? I am definitely not in the Elon Musk camp.
I think that software developers, AI developers are generally pretty well-intentioned. I don't
think any of them are getting up in the morning and saying, I want to go out and oppress people today.
Yeah, agreed.
But I think that everybody has blind spots.
Everybody has unconscious bias.
You know, we're all working to become better people every day, but none of us are perfect.
And so when you have a homogeneous group of people like we have in Silicon Valley, they're embedding their own biases in the technology they create. And they have collective blind spots. And so that's how
we end up with biased technology. In the case of Facebook, that intersects with people getting
very, very excited about the idea of, oh, this is a new thing and, oh, it's making so much money.
And then they got kind of carried away with the idea of self-regulation. And that's,
among other things, why we ended up in the mess we're in right now with all of the social networks.
And do you think, you talk a lot about, going back to bias, you talk about techno-chauvism. And I think of it as, in the world of VCs, 40% of VCs are from two schools, Harvard and Stanford, and they're, generally speaking, white dudes. And I bet 80 or 90% of capital allocated is from that cohort of white guys from Stanford or Harvard. Talk more about, you talk about techno-chauvism. Sorry, techno-chauvinism? Yeah, techno-chauvinism. Excuse me, I said chauvism. Jesus Christ, I can't even get the term right. Techno-chauvinism. Explain what you mean by that and how it's manifesting. Techno-chauvinism is the idea that technology, or technological solutions, are superior to other solutions.
And I would argue that really what we need to do is think about using the right tool for the task.
Sometimes the right tool for the task is absolutely a computer.
You know, you will pry my smartphone out of my cold, dead hands, right?
But sometimes the right tool for the task is something simple like a book
in the hands of a child sitting on its parent's lap.
It's not a competition.
One is not inherently better than the other.
It's, again, about the right tool for the task.
And so when people get carried away and they imagine, oh, yeah, if we use more technology, it's going to mean more social progress, that's when we start making
really foolish decisions. So for example, the case of using facial recognition in policing,
people thought, all right, we're going to use all these body-worn cameras and we're going to do
real-time facial recognition on live streams of surveillance, and this is going to be better policing and is
going to make us safer. Well, it has not happened, right? We can look at Joy Buolamwini's work in
Gender Shades. And one of the things that she and Timnit Gebru and Deb Raji found was that facial
recognition is biased. It's better at recognizing light skin than dark skin. It's better at recognizing men
than women. And some people might say, oh, well, you know, if the problem is that facial recognition
doesn't recognize women with dark skin, maybe we need more women with dark skin in the training
data set that we use in order to make these models that are powering the facial recognition systems.
But Joy Buolamwini's work is so interesting because she says, no, that is not the solution.
These policing systems, facial recognition in policing, are disproportionately weaponized against communities of color.
And the abolitionist solution is, let's not use these systems at all.
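The methodology behind that finding, disaggregated evaluation, is simple enough to sketch: instead of reporting one overall accuracy number, you score the same classifier separately for each demographic subgroup. The toy data and column names below are hypothetical, not from the Gender Shades study.

```python
# Disaggregated evaluation, the core move of an audit like Gender Shades:
# compute accuracy per subgroup instead of a single overall number.
# Toy data; column names and values are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "skin":    ["light", "light", "light", "dark", "dark", "dark"],
    "gender":  ["man", "woman", "man", "man", "woman", "woman"],
    "correct": [1, 1, 1, 1, 0, 0],  # 1 = classifier got this face right
})

print("overall accuracy:", results["correct"].mean())
print(results.groupby(["skin", "gender"])["correct"].mean())
# The per-group gaps, hidden inside the flattering overall number,
# are exactly what the audit exposed.
```

And if the algorithm goes,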
okay, a certain community
is more likely to be incarcerated,
which doesn't take into account societal factors,
but immediately makes the connection
between incarceration and wrongdoing
and so is more likely to identify
someone walking on the street
of a certain profile than someone else. Isn't that
how we end up down this rabbit hole of systemic racism and bias? Absolutely, yes. And that is the
core problem with things like recidivism algorithms or, you know, any kind of automated system used in policing.
So, for example, when you look at drug use between white and Black populations, say, there is roughly equal drug use among white people and Black people. But when you look at arrest data for who gets arrested for drug crimes, it's something like 10 times more Black people get arrested than white people for drug-related crimes. So if you're building an algorithm
and you're feeding in the data on who got arrested, you're building a system that's
going to say, oh, the Black people should all go to jail. And that's obviously wrong and terrible
and racist and not something that we want.
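As a back-of-the-envelope illustration of that feedback loop, here is a toy simulation with invented numbers: both groups use drugs at the same rate, one group is arrested at ten times the rate, and a model trained on the arrest records learns the enforcement bias rather than the behavior.

```python
# Toy simulation of biased training data. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
uses_drugs = rng.random(n) < 0.10      # identical 10% rate in both groups

# Biased enforcement: users in group B are ten times as likely to be arrested.
arrest_prob = np.where(group == 1, 0.50, 0.05)
arrested = uses_drugs & (rng.random(n) < arrest_prob)

# A "predictive" model trained on arrests, not on actual drug use.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
print(model.predict_proba([[0], [1]])[:, 1])
# Group B's predicted "risk" comes out roughly ten times group A's,
# even though the underlying behavior is identical.
```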
We'll be right back.
Support for this show comes from Indeed.
If you need to hire, you may need Indeed.
Indeed is a matching and hiring platform with over 350 million global monthly visitors, according to Indeed data,
and a matching engine that helps you find quality candidates fast. Listeners of this show can get a
$75 sponsored job credit to get your jobs more visibility at indeed.com slash podcast. Just go
to indeed.com slash podcast right now and say you heard about Indeed on this podcast. Indeed.com slash podcast.
Terms and conditions apply.
Need to hire?
You need Indeed.
Support for this podcast comes from Klaviyo.
You know that feeling when your favorite brand really gets you?
Deliver that feeling to your customers every time.
Klaviyo turns your customer data into real-time connections across AI-powered email, SMS, and more, making every moment count.
Over 100,000 brands trust Klaviyo's unified data and marketing platform to build smarter digital relationships with their customers during Black Friday, Cyber Monday, and beyond.
Make every moment count with Klaviyo.
Learn more at klaviyo.com slash BFCM.
So you first came across our radar.
You said something I absolutely loved.
You tweeted, the metaverse is a waste of time and money.
Can you say more?
It really is.
I love that.
I find, you know, virtual reality, 3D printing, there's always these technologies, wearables.
That was my favorite.
And I feel like the next one is the metaverse.
What did you mean by that?
Well, here's my problem with virtual reality.
It makes me vomit, right?
I mean, physically you get motion sickness. Yeah, I'm prone to motion sickness. And people
who are prone to motion sickness are more prone to VR sickness. Right. And there's an awful lot
of people in the world who are like this. So any technology that has vomiting as its side effect, I just think it's really misguided to expect that it's going to be a huge blockbuster.
So the other thing about virtual reality is that people have been trying to build it for so, so long. And it's really cool
for a little while, and then it quickly becomes pretty boring. You think about Second Life.
Second Life, when it came out, was supposed to be this radically new space, and everybody was going to have avatars in Second Life and we're going to
rebuild society in Second Life. And there are all these universities, right, who built, you know,
welcome stations inside Second Life. I read a story a few years ago where the reporter went in
and looked at Second Life and looked at the wreckage of the societies that had been built
inside Second Life and looked at the sort of digital tumbleweeds. And it was really sad
to just see the shambles of this society that had been touted as the next great thing.
And that is exactly what's going to happen with the metaverse.
Yeah, it's interesting.
I love it when people kind of slay dragons.
Everyone's so enamored of self-driving cars
and the metaverse.
When it comes to cybersecurity,
you believe that in addition to focusing on defense
from these types of attacks,
regulators should be concerned with the interplay
between humans and technical systems. What did you mean by that? I think we
need to focus more on the way that technological systems are not just about the computer. They're
not just about the math. They're actually socio-technical systems. And so we need to think
about who are the people building these systems?
Who are the people using these systems?
What are the sources of the information that we are using as data in order to train these systems?
What are the values that we are assuming are inherent in these systems?
And let's question some of those values.
Values don't stay static over time.
So one of the things we're trying to do
when we build a computer system
is we're trying to build it
and then set it and forget it.
Especially if it's automation
that we're imagining is going to replace human beings.
So on an assembly line,
it's one thing to have a robot arm coming down
and like putting the pin into the widget.
Like that action doesn't change all that much,
but society changes an awful lot.
And so when we have algorithms
that are governing our social discourse,
we really need that social discourse to be flexible.
Right? Like, think about things that we talk about today that we didn't talk about 10 years ago.
Well, if the algorithms were in charge, then, you know, it would be replicating the world the way
it was the minute the algorithm was turned on. And it's not really interested in being flexible and forward thinking
because it's just a tool.
It's just an algorithm.
It's just math.
It has no feelings.
It has no soul.
It does not love.
Yeah, I agree.
It's not sentient, right?
I've never bought the notion that at some point these things develop intention or, I don't know, reason.
Let's end with a couple of hopeful things. What do you like
about tech or what do you think is the promise of technology or what encourages you about some
recent developments in tech? I get asked this question a lot. I guess people think I'm a downer
in terms of technology. And one of the things that I used to say was, I used to say that I could get behind
self-driving tractors, right? Like, I'm not at all for self-driving cars, but I used to think I
could get behind self-driving tractors. And I thought that until I met a farmer at a tech
conference. And I said, oh, self-driving tractors. And he said, yeah, that sounds great.
Except the thing about tractors is they get stuck in the mud. Like heavy equipment on farms gets
stuck in the mud all the time. And then you have to get other equipment to like drag it out of the
mud. And he was like, you know, the self-driving tractor getting stuck in the mud, it just doesn't
sound like it's any less work to me.
So I was like, all right,
I can't get enthusiastic
about the self-driving tractors anymore.
So what am I enthusiastic about right now?
I'm going to give you a really nerdy answer,
which is I am enthusiastic about AI regulatory policy.
I am enthusiastic that we have the right team in the White House now, and we have people who are focused on public interest technology.
And I feel like we are on the verge of having more coherent tech policy coming down from a national level, in the U.S. at least.
And that's what we need because the problem of Facebook or the problem of YouTube or the problem
of any social network is not at a human level scale anymore. We are at the point where we need
policy to solve this problem for everybody simultaneously.
Roger McNamee has a great metaphor of chemical companies, right?
So once chemical companies started generating negative externalities, polluting the environment, you know, regulation said you have to clean it up and you have to, you know, pay reparations to the community.
Like you have to clean up your mess and you have to be responsible for it.
And he says that the tech companies are like chemical companies at this point.
And I think that's a great metaphor. What if someone's interested, a young person coming to NYU or
just thinking about grad school or getting into the workforce is interested in the field of AI,
what would be your advice to them in terms of the type of job, the type of education? Someone says,
I'm interested in AI. What do I do next? Well, I think they should read my book.
That's, of course, the first step, I think.
I think the next step is to look into public interest technology.
This is a new-ish field.
It is just what it sounds like.
It's about making technology in the public interest.
Sometimes it means building
better government technology. So, for example, making sure that things like the state unemployment
websites don't crash when there's a global pandemic and people are out of work, right?
And sometimes public interest technology means building algorithms or interrogating the algorithms that are increasingly being used to make decisions on our behalf.
And so that's something you do as a data journalist.
The Markup is doing some of the best algorithmic accountability reporting that's out there right now.
ProPublica is doing some amazing stuff too.
The LA Times and the New York Times also have really great data journalism shops.
So if you're a young person who's interested in AI but also interested in making a difference in the world, I think that public interest technology is the place to be.
And what do you make of the Facebook whistleblower and some of the information coming out?
I am so glad that we are having this moment right now.
I also think that lawmakers are getting more tech savvy,
which is really great.
I think that they're starting to ask really probing questions, which are questions that, you know, needed to be asked a long time ago.
But I'll, you know, I'll take it.
Like, I am excited about progress, however and whenever I can get it.
However we get there. Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of Artificial Unintelligence: How Computers Misunderstand the World. Her academic research focuses on artificial intelligence in investigative reporting and ethical AI, with a particular interest in using data analysis for social good.
She joins us from her home in New York.
Professor Broussard, stay safe, and I'll see you on campus.
Scott, so great chatting with you. Thanks a lot.
Algebra of Happiness. So today was chaos. I am doing, or I agreed to appear in, a Netflix documentary around GameStop. I'm trying to plan for Halloween. I'm trying on Halloween costumes. One of my kids has a soccer game and soccer practice and is doing a mock interview for school. My dog started throwing up and shitting, literally exploding from both ends.
And when a Great Dane explodes from both ends, it's like Chernobyl times two.
It was total chaos in the house.
People showing up, just cameras being set up.
And I know that I'm trying to sound like I'm more important
than I am. And I just got so angry and upset. And my trainer was here. If I sound very privileged,
I am. And at least I'm cognizant of it. And I got in such a bad mood and I was snapping at everybody.
And now that things have calmed down, what I recognize is that the chaos is absolutely the thing that I will miss most.
When I first moved to New York, I decided to reboot my life.
I had enough of the Bay Area.
I had enough of technology, of raising money.
I'd been working my ass off.
And I basically hit the reset button on my life. I resigned from the board of the company I was on. Actually, I got kicked off
and I resigned from the other one. I got divorced. I decided I wanted to live a different life. And
I moved to New York and I literally became an island. And that is, I didn't maintain my
friendships from the Bay Area. I didn't maintain really my professional contacts. I started teaching at NYU and I created a one-man island and I absolutely loved it, at least for a short
time. And there was a selfishness and an indulgence that I really enjoyed. I'm an introvert
and I was, I think, 34 at the time. And what I recognized is that over time,
without the contact and the messiness of other people, it's almost as if you're not here.
And that as time goes so fast, it goes too fast.
And it creates this weird comfort with being alone.
And being alone is one of the most dangerous things in the world, especially among men.
They don't live nearly as long. Men who live alone literally have lifespans a decade shorter than men who live with other people. It's not as bad
for women because women are more social animals and do a better job of maintaining relationships.
But I know now, slowly but surely, I decided that I wanted to have more relationships in my life.
And I started investing in friends, started investing in mates,
decided to have kids,
even though I thought for sure when I was 40,
I would never have kids.
I couldn't stand kids.
Jesus Christ, I just didn't get it.
And by the way, I get it if you don't get it.
There's no way you can explain
what it's like to have kids until you have them.
It's like God reaches into your soul
and flips the switch
and you're just sort of in love with this thing.
But what I have found is that
the messiness is a function of everything wonderful in my life. The messiness today
is a function of the opportunities, is a function of the people in my life that I love, of the
animals in my life that I love. And I need some perspective. And that is when I get stressed out.
That's fine. It's fine to be stressed out. But I know, I know when I'm near the end that that's what I'm going to miss. I'm going to miss
the chaos. I'm going to miss the stress. I'm going to miss all these different beings pulling at me.
Because when we get towards the end and the kids are out of the house,
maybe I get too old to have a dog and I'm sounding very fatalist right now.
I know I'm going to look back and I'm going to crave and really miss and wish
for just a few more moments of chaos. Embrace the chaos. Our producers are Caroline Chagrin and
Drew Burrows. Claire Miller is our assistant producer. If you like what you heard, please
follow, download, and subscribe. Thank you for listening to the Prop G Pod from the Vox Media
Podcast Network.
We will catch you next week on Monday and Thursday.
Support for the show comes from AlixPartners.
Did you know that almost 90% of executives see potential for growth from digital disruption?
With 37% seeing significant or extremely high positive impact on revenue growth. In AlixPartners' 2024 Digital Disruption Report, you can learn the best path to turning that disruption into growth for your business. With a focus on clarity, direction, and effective implementation, AlixPartners provides essential support when decisive leadership is crucial. You can discover insights like these by reading AlixPartners' latest technology industry insights, available at www.alixpartners.com.
That's www.alixpartners.com.
In the face of disruption, businesses trust AlixPartners to get straight to the point and deliver results when it really matters.
Do you feel like your leads never lead anywhere? HubSpot builds campaigns for you, tells you which leads are worth knowing, and makes writing blogs, creating videos,
and posting on social a breeze.
So now, it's easier than ever to be a marketer.
Get started at HubSpot.com slash marketers.