Prof G Markets - AI Is Making Us All Dumber
Episode Date: May 12, 2026. Ed Elson brings on Cal Newport and Derek Thompson to explore how AI is reshaping the way we think and learn, to the downside. They dig into how its impact on human intelligence could fundamentally change the economy, the risks that concern them most, and how individuals can ensure they don't fall behind. Cal Newport is a Professor of Computer Science at Georgetown University and New York Times bestselling author of eight books, including Slow Productivity and Deep Work. Derek Thompson is the Host of the Plain English Podcast and author of Abundance. Get your tickets to the Prof G Markets tour. Subscribe to the Prof G Markets YouTube Channel. Check out our latest Prof G Markets newsletter. Follow Prof G Markets on Instagram. Follow Ed on Instagram, X and Substack. Follow Scott on Instagram. Send us your questions or comments by emailing Markets@profgmedia.com. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Today's number, 100,000.
That's how many dollars it reportedly cost to get one ticket to the Met Gala.
Meanwhile, the price of a table was $350,000, or as attendees call it, half a facelift.
Welcome to Prof G Markets. I'm Ed Elson. It is May 12th.
Let's check in on yesterday's market vitals.
The major indices all climbed, led by a rally in chip stocks.
The S&P and the NASDAQ both hit new records.
Those gains came despite President Trump's rejection of Iran's proposal to end the war.
He also said the ceasefire was on, quote, massive life support.
Brent Crude climbed higher, as hopes for peace faltered, and the yield on 10-year treasuries rose.
Okay, what else is happening?
Since the 1800s, every generation has been smarter than their parents, except for Gen Z.
That is what neuroscientist Dr. Jared Cooney-Horvath told Congress last month.
Today, 90% of college students and 84% of high schoolers use AI in class or for their homework.
And according to OpenAI's own data, one of the most common use cases for AI is
writing. Meanwhile, a recent study found that AI tool usage among business students was associated with
weaker critical thinking skills. And this data raises an important question. And that is, what do we
lose when we outsource our work and our thinking to AI? After all, 900 million people use ChatGPT
every week. In other words, is AI making all of us dumber? Now, you might remember that we
discussed this question last week. We've been investigating this question
a little bit more. But today we want to bring in two experts who are thinking about this,
who understand these issues. So we're going to do something a little bit different. We're going to
move away from the markets for today and focus on this question. So we're joined by Cal Newport,
Professor of Computer Science at Georgetown University and New York Times bestselling author of
eight books, including Slow Productivity and Deep Work. And we've also got Derek Thompson,
host of the Plain English podcast and author of Abundance. Cal,
and Derek, thank you so much for joining us. Welcome to the show. Cal, I'm going to start with you
because you have written about this and you've talked about this idea of cognitive fitness
and this potential reality that it's in decline. What do you make of what's happening on the
ground in terms of AI usage and what it's actually doing to our brains? Well, I think AI has the real
capacity to make us dumber. It's new enough, and
usage of it is still growing, that we're not seeing the major effects yet, but I fear that we are
going to see it. And the way I conceptualize this world of cognitive fitness is that social media and
highly engaging tools on our phones started this trend. It moved us away from more sustained,
concentrated activities through which we strengthen our brain. AI is now taking aim at the
other main cognitive activity that makes us stronger, which is writing. One of the major
emerging uses of this tool is to alleviate the strain you feel when you look at a blank
page and have to fill that blank page. So if AI does, in fact, significantly reduce the amount of
writing we do, whether it's super important or just a memo, I do think we're going to see a
continued diminishment of our intelligence that began with highly distracting phones about 10 years ago.
We'll get into what we do about this, but Derek, do you agree with Cal?
Yeah, of course. Of course he's right.
Maybe we'll explore some disagreements between me and Cal in a few minutes,
but on this, I think he's hit it right on the money.
I mean, like, if you doubt what Cal is saying and you use AI,
pay attention to your own life.
Pay attention to your own use of time.
When you ask artificial intelligence to summarize an article,
or to summarize a paper, or, God forbid, to summarize an entire book,
do you understand that article, that paper, or that book as well as if you had read it?
Of course not.
Okay, now maybe you could argue that, all right, well, I saved time, because now, rather than read that one book, which might have taken me 10 hours, I can summarize 15 books, and that'll take me sort of 10 hours to process or something.
Well, even there, you're engaging at such a shallow level with each book that I'm not sure you really understand the degree to which they agree and disagree with each other, but also what you're depriving yourself of: the ability to read anything for more than five or 10 minutes at a time.
And that is a skill that leads over time to the ability to make the sort of deep connections that I think are the basis of all true, insightful thinking.
So I absolutely think that the risk here is really, as I described it, sort of at least two layers.
One, that you're depriving yourself of the experience of truly understanding something that you think you're trying to understand.
And number two, that you fall out of a habit that is necessary to think deeply in the future.
And to end on Cal's first point, going achronologically: we're looking at things like the Flynn effect and we're looking at things like test scores over time.
Well, if we're depleting the ability of fifth graders and sixth graders to think, and they continue to use AI in seventh grade and eighth grade and through twelfth grade and through college, that's not just one year of losing the practice of doing deep reading and deep thinking.
Now we're talking about a decade, a formative decade, that you've chosen to essentially not work on the kind of muscles.
I do like this fitness metaphor, not work on the kind of muscles that are so necessary in the long run for understanding something deeply to be smart about it.
So absolutely, I think Cal's on to something.
I mean, it seems like there are two main forces that we're kind of identifying here.
One is the screen in general and our increasing addiction to those screens,
and then the other is AI and our dependency on AI to solve harder problems, more nuanced, difficult problems.
Cal, just going back to you, which is the more dominant force, or is that even a relevant question right now?
Well, the biggest impact so far has come from a decade of hyper-optimized engagement on a portable device that we have with us at all times.
That has had a massive impact.
Essentially what happened is that the machine learning algorithms behind especially short-form video platforms built an approximation of our short-term reward centers in our brain, so that they could give exactly the signal that's going to resonate strongly with those particular circuits.
This makes the phones essentially irresistible.
When it is with me, I have to take it out.
I have to look at it.
So that over the last decade or so has done substantial damage to multiple generations' ability
to actually not just sustain attention, but again, to build those circuits you can use to think deeply when the time is required.
These circuits are built through the activities of reading and writing.
These are privileged activities in the history of modern humanity, post-Paleolithic humanity.
AI is new on the scene, but I really feel like it's going to be a catastrophic cousin to what we already were encountering with hyper-engaging content on a screen.
Because that first wave really focused on reading: we no longer sit and concentrate on a book in a way that could build deep understanding.
Writing was its partner.
Writing is the pair to reading.
Writing is where we take the circuits we etched with deep reading, and then we apply them in reverse to create original thoughts of our own.
We have to practice that muscle as well.
And now, for the first time, we can begin to substantially outsource that activity.
So I really think about reading and writing as activities.
This is not nostalgia.
This is not, oh, we're talking about horse buggies in an era of automobiles.
I really do think those are the activities on which the post-Paleolithic modern human brain was built, the brain that gave us
theology, that gave us politics, that gave us philosophy. The brain around which
everything we hold dear was built substantially depended on reading and writing to shape it.
So I'm really worried about what we already lost with reading and that we have a new tool
that's going to start to take writing off the table as well. It sounds like the pushback
someone would make in response
would be, well, every technology in history has made our life easier
in some capacity.
Like, you know, you invent the engine,
you invent the car,
it makes it easy to get around.
And the argument would be,
this makes it easier to do the job
of critical thinking,
in the same way that other technologies
do make other jobs easier.
Cal, it sounds like what you're saying
is that this is different.
The brain, critical thinking,
is on a different level.
It's so endemic to what it means
to be a human,
to the point where this is
actually a bad thing, unlike other technologies. Would that be the right characterization?
No, I think that's right. If we used a fitness analogy, reading was a great technology
to make us better at critical thinking. Writing was a great technology to make us better at critical
thinking, but to use something like AI is like bringing a forklift into the gym and saying,
you know, we've been in here for years. We've been using weightlifting to try to get stronger.
Well, I figured out that with a forklift, it'll be a lot easier. I don't have to lift the weight myself.
You're actually being counterproductive to the actual goal, which is strengthening the cognitive muscle to get stronger.
So, no, I do think this is not a technology that's making us better at critical thinking.
It's allowing us to sidestep the hard activities that previously we used to make our brain stronger.
The product, the benefit being sold by this product is convenience in the moment, not a stronger brain or stronger ability to think.
Stay tuned for more of this panel right after the break.
And by the way, we are heading out on tour at the end of the month.
So for more info and to get tickets to a show near you, head to profgmarketstour.com.
Support for the show comes from Hostinger.
The biggest barrier to entry for most entrepreneurs is not a lack of capital.
It's the friction of starting.
You can spend months in the strategizing phase, which is precious time that could instead be spent actually making moves.
But these days, the rules have changed.
AI is redefining who gets to build a business.
So when you're building the next big thing, go live in minutes, not weeks, with Hostinger.
Hostinger is an all-in-one platform that brings everything into one place: your domain, website, email marketing, AI tools, and AI agents.
So you can launch online without stitching together five different subscriptions.
Start with a prompt and add your personal touch.
You can create websites, online stores, and custom apps without coding or designing skills.
Then, use AI agents to automate tedious tasks and grow your business.
Hostinger powers over 10 million websites, and there's a reason it's earned a CNET Editors' Choice Award.
Turn your one day into day one.
Go to hostinger.com/profg to bring your ideas online for under $3 a month.
Plus, get an extra 20% off with promo code PROFG.
That's less than the price of a cup of coffee per month.
That's hostinger.com/profg, promo code PROFG, for an extra 20% off.
This week on Net Worth and Chill, I'm joined by Tank Sinatra, the meme king, with over 15 million
followers across Tank's Good News, Influencers in the Wild, and his personal account.
Tank is breaking down what the meme economy really is, how much a single-sponsored post pays,
why major brands are throwing serious money at jokes, and how meme culture, think Preparation
H, starter packs, and a perfectly timed screenshot is actually reshaping how we think about money
and value. Get ready for a conversation that'll change the way you scroll, make you rethink what
going viral is really worth, and prove that sometimes the most serious money moves are wrapped in the silliest of jokes.
Listen wherever you get your podcasts or watch on youtube.com/yourrichbff.
We're back with Prof G Markets.
So if we all agree that this technology is making us dumber, and it seems that we do.
I'm not sure who disagrees with that at this point.
I think it's pretty clear to us.
I mean, Derek, let's model this out, game-theory this out: where does this go in terms of the economy? I mean, if we are dependent on AI, but none of us can
really come up with original ideas and we can't think critically about issues, do you think
that that steers the trajectory of our economy in perhaps a different direction?
Let me try to take this question at a really high level of abstraction, and then I'm going to zoom in on some specifics. I think that technology is use. The effect of AI is exquisitely dependent on how we use it.
If you look at how
artificial intelligence
was recently employed by
the Mayo Clinic in
radiology to see
pancreatic cancer on average
2.4 years
before a doctor could see it in a scan.
You cannot possibly argue
that that is AI making people dumber.
Yeah.
That is clearly making us
better, smarter as a species at seeing pancreatic cancer.
The use of technology, the use of artificial intelligence there is to supplement the human
radiologist's eye to see pancreatic cancer.
And that is obviously good.
So I don't want to represent my opinion here, and maybe Cal agrees, as being like,
oh, all AI is bad.
But that's not the way that artificial intelligence is being used in high schools and
college. It's being used to cheat and to cheat at a scale that is keeping students from learning
how to learn. So I am very optimistic about how this technology is being employed in some
industries, while at the same time, I think Cal is absolutely right that if you look at the use
of artificial intelligence in high school and college, I see practically no reason to be
optimistic about that generation's ability to learn, to think deeply, to write by the time they
graduate.
So technology is use.
There are some wonderful use cases of artificial intelligence, but within the education
system today, like, I think it is basically a tool for mass cheating that is, in fact,
cheating students out of the ability to think in the long run.
Yeah, you bring up an important point here, which is that we should probably distinguish who
is getting dumber because of AI.
We're mainly talking about children here.
We're talking about people who are in school or even high schoolers who are using AI to do
their homework to cheat.
And we're seeing, as you mentioned earlier, that math scores are going down, science
scores are going down, all of these standardized testing scores are going down, even literacy
rates are going down.
So, I mean, it sounds like maybe the point on which we would all agree is that AI
has fundamentally transformed what it means to go to school.
And that is the point that perhaps needs further and deeper exploration, deeper discussion,
and perhaps some regulation.
Derek, if this has meant that everyone cheats now, what do we do?
Yeah.
If I was going to write a magazine piece about this,
I think the way that I would frame it, and I really like Cal's framing,
so I'm borrowing this from him,
is that for the last 10 to 20 years, we've been running this experiment of distraction in our schools.
Like, we have very clear, correlative, but I think causal evidence that suggests that phones
are an enormous distraction that's responsible for the global, not just U.S., but global
decline in math scores, in literacy scores, and in other measures of one's capacity to maintain
attention.
Now on top of this weapon of mass distraction, you add artificial intelligence, which is
this extraordinary tool for synthesizing information,
which allows students to cheat at an extraordinary scale
that we know is happening in colleges and high schools.
If you want to fix that,
if you want to fix this weapon of mass distraction
followed by this weapon of mass cheating,
you have to solve it directly.
Take the phones out of the classrooms,
put them in pouches,
run that experiment, certainly, to see if it works.
And then when it comes to testing,
knowledge, you just have to move out of the modes of testing knowledge that can be cheated
toward modes of testing knowledge that can't be cheated.
So one thing that can't be cheated is something a little bit more like the Oxford model,
where most of the grade is dependent on in-class oral exams.
You have this system or culture of, you know, you take the history class, you learn about the
Habsburg Empire. Rather than write an essay about the Habsburg Empire, which you'd much more likely just ask
ChatGPT to write for you, you get up in front of the class and talk about the Habsburg Empire and
talk about the Holy Roman Empire, and people ask you questions, and you defend and prove your intelligence
to the classroom, to the teacher. So it's a little bit like, my wife just finished her PhD
in clinical psychology. At the end of a PhD, what's the verb that we use to describe the end of the
PhD? You defend your dissertation. You get up in front of a group of experts and you don't just
give them the paper and say, read it and then, you know, give me my degree. You defend it. They
ask you questions. They say, what about this methodology? What about figure number one? And you say,
oh, well, here's where I did the methodology and here's why figure one looks the way it does.
You prove in real time that you are the author of that paper, that you understand the work that you
did. And I just think that more education, if we really want to get around the cheating epidemic,
probably has to slurp in this Oxford model or this dissertation model because it's much harder
to cheat in an oral exam.
It's a really interesting point.
Cal, do you agree?
No, that has to be right.
I mean, this is what's happening in academia right now.
It's a combination of the Oxford model and what I've long been advocating for, which is the
explicit discussion and promotion of the ability to aim your mind's eye towards complicated
topics as the goal of school.
and it's something that we should be talking about starting at grammar school and moving all the way through the university system: that we are here not just to get content and reproduce content on tests, but to teach our mind to be comfortable thinking. And that's a frame through which to see almost every activity we do. I would also throw into this, since I think specificity is a really important point we made earlier, a sort of specificity constraint here. What we're really talking about, if we're going to use my terminology: AI is the wrong term. That's way too broad.
That includes things like the Cleveland Clinic or Mayo Clinic model that Derek was talking about.
That model, for example, has nothing to do with a large language model, like the type you would see produced by the frontier AI companies, right?
This is a prediction model that's custom trained on labeled datasets of radiology scans.
We've been doing this since the 90s and been making slow and steady progress.
These sorts of AI models that are very utilitarian and useful aren't new, and aren't currently experiencing a massive exponential takeoff in capabilities, but often the frontier AI companies
will launder the results from these non-LLM models and sort of mix them in with what they're doing.
But what we're really talking about here is large language model-based tools, and in particular
using those for the production of written text or in some sense to sort of aid thinking.
And that's exactly where we get to all the problems in the academic setting that we've been talking about.
How big of a problem do you consider this in terms of like a national economic scale?
Because, I mean, there's one side of this which is like, you know, we want to protect our kids;
it's important that our kids have fulfilling, interesting school experiences, they get a good
education, et cetera, which I'm sure we all agree. But then there's also another side to it, which is,
like, we kind of need children to have functioning brains for when they eventually lead the nation.
And there might even be like a China versus USA argument here. Like if students over on the other
side of the planet are being trained properly, their AI chatbots are being regulated properly, they know how to use their brains.
Doesn't that mean that sort of 50 years down the line, they're going to beat us and outperform
us on every which metric? I mean, is that an argument that you see as relevant or important,
Cal? Is that something that comes up in your conversations when you discuss this topic?
Well, I have a relatively radical view on this. So I'll be interested, you know, Derek is the
economics expert here. So I'll be interested in his take on it. But I argue we have already
seen the economic impact of this reduced cognitive fitness; this has already been a major
storyline of the last 10 to 20 years. I mean, given the technological advancements we've had at the
intersection of the digital and the office, we should be seeing exploding total
factor productivity, especially in non-industrial sectors like the knowledge sector. And we
haven't, right? A lot of different things have been playing into that. We had the economic
crisis and other things going on. But total factor productivity in non-industrial sectors has
been more flat or uneven than you would expect. And I would argue this is in part a result
already of massively increasing the distractions and context switching that happens in our lives
and in the workplace. We're in a world now where, I think, one of the most telling statistics of the
current office is that the average worker checks an email inbox or chat channel once
every three minutes on average. That is a disastrous cognitive context in which to use your brain to add value
to information, which is the core activity of knowledge work. So I already think we're seeing a flatlining.
This is sometimes called the productivity paradox of the 2000s and 2010s, because of this impact on
cognitive fitness. So yes, if we go farther down this road and use LLM-produced writing to
take that important strengthening activity off the plate in our educational system, it's not just
about kids' brains and some sort of abstract notion of smartness equals good. I think the
economic impact that we may already have been feeling for 10 or 20 years is just going to get
way worse. And it's, it is something we do have to really care about from a national perspective.
Derek, what are your views on these economic impacts?
Yeah. You know, as I was listening to you and Cal talk, two different
statistics popped into my head that I think juxtapose together interestingly.
One is that there are a lot of indications that Gen Z is the most materialistic generation
that we've ever seen in American history.
If you ask various groups sort of bucketed by generational cohort,
how much money they consider success in America,
you tend to have about $150,000 be the norm in most generations
until you get to Gen Z, and they say it's $400,000 to $500,000.
The Institute for Family Studies recently looked at a Monitoring the Future survey
that asked various questions about materialism,
among young boys and girls in high school.
And that line of materialism has just gone up and up and up.
And I think for the first time in the last 30 years,
women are now higher on a certain measure of materialism than men.
So on the one hand, you have this extraordinary desire
among young people to be successful.
They open their phones.
They look at influencers.
They see rich, successful, beautiful people
living their rich, successful, beautiful lives.
And so that's one train track that's coming along here.
But there's this other parallel train track, and that is students cheating constantly in high school and college.
In the short run, if you cheat in every test, you're cheating the test.
In the long run, if you're cheating on every test, you are cheating yourself.
You are removing from yourself the ability to lift the weight.
And if you want to be rich, and if you want to be successful, I myself certainly know of absolutely no
individual who is rich and successful who doesn't work unbelievably hard, who isn't very good at what I
think of as cognitive time under tension. That is, to extend the fitness metaphor, this idea that if you do
sort of one rep of 150 pounds on, you know, a bench press and it takes one second, that's a certain
amount of resistance. But if you make that a five or even 10 second up and down, it's much more
tension on the muscles. That's time under tension. And I think thinking follows
a similar principle: really great ideas benefit from the ability to sit with those ideas for a long
period of time, to figure them out, to find the simplicity that I think, as Oliver Wendell Holmes said,
is on the other side of the mountain. You learn about something and then you are able, through your
learning about it, to make it simple and make it effective. If you are cheating yourself out of all
these tests, you're cheating yourself out of the ability to become rich and successful.
And so one thing that I'm afraid of is not just that these people who are cheating are going to lose out to the Chinese or whatever, the Finns or the Danes. Maybe they are, maybe they aren't. They're going to lose out to people who can think, who are doing the work, who can sit with ideas, who do have and are building cognitive time under tension. And so I just think that a world in which you have a generation of people with extraordinary expectations of material success, but underdeveloped abilities to actually achieve that success,
that just seems like you're setting up a generation for unbelievable disappointment, anxiety, and depression.
So, you know, this goes, I think, not just to, you know, the concept of national greatness,
U.S. versus China, although maybe it touches on that.
It goes to, like, you know, what do we want from our life?
Like, for people who want to be rich and successful,
what should they want from their life?
They should want the ability to sit, the ability to sit with discomfort, to work hard,
to enjoy complicated problems, to love thinking through them, because that's
where your money is made. If you lose that, you really lose out on this ability to achieve,
like, the new American dream.
I guess the reason that I'm so interested in the economics angle is because I feel like the
argument against what we're saying is that it's sort of this Luddite argument that you're
anti-technology, anti-progress.
And I think the thing that really resonates for me, to your point, Derek, is if you have
a generation of people who have been trained since their infancy to take shortcuts, to not sit
with ideas, to not work hard,
to just scroll, scroll, and kind of like live this sort of fleeting imaginary version of success
and you never actually build the tools or the abilities to actually go out there and achieve it,
then ultimately we'll have an entire generation, an entire nation of basically lazy, non-thinking losers,
who can't really get anything done, who can't really come to a consensus and make decisions and build things.
And I just wonder if that is the argument that needs to be made,
to those who would be pushing against this argument.
I mean, there are certainly going to be people out there
who would say, Cal is just afraid of technology.
Derek thinks AI is bad.
They're sort of anti-progress, they're anti-innovation.
And I wonder if they're missing something.
They're missing a productivity angle,
which is that if you have a generation,
I mean, an entire society of dumb people,
then just economically speaking, GDP is going to go down.
I feel like that's the only outcome.
Derek, does that resonate, I guess?
I don't consider myself a Luddite,
and I think I'm probably more positive
about large language models as a technology than Cal.
I want to be very clear about what it is that I think is bad.
Yeah.
And I think here, Cal and I don't have, like,
intersecting Venn diagrams.
I think here it's the same Venn diagram.
What I think is bad is not artificial intelligence.
What I think is bad is using artificial intelligence to do the thinking for you and then
representing your thinking as just the synthetic information that you got from artificial
intelligence when you prompted it.
That is what is cheating.
That is definitionally cheating.
And my point is that in the short run, when you cheat, you are cheating the task.
But in the long run, when you cheat, you are cheating yourself,
because work is one damn task after another.
And if you lose the ability to be comfortable
with what I'm calling time under tension,
cognitive time under tension,
well, then you're really putting yourself
at an extraordinary disadvantage
in what's going to be a very, very competitive labor market.
And that's my fear for students today,
is that they are taking a shortcut
that in the long run is going to atrophy muscles
that they're actually going to need in the labor force.
Just as we wrap up here, Cal,
what would be your advice to those people?
I mean, I don't think that we're going to see real regulation on this stuff.
OpenAI even built a tool that detected AI-generated work,
and they decided not to release it because they worried it would hurt usage.
I mean, it doesn't seem like anyone else is going to solve this problem for you.
So what would be your recommendation to people who don't want to fall behind?
Well, I mean, I think time under tension is a good analogy or metaphor that Derek is pointing out.
You should be thinking as an individual, if I want to be economically viable,
don't listen to the voices that are saying,
oh, you won't be replaced by AI.
You'll be replaced by someone who uses AI better
and say, what is fundamentally,
what do I do in my job?
Right.
Where do I actually create new value in the world?
If I'm pulling in a non-trivial salary
in a knowledge-work type of employment,
it's not because I'm good at answering emails.
It's not because I create PowerPoint slides really quickly.
There must be some fundamental activity
where I'm taking hard-won skills and knowledge
and applying it to information to add new value.
The harder I can think, the more I can sustain my focus, the better I am at that core activity
that matters. So I've been arguing this since, you know, my book Deep Work a decade ago:
don't lose sight of the fundamental cognitive activity that actually moves the wheel, that actually
moves the needle on these knowledge work types of endeavors. If you cannot add original value to
information through deep, skilled thought, then what you're doing is eminently replaceable.
If you turn yourself into a sort of cybernetic LLM prompter, your unique value to the marketplace is going to plummet.
You're putting yourself into a dangerous situation.
So don't mistake busyness for productivity.
Don't mistake speed for better.
What matters is what is the high value output I produce that I'm uniquely suited to do it and how do I get better at that activity?
There's all sorts of ways technology can help you do it, but you have to be very wary about the ways that technology makes you worse at it, because it has a way, as we've seen over
the last 20 years, of sneaking in the back door and making you feel more productive, and you look
up and you're worse at what you do. So let first things be first.
Cal Newport is a Professor of Computer Science at Georgetown University and New York Times bestselling
author of eight books, including Slow Productivity and Deep Work. Derek Thompson is host of the
Plain English podcast and author of Abundance. Derek and Cal, this was fascinating. Thank you so much.
Thank you, sir. Thank you.
Okay. That is it for today. We appreciate you joining us for another Prof G Markets panel. If you have a guest that you think we should speak to,
please drop us a line in the comments or email our producer, Claire, at markets@profgmedia.com.
We hope to hear from you. This episode was produced by Claire Miller and Alison Weiss,
edited by Benjamin Spencer. Our video editor is Brad Williams. Our research team is Dan Shalon,
Isabella Kinsel, Chris Nodonoghue, and Mia Silverio, and our social producer is Jake McPherson.
Thank you for listening to Prof G Markets from Prof G Media.
If you liked what you heard, give us a follow. I'm Ed Elson. I will see you tomorrow.
