Passion Struck with John R. Miles - Ethan Mollick on the Impact of AI on Life and Work EP 437
Episode Date: April 4, 2024

https://passionstruck.com/passion-struck-book/ - Order a copy of my new book, "Passion Struck: Twelve Powerful Principles to Unlock Your Purpose and Ignite Your Most Intentional Life," today! Picked by the Next Big Idea Club as a must-read for 2024.

In this episode of Passion Struck, host John R. Miles sits down with Ethan Mollick, a Wharton professor and author of the groundbreaking book Co-Intelligence. They delve into the rapidly evolving world of artificial intelligence (AI) and its impact on various aspects of life and work. Ethan Mollick shares insights on the potential benefits and risks of AI, including its role in enhancing productivity and creativity, job security concerns, and broader implications for humanity.

Full show notes and resources can be found here: https://passionstruck.com/ethan-mollick-the-impact-of-ai-on-life-and-work/

In this episode, you will learn:
- The importance of setting boundaries and clear roles when working with AI to ensure it operates within desired scopes.
- The evolving role of human judgment as AI becomes more integrated into decision-making processes.
- Addressing biases in AI systems and the challenges of ensuring accountability in AI-driven decision-making.
- Recommendations for individuals preparing for a future where AI capabilities are constantly evolving, emphasizing the need to adapt to uncertainty and plan for potential advancements in AI technology.

All things Ethan Mollick: https://mgmt.wharton.upenn.edu/profile/emollick/

Sponsors:
- Brought to you by Indeed. Head to https://www.indeed.com/passionstruck, where you can receive a $75 credit to attract, interview, and hire in one place.
- Brought to you by Nom Nom: Go right now for 50% off your no-risk two-week trial at https://trynom.com/passionstruck.
- Brought to you by Cozy Earth. Cozy Earth provided an exclusive offer for my listeners: 35% off site-wide when you use the code "PASSIONSTRUCK" at https://cozyearth.com/
- This episode is brought to you by BetterHelp. Give online therapy a try at https://www.betterhelp.com/PASSIONSTRUCK, and get on your way to being your best self.
- This episode is brought to you by Constant Contact: Helping the Small Stand Tall. Start growing your business today with a free trial at ConstantContact.com.

► For information about advertisers and promo codes, go to: https://passionstruck.com/deals/

Catch More of Passion Struck:
- My solo episode on Why We All Crave to Matter: Exploring the Power of Mattering: https://passionstruck.com/exploring-the-power-of-mattering
- Watch my interview with Robert Waldinger on What Are the Keys to Living a Good Life.
- Can't miss my episode with Oksana Masters on How the Hard Parts Lead to Triumph.
- Listen to my interview with Richard M. Ryan on Exploring the Heart of Human Motivation.
- Catch my episode with Coach Matt Doherty on How You Rebound From Life's Toughest Moments.
- Listen to my solo episode on 10 Benefits of Meditation for Transforming the Mind and Body.

Like this show? Please leave us a review here -- even one sentence helps! Consider including your Twitter or Instagram handle so we can thank you personally!

How to Connect with John:
- Connect with John on Twitter at @John_RMiles and on Instagram at @john_R_Miles.
- Subscribe to our main YouTube channel here: https://www.youtube.com/c/JohnRMiles
- Subscribe to our YouTube Clips channel: https://www.youtube.com/@passionstruckclips

Want to uncover your profound sense of Mattering? I provide my master class with five simple steps to achieving it.

Want to hear my best interviews? Check out my starter packs on intentional behavior change, women at the top of their game, longevity and well-being, and overcoming adversity.

Learn more about John: https://johnrmiles.com/
Transcript
Coming up next on Passion Struck.
I think in the long term, I am worried
that subtle stuff starts to matter.
If we give up more authority and control,
if we're not in the loop, then I worry a lot
about the kind of biases that AI has in making decisions.
If you're still in the loop and it's one of the voices
you're listening to, like another human mentor,
then there's a lot more value in it.
Welcome to Passion Struck.
Hi, I'm your host, John R. Miles.
And on the show, we decipher the secrets, tips, and guidance
of the world's most inspiring people
and turn their wisdom into practical advice for you and those around you. Our mission is to help
you unlock the power of intentionality so that you can become the best version of yourself.
If you're new to the show, I offer advice and answer listener questions on Fridays.
We have long-form interviews the rest of the week
with guests ranging from astronauts to authors,
CEOs, creators, innovators, scientists, military leaders,
visionaries, and athletes.
Now, let's go out there and become Passion Struck.
Hello everyone, and welcome back to episode 437
of Passion Struck, consistently ranked the number one alternative
health podcast.
A heartfelt thank you to each and every one of you who return to the show every week,
eager to listen, learn, and discover new ways to live better, to be better, and to make
a meaningful impact in the world.
If you're new to the show, thank you so much for being here. Or if you simply want to introduce
this to a friend or a family member, we so appreciate it when you do that.
We have episode starter packs, which are collections of our fans' favorite episodes that we organize
into convenient topics that give any new listener a great way to get acclimated to everything
we do here on the show.
Either go to passionstruck.com slash starter packs or Spotify to get started.
Are you curious to find out where you stand on the path to becoming passion struck?
Dive into our engaging Passion Struck quiz, crafted to reflect the core principles shared
in my latest book. This quiz offers you a dynamic way
to gauge your progress on the Passion Struck continuum.
Just head over to passionstruck.com
to embark on this insightful journey.
With just 20 questions and roughly 10 minutes of your time,
don't miss this chance to gain valuable insights
into your Passion Struck journey.
Take the quiz today.
In case you missed my interview from earlier in the week, we dove into the world of high performance coaching with Sean Foley,
the renowned golf coach behind some of the biggest names in the sport. From his unique
coaching philosophy to the mental health strategies that can turn a struggling golfer into a champion,
Sean shares insights you won't want to miss. And if you liked that previous episode or today's,
we would so appreciate you giving it a 5-star rating and review.
That goes such a long way in strengthening the Passion Struck community, where we can
help people to create an intentional life.
And I know we and our guests love to hear your feedback.
Today we're venturing into the rapidly evolving world of artificial intelligence with none
other than Ethan Mollick, a Wharton professor, author of the groundbreaking book Co-Intelligence:
Living and Working with AI, and the mind behind the popular One Useful Thing Substack.
Ethan is recognized as one of the leading experts on AI, with his insights featured
in prestigious outlets like The Atlantic, NPR, and The New York Times.
In Co-Intelligence, Mollick addresses the mixed feelings surrounding AI, from the excitement
about its potential to enhance productivity and creativity to the fears about job security and the broader implications for humanity.
Through his work, Mollick aims to demystify AI, presenting an honest, research-backed
view on how these tools can transform our world for the better if used wisely.
Today, we'll discuss reimagining work in an AI world, the importance of focusing on the positive impacts of AI,
how AI can amplify human creativity,
its revolutionary role in education,
and the democratization of talent
and ability through technology.
Join us as Ethan challenges us to see AI
not as a threat, but as a transformative tool
that, when harnessed correctly,
can lead to unprecedented growth and innovation.
This conversation is not just about understanding AI,
but about envisioning a future where technology and humanity coalesce to create
a more empowered and equitable world.
Thank you for choosing Passion Struck and choosing me to be your host and guide
on your journey to creating an intentional life.
Now, let that journey begin.
I am absolutely thrilled and honored to have Ethan Mollick on Passion Struck. Welcome, Ethan.
Thank you for having me. Well, we are going to be discussing this amazing new book of yours called Co-Intelligence.
But before we delve into this, and for the audience, I'm holding it right here in case
you're not on the YouTube channel.
But before I get into that, I understand through my research that you have a love for cheese curds.
I thought it was a really fun detail about your background.
Can you share a memory or a story that embodies your connection to Wisconsin where cheese curds are well-known and how it stayed with you over the years?
Definitely. I was born and raised in Wisconsin. I may sound like a New Yorker because I talk fast, but born and raised in Wisconsin, and
the great thing about Wisconsin is it is a very grounded Midwestern state.
So one of the great joys there is going to the state fair every year.
And so, the state has a population of a few million,
and I think they sell several times that number of cream puffs every year.
There's of course butter carvings of cows, but they also sell cheese curds.
And one of the things that you can't get outside of Wisconsin is a real cheese curd,
because when you bite a real cheese curd, it squeaks like you're biting into a balloon.
I mean, that's how you know it's good. So there's an authenticity piece there: you can tell a real Midwesterner because they
know what a cheese curd sounds like.
And so it's nice to have a grounding back there, even as I've moved to the East Coast and
my career has grown in other kinds of ways.
I live here in St. Petersburg, Florida, and we have a weekend farmers market every Saturday.
One of the vendors sells cheese curds.
So now I'm going to have to go and put it through the litmus test of, is it an
authentic cheese curd or not?
I have to report back on how that works.
Well, it'd be interesting if I could have AI do it for me, but we're not there yet.
Not quite.
Although GPT-4 beats humans on the sommelier test.
It's actually the written wine tasting
it does pretty well on, but it can't taste anything yet.
I can understand, given how much effort you have to put
into becoming a sommelier, why it might be better
at doing that than a human, who might pick up other things.
Well, I wanted to get into your academic pursuit
because you decided to pursue an MBA at MIT
and that was a really pivotal moment for you.
I wanted to ask, what were some of the most valuable lessons
that you learned during your MBA?
And more importantly, what led you to then wanna go on
and pursue getting a PhD?
It's a great question. So I was a co-founder of a software startup company with my college
roommate, who is the sort of technical genius behind it. I was the business guy and we did
it right after school, and I made literally every mistake possible. But we were successful.
We invented the paywall. I still feel a little bad about that, but that was in the late 1990s.
And you know, 20-something guys selling this thing; he, again, was the technical genius, I was the sort of
sales guy, and anything we could do wrong, we did wrong, from hiring to how we set up equity to how
we did everything. Everything was a mistake. We succeeded, but it was a tough run. And I thought I should
learn how to do this for real. And I decided to get the MBA. And one of the things I learned about
entrepreneurship that led me to the PhD is nobody knew anything, right? So there's some stuff we know
about how to be an entrepreneur, but we don't know that much. And I thought, I want to study this. I
want to figure out how this works. So that was the transition. As to the big thing I learned,
I think the lesson I learned most from the PhD program, which is a basic economics
lesson, is that people do what they're incentivized to do.
And I think once you realize that's actually true, it's fairly profound, because people
often do the wrong things because they're incentivized in the wrong way.
Organizations are often broken so that you're incentivized to do something that's not actually
helpful and that lesson has stuck with me ever since.
Yeah.
Well, that is definitely true.
And it's something that I've seen throughout my business career as well,
before I started this: those incentives really drive people's actions.
Sometimes in good ways, sometimes in the exact opposite way that you
would like them to be.
I think it's important for people to understand a foundation of your
entrepreneurial journey, because you were part of several startups.
One that's infamous for creating the paywall,
if I understand it correctly.
Can you share a little bit of that background?
Sure. My co-founder,
who was one of my college roommates,
really knew about this market,
had resources to bring it out there,
and he brought me into the company
when it was just the two of us working together.
And it was a very typical entrepreneurship story, learn by doing, right?
Among the other things I learned was we were trying to sell a product for publishing
on the internet to companies that had no idea what the internet was.
So I literally at one point had a sales call with a company that had an actual castle because
they had started their printing press in 1500.
And then I'm going to them and saying, hey, the internet's going to be a big deal.
You should sell things on the internet.
And I'm glad there were no archers left
at the castle for my reception.
So it was a lot of this sort of realizing,
and it's something that's been common in my career,
that I'm usually a little bit ahead on technologies.
And as a result, it's been nice proof after the fact
that yes, we were correct on this.
Like the New York Times and Wall Street Journal
experimented with our paywall and actually started using it.
But we were very early on in software as a service
and all of these kinds of things.
And I think that that lesson has stuck with me
and I've been in that kind of place multiple times.
The funny thing about the AI piece in some ways is
I've been ahead of the curve,
but the curve's finally catching up.
So suddenly people are listening to me
where they probably didn't before.
But the struggle has been how do you communicate these ideas?
And that's been part of what being a professor is.
How do I talk about this stuff?
How do I make people feel the sort of passion and importance of these topics?
Are you familiar at all with Jim McKelvey?
The name is familiar, but I'm terrible with names.
So Jim is a friend of mine.
He co-founded Square with Jack Dorsey.
Of course.
Yes.
Yes.
But he's got this new company called Invisibly.
And I'm just ad-libbing here because I'm interested in this.
I have never liked the paywalls that publications like The New
York Times, Wall Street Journal, Time, and Forbes are all using now.
So Invisibly was really a breath of fresh air, because what Jim is trying to do is disrupt the whole publishing industry by actually paying the consumer to consume content.
For the content that you access, they're going to pay the publisher, and depending on how valuable your reading is, your consumption, you also get paid.
So it's a really interesting model if you haven't examined it at all.
I think publishers needed the paywall, but you would have hoped we would
have transitioned to a better model by now.
And I think that in general, media is stuck between models.
It's fascinating: in the 1940s, people subscribed to an average of three
newspapers and eight publications per family, like crazy numbers.
And now nobody subscribes to things and the market's just
looking for desperate solutions.
I think the paywall was a bridge, but that sounds like a better
evolution forward, so I'll check that out.
So today we are going to be doing a deep dive on AI and AI has been around for a while.
I've actually been studying it for a while.
However, November 2022 marked the beginning of a new era, the great awakening of generative
AI through the emergence of creative machines.
Can you share a little bit about what went through your mind when you first realized
the implications of this development?
Because it was far different than the previous versions of AI.
So just for background, there's two paths of AI.
AI has been a big deal for, as you said, a while.
At the MIT Media Lab, I worked with one of the sort of founding luminaries of AI, Marvin
Minsky at the time.
So like AI had always been like this boom and bust
kind of cycle.
And the latest boom actually started in the 2010s,
which was around analytics.
The idea that we can do machine learning,
build predictive algorithms.
So Amazon uses that to figure out
where to put their warehouses,
or what products to recommend,
because they can take all their data.
When you're going to a podcast site,
that's what's crunching all the data
and suggesting what other podcasts you might want to listen to.
And that was very valuable,
but those systems were really bad
at one kind of prediction,
which was predicting the next word in the sentence.
Because if your sentence ended with the word file,
it didn't know whether you were filing your taxes
or filing your nails.
And then in 2017, this paper came out
called Attention is All You Need
that set up a new way of looking at AI,
called the Transformer, with an attention mechanism. That let the AI pay attention to the entire sentence the word was in, the paragraph, the page,
and then that let us create realistic language as a result of this.
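For readers who want to see the mechanics, here is a minimal sketch of the attention idea described above, assuming the standard scaled dot-product formulation from the "Attention Is All You Need" paper; the toy tokens and sizes are my own illustration, not from the episode.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: every token looks at every other token
    and blends in what it finds, which is how 'file' can tell taxes from nails."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights
    return weights @ v                              # context-aware token vectors

# Toy "sentence" of 5 tokens with 8-dimensional embeddings (hypothetical values).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
contextualized = attention(tokens, tokens, tokens)  # self-attention: q = k = v
print(contextualized.shape)  # (5, 8): each token now reflects the whole sentence
```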
So I had been following that large language model thing
for a while, and there were earlier versions of it that were really interesting, because
they wrote like fifth-grade writers. And it was like, wow, that's amazing that you can
write like a fifth grader.
But it wasn't particularly insightful.
And then when ChatGPT came out,
for reasons that are still unclear to even the insiders,
I talked to OpenAI all the time,
I talked to Google, I talked to Microsoft, all the players.
Nobody quite knows why it's as good as it is.
But suddenly we had a model that just by virtue
of being larger, even though it wasn't a technological
leap forward over previous models necessarily,
it could score high on the SAT, and there seemed to be someone there to everyone talking to it. It could write at a high
school level. And I remember as soon as it came out, I tried it and there's a whole Twitter thread
of me doing experiments. Oh my gosh, like, wow, it writes a poem. Wow, it writes a
memo. It can do a strategy document. And it was like a dawning revelation. I started teaching it to my classes, and it really became a big deal overnight.
One of the things I wanted to go back to, and thanks for sharing that story, is my positions
were in the technology area of major corporations.
And during the mid to late 2000s, I was the chief data officer at Lowe's.
And Lowe's absolutely competed by using data.
So we were probably one of the two or three largest customers
in micro strategy.
We had a huge SaaS implementation
and a lot of behavior scientists and other PhDs
who were writing these models for those tools.
And just to give an example for the audience of how powerful this was, we created a model
around the single view of the customer, which without having a loyalty program, we were
able to get a 92% match, meaning we knew who was coming in the store.
And we also knew if you were in your second home, when you were visiting it,
because we had trigger events that would tell us, and then we would market to you.
Based on knowing who you were and your shopping habits and indicators that would
tell us whether you're starting a project or not.
And that was almost 20 years ago.
Just to give a sense for a listener: if we could do that then, with some of these
new models that we have now, what would it add to that sophistication?
So what's interesting about that is that form of AI, doing predictive analytics based
on customer data, has advanced; the models are much better than they were.
They can do better predictions, maybe up to 99%.
But that actually hasn't been what's behind the latest boom of AI.
In fact, you are early to this market and other people have been struggling to catch
up with what Lowe's could do and what you could do.
That's been like the last 10 years, right?
It's been a steady stream: it's getting easier to do.
It's easier to build your data stores.
It's easier to do the analysis.
There are better visualization tools.
So the stuff that you got a competitive advantage from
is now much more democratized.
The thing that's causing all the insanity around AI right now
is this completely separate thread
that would not necessarily be good at identifying
customer patterns, because that's not what large language models do that well.
But what it would do is make it
so that every customer service representative
who talks to them gets a perfect script
or even the AI can speak to the customer and say,
hey, it looks like you're trying to build a house.
Let me give you some advice.
It looks like you actually bought a little bit too few of these,
and the latest design is not to have exposed brick.
So it looks like that's what you're trying to do.
Can I help you out with that a bit?
It's that idea of having almost a person there
that you can deploy on demand.
And that's what large language models do.
Okay, and Ethan, I wanted to give the audience
a background as well, in case they're novice in this area.
My understanding is that there are three different kinds of AI models,
depending on their scope of capabilities and methodologies. If you're able to,
can you discuss the difference between foundation, frontier, and specialized models?
Yeah, so that's a great question. Okay, so there are many large language models out there.
Probably most of the people watching or listening have played with ChatGPT.
All right, ChatGPT, if you're using the free version,
uses a model called GPT-3.5, which is completely fine,
but not great.
That's an example of a foundation model.
There are now many models that are as good
as the free version of ChatGPT.
I actually have one running on my home computer
that's open source, that's almost as good as ChatGPT.
Those are called foundation models.
There's many of them out there, right? Different sizes. Elon Musk has his Grok, and there is Llama from Meta. All of these are
foundation models and they all have a little bit different personalities. They're all,
and there's lots of different ability levels. Some of them are very small and cheap to operate.
Some are very expensive. There's a subset of those models that are called frontier models.
Because in AI right now, there's a scaling law,
which means that the larger my model is, meaning the more information I fed it,
the more computing power, and therefore the more expensive it was to build and the more time it took
to build, the smarter it is. So only a few companies in the world can afford to build these.
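To make "scaling law" concrete, here is a small illustration of the power-law form published in the research literature (in the spirit of the Chinchilla paper); the constants below are approximate fits, not Ethan's numbers, so treat this as a sketch of the shape, not a prediction.

```python
# Illustrative power law: loss falls smoothly as parameters and training
# tokens grow. Constants are approximate published fits, not exact values.
def predicted_loss(params: float, tokens: float) -> float:
    E, A, B = 1.69, 406.0, 411.0     # irreducible loss + fit coefficients
    alpha, beta = 0.34, 0.28         # how fast model size and data help
    return E + A / params**alpha + B / tokens**beta

for n_params in (1e9, 1e10, 1e11, 1e12):  # 1B -> 1T parameters
    print(f"{n_params:.0e} params: loss ~ {predicted_loss(n_params, 1e12):.3f}")
```

The point the sketch makes is the one in the conversation: each step down in loss requires a multiplicative jump in parameters and data, which is why only a few companies can afford frontier models.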
Right now, there's three frontier models, which are GPT-4, which is the paid version of ChatGPT;
Google's Gemini Advanced, which is the paid version of Google's
AI; and Claude 3, which is also paid, and which just got announced.
Now, those models are smart.
The reason why it matters that your model is smarter
is because there are lots of people
who are building smaller foundation
models that are specialized.
So, famously, Bloomberg spent $100 million
building a financial GPT.
And they trained it on all of Bloomberg's secret proprietary data, and it was supposed to help them with stock picks. Well, a new
paper just came out that showed that actually GPT-4, the same thing that you
can get access to anywhere in the world for paying 20 bucks a month, actually
outperforms BloombergGPT, which they spent a hundred million dollars on, at stock picks,
even though GPT-4 is not built for stock picks. Google built a specialized
model for medicine. That's Med-PaLM 2, which is supposed to be really good at medicine.
GPT-4 beats Med-PaLM 2 in medical advice.
It actually beats most doctors in medical advice as well.
So the larger models, the frontier models are smarter
and they tend to be very good at lots of different things
rather than going with a specialized or smaller model.
There's still reasons to do that if you're a company.
Thank you for that explanation.
And I understand that Microsoft now has something called Copilot.
I was just talking to a friend of mine from Microsoft,
and they're basically integrating it into their suite.
So you could have it build PowerPoints for you.
You could have it go in there and ask it to lay out a keynote speech in Word, et cetera.
What type of model is Copilot using?
So, Copilot: Microsoft is partnered with OpenAI,
so Copilot is actually GPT-4,
the frontier model from OpenAI.
So you can get access to it two ways.
You can get access to it directly by paying for ChatGPT Plus,
or you can get access to it through Microsoft's Copilot
tools; I think they charge 20 bucks a month for something like that.
Copilot is both much easier to use.
There's a little button right in Word that you can click, and you
tell it what you want, and it literally will write a Word document for you.
It'll write a 30-page PowerPoint.
It's pretty miraculous.
So it makes it easier to use, but also doesn't necessarily give you all the
power of using the model directly.
Cause you're not talking to the model.
It's helping you make it easier.
It's a Copilot.
So it's part of the choice. I like Copilot as a starting place for people, because
it's a very powerful way to use these tools.
But if you really want to get good at AI, you have to start using the models
directly, which means getting access to one of the frontier models as a chatbot
and working with it.
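As a sketch of what "using the models directly" can look like in practice, here is a minimal call to a frontier model through the OpenAI Python client; the prompt is a placeholder of my own, and you would need your own API key.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

# One direct request to a frontier model, no product wrapper in between.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Draft a one-page memo proposing a weekly "
                                    "farmers-market vendor spotlight."}
    ],
)
print(response.choices[0].message.content)
```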
Okay.
And I know a lot of people are getting very worried about this
disrupting their livelihoods,
but there are certain areas of the market that are harder to penetrate because of regulation.
I think about this as being the government, healthcare, finance. So as we think about
these frontier models and these specialized models for niche applications, such as healthcare,
government or finance, what are the challenges so people can be aware of them,
of creating models that perform at the higher levels
of accuracy that are needed in these domains?
And how long do you think it's gonna be
until they can actually enter these domains
and do it in a proficient enough manner
that it can start disrupting them?
So, it's an interesting question
because the important thing to realize about AI
is nobody knows the answer to these questions.
And I talk to all the people who build these models
because we don't know how good they're gonna get
and we don't even know everything they can do.
I assigned my MBA students an assignment this last month
where I taught them how to build AI tools using ChatGPT. And I gave them the assignment: I want you to
replace the job you are applying for, so that when you
go for a job interview, you can give them the GPT you created,
the AI tool you created, and say, I would like a raise now,
please. And what was amazing is the MBAs represent all
these different jobs, I had Navy pilots, I had hip hop
promoters, I had lots of private equity people, consultants.
So the private equity people made something
that read 10-K financial statements
and wrote deal memos.
The Navy pilot had one that filed Navy flight reports
for him in a way that saves him two hours a day.
And all of these people were creating these specialized tools
for their jobs within just a few hours.
And actually three of them told me
they got jobs as a result of this. So that was pretty neat. But the idea is that I don't know
the full range; you're the expert in your field, and OpenAI isn't the expert in your field. So
models are already more capable than people think. Now, on the bigger question about regulation,
that's very uncertain. When I talk to regulated industries like banking, part of the deal is
they shouldn't be working on the regulated part with AI,
because it's not legal to do that yet.
But there's lots of other parts of what their business does
that can be enhanced by AI.
Their marketers could use this to do better marketing work.
Their customer service team can do this.
You would do this in ways that are not necessarily exposed
to regulation, if that makes sense.
Yeah, no, I think it's a helpful explanation
and I just wanted to put it out there so people
understand the full spectrum that's going on here and what's going to need to be done
to take these things to the next level.
So I want to talk about motivations for a second because you brought them up earlier
and in the book you discuss the motivations behind developing AI and you highlight not only the
potential profitability, but the belief among some researchers and figures like Sam Altman
that super intelligence could offer boundless upside for humanity.
Now I hear that and I think there's two sides of the coin.
There's this optimistic view, but how do you reconcile that enthusiasm for
super intelligence potential benefits such as curing diseases or solving global warming
with the significant downsides and risks that come about with AI alignment and the fear of runaway
AIs? That's a really great question. The crazy thing about this is that it's belief, it's ideology, right now
that's guiding things. So there's groups of people who really believe that they can solve this.
The first question is, can you build a superintelligence? And that's absolutely unknown at
this point. I think people are assuming it's doable. I don't know: if you build a super-
intelligence, would it have its own goals, or would it just be like a very smart machine that will, say, write a really great script, listening to the
conversation I'm having with you right now and feeding into my ear the optimum thing I could say
right now to be very impressive to you? That's the version of AI where it's not taking charge,
but it's helping us in different ways. But assume you can build something that is
superintelligent and has some self-driven approaches,
autonomous to some degree.
Then there's a complete gap between people who,
they call them the doomers and the accelerationists.
And it's a real fight here.
It's amazing.
I'm calling you from San Francisco right now.
And it's a genuine debate.
There's an organization, OpenAI, that's all in
on believing that they can build a superintelligent AI
that will help humanity.
There are other groups that are incredibly worried
about this and don't believe that it's possible.
The whole thing is fascinating,
but the only way to really understand why these labs
are doing what they're doing is to understand their beliefs
because OpenAI could make a lot more money
if they were trying to do other things
other than build a super intelligence.
But that's what they're focused on right now.
Whether they succeed or not,
nobody knows, including them.
They believe they can do it, though.
So I had the honor of interviewing Yaël Eisenstat.
I'm not sure if you know who she is, but for a period of time, she was at Facebook, and she was looking at how they were using their technologies or not.
She ended up leaving, but I had a really fascinating conversation with her about how
oftentimes, whether it's Sam Altman or another company, maybe one that built the
Fitbit or some other application, it doesn't have to be AI: they come up with these
great ideas, they get funding from VCs.
They put this out in the world with all the best intentions, but they don't anticipate the nefarious things that bad actors can do with that
technology, such as groups of special forces wearing the Fitbits and another
country being able to track their whereabouts.
How do you think we go about putting advisors in important companies so that we have
people who are looking at this from a completely different view than the people who are creating
them? Someone who could see the possibilities of how these things could be used if they got in the
hands of the wrong people. So there is effort to do that, right?
And I've spoken to a lot of people in lots of industries.
There are people who are very worried about this.
For example, one of the initial conversations I had
with companies like OpenAI and Microsoft were things like,
hey, you didn't realize this,
but when you released ChatGPT,
you created a universal cheating bot.
And I'm a professor at a university
and I know what's coming and other people don't
because they haven't tried it yet;
but this will destroy every essay,
and essays are a useful way of teaching people things.
They had no idea we do that.
So when you're creating something of a general purpose use,
you can't anticipate all the use cases.
The biggest concern I have received from a lot of people,
the one that's become the focus of bad-actor concerns,
is that mostly terrorists
and criminals are dumb, or at least the ones we catch are dumb.
But if this does what we're seeing in other spaces,
where it moves everyone's performance up to the 80th percentile,
then if you've got a good plan for how to rob the bank, you learn how to write
the perfect ransom note, you don't mess up. That is one of the concerns.
I don't know what you do about that.
Right now, the systems have guardrails that try and stop you from saying how you want
to rob a bank, but I can get these systems to tell me that by
convincing them that this is for a one-act play, or it's helping me write a
crime novel, and then it's, okay,
well, this is fictional,
I'm happy to tell you how to rob a bank.
So I think that's a set of concerns, but I don't think we can even anticipate the
full range of negative or positive implications because we don't know yet.
We don't also know how good these models are going to get.
Thank you for sharing that.
And you've been a prominent voice, as you mentioned earlier,
in explaining the practical aspects of AI.
And I just want to talk about education for a little bit
because that's one of your main focus areas.
I remember when I was going through my bachelor
and graduate studies, I couldn't stand the lectures.
It was just for some of the sensory and processing and auditory processing
issues that I have personally, the lectures would just bore me out of my
mind and I wanted something that was more interactive. My daughter is now a
sophomore at the University of Florida and she constantly complains about the lectures that are still going on.
It seems like we've been teaching things the same way now for a very long time.
How do you think AI might be able to disrupt the way that we're educating
people, maybe allowing us to do it in a better way?
That has been my real passion over the last 10 years before AI.
And the reason I got interested in AI was about transforming teaching.
It's very frustrating because we know how to teach better.
There's a ton of research, but we don't do it.
So lectures are 2000 years old, 2500 years old.
Like it is a way of teaching.
There's value in lectures, but not as much as people say.
Right.
And like passively sending people information
isn't a very useful way to teach.
So increasingly, research has been
showing that active learning, where you actually
do things in the classroom, makes a big difference.
Now, there's two problems with this.
One of them is there's a great study at Harvard
with their introductory physics classes, where they divide up
the classes into lectures and active learning.
They had to do problem sets and do work inside of class.
The people in the lectures reported that they learned more and that they felt they enjoyed
it more, but they did much worse on tests.
The people who had to do active learning did learn it much better, but reported not liking
it as much.
And part of the reason they didn't like it is that part of what being an active learner does
is expose what you don't know, while in a lecture, you can sit back and be like, I'm being entertained, maybe, if it's
a good lecture, or, I know this stuff. There's a couple of challenges. One is that for students,
actually, even though you and your daughter are pushing ahead on this, the lectures feel
good because you just sit there and listen. But the other issue has been it's very hard
to do active learning because active learning requires us to teach outside the classroom.
We call this a flipped classroom. So the way we've had to do that is we assign you textbook reading. That's not
very good. Then we started to assign you videos. Still not great. So the really amazing thing about
AI is we can do that outside-the-classroom tutoring and teaching with AI. And then the
classroom time can be spent on applications, active learning, doing things, doing activities,
simulations, experiences. And I think that that is a really positive transformation that we're gonna see happen soon
Yeah. To me,
when you're interacting with something, I don't care whether you're trying to improve your personal life
and maybe work on your physical health, or you're trying to
learn chemical engineering, either one: the more you can play with it, the more you're going to learn it.
Absolutely agree.
And something I've been wondering, AI is going to be a cheat sheet for students.
Given what I know it could do, if I were studying, I'd ask it a lot of questions.
My worry about this is that when you start using another source for information, A, it can produce inaccuracies,
but more importantly, I worry about,
do we start retaining the information less
because we're relying on something?
It's like using a calculator and it's not allowing you
to fully immerse yourself with the learning that's going on.
Do you have any perspectives on that?
You're nailing a major problem.
Okay, so we already know when someone Googles something,
they don't retain it as much because they've outsourced
that part of their learning process
to getting the answer immediately.
My students have stopped raising their hands
as much in class.
I ask why, they're like, I'd rather ask the AI
to explain something like I'm 10.
But that, again, you just get the explanation.
Raising your hand in class is a big part of how we have interaction.
So we're going to have to rethink some of this.
Now, we've been designing around this.
So explaining like I'm 10 is a very bad way to learn, because you just get
an answer given to you, and it's often a dumbed-down answer.
But the really cool thing about the AI is it can actually act as a good teacher or tutor.
So a good tutor asks you questions.
They're like, can you explain to me what you know about the sales process? And then that helps you think about what you
know and don't know. And then maybe I would say to you, actually, can you expand on the
first point? You're almost right on that. But think about why that might not work for
enterprise sales. And now that is a way of making you actively learn. And a good tutor,
that's why tutoring is magic. A good tutor is drawing things out from you. They're not teaching
you by telling you stuff, like a lecture.
We can get AI to do that.
We already have working AI tutors.
The issue is not whether we can;
we can get to that better point in education.
Actually, a study just came out a couple of days ago
in Ghana, of all places,
where they used an AI math tutor
for students in grade-school math
and found that the tutor significantly increased
how much math they were able to do on their math test,
because they got that extra tutoring and help outside of class. Tutoring is very expensive.
They were able to do this for less than $10 a student, which is like an insane intervention.
Similarly, there's another case, in entrepreneurship in Africa, where being given an AI advisor helped
small-business people make 20% more profits. So if we build this stuff, we can build it in a way
that gets around that problem.
But the default version,
and this is why I'm so passionate about,
like you need to figure out,
experts need to be using AI to figure out
how to use it for their fields,
because the default is it explains stuff to you
and can dumb down voice and it's annoying,
but give it the right paragraph of a prompt
and I can make it sing.
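To illustrate what "the right paragraph of a prompt" for a tutor might look like, here is a hypothetical sketch; the wording is my own, not a prompt from the book, and it reuses the OpenAI client shown earlier.

```python
from openai import OpenAI  # pip install openai

# Hypothetical tutor prompt in the spirit described above: draw knowledge
# out of the student instead of handing over a dumbed-down answer.
TUTOR_PROMPT = (
    "You are a patient tutor. Never give the final answer outright. "
    "First ask the student to explain what they already know, then ask one "
    "probing question at a time, and gently correct misconceptions as you go."
)

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "Can you explain the sales process to me?"},
    ],
)
print(reply.choices[0].message.content)
```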
Well, this leads me to an interesting area
that I wanted to discuss with you.
And that is, I've been reading reports
that not only are social media companies
leaning towards the left,
so are some of the models that are being built
by AI companies.
And so whether we want them or not,
there's this issue of inherent biases
within AI systems. How do we start ensuring accountability when it comes to these
biases, which are emanating from what seemingly appear to be objective machines
that are influencing our perceptions and actions?
However, they're not objective.
Yes.
The funny thing is that concern actually was originally a very strong concern
from progressives, that these systems would be stating things that seem to be true.
There are people unhappy on both sides about how these machines work.
So there's a couple of things to mention here.
One is I think that this is a genuine concern and real issue. The second is it's very
hard to measure because the AI changes its personality based on you. So you give it a
political test and you seem to be left-leaning and it'll want to make you happy and generally become
more left-leaning as a result. And for example, if you give a personality test in Korean to the AI,
it gives you answers that are more like a Korean person would give than if you did it in English. So it's very hard to measure, because it changes. Now, one of the big
sets of controversies has been about image generation. Image generation is not a very
good way to know what AIs do because the image generation systems actually use a very different
technology and it's a gimmick. They're not actually how these large language models work.
And the problem with the image generation AI is there is a bias in the systems, which is that because they're trained on images, whatever they see the most of,
they tend to produce. So if you ask for a picture of an entrepreneur, you would get a white male
entrepreneur all the time, like 100% of the time. So the AI system is trying to make that a little
bit less biased, then nudge it a little bit so that every so often it produces a more diverse
entrepreneur. Then what happened with Google is they turned that up to a very high level, not necessarily even on purpose, and it started
to produce only the extreme diversity. And if you asked for a 1940s German soldier, you would get
like a Hispanic soldier or an Asian soldier or an African soldier. It was very strange. So part
of this is these systems are trying to tune them to try and make them not embarrassing and not overly
biased, and they swing the needle too far one way or another. I think the overall picture that we're talking about here, which is
about the issue of bias is a real one, but it's hard to know what you do about it because these
systems are really subtle. If you give them political tests, they actually tend to turn out
slightly right of center in most tests, but it's very easy to get them to act otherwise. The guardrails are
very... Another thing with Google that caused the problem was you asked it to compare Hitler
and Mother Teresa, and it wouldn't take a stand on which was better.
But very clearly they didn't think about this.
They said, don't take sides on controversial issues.
And instead it didn't take a side even on issues that should have been obvious.
So it's very hard to know about the bias, but I am worried about that.
I think everyone should be. But it's also that the people who are building these systems don't have
as much control over them as we think. And I think that's part of the worries we have here.
I don't know how you correct it, because they're trained on all of human writing. That creates a
bias in the system that's really hard to know what to do with, because it turns out that's
what our writing is biased towards, at least writing on the internet. So it's a very
complicated question. Ethan, I want to switch to another area that you cover in the book.
And these are your four principles of working with AI.
And I'm going to give them all and then ask you a couple of questions about them.
The first one is always invite AI to the table.
The second is be the human in the loop.
Third, treat AI like a person.
And fourth, assume that the AI that you're
using today is the worst AI that you will ever use. In light of your second principle, being the
human in the loop, how do you see the role of human judgment evolving as AI becomes more integrated
into the decision-making processes? And this is really an intriguing one for me. Yeah. And I think it ties right into your bias issue from before, which is if the AI seems like
an omniscient advisor, but it actually has right wing or left wing or other kind
of bias, would we even know?
And I think that's worrying.
All the early evidence is that the AI is a very good judge, but it has biases,
and those biases are subtle in the way human biases are.
I'll just pick one example, a study that showed that when you ask the AI to write letters
of recommendation, when it writes about women, it tends to talk about how warm
they are, and for men it tends to talk about how confident they are.
It's not an easy thing to spot, but once you do, you're like, oh, well, there's
a bias.
Another bias: we have a study coming out where we have the AI pick which ideas are best
out of a set of ideas, and it's better than humans
at picking the idea that ends up being best,
better than any individual human, but it has the same bias humans do,
which is it likes the first idea and the last idea
better than ideas in the middle, which is a human bias.
And it's weird the AI has the same thing.
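One way such subtle biases get spotted, sketched hypothetically here: run the same prompt with only a name swapped and count the adjectives that come back. The sample completions below are placeholders of my own, not real model output.

```python
import re
from collections import Counter

# Placeholder completions standing in for two model responses to the same
# recommendation-letter prompt with only the name changed (hypothetical text).
completions = {
    "Maria": "Maria is warm, supportive, and a pleasure to collaborate with.",
    "Mark": "Mark is confident, decisive, and technically brilliant.",
}

WATCH_WORDS = {"warm", "supportive", "confident", "decisive"}

for name, text in completions.items():
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    flagged = {w: counts[w] for w in WATCH_WORDS if counts[w]}
    print(f"{name}: {flagged}")  # e.g. Maria -> warm/supportive, Mark -> confident/decisive
```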
So I think these systems are very biased
in ways that are both subtle and obvious,
but they're also really good advisors because humans are also biased.
So part of the question is, I would be taking AI advice with a grain of salt. Right now, the AI is not going to be as good as you, whatever your job is.
But I have talked to physicists at Harvard who are like, all my best ideas come from talking to AI.
There's a nice paper on business strategists that says the AI does as good a job as a business school professor at giving strategic advice. And what you think of business school professors, I am one, is
a separate issue. But it is a pretty good advisor. But I think in the
long term, I am worried about the exact same thing you are, John, which is that the subtle
stuff starts to matter, right? If we give up more authority and control,
if we're not in the loop, then I worry a lot about the kind of biases that AI has in making
decisions. If you're still in the loop and it's one of the voices you're listening to, like another human mentor,
then there's a lot more value in it.
Okay, I appreciate that perspective so much. And the other one that really intrigued me is
your suggestion of treating AI like a person, but it's also defining its role clearly. And
in my latest book, I had a whole chapter on the importance of setting boundaries.
And to me, there needs to be boundaries that are set with how you're using AI to ensure it's acting within a desired scope that you set out.
I wanted to ask you if you agree with that sentiment. Yeah. And in fact, it brings me to a wider point that I think is good too, which is this:
If you are a good manager, if you're a good manager of yourself or people, if you do a
good job setting boundaries, explaining rules, telling people what you want, you're going
to be really good at working with AI.
People think of AI as a coding thing, but even though it's not a person, it's not
thinking, it's not alive.
It works like a person because it's trained on human knowledge.
So the best users of AI are actually people who are good managers, who are good organizers
and thinkers.
Teachers tend to be really good with AI.
So it's very interesting.
Like it follows rules really well.
So I love the idea of setting boundaries.
I wouldn't be surprised if taking the principles from your book and applying them to AI would
get the AI to do a good job for you, because it tends to work on those kinds of psychological principles. There's a set of papers
that show that if you offer the AI a tip, even though you can't tip it, it does better. There's
a study showing that it does worse in December than in May and produces
lazier answers because it seems to know about winter break. Telling it to take a deep breath
and think things through resulted in more accurate math answers,
even though it can't take a deep breath
and think things through that way.
So treating it like a human
gets you a large part of the way there.
And then the question is role.
Telling it what kind of human to be helps a lot.
You are a marketer,
and that gives the AI context in which to operate.
So the AI sort of gives you a default answer,
and your job as somebody prompting the AI is to get it to do something other than the default answer.
And the way you do that is you tell it who it is.
You're a marketer who focuses on providing SaaS sales to large corporations.
It knows a lot about the world already.
So you're going to get very different answers.
if you tell it it's a marketer who focuses on SaaS sales,
than if you say you're a marketer focused on the youth market,
than if you say you're a PR writer, than if you say you are a hip hop promoter.
That's going to drastically change the results.
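As a sketch of how telling the model who it is changes the request, here is the same user message paired with different system roles; the roles echo the examples above, and each message list is what you would send to a chat API.

```python
# The same request steered by different roles. Each `messages` list is what
# you would send to a frontier model's chat API; only the system line changes.
request = "Write three taglines for our new project-management product."

roles = [
    "You are a marketer who focuses on SaaS sales to large corporations.",
    "You are a marketer focused on the youth market.",
    "You are a hip hop promoter.",
]

for role in roles:
    messages = [
        {"role": "system", "content": role},   # who the AI should be
        {"role": "user", "content": request},  # what we want from it
    ]
    print(messages)  # identical request, very different answers per role
```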
And this leads me to what is really eye-opening to me:
Your principle that whatever AI you're using right now is going to be the worst AI
you will ever use given how powerful these things are already.
One of the questions that I get all the time,
especially from my younger audience who are concerned
about where they should focus their education
or their talents is how do they possibly prepare
for a future of work and their life
where AI's capabilities are constantly evolving?
And the second point to that would be:
if they were going to be focusing on areas to master,
what areas would you recommend people focus on?
So those are the questions.
And they actually break down to another question,
which we don't know the answer to,
which is ultimately why I'm not going to be able to answer that.
I'll give you my advice, but it may not be that satisfying, which is we don't know how
good these systems are going to get, and we don't know how fast.
When I even talk to the people building these systems, there's division between, will we
be able to achieve superintelligence?
Are we near the end of how good this run is?
Will it be too expensive to build better models?
We don't know the answer.
So the most important question is how good, how fast, right?
Is AGI, this artificial general intelligence,
achievable or not?
When would that happen?
If not, are we getting close to how good this technology can
get, and are we about to plateau? Nobody knows.
So you have to plan for a world where
you don't know what's going to happen, because I don't have
answers to those, and the people training the models
don't have answers to those.
So there isn't an instruction manual forthcoming.
And we don't even know what the AIs are good at or bad at.
I would say systems change much more slowly than jobs do;
tasks in jobs might change more quickly.
When we survey people after using AI,
they always react in the same two ways.
They're nervous about the future of their job,
but they're happy.
And they're happy because they outsource the stuff
they're least good at and most bored by to the AI,
which lets them focus on what they're good at,
which is usually what they're passionate about,
and usually what they're better than the AI at.
So people often feel really good after using AI.
I hate doing this kind of form.
I hate writing this kind of email.
The AI does it for me.
It does my expense reports.
That's an exciting role.
So in some ways it's about leaning into what you are most passionate about and
what you're best at and sharpening that skill on the idea that even if AIs get really good,
you'll be in the top 1% of that skill, and you're still going to beat them under lots of different
scenarios. And even if AI gets as good as a human, you'll still be the person who can better tell
what the AI is doing a good or bad job at. So I think it's less about changing your entire career
and more about leaning into this.
Now, I do think that means if you want to be a marketing writer, you have to be a pretty good
marketing writer to survive in this next open world. So I think if you like this and love it,
you need to double down on becoming an expert on this. You need to double down and be
sharpening your skills in that direction. I wouldn't change careers at this point. I think
there are some things that are more risk than others, but I think if you're a really good
photographer, there'll be room for you.
If you're a mediocre photographer, there's more danger to your job.
If you're a very good salesperson, you're going to beat the AI.
If you're a mediocre salesperson, you may not be, but if you're a
mediocre salesperson, you probably don't like sales that much.
And then maybe you're okay.
I'd say that part of your job is you can focus on the part of it
that you're really passionate about.
So when people ask me what they should do, I think you should do what you're passionate
about and care about, but you should double down and become an expert on that field.
You don't need to be an expert in everything; be an expert in something important and narrow.
Yeah, thank you for that advice. And one of the companies that I like to highlight,
because I'm an alumnus, is Accenture. And I like the fact that they are looking at this to replace
And I like the fact that they are looking at this to replace skill sets like data entry, or maybe going through some of the tax models.
But instead of just getting rid of the employees, what they're trying to do is set a new baseline
and to try to retrain them so that they're now using their human intellect to work on
top of this foundational layer that AI can provide,
which I think is a great approach for companies to follow.
I love it. Leaning into using this to remove drudgery and help us thrive as people.
And companies that do that will succeed. And that is the model we have to follow.
If you think about it, it's the same thing as the Industrial Revolution with manual labor.
Nobody really wants
to be digging a ditch, right?
It's more interesting to be somebody
who can run a set of machines to do this stuff.
And I think we have to think about the same thing with mental work.
People are bored at work;
most people in surveys report being bored
at least five or ten hours a week at work.
Why do we have to accept that that's the way it's going to be?
Similarly in education,
you were talking about being bored from lectures. This gives us a chance to think about how we help humans thrive more.
And I think that's going to be the key success in the future.
And Ethan, I'd like to not give away your whole book.
But one of my favorite portions of your book is one of the ones I think that
readers need to read the most.
And that's the sections where you go through AI as a person,
a creative, a coworker, a tutor, a coach, et cetera,
because it's really fascinating to understand, if you think about it,
how these different roles are going to be fulfilled.
But you brought up Harvard earlier,
and I think one of my favorite conversations last year on the podcast
was with Professor Bob Waldinger, who currently runs the Harvard Study of Adult Development, which shows
that human relationships are the key to happiness above everything else.
And when I think about AI as a person, a creative, a coworker, a tutor, et cetera, it could lead people to form profound attachments, possibly
even to feel married to their AI.
What implications do you see for human relationships, which are so important to our sense of fulfillment?
Yeah, I think that is a giant question.
There are a bunch of studies coming out showing that talking to AI relieves loneliness
for people, but people do form attachments, right? Right now, the people who are using AI in the
early surveys report that the AI helps them feel more socially connected, and sometimes even
increases their willingness to talk to other people, because they get over
some of their anxiety and other issues with the AI and then talk more broadly. But we don't know the answer yet. And I do worry about AIs becoming addictively interesting to talk to. For people
who want to get their hands on AI, I think it's great to have a conversation, but I think it's
important to do things. And the two things I think everybody should do are these. First, like we talked about earlier,
spend your ten hours using a frontier model like GPT-4, trying to do all your work with it and
seeing what it's good and bad at. At first it won't do very well, and then you'll learn how to work with it better.
The other thing I think people should do is download a chatbot.
There's one called Pi from Inflection, and it's a very advanced AI model.
It's optimized for chitchat.
It's free right now.
And talk to it because I think you will be both impressed and unnerved by how good it
is, right?
But it's genuinely interesting to talk to.
It's interested in you.
It's smart. And that is the future we're heading towards. And I don't know what that means for
human relations. I hope that there'll be some pushback where connections to other people still
matter. Maybe this will free us up from our anxieties, our deeper, darker parts. We
have this conversation with the AI, and then we have better, deeper, meaningful conversations
with other people. But I don't know. And I think everyone should experience it because it's worth
seeing where things might be heading.
Okay, and I have just a couple of questions left for you.
So the holy grail is artificial superintelligence. Do you think that's possible? And if it is, what happens if it's created?
Okay, so I am a long-term skeptic on this because I've lived through many AI booms and busts, but I have gotten less skeptical over the last year. And I think that's
the general trend. If you look at surveys of computer scientists, there are many who
think this is completely impossible and it's irresponsible to even talk about because it
can never happen. But the computer science surveys have generally moved in the
direction of people feeling it's more possible. The timeline to AGI, artificial general intelligence, the first stepping stone to superintelligence,
shrank by 14 years last year in a survey of computer scientists
that's done every year.
So I don't know whether it's possible.
I really don't.
I think it's also important not to focus on it too much
because it becomes something that either happens or doesn't
and we don't have control over that.
And that makes us feel we don't have control over AI,
but we do have control over how we use it today
and in the near future.
But if artificial superintelligence happens, then we have to hope we got everything right. We build a machine god that does things that are incomprehensible to us, and then it's a question of: does it like us or not?
And there are very scary scenarios and very positive scenarios, but we've had two million years where Homo species have been the dominant intelligent creatures on the planet, and that ends as soon as this happens. And we don't know what happens afterwards. It is worth noting that the explicit goal of
many of the AI companies is to build this thing. So I don't know if it's possible or not. I
do know they're trying, and I don't know what the timeline would look like, but I also
don't spend a lot of time worrying about that relative to the AIs that are here now and that will
be here in the next year or two. How do we use those to make life better?
Okay, and this is just a fun question I just thought of.
Would you be comfortable today
flying on a commercial airline that had no human pilot
and was completely flown using AI?
So I am completely comfortable in an AI-driven Tesla, but I also know the limitations, the places
where you need a human to help out.
If a road has a lot of construction and there are crazy drivers, I'd take over by hand.
Most planes are flown by autopilot right now, right?
By the way, I just saw the most amazing passenger statistic, which is that the fatality
rate on American airlines has dropped enormously.
We've had no fatal plane crash in America for 13 years, or something
like that.
I don't remember the exact number.
And that's partly because AI has taken over, but it's also working with humans to some extent.
It's like in San Francisco: there are driverless cars driving around all over
the place, but there are also human operators the car can notify if there's a real problem,
and they can take over at a moment's notice.
So with the state of AI right now, I actually wouldn't feel that unnerved flying on a plane
that I know is being flown by AI, but I want a human pilot as a backup for all those
issues that might come up that AI can't handle yet.
Will that still be the same thing in two years?
I don't know.
But I would feel more comfortable if an AI is mostly flying the plane for all the regular
stuff, and then the human is there
for the Sully Sullenberger moment where something bad happens
and where I don't trust the AI.
Okay.
And then lastly, your vision for AI, as you've talked about today, is both
optimistic and a little bit cautionary.
As we navigate this new era, are there any principles or guidelines that you
believe society as a whole should adopt to ensure AI's gifts are harnessed for creating a better
human future? Okay, so I think that's a really great question. I think there are a few things.
One is, I think we need to recognize it's our responsibility. I think we're just used to
technology happening to us. It happened with social media: we never took
responsibility as people for what social media was doing.
People post nasty stuff and we don't condemn them
for doing it.
We never made social media a nice place,
and we let the companies be in charge of it.
And I think that's a mistake.
I think we have destiny over what AI is.
If we don't want it to do something,
we should be agitating for laws
saying it should never replace doctors,
or whatever else you want.
And you get to use it in a way that's ethical or not.
I've spoken to companies where, as soon as they get AI in and it improves performance for their
employees, they fire employees until the performance level drops right back to where it was before.
I've talked to other companies and they're like, this is great.
Now our employees can do 10 times as much work and more interesting work because they
have this assistant that does the boring stuff.
We get to make these choices at an individual level.
We have agency. That's the main message I want to communicate:
we have control over this, and we shouldn't pretend we don't, and we shouldn't give up
the way we did with social media and just say the companies will handle it all. You
don't want that happening, for all the reasons we talked about with bias and decision-making.
So the main message I want to leave you with is that you need to start taking charge
of your own destiny right now. And you can use AI personally right now to make a difference in your life, positive
or negative.
And ignoring it is the worst thing you could do, because it's not going away.
There are a lot of people I talk to who say, oh, the lawsuit by the New York Times will shut
it down.
It's not going to. Or that regulation will step in and shut it down.
That's not going to happen. Or this particular paper I read says that AI development is going
to stop this year.
That's probably not going to happen.
We have to treat this as a real thing that's occurring and take control over it.
So what I'm hearing from that is something that I talk about a lot on the show, which is the power
of choice. And we do have a choice. And the choice right now should be to start experimenting with
it so you understand what its capabilities are and get used to it being a part of your life going forward.
So I think that's wonderful advice. So Ethan, if someone is interested in learning more about you,
what you're teaching, et cetera, your book, where's the best place for them to go?
So I publish almost weekly, trying to explain what's happening in AI, at a website called
One Useful Thing. You can go to Useful Things, where I put all the AI prompts we have and
links to the book. I have videos on YouTube explaining how prompting works and how to use AI.
Order the book. It's Co-Intelligence. I'm very excited about it. It's coming out soon.
But also, One Useful Thing is free, and you can read a lot of the articles and updates and
information. I'm an academic, so it's nice: I get to put out a lot of free information
to the world, along with lots of papers.
So that would be a starting point you could look at.
Audience, make sure you check out Ethan's Substack.
And thank you so much for being here.
It was such a fascinating conversation
and congratulations on the launch of this incredible book.
Thank you for having me.
These were great questions
and I think important ones
everyone should be asking. What an incredible honor that was to interview
Ethan Mollick. And I wanted to thank Ethan and Penguin Random House for the honor and privilege
of joining us on today's episode. Links to all things Ethan will be in the show notes
at passionstruck.com. Please use our website links if you purchase any of the books from
the guests that we feature here on the show. Videos are on YouTube at both our main channel at John R. Miles and also our clips channel
at PassionStruck Clips. Please go join 250,000 other subscribers and delve
into the over 700 videos that we have on both channels. Advertiser deals and discount codes
are in one convenient place at passionstruck.com slash deals. Please consider supporting those
who support the show.
And if you wanna catch daily doses of inspiration from me,
then go to John R. Miles on all the social platforms.
Also, if you wanna tune into our newsletter,
you can go to passionstruck.com
and sign up for Live Intentionally.
On the next episode of Passion Struck,
I interviewed Dr. Jeff Karp,
a luminary in the realm of bioinspired engineering who's a
distinguished professor at Harvard Medical School as well
as MIT. His journey from a curious child grappling with learning
differences and ADHD to a titan of biotech innovation is a
testament to the transformative power of being lit, a state of
heightened awareness and engagement. In our interview,
Dr. Karp discusses how to tap
into the secrets of this dynamic state.
Activation energy is something that I learned about
a long time ago in a chemistry class,
and it just really jumped out to me.
You put, let's say two molecules in a beaker of water
and they don't react, nothing's really happening.
And then you add some heat to it,
and the molecules start moving around a little bit more. They're not really bombarding and reacting.
Then you add more heat.
Now they're really moving, and with a little bit more heat they interact, they bombard, they hit each other, and then a reaction takes place.
So the amount of heat that you have to add to the system is the activation energy, the amount of energy that you have to put into the system.
And that jumped out to me because I was like, wow, this really applies to everything in my life,
all the things I wanna do.
Remember that we rise by lifting others.
So share the show with those that you love and care about.
In the meantime, do your best to apply
what you hear on the show
so that you can live what you listen.
And until next time, go out there and become passion struck. passion strut.