Leap Academy with Ilana Golan - OpenAI's Former Head of GTM: What Leaders Get Wrong About AI & What’s Coming Next | Zack Kass | E142
Episode Date: January 27, 2026

Long before ChatGPT became a household name, Zack Kass was walking into boardrooms as Head of Go-To-Market at OpenAI, introducing executives to a technology they barely understood or cared about. Then... one simple shift changed everything. Put intelligence into a familiar interface, remove friction, and suddenly the future arrived. In this episode, Zack joins Ilana to unpack what truly drove ChatGPT’s explosive growth, confront the biggest fears leaders have about AI, and explore what the future of work will demand from humans next.

In this episode, Ilana and Zack will discuss:
(00:00) Introduction
(02:59) Zack’s Journey to OpenAI
(04:46) Understanding Modern AI and Its Evolution
(08:03) The Breakthrough of ChatGPT and Its Impact
(17:42) Transitioning from OpenAI to New Beginnings
(21:21) Challenging Common Misconceptions About AI
(28:32) Adaptability in a Rapidly Changing World
(34:54) Lessons from Top Innovators
(38:15) The Future of Work and Purpose in the Age of AI

Zack Kass is a global AI advisor, futurist, keynote speaker, and former Head of Go-To-Market at OpenAI, where he helped bring some of the world’s most transformative AI technologies to market. He now advises global leaders and organizations on how artificial intelligence will reshape work, leadership, and human potential, turning complex ideas into clear, practical insight for the future. His book, The Next Renaissance, offers an optimistic vision of how AI will shape our future.
Connect with Zack:
Zack’s Website: zackkass.com
Zack’s LinkedIn: linkedin.com/in/zackkass

Resources Mentioned:
Zack’s Book, The Next Renaissance: AI and the Expansion of Human Potential: https://www.amazon.com/dp/1394381085

Leap Academy: LeapCon is the #1 Conference for Reinvention, Leadership & Career — a powerful 3‑day experience designed to help you unlock what’s next in your career and life. 📍 San Jose, CA 📅 Feb 26–28, 2025 If you’re ready to step into clarity, confidence, and bold action, this is your moment. 👉 Grab your ticket before doors close at leapacademy.com/leapcon
Transcript
Jobs changing doesn't have an economic consequence the way it does an emotional one.
And that is actually the crisis we are tracking towards.
The greatest sacrifice that our generation will pay vis-à-vis AI is extricating who we are from what we do,
because what we do will change so much and so frequently.
Zack Kass is the former Head of Go-To-Market at OpenAI,
the company that gave us ChatGPT.
And for years, he was the bridge between the world's most powerful AI
and the businesses trying to understand it.
For a lot of people right now, the scary part is that a new technology will show up.
And they'll be like, wait, maybe I should do that job.
I've arrived now at a place where I firmly believe we are going to house people.
We are going to feed people.
We are going to educate people.
And the hardest part is going to be figuring out, why are we here?
And now the question is, how do you constantly reinvent yourself?
The answer is probably...
Welcome to the Leap Academy with Ilana Golan Show.
I'm so glad you're here.
In the Leap Academy podcast, I get to speak to the biggest leaders of our time about their career,
how they got where they are today, the challenges, the failures, and countless lessons.
So lean in, this episode is going to be amazing.
I'm on a mission to help millions reinvent their career and leap into their full potential,
land their dream roles, fast track to leadership, jump to entrepreneurship, or build portfolio careers.
This is what we do in our Leap Academy programs for individuals and teams.
And with this podcast, we can
give this career blueprint for free to tens of millions. So please help my mission by sharing this
with every single person you know, because this show has the power to change countless lives.
Deal? Okay, so let's dive in. Today we're talking to a man who didn't just witness the biggest
technology shift of our lifetime. He actually helped orchestrate it. Zack Kass is the former head of
go-to-market strategy at OpenAI, the company that gave us ChatGPT.
And for years, he was the bridge between the world's most powerful AI
and the businesses trying to understand it.
But then he made a massive leap of his own,
and he walked away from this rocket ship to become a futurist
and one of the world's leading voices in AI innovation.
And we're going to discuss why he left the hottest job in tech,
why he believes the future is bright with AI, not necessarily scary, and how you can prepare for this era of change.
So I can't wait.
Let's dive in.
Zack, welcome to the Leap Academy show.
Thanks so much for having me, Ilana.
Oh, it's going to be so, so fun.
I'm really excited about this.
I will touch your career just for a second, but we're going to dive right into all the AI stuff because I think it's fascinating.
But how do you even get to OpenAI, right?
That's not easy on its own.
and you've had various roles at companies like Figure Eight and Lilt. Can you talk to us a little bit about,
how did you even get to OpenAI?
I grew up in Santa Barbara, where I am now, and I was not a very good student.
I played volleyball in high school and then went to Berkeley, got in basically because of volleyball,
and ended up studying history and then added computer science.
And sort of as I grew up, figured out that school mattered as like a means to get to the next level
and spent a bunch of time in college reinforcing what I had grown up knowing,
which is that the world was a great place.
And then I found evidence to defend that position.
And that's important because it sets the stage for the rest of my career.
I graduated Berkeley and I tried to make a career of volleyball,
couldn't do that, went and got a job.
The only company that would hire me at the time,
which was a company called CrowdFlower, which became Figure Eight
and was the early data labeling company.
We were the first company to build human-labeled data at machine scale for the purposes of building
these better models. That gave me this incredible view into early machine learning, and from there
I graduated to a company called Lilt, which was building neural networks for the purposes of machine
translation. And then from there, got to go to OpenAI and was one of the early
employees, 90th employee, and helped build the company's sales solutions and partnerships teams.
How did you get to OpenAI? Was it just like a resume? Did someone
bring you in? Walk me through. How do you even get on these people's radar? It wasn't as cool
yet, I think. Well, it was cool to the right people. I mean, it depends on how you define cool and
who defines it. It's true. Most people did not know about it. Exactly. And most people probably at
the time called it a cult. I was one of the only people in the world that sold modern AI.
And I'd been selling it for a while. So I knew how it worked. I knew what it did.
For people who don't know, maybe explain to them what is modern AI.
I think when ChatGPT came out, it spread like wildfire.
But there was AI way before that because I came in the tech world.
And we were talking about AI way before what you were probably mentioning.
So maybe dumb it down for people who have no clue what you're talking about.
We've been talking about thinking machines since about 1900.
And we've been talking about AI, the term artificial intelligence, since 1954,
and these concepts have been pretty theoretical.
We didn't really know what it would look like.
We just knew that we would eventually or thought we could eventually build machines
that were smarter than humans.
And that journey took us on quite an interesting ride that we cover in the book.
And honestly, when I wrote up the history of AI in the book,
it occurred to me that it's kind of boring to people,
but it is important to teach people because I think we will talk
about modern AI the way we talk about electricity. Every child grows up knowing that Benjamin Franklin
discovered electricity and how he did it, right? The story of the kite and the key is almost
lore. And the reason it's lore is that electricity is so important to us. And AI, I think the history
of AI is important. What's important to know about AI is that most of the AI that people have used
for the better part of the last 30 years is what we call statistical machine learning.
And statistical machine learning is very simply a massive series of if this, then that rules.
So imagine I were to say to you, Ilana, okay, we're going to build this statistical machine
learning model that is capable of doing this one big complex thing.
What you actually have to do in order to build that model is you have to build a bunch
of left turn, right turn rules inside of it.
If Ilana does this, then do this, and what about this, then this?
And it's pretty good at producing, for example, product recommendations.
If someone is this tall or looks like this or has this many friends or shops this often,
they're more likely to buy this thing.
That's what statistical machine learning is.
It's not actually reasoning.
It's just big sets of rules.
And that was the prevailing machine learning practice for about 40 years.
And we got better at it because we were building bigger data sets and the compute was getting
more efficient, but it still had this ceiling.
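The "if this, then that" framing above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from OpenAI or the episode; the attributes, thresholds, and product names are all invented:

```python
# A toy rule-based recommender in the "left turn, right turn" style described
# above. Every branch is a hand-written rule; nothing is learned or reasoned.
# All attributes, thresholds, and product names are invented for illustration.

def recommend(height_cm: int, friend_count: int, purchases_per_month: int) -> str:
    if height_cm > 190:                                  # if this...
        return "tall-fit jeans"                          # ...then that
    if friend_count > 500 and purchases_per_month >= 4:  # another hand-built rule
        return "party speaker"
    if purchases_per_month >= 4:
        return "loyalty subscription"
    return "gift card"                                   # fallback rule

print(recommend(195, 10, 1))   # tall-fit jeans
print(recommend(170, 800, 5))  # party speaker
```

The ceiling follows directly from this structure: every new behavior needs yet another hand-written branch, so the system only ever interpolates over its rules and never generalizes beyond them.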
And in 2017, so I was at Lilt at the time,
eight Google researchers wrote a paper called "Attention Is All You Need."
And in this paper, the authors, we called them the Transformer Eight, argued that we were building machine
learning models all wrong,
and that instead of building models that think in straight lines, statistical machine learning,
we should build models that think in parallel, neural networks.
And the parallelization of data, they borrowed from the study of the human brain.
They basically argued that we should be building artificial intelligence systems much more
like the human brain, and that the neural network, which again was a theory that had been around since
the 60s, I think 50s or 60s, was in fact the right way to do it. And they were right. And it
catapulted the industry forward. And it was at that time that OpenAI launched GPT, the original
GPT, which stands for generative pre-trained transformer. It was the transformer architecture
that the Transformer Eight pioneered. And GPT was very good by any measure, relatively. And then GPT-2,
and then GPT-3.
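The parallel, attention-based computation the Transformer paper introduced can be sketched as scaled dot-product attention, its core operation. This is a minimal NumPy sketch for intuition, not OpenAI's or Google's actual implementation; the shapes and random inputs are illustrative:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every token attends to every
    other token in parallel, rather than stepping through a chain
    of sequential rules."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # all-pairs similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights                     # mix values by attention

rng = np.random.default_rng(0)
tokens, dim = 4, 8                                  # 4 tokens, 8-dim vectors
Q, K, V = (rng.normal(size=(tokens, dim)) for _ in range(3))
out, w = attention(Q, K, V)
print(out.shape)  # (4, 8): one updated vector per token
```

GPT-style models stack many layers of this operation together with learned projections and feed-forward blocks; the point here is only the parallelism Zack contrasts with statistical machine learning.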
And it was around the time that OpenAI was planning for GPT-3 that they were like,
you know, we should get a salesperson to sell this thing.
They looked around.
They were like, it's all researchers.
No one here can sell this thing.
We should get someone who can help the customers.
It turns out there was basically one other person in the world who was selling large
language models at the time, me.
And I was selling them for the purposes of translation.
I was selling large language models that Lilt was building to help companies translate
their content.
So I got introduced and met the team and,
spent some time with Sam and got hired as basically the first business hire. And for a long time,
if you emailed, not for a long time, I guess for four months, if you emailed support at or sales
at or info at OpenAI, you arrived at my inbox, which is fun. And it gave me a pretty good view
of how everything should work. And then that gave way to building a team. GPT-3 comes out in
February 2021. And it's our first dollar, the first dollar that OpenAI will ever make. Prior to GPT-3,
by the way, there was no product you could buy from OpenAI.
A lot of OpenAI's history was building robotics or even multi-agent games like Dota.
It was trying to solve real-world stuff, and then the large text model became the particular interest.
But GPT-3 comes out, and it's popular among researchers, but it's not particularly popular among
companies.
And I'm actually traveling around the world at the time with one of my best friends on Earth,
Boris Power, who officiated my wedding.
And Boris and I are showing, he's the head of research engineering,
and we're showing people this GPT-3.
We go into the boardrooms, and we put it up on the board,
and we demo our product.
And we finish the demo, and we go, oh, what do you think?
And these guys go, I don't care.
I don't care.
And for a long time, and this happened 10 or 15 times before we figured it out.
For a long time, we were like, man, these guys are corporate bozos.
They're going to get smoked.
They're going to get left behind.
And then we realized that actually,
something else was going on, which is that the model, GPT-3, was still slow, it was expensive,
and it wasn't actually that good.
It wasn't actually commercially or economically viable.
And so after a while in these demos, we'd show a demo for like five minutes.
In fact, we did this once in June 2021.
We did a demo, closed our laptop, and Boris said, imagine that that product got a lot better,
a lot faster, and a lot cheaper.
And I said verbatim, imagine if we built GPT-4 or GPT-5.
And what we wrote was a paper called Unmetered Intelligence, which is actually the basis for a lot of the book.
And it proposes that at some point machines would get so good and so cheap that the collective cognitive power would make the individual cognitive power pale in comparison, that it would be very hard to make money as the smartest person in the room for much longer.
And that companies would have a very hard time competing on an intellectual or cognitive basis.
This was foundational to my thinking of what I write about and what I do now, but it also helped
us start to tell better stories about the technology and where it was going, such that by the time
GPT-3.5 came out, which was in May 2022, we were sort of ready to start saying, look,
it'll be good at this, but it won't be good at that. And meanwhile, in this time, we were launching
products like DALL-E and Whisper. So speech to text, and text
to image.
And people are starting to figure out that actually there was a lot of value.
There was like a lot of good stuff going on here.
And then something exciting happened, which is we realized that people weren't using the products,
not because they weren't good, but because they couldn't get it.
And this is where the ChatGPT story is fascinating to me, because what I have to remind
people of is this, which is in May 2022, we launched GPT-3.5,
which was by any measure a state-of-the-art model,
and so exciting and commercially viable.
And it still didn't really matter.
So Boris and I were like, man, why, well, everyone at this point was like, why are people
not using this more?
And for about four months, we sort of stared at this thing and we're like, you know,
it does occur to us that people don't really know what they should do with it.
So what if we built an experience that made it dead
simple, just stupid simple, to use this. And the ChatGPT breakthrough, I remind people, was not a scientific
one. It was an application one. It was actually just a marketing breakthrough. We took a product
that had been available for almost six months publicly and put it into an AOL instant messenger.
And people said, oh, my God. Because the application is important
in order to change behavior.
You cannot just say to people,
ta-da, unless it's very, very easy for them to understand why that matters.
And this is, you know, stage performers understand this.
The payoff has to make sense to the audience.
In fact, one of the most incredible problems that a lot of stage performers have,
Cirque du Soleil, is that a lot of their tricks are so complex,
the average audience member doesn't know.
And for many incredible stage
performers, they actually have to modify their performance, either to make it less impressive
technically or to make it flashier than necessary, in order for the audience to understand how
hard what they are doing is. In other words, they have to create a laugh track. They have to tell
people, this is when you should applaud. And ChatGPT is in many ways that problem. GPT-3.5 was so
impressive and so cool, and people were like, who cares? And so we said, here, we'll put it in a
thing that you've used for dozens of years at this point, AOL instant messenger. And then people
were like, wow, this is it. Now, the incredible thing about the timing of ChatGPT when it comes out
on November 30th, 2022, is that it also happened to come out three months prior to GPT-4, which was the
most groundbreaking technology. And so you couldn't have lined up these things better.
ChatGPT, which was not a breakthrough on a research basis, it was GPT-3.5, slightly aligned, in an
application, gave people the tool to be like, oh my God, I get it, I understand why this matters,
just in time for us to actually release a truly phenomenal underlying model, GPT-4. And once GPT-4
comes into ChatGPT, then people go, whoa. And that's when you see we go from 10 million
to 100 million to a billion downloads.
And it was insane.
I'm a techie, so I was tinkering with things, I think, before it was cool.
And then suddenly, it was incredible to see how it just spread.
So what is it like that first day, the first week, the first month?
What does that December or January look like?
Because the whole thing is just exploding at a pace that I don't think anybody's ever
witnessed.
What does that feel like?
It was certainly overwhelming.
It was probably much more overwhelming to the engineering team than almost anyone because
there was an incredible load dependency.
I don't think it was until GPT-4 that things really went crazy.
It wasn't probably until March.
Keep in mind, obviously a lot of people paid attention at that point, but it wasn't until
GPT-4, and the model got really good, that people were like, whoa,
because for a while still people were complaining about,
oh, it's slow or the text isn't good enough, whatever.
And then we solved a lot of these issues,
and that's when it really went, you know, it got big.
Do you remember some numbers just for the listeners
who are not sure how fast this grew?
Because it was mind-blowing.
The exact numbers, no.
But it was like a million in the first two weeks,
10 million in the first three months.
And then it was a sprint to 100 million and then a billion within the year.
Oh, my God.
That's incredible.
So at that point, you operate as head of go-to-market.
What is your biggest goal?
Because I'm sure that changed the entire pitch, your role, your goals.
Like, it changed probably everything because suddenly OpenAI was known, right?
So how does that change the conversation maybe?
There was a debate on whether there was a product, whether the model was a product.
and the API, of course,
has turned out to be a great product,
but there was some debate
whether it was actually something you could sell.
And then there was no debate
that ChatGPT was a product,
but then the question is,
when will it be an enterprise product?
How long will it take to turn into an enterprise version?
And that took some time for all sorts of reasons.
But the actual remit did not change, right?
It was get this technology into the hands of the enterprise
in safe and effective ways.
The truth is, ChatGPT became important to the enterprise because the API is still and remains, to a certain extent, hard for people to figure out how to use, right?
How it fits into the business is still tricky for a lot of traditional companies.
Whereas ChatGPT, I know where to put this, this magical AOL instant messenger.
We need to pause for a super brief break.
And while we do, take a moment and share this episode with every single person who may be inspired by this, because this information can truly
change your life and theirs. Now, I want to check in with you. Yes, you. Are you driven,
but maybe feeling stuck in your career or a fraction of who you know you could be? Do you
secretly feel you should have been further along in your income, influence, or impact? Do you
ever wonder how to create not just a paycheck, but the life you want with the paycheck,
the thought leadership, the legacy, the freedom? Because that was me and that's exactly why I
created the Leap Academy program, which already changed thousands of
of careers and lives. Look, getting intentional and strategic with your career is now more important
than ever. The skills for success have changed. AQ, adaptability, reinventing, and leaping, are today
the most important skills for the future of work. Building portfolio careers, multiple streams of
income and ventures are no longer a nice-to-have. They're a must-have. But no one is teaching this
except for us in Leap Academy.
So if you want more from your career in life,
go to Leapacademy.com slash training.
Check out this completely free training
about ways to fast-track your career
and you'll even be able to book
a completely free strategy call with my team.
That's Leapacademy.com slash training.
So, Zack, you're on this rocket ship.
What makes you at some point say,
I want to create my own leap,
and I want to get out of OpenAI.
I want to be that futurist,
the person that actually becomes a voice of AI to some extent.
The story I tell is fairly vulnerable now,
which is my parents got sick.
I actually was not well.
I had run so hard and so fast
that I felt like I was burning out,
but then my parents got quite sick.
and I woke up one day and was like, I don't want them to die without having spent material time with them.
And I actually moved home and moved home to Santa Barbara and moved in with them to support them both.
They'd gotten sick in different ways at the same time.
Essentially took a sabbatical and never went back.
And it was a long time before I was willing to talk about this story because I think for a lot of people, they could not comprehend it.
And for me, I was basically espousing the importance of community
and talking about how important it was in a future of abundance that we actually care deeply
about the spaces in which we live and the people that we live near.
And I was like, I need to live this truth.
And then I discovered, oh, actually, what I also want to do is I want to tell the story
about how this technology can have a profound positive consequence on the world.
And I can do it from a lot of different vantage points.
I can write about it.
I can read about it.
I can talk about it.
And that then sort of put me on that journey.
but it came from a place of basically needing to go home.
Are they okay, by the way?
They are, yeah, thanks for asking.
Yeah, my dad survived a brain tumor and my mom survived her own incredible complication.
Well, that's incredible that you've been with them there.
I literally moved in with my parents.
I lived in a guest bedroom for six months, which was amazing.
Reset my life and put me on a whole new course.
Wow.
Well, we'll talk about a new course, but you also have
a newborn, speaking of new courses.
Yeah, so I met my wife.
I came back and I was like, you know, I really want to reconnect with my town.
I grew up in Santa Barbara, a place that matters a lot to me.
And I had sort of forsaken my body and my soul and spirit on this 16-year journey in AI.
And so, I'd played volleyball in high school and college,
and I reached out to a friend who was this sort of legendary volleyball player.
And I said, hey, well, you give me a lesson every day.
And she said, no, I won't.
But my cousin, actually, when I, Katie Speeler, who's a famous volleyball player in Santa Barbara and runs the East Beach Volleyball Academy, and I reached out to Katie, and I said, hey, can I get a lesson? She said, yes. I said, can I get one every day? And she said, my cousin will do it for you. And so she introduced me to her cousin, my now wife, who started giving me a lesson every day. And we became dear friends. And I gradually fell in love with her. And then one day I told her I was in love with her.
And the rest is history.
And so we got married and then recently had a baby.
So I have a six-week-old hanging out just behind me.
Well, talk about a little overwhelming there.
But congratulations on that.
That's super cool.
Thank you very much.
So I want to talk a little bit about AI and then maybe your own leap as well,
because I think our audience are all trying to figure out how do I create not just a paycheck,
but the life that I want with my paycheck, right?
And it is a lot about that.
Like, how do I reinvent myself and create a bigger possibility for me?
because now you became the voice of AI.
You wrote the book The Next Renaissance.
So let's talk about for a second, AI adoption.
What do you think leaders are getting wrong about AI?
Or if you had time with a CEO that is terrified,
what would you say to them?
First, I would classify the question in a couple camps,
which is there are leaders who are getting things wrong about AI,
the way they talk about it.
And then there are leaders who are getting things
wrong about AI the way that they deliver it or use it. And then there are leaders getting things
wrong about AI the way that they policy it. So the first camp is broadly leaders, thought leaders,
community leaders. The second camp are actual CEOs and business practitioners. And the third are the
policymakers, the politicians and the folks who actually enforce this stuff. And the first camp, which I
think we also broadly fall into, or leaders broadly fall into, those who are talking about AI,
I am afraid are getting a lot of things wrong.
There are sort of a couple failure points here.
The first failure point is that so much discussion about AI
is about the business implications,
and critically these days, sort of like the market of AI.
And I find it to be not just annoying and pedantic,
but also totally distracting.
I understand why CNBC has to talk about it,
because CNBC tracks the market.
Why do we need to spend so much of everyone else's energy
talking about OpenAI's code red and the financial engineering of these companies?
It's not my argument that these things don't matter to someone. It's that they don't matter to most
people. It sucks up so much oxygen when there is so much else to be talking about. That to me is one
of the distractions. Another distraction is that the conversation seems to be getting very, very negative
and dystopian. And there is a lot of discussion around malicious or malign super intelligence,
around incredible job loss and job displacement, which we can talk about, around idiocracy,
around misinformation and disinformation. And it doesn't come with any nuance because media,
most narratives, don't come with nuance. And so it breeds this exceptional unease.
And quite honestly, anxiety.
There's like a cultural malaise and anxiety right now around AI.
And the third is a general lack of precision when it comes to language and words.
We wrote about this recently.
One of my biggest complaints right now about the way that AI is being discussed
is how poorly people are using syntax.
Misplaced, misused words in the context of complex concepts are very expensive,
not just because it cheapens the dialogue, but because it actually reinforces total misconceptions.
And there are a lot of examples of this, but one recently is someone was like,
humans must always be in the loop.
That's how we will maintain our humanity.
This was like a very prominent leader.
And I was like, that's not true at all.
I don't need a human to always be driving a car.
There are plenty of loops that we should remove humans from.
But when you say things like this, people go, yeah. It was like a very popular soundbite and
then a LinkedIn post, and then a bunch of people started saying, we need humans in the loop.
And it's like, when you say things like this, you totally cheapen the discussion.
And it's not even that there's a lack of nuance; it's that there's actually a gross misuse of words.
And so broadly, my message to most leaders is, one, we have a
moral obligation right now to talk about a better world.
Optimism, I argue, is a moral obligation.
You cannot build a world that you cannot imagine, and you cannot imagine something you
cannot describe, and we should start talking about solutions, not problems.
I will die on the hill that we are doing an incredible disservice to our children and their
children by not describing a better world.
And it does not mean that the world does not have problems.
It means that we should be talking about the solutions to those problems, of which there
are many, and we can talk about those things. The other thing that I am pressing people on is broadly
to have a more nuanced conversation where you actually say the thing that you mean. And there are,
again, examples of this in one way or in another, and what we really, really, really, really right now
need to reinforce is that there are incredible consequences to misusing AI, and there are incredible
downsides that we need to be honest about. And also, there's a ton of opportunity, and in order
to get there, we actually need to navigate it.
And so that's broadly my message to most leaders.
As for leaders that are actually practicing with this technology, right now, there are two
big failure points.
One is in the architecture of the data.
A lot of people are trying to do things with machines where they do not have underlying
data that actually allows them to do it.
And the other, then, is cultural.
So if you think about, you go to an organization, you look at the layers that AI touches.
It starts with strategy, then it's culture, or maybe it's culture, then strategy,
then process, and then operations.
And for many businesses, they do not have a culture of failure.
And so they can't actually experiment.
And if you cannot experiment, then you can't actually try something new.
And that is where a lot of companies are just getting totally locked up around what they
should or shouldn't do because they'll say to an intern, hey, everyone gets a co-pilot license,
but also don't fuck up.
And people are terrified to actually try things when there is this incredible culture of fear
around the cost of failure. Moreover, a lot of leaders grossly overestimate the importance of
moonshots and grossly underestimate the importance of incremental gains. And so what ends up happening is
we're like, hey, you know, I can do this 20% more productively. And they're like, oh, that's cool.
Why don't we do this? Now, the tyranny of incremental gains is interesting, which is that a lot of things
can't afford incremental gains. A lot of businesses just need to get 20% better every year for them to
become amazing companies. We actually need health care to get a
thousand times more productive. And so the issue with things like healthcare is the tyranny of
incremental gains, which is that actually the current state of affairs is untenable unless we have a
radical change. And the third, which is sort of like this camp of policymakers, policymakers are talking
about a lot of things right now, and only three of them matter. Alignment, explainability,
and bad acting. If you are not talking as a policymaker about alignment, explainability, or bad
acting, and exclusively, ideally, those things, then you're probably wasting breath. And a lot of our
policymakers are talking about everything else. And this destroys the whole narrative and message and
means that we can't actually pass the important policy we need to pass because we're too fixated
on basically everything else. And it's why you're going to see this economic gap between
Europe and the United States, where Europe has basically sprinted to policy this stuff without
actually asking what the consequences are. What I find really fascinating, and I think
you alluded to experimenting, et cetera, is one of the things that we're noticing, because we work
with a lot of professionals, is that the entire measures of success have changed. So if it used to be all about IQ, intelligence, et cetera, and then it switched to EQ and emotional intelligence, the skill right now is more around adaptability. So it's AQ. It's like adaptability quotient. Like, how fast can you adapt? And in order to adapt, you're going to have to shift the way you show up, the way you experience, how you gain experiences, play with things, because at the end of the day, otherwise, we lose relevance at a speed we've never seen before.
Like, the gap is actually going to grow really fast if you're not ready to be on the cusp of innovation. What would you say to people that are right now trying to figure it out? Like, I mean, it is inundating. There's a zillion software tools. Like, every other day, there's a company that says it's doing something with AI, and you feel like you can never stay ahead of the curve. So what would you say to people, Zack? Well, okay, so we wrote a newsletter recently called
the adaptability trap. There is such an incredible emphasis right now placed on adaptability.
And I think it's doing a disservice to people because for the reason you just described,
ask yourself, who is your least favorite friend? And you don't have to say it out loud, but who's your least favorite friend? And almost always, people's least favorite friend, again, this is not a stranger, this is your friend, is someone who is infinitely adaptable. One of the most common characteristics of people's least favorite friend is that they are nice enough and trusted enough, but these are people that don't stand for anything
because they are constantly trying to either satisfy everyone else or accomplish something
else. They are infinitely adaptable. They are a candle in the wind. And the problem with telling
everyone that you need to be more adaptable is that it is actually reinforcing this idea that we should
all constantly be changing, which is like not really true. What we should do, I argue now,
is commit more than ever, what I call anchoring, to our mission, vision, and values. There are some things
in a world that is constantly changing, especially right now technologically, that we should be
unwavering about. And for many people, my argument is your exercise should be, what am I insistent on?
What is my North Star? What is that point in the distance that I want to achieve? And that is the thing.
So my argument is we should be committed to our mission, vision, and values, what I call anchored to our
mission, vision, and values. And those mission and vision and values should be unwavering for us. Now, this is an exercise unto itself that a lot of people have not done, right? Actually sitting with yourself and saying, what is it that I believe? What do I want to accomplish? What does my future look like? That's tough for a lot of people. What we should be adaptable to, I argue, is the ways and means
with which we accomplish that. Adaptability should show up in our actual tooling and the ability
to use different services and different technologies to accomplish our mission, vision,
values. And so for a lot of people right now, the scary part is that a new technology will show up
and they'll be like, wait, maybe I should do that job. Or like, oh, maybe I should go move to San Francisco
because that's where AI is. And it's like, wait, what do you want in this life? It may be that San Francisco
is where you should go, but it may also be that that's not true, that the technologies, by the way,
that allow you to do whatever you want to do can show up everywhere. If you can be really clear on what it is
you want to accomplish in this life, and what it is you want your future to look like,
and you can be amenable to how you arrive there, then you will have a good time.
And there are a lot of examples of this, but the tooling, I argue, is actually just making
it easier to accomplish what you want to accomplish.
What it is complicating, and you pointed this out, is that if you don't know what you want
to be, if you're not clear on what your future should hold, it is terrifying
every day to see a new person become a billionaire,
or a new person start a great new company,
or a new person achieve a dream you think you might want.
Because you sit there and go, wait, maybe that's the thing,
or maybe that's the thing.
And it's like, no, just because someone else did it doesn't mean that you should do it. And just because the technology exists for you to do something doesn't mean you should do that thing anyway.
And that, to me, is actually the hard part right now.
And I absolutely agree.
Like, I think clarity is a big piece that we're seeing.
I think 70 percent of the people in Leap, if not more, have no clue where they want to go.
And for me, it's like, okay, so how do you find that mission and vision, et cetera, and the values, right?
But also, for me, that adaptability piece, and you talked about the tooling, but it's more of, if you look at careers, I think that, you know, it's pretty clear that we're going to live in a portfolio career era, and people will not so much have that one thing, you know, for 40 years.
And what you're going to start seeing is those side hustle, side thing, they're going to do
some public speaking or they're going to do advising or board seats or whatever.
And now the question is, how do you constantly reinvent yourself?
So for me, that adaptability is not so much about being that person that always says yes to everything.
That's not it.
But it's like, based on where I want to go, how do I create the life that I want? How do I now venture into new things that will take me there faster and higher?
Curiosity is a big thing. How do you shift yourself to explore these things? And I think you talk
about curiosity a lot as well. First of all, I rate adaptability really highly. I just couch it with this: there are plenty of things you do not need to be adaptable to. You don't need to adapt on your principles. You don't need to adapt on your values. You don't need to adapt on your vision. But what you should adapt is how you actually accomplish what you want to accomplish.
We need to pause for a super brief break, and while we do, take a moment and share this episode
with every single person who may be inspired by this because this information can truly
change your life and theirs. Now, every cool opportunity you will ever find is most likely
from a hidden market. It's the people who think about you when you're not in the room and
bring the right opportunities to you. This means that the people you hang out with truly matter. That's why we created our flagship live event in San Jose,
California, in the heart of Silicon Valley. It's February 26 to 28th and is the number one
conference for reinvention, leadership, and careers in the United States. It has speakers like
the former president of Starbucks and many other leaders, including yours truly, myself. And I'd
love to personally welcome you, give you a hug and hear what you think about the podcast. We'll
have many networking opportunities, photo opportunities, and we already know every single person
after this event will go supersonic on their reputation and career. So grab your tickets
quickly because this event always sells out. So go to leapacademy.com slash leapcon or you can
search on Google leapcon 26. It's L-E-A-P-C-O-N. Don't miss out where the most impactful
leaders hang out in February. So go to leapacademy.com slash
Leapcon, I will see you there.
You work then with some of the greatest innovators.
What would be one lesson that shaped you
and how you see the future of work?
How I see the future of work is a different question.
So I'll tell a lesson,
and then I will tell you how I see the future of work.
One of the lessons that has been sticking with me lately
because the book comes out January 13th,
when I set out to write the book,
I was like, I want to make this a huge book.
And we went to publishers,
and they said, well, listen, a lot of copies is 10,000 copies.
I said, well, that's not good.
I said, I want to pre-sell 20,000 copies,
and I want to sell 200,000.
And they said, okay, well, that's a lot.
And then I said, you know what, is that even a lot?
Is that actually a lot?
And one of the things that I realized I'd been conditioned by
was the Sam Altman school of adding a zero.
Sam takes anyone's biggest number and just adds a zero to it.
And he did this with energy.
I remember I had a meeting with him and an energy CEO,
Sam was like, we're going to need 30 gigawatts.
And the guy was like, excuse me,
he looks at Sam like he was an idiot.
And then he looks at Sam like he was an asshole.
And he was like, I don't think... By the way, 30 gigawatts, for those paying attention, is roughly the amount of power that Florida has. Maybe a little more or less, I can't remember exactly, but it's roughly right.
And he's like, well, we don't even have that much energy.
He's like, well, we're going to need that much for this site.
And he was right.
and we're going to produce it and we're going to do it.
And I am reminded constantly that there are two ways to look at this.
If we keep talking about these huge numbers,
then we actually distort the value of numbers and nothing matters.
My other argument is, why can't we make, for example,
housing, health care, and education,
a hundred times less expensive?
These are these ideas that are so weird to people
where I'm like, why does housing have to be prohibitively expensive?
The reason it's prohibitively expensive today is because we policy it into oblivion.
It's because we don't tax second homes, which is a very controversial opinion that I have.
It's because we don't have non-resident tax.
It's because we allow people to hoard housing and then we don't allow other people to build it.
If we allowed people to build more housing, it would not be prohibitively expensive.
It doesn't have to be this way.
And by the way, in five years when you drive across L.A.
in an autonomous vehicle for $5, if a five-block ambulance trip to the emergency room still has the chance to bankrupt you,
you can confidently know that what technology has done for autonomous vehicles it has failed to do for
the health care system, and that is not a technological failure, that is a policy failure.
And challenging people that we can actually build a very affordable healthcare system,
not through government intervention, but through technological intervention, is one of the things
that I spend a bunch of time on now.
And I'm like constantly reminded that many people have been conditioned as children to think
in incremental gains, but actually we're approaching this exponential future.
And I really, really, really believe that we are tracking right now
on a remarkable trajectory where we should start adding zeros to things
in ways that are quite positive for humans, right?
Start adding zeros to life expectancy, start adding zeros to net worth,
start adding zeros to the speed at which we make education inexpensive.
I mean, all these things.
So that's one of the lessons.
I love that.
My hot take on the future of work, I have a lot.
When I speak, when I go to audiences, I will often ask an audience, I will say, please raise your hand if you think that AI is going to take your job.
And it's usually about 5% of the audience that raises their hand.
And then I say to people, please raise your hand if you think someone in this room will lose their job to AI.
And then about 50 to 60% of hands go up.
And I remind people that we are observing what we call the zombie apocalypse phenomenon.
And the zombie apocalypse phenomenon says that most people believe they would survive a zombie apocalypse,
and their idiot neighbor would certainly die.
And this is how we observe a lot of problems in the world.
We talk about how everyone else is getting stupid, but we're getting smart.
We talk about how everyone else is a bad parent, but we're a good one.
We drive around the planet and we go, everyone's a jerk these days.
I'm a nice person.
I remind people of this because it explains one of these incredible phenomenons,
which is that the world is, in fact, getting better all the time.
We are just overexposed to the bad stuff.
And I raise this issue because the zombie apocalypse phenomenon plagues our brains.
There are a number of biases that cause this.
But it really does reinforce this idea that the world is getting worse, but I have salvation.
I am okay.
And actually, most people are okay.
Most things are going okay, right? If you live in the United States, and frankly, if you live in most of the Western world, and actually, at this point, if you live in most of the world, you're doing okay.
There are some pockets that are not, and there are some people that are not, but on the whole,
you are doing way better than your great-grandparents and all of our great-grandparents for that
matter. And you are simply now overexposed to the few people that are not doing well,
and you're very sensitive to the things that break. Where I think the future of work and job displacement gets so tricky is that we're stepping left foot, right foot right now, and we cannot see
past ourselves. And I tell a story to reinforce this point, and I tell the story in the book,
and it actually in many ways galvanized the book for me, because it brought into focus one of the
most complicated and controversial issues at a moment that I think needs a lot more clarity and
nuance. In October 2024, the longshoremen went on strike, and the head of the longshoremen's union, Harold Daggett, informed his 45,000 longshoremen to walk off the job site and shut down the ports on the morning of October 1st, basically crippling our import-export business.
And he went on CNN on the morning of October 2nd for an hour-long interview.
He sat in front of the camera and very calmly pointed at the camera to open the interview
or at some point early in the interview, and said,
The American people have no idea how powerful and dangerous I am, I will cripple you.
What did he want?
He didn't want more money, and he didn't want safer working conditions, the two tenets of every labor union since we organized labor. He wanted a guarantee from the ports that they would not automate their jobs.
The picket signs read, robots don't pay taxes and automation harms families.
And I mean, I stared at this and I was like,
I had this unforgettable sense that this was the first of many such incidents.
And so I sent him a long, thoughtful email.
I talked about how I understood the plight of the worker and, you know, et cetera, et cetera.
And he sent me something very different back.
He did not want to meet with me.
But we met with a bunch of the longshoremen, and we interviewed these guys.
And we interviewed about 30 of them quite extensively, off the record.
And the results are in the book, so I won't spoil too much.
But suffice it to say, these conversations changed how I understood this problem. And we asked them a battery of questions, of which four I will tell you about.
The first question we asked that told us everything we needed to know, was anyone in your family
in the union? Of the respondents, 91% said yes. Second question, do you believe you could be
gainfully employed outside the union? Over 90% of the respondents said agree or strongly agree.
Third question, do you want anyone in your family to join the union? 90%-plus of the respondents said yes, and one said they were going to continue having children until they had a son so that their son could join the union. And the last question, what is the most important part of your job? Rank-order question, multiple choice, seven answers. Most common stacked answer: community.
Ahead of pay. And that answer, among others, reinforced this idea that it is not an economic
issue, that in fact they are not fighting for money, they are fighting for purpose.
And one of the questions that really helped me understand this was one that I sort of threw out at the end of every interview, which was, by the way, who benefits economically if we
automate the ports? And their answer, reflexively was the capitalists. They said, these are the
people who benefit economically if we automate the ports. I said, okay, who are the capitalists?
They said, well, the port owners and the shippers and, you know, I said, well, okay, fine,
but who else might benefit if we automate the ports? Everyone is the answer. Everyone,
including eventually the longshoremen and their kids.
We are all descendants of people whose jobs were automated to our economic benefit
and we never think twice about them.
And actually, not only that,
we wander the earth all day, every day,
asking when is this good or service going to be better, faster, or cheaper,
without realizing what we're asking is,
when is a human going to be extricated
from the manufacturing of that good or service?
And we don't do that because we're jerks; we do that because we are rational economic actors conditioned to believe that the world
should get better, faster, and cheaper. And it occurs to me, as it should occur to most people,
that on the eve of an incredible technological evolution, revolution, where we automate a
tremendous amount of our work to exceptional economic benefit, the problem is not going to be,
is there more and better food on the table? The answer is probably yes for everyone. The problem is going to be figuring out the answer to the question, who am I? For most people, our work will change
so much and so frequently that attaching our work and identity, or rather attaching our work to our
purpose, is going to be very hard. And I talk about this to anyone who will listen. I basically make the
point now, I'm willing to argue there will be more work. I'm sure there will be. We will constantly
find new ways to work, but I actually don't even think that's the point. I think the point right now is that
we should not be making the economic case or concern. We should be making the emotional one.
What I write about in the book is what we call identity displacement. The theory of identity
displacement says that jobs changing doesn't have an economic consequence the way it does an
emotional one. And that is actually the crisis we are tracking towards. That, in fact, the greatest
sacrifice that our generation will pay vis-à-vis AI is extricating who we are from what we do,
because what we do will change so much and so frequently,
that the true measure of adaptability will be the willingness to say,
I have more time with friends and family,
I have more food on the table,
it's okay that my career just went from this to this.
And when people go, there's no way,
how will anyone who loses their job have more money
or more food on the table,
I remind them that that is literally the course of human history,
that their grandparents' jobs are obsolete now
so that we can have way more than we've ever had.
And actually, I will go even further and say this, which is that when I decided to write the book,
I decided to write it as an homage to John Maynard Keynes, who wrote in 1930 the most important
paper that I've ever read called The Economic Possibilities for Our Grandchildren, which is framed
behind me. John Maynard Keynes wrote this paper against the backdrop of abject despair.
Everyone in the world was suffering under the Great Depression, and many people were literally
dying in the streets. And he went on this lecture circuit in Europe and then came back to the U.S.
wrote this paper, in which he argued, and I quote, I must now disembarrass myself to imagine a future
that I will certainly not live to see, one in which humans will have solved the economic problem
and be faced with something more profound. The father of modern macroeconomics was arguing that
things would get really cheap, and then we wouldn't know what to do. That, in fact, our future
battle wasn't economic, it was spiritual. My words, not his. And that, I think, is actually now my case.
I've arrived now at a place where I firmly believe we are going to house people, we are going to
feed people, we are going to educate people, we are going to care for people, and the hardest part
is going to be figuring out, why are we here? Like, why are we actually here if we can solve all
these problems so easily? And I don't have a ton of answers. But part of the argument in the next
renaissance that I lead people to eventually or that I try to help people arrive at eventually
is a reminder that this is not going to be easy, but maybe not for the reasons they think.
Zack, this was incredible. We are huge believers in reinventing and leaping in a career, and that's exactly what you're talking about. And again, thank you for being on the show. Like, I can probably
talk to you for hours. So thank you for being here and for sharing your wisdom.
Thank you so much for having me, and I look forward to being back.
Remember, this episode is not just for you and me. You never know whose life you are meant to change by sharing this episode with them.
And if you love today's episode, please click the subscribe or download button for the show and give it a five-star review. This really means the world.
Join me in helping tens of millions of individuals reinvent their career and leap into their full potential.
Look, getting intentional and strategic with your career is now more important than ever. The skills for success have changed. AQ, adaptability, reinventing, and leaping are today the most important
skills for the future of work. Building portfolio careers, multiple streams of income and ventures
are no longer a nice-to-have. They're a must-have, but no one is teaching this except for us in Leap Academy.
So if you want more from your career in life, go to leapacademy.com slash training. Check out our
completely free training about ways to fast-track your
career. You'll even be able to book a completely free career strategy call with my team.
So go to leapacademy.com slash training.
