On with Kara Swisher - What Everyone Gets Wrong About the Future of AI with Nick Foster
Episode Date: November 10, 2025. Futures designer Nick Foster spent decades of his career helping tech companies create products many of us didn’t even know we wanted. As the head of design at Google X — a.k.a. Alphabet’s “Moonshot Factory,” which is now known simply as “X” — he led teams working on brain-controlled computer interfaces, intelligent robotics, even neighborhood-level nuclear fusion. He also designed products for Apple, Sony, Nokia and Dyson. But in his debut book, “Could, Should, Might, Don’t: How We Think About the Future,” Foster argues for a more measured approach to thinking about big disruptive technology, like A.I. In a live conversation recorded at Smartsheet’s Engage Conference in Seattle, Kara and Nick talk about the pitfalls of the current A.I. hype cycle, why executives need to think more critically about how everyday people are using AI, and how companies can more thoughtfully adopt the technology. They also talk about Foster’s argument that all of us need to take a more “mundane” approach to thinking about AI and the future. This episode was recorded live at Smartsheet ENGAGE 2025. Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
I love your pants.
That's very sweet of you.
Yeah, I'm loving soft pants these days.
Yeah, me too.
We need them.
We need them.
Today I'm talking to Futures designer Nick Foster.
He spent decades of his career designing for
huge tech companies like Apple, Sony, Nokia, and Dyson. Most recently, Foster was the head of
design at Google X, Alphabet's R&D company, known as the Moonshot Factory. He led teams working
on brain-controlled computer interfaces, intelligent robotics, even neighborhood-level nuclear
fusion. Foster recently wrote his first book. It's called Could, Should, Might, Don't,
and despite his big tech background or maybe because of it, he argues for a more mundane
approach to thinking about the future and how to design products for it. He wants all of us to
treat transformative technology like AI as something we'll incorporate into our everyday lives
rather than something that will radically change the way we live. I think Nick is right because
it's really important that we stop thinking about AI as this hype machine or about the
end of the earth and start to think about what it can do for us and what guardrails we need
to put in place. All right, let's get into my interview with Nick Foster, our expert question
comes from Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania and author of
the book Co-Intelligence: Living and Working with AI. Today's episode is brought to you by Smartsheet,
and my conversation with Nick was recorded in front of a live audience at Smartsheet's Engage
conference in Seattle last week.
Support for this episode comes from Smartsheet, the intelligent
work management platform.
Today's episode was recorded live at Engage,
an annual conference for changemakers
hosted by Smartsheet.
I was joined by Nick Foster
to take a philosophical dive
into how AI is set to transform business
and what it might look like in the future.
But beyond the prediction,
SmartSheet is turning big ideas
into practical, tangible solutions,
helping leaders turn strategic vision
into execution at scale.
See how Smartsheet
accelerates smarter outcomes for your business
at smartsheet.com slash Vox.
So, Nick, thanks for coming on On.
We're going to talk about how AI will reshape business
and we'll use the framework that you've developed in your book,
Could, Should, Might, Don't: How We Think About the Future,
as our starting point.
So let's just dive in.
Okay, sounds good.
You say most people are bad at thinking about the future.
and our collective inability to imagine what the future will actually be like,
quote, might evolve into a definitive and crippling shortcoming in the years ahead.
Talk about why people are bad at anticipating what's ahead, and what makes that problematic.
I've been in conversations about the future for my whole career.
Right.
And people tend to talk about the future in very imbalanced ways.
And I think that represents a critical shortcoming in the world that we live in,
which is changing very quickly.
I have this feeling that the desire to think about the future is high,
and our ability to do so is low and underpowered.
And I'm interested in trying to close that gap a little bit.
Right, but why is that?
Yeah.
I think the future is obviously very contextual,
depending on who you are, where you live,
and whether we're talking about technology or community or society or culture or whatever else.
Things move at different speeds.
But I do think there's this general feeling that things are speeding up.
More change is happening now than perhaps it has done before.
There's no real metric for that.
but it does feel like that.
So the population of Earth, for example, has doubled since I was born.
Children could ride in cars with no seatbelts,
and gay marriage wasn't legally recognized anywhere on Earth when I was born.
So we've undoubtedly seen a lot of change.
And I think our response is to try and talk more about the future.
But the reason I think there's this gap, this disparity, this shortcoming
is I think that we tend to find ourselves in one of these four corners
that I've described as could, should, might, and don't.
Yeah.
So you've been a futures designer for Google,
Apple, Nokia, Dyson, and Sony.
And you had a hand in helping those companies imagine and prototype emerging technologies.
They are always leaning into the future.
They have, you know, all the phrases everybody uses, you know, adaptability, flexibility,
et cetera, et cetera.
Talk about what that work is, since most of us don't think about the future.
And, of course, they're very thirsty for the future at tech companies, almost irritatingly so.
Yes, and just to be clear, the term futures
designer is something I think I made up.
Oh, you did? Okay. Good. I think so.
But yeah, I've been, I trained as an industrial designer back in the day.
I went to art school. I've been interested in designing things. Physical things is where I started
out, working for James Dyson and things like that. But in the companies I've worked in,
I've been explicitly focused on longer term, more distant, more emergent, nascent technologies.
And so the sort of genre of design work that I do, I call futures design. And so that looks like
the sort of prototypical, sketchy, scrappy, V0.0 type of products that you might be familiar with
if you've followed this world. So, yeah, that's the kind of work that I do, just to try and kick
something off and see if there's a there there and what that might look like.
And when I talk about them being thirsty for it, they really are, and sometimes almost performatively so,
right? I mean, I recall when I was visiting early Google, they had all different kinds of office
setups. They were testing different things, the sleeping pods,
different foods, different ways people ate.
Is it easier working in a tech company like that?
Because most companies have a look and feel, a physical look and feel,
but in terms of pushing into future things, and can there be too much of it at the same time?
I think you're right on the performative side of things.
And if I'm being really frank, I think for the first part of my career,
I did a lot of that stuff.
Because I thought that's what you were supposed to do.
That's what all the magazines showed that I was reading and all of the shows that I watched,
when talking about and imagining the future.
This is what I call could futurism.
This idea of this sort of bombastic, over-the-top,
energetic, sci-fi-inflected work,
I think I did quite a bit of that stuff.
But as I've matured and I've become a bit more comfortable
in my own skin and my own reckons,
I've tried to be a bit more critical about that work.
And since leaving Google in 2023,
I've had a bit of time to reflect on that.
And I do think that a lot of companies,
they struggle with that question of what is the future
and they sort of throw money at it, and they do projects,
but it doesn't seem to lead anywhere.
I also recall, at Google.
Remember the bicycles?
Yes.
They would have bicycles all over the place,
and they were multicolored.
There's always something weird happening there.
You'd always walk up, and you're like,
oh, God, an elephant, of course.
And I see a lot of those Google bicycles repurposed in downtown Oakland now.
Of course you do.
Well, Sergey had a whole formula of people taking them.
He said, if I put out 100, they'll take 99,
and then I'll put out another 100, whatever.
But one of the things, I said, your bicycles are getting out of hand, and they're like, well, do you like them?
I said, no, I'm thinking of driving my car through them because I hate them so much in some fashion.
Anyway, I'm a little hostile at some point.
But the way we think about the future is shaped by intellectual waves and cycles, as you've noted.
And right now, obviously, we're in an applied AI wave, right?
And that's the new thing.
And, you know, they love the latest thing,
and then they move on, whatever the phraseology is,
but it's hard not to get carried away with it now.
And I jokingly say everything has to be about AI,
but it does. Right now, it seems overwhelming.
So how do we develop the capacity to recognize
we're trapped inside of this right now
and whether it makes any sense?
Because one of the things I think about a lot
is that everyone needs to calm down about what's happening.
Yeah, I think that's right.
And again, this comes with a bit of age.
I'm not quite 50, but I'll be 50 in February.
And I think as I get to reflecting a bit more,
I realize I've been through a few of these cycles of hype and excitement,
particularly around technology, but around everything else.
And the example that I use when talking to people about this
is, you know, most cities around the world have an 80s night
where they sort of wear the 80s neon clothes
and they dance ironically to the music
and they sort of make fun of it.
But we have to remember that those things sort of made sense at the time.
Right.
But with a bit of hindsight and a bit of time,
we start to learn, actually, those bits are a bit silly, those bits are a bit odd, those bits are a bit quirky.
I think the same goes for technology. If you look back at 80s and 90s and 2000s ideas of the future
of technology, they look a bit silly, they look a bit dated. Which ones do you mean? I mean, we could go long
on this. I think the number of photographs I've seen of, say, early nascent VR or gestural computing
mock-ups, for example, it didn't sort of end up like that. It doesn't mean to say that people
shouldn't have looked and shouldn't have explored. But I guess what I'm saying in regards to AI is a more
sort of pragmatic and rational and less hype-driven version of that, trying to think, like,
what are we talking about now and how are we talking and what language you are using now that
might seem silly in the future.
Well, the hype thing is, I mean, remember the metaverse?
Yeah.
No one wanted to live there.
No matter what Mark Zuckerberg did, everybody likes legs, I feel like.
Everybody likes legs.
Yeah. That was sort of the strangest design choice to have legless people floating around a universe
that was antiseptic.
So right now, what do you think is happening
in this AI slurry
that is problematic
and what isn't?
I mean, that's a deep question.
I think the challenge that I've got is,
again, without wanting to sort of lean back
heavily on the title of my book,
I think people are falling into one of those four buckets.
Right, I'm going to ask you about that in a second.
Okay.
But I think that what I'm saying
is that it leads to unbalanced
and sort of biased versions of the future.
And I think when looked at in aggregate, across all of the different people that are talking about AI as a technology and what it might mean for society and what products we might make, you sort of get an aggregate view, but the chances are we're not listening to all of that. We're listening to one or we're listening to two or we're talking in one way. And I think that means that we just head off down one road way too far and don't think about things in the round. And something like AI, which is a, I prefer machine intelligence actually, but we can dive into that.
Yeah, that trips off the tongue. But go ahead.
Well, I think there's nothing good about artificial things in the world.
No, I think it's a stupid name, too.
They're mimicry rather than letting them be themselves.
Right, yeah.
But I think that the challenge that we've got is trying to find a balance in the way that we talk about these things,
which are, you know, the technologies are transformative, potentially, in good ways and bad ways.
And you talk about benefit and weapon or something like that.
Yes, it's either a tool or a weapon.
Yeah, and I think that that definitely exists in this.
And I think we need to find that level of sort of humility and balance in talking about these things.
and try to move forward with caution and apprehension,
but also excitement and energy.
It seems to me at this minute
it's largely fueled by tech billionaires
who are trying desperately to control it,
that they're sort of trying to shove it down everybody's throat.
Like, you really want this.
You really want this spending.
You don't know this, but you really want to eat this.
And you don't.
Like, you actually don't.
Let's go to your book, this Could, Should, Might, Don't.
There are different mindsets that we fall into
when we're talking about, well, a lot of things, but AI and how it's going to impact the future of
business. And the four words in the book title represent these approaches. Describe them very
quickly. We'll get to one, each one specifically, but first explain how they interact and
influence each other. Yeah, I'm just trying to identify these habits that I think we all fall
into. So I've called them out as could, should, might, and don't, which is any conversation
about the future, I think, tends to fall into one of these pockets. And so could is
about being very sort of positive and excited about the future. It's often inflected by things
like progress and technological progress. Should is about having some sort of certainty about the
future, either an ideological destination or a prediction. And a lot of people ask me for predictions
all the time, which I don't like making. Might is about uncertainty about the future.
Lots of scenarios, lots of potential outcomes. And then don't is about focusing on the places where
we don't want to end up, the things we would like to avoid, or the things we would like to change in
the future.
So we go down one of those paths rather than integrate them fully?
I think so.
In my career, I've been fortunate.
I'm a designer.
I went to art school.
I've been around designers, but fortunately, I've also been around investors and scientists
and engineers and, you know, people from all different ilks.
And I think that each of the conversations I've had falls quite quickly into one of those pockets,
either based on the kind of training that somebody's had or just the nature of the place that we are.
So let's apply them to AI.
We'll start with could futurism.
You say that it's full of flashy and exciting ideas.
Founders always fall into this category.
And you warn it tends to emphasize hype over honest exploration and veers into empty calories.
When it comes to AI, let's talk about the line between inspiring people with a positive vision.
You know, you're going to have a jet pack.
They still have not delivered the friggin jet pack, just FYI.
And, yeah, it's right after
we're going to have a million Optimus robots serving us.
Yeah, that's not happening.
Although just the other day, Elon was talking about a floating car again.
Okay, amazing.
Yeah, it's not happening.
So where's the line between a positive vision of the future and misleading people?
Because it's exhausting when you deal with founders when they do this.
Like, with AI, you know, Sam Altman is perfectly fine.
It's a low bar in tech that he's perfectly fine.
But, you know, he's always like, we're going
to solve cancer. We're going to do this. We're going to, like, it's a dessert topping. It's a
floor wax, if you recall that SNL joke. So talk about this could futurism. Is it necessary to be
ridiculous and dreamy, even though it's mostly marketing gobbledygook? Yeah, I think all four
of these things that I've identified have their own benefits. And I think could futurism does,
because you do need to sort of explore the potentiality of any new technology. It's going to plan.
Yeah, and getting people excited and motivated and driving them forward and saying these are all the things we could do.
I think it's important. It's good for motivation of a team. It's good to get people to see, oh, there's some upsides here.
And, you know, the canonical examples are cures for cancer and other things that are sort of generally regarded as big problems in the world.
I think there's a way that AI and other technologies like that could address that.
The challenge that I've got is it does quite quickly tip into fanciful, classical futurist tropes.
and a lot of it is inflected by science fiction cinema.
And a lot of people who meet me and find out what I've done or where I work or whatever,
they think that I'm very enamored by science fiction.
And I don't really watch it.
I'm not really a fan of it.
I certainly don't take it as a brief,
but I think that puts me in a minority.
And actually, I think the challenge with that is a misreading
of what the purpose of science fiction cinema really is,
which is entertainment.
Right.
And this desire to will those MacGuffins, those things,
those devices, those experiences into the world,
often takes could futurism into this realm of, yeah, fantasy and sort of boyhood dream-like places,
as opposed to...
Right. Which they're informed by, by the way. They're very deep into it. I mean, Musk is very
deep into sci-fi, people don't realize. It also affects the ways that they name things.
So the Falcon rockets that Musk has are named after the Millennium Falcon. Right.
You know, Cortana is named after a character in Halo. Like, the meeting rooms at Google X were all named
after sci-fi robots.
Yes, I know.
You know, so these are sorts...
Netscape was my favorite.
They named rooms after diseases of the skin,
which I appreciated.
Yeah, and it might seem like a small, harmless thing,
but what it does is it sets the tone
for what the company is interested in,
and the people who work there start to talk in these ways.
And like I said, I've been in so many meetings
where ideas from science fiction cinema get brought up,
and I think what it actually represents
is a crisis of imagination.
Right.
They grab at these little placeholders
that they saw in a movie or a TV show
and put it in the room as like,
this is what we should make.
Now, sometimes that does come to pass.
I mean, Star Trek informed a lot of these people initially.
And so the communicators, all this stuff, of course,
look like what we have now.
Like, there is a link to taking some of the things, correct?
I think so, but without wanting to get too into the lore of it,
I think the challenge that I've
got is, influence is one thing. Being influenced by something you saw is one thing. But saying
that the science fiction predicted those things is just a falsehood. And I think that also
something like the StarTAC flip phone, everyone said that's the Star Trek communicator. If you actually
look into the history of that, mobile communications was around long before things like Star Trek.
Sure. Maybe the form factor of the Star Trek communicator was slightly influential in the development of
the StarTAC flip phone. That's hard to say.
But I think we sort of muddy those things up, and people are willing to overlook the 100,000
false predictions in science fiction for that one moment when somebody held up a glass rectangle
and went, oh, iPad.
Right.
You know, it feels like we have a habit of focusing in on the thing that was closest to the
bull's eye, but ignoring all of the millions of other things that never came to pass.
Right, right?
But it's a way to inspire yourself, right?
Yeah, I think the problem with it as well is it assumes that everyone else has seen the same
movies and read the same books.
And I think that that can be exclusive, you know, it excludes a lot of people from the conversation.
Correct. Right. Who aren't along with the memes or along with the language or along with the lore. Or you don't want that future.
Or you don't want that future. Right. So let's talk about should futurism. This is a confident, action-oriented mindset that uses logic and numbers to predict what comes next. And you often see it in the C-suite, but you write that corporate strategy can be little more than intuition backed by data. I agree with you here. So far, ROI on AI is limited, to say the
least. In fact, we're in negative territory here, but we're in the middle of this crazy arms
race. And so what approaches should business leaders use to navigate that? Yeah, so should
futurism is just defined by some form of certainty about the future. And I think that can come
from two places. One is a sort of ideological position, either from your religions or the state
of morality that you'd like to see in the world, and we should point towards there. That's the world
we should build. I think that's sort of off to the side of what we're really talking about.
The should futurism that I see played out a lot in business is this notion of observing the world
through numeric practices, creating models of how we think the world works, and then the sort
of the temptation to project those models out into the future becomes almost irresistible,
and people do that. But the challenge that I have with should futurism is that once that
solid line turns to a dotted line, it ceases to be data. And it becomes a story. I call it
numeric fiction. You know, my job as a designer is to make things and make movies and make
prototypes. And I consider those stories about a future that we might produce or we could produce
or we should or whatever. But I think the confidence that we attribute to numeric fiction and
algorithms and data-driven futures is way overblown when placed back against reality.
Right. So I think a little, there's this phrase that a lot of people with MBAs like to use,
which is skating to where the puck is going to be. The Gretzky
quote. And it sort of might work for ice skating. I'm not a big ice hockey fan. But it sort of
rejects the fact that the world is just an inherently volatile, uncertain, complex, and ambiguous
place. Yeah. And a lot of the things that we're trying to measure now, particularly things involving
humans, are just naturally chaotic. Right. So any kind of dotted lines striding confidently
out into the future is a story, and we need to treat them a bit more like that. Yeah. The phrase I use is
frequently wrong, but never in doubt. You know, like they're often
wrong, that kind of stuff, or else just lying. Sometimes it's just flat out lying. So when you have a
should future, it's not, again, it's not bad to imagine, but they base certainty on it, rather than
just, we're going to do this because we made this design choice. I'm thinking of, you know,
Steve Jobs, he took things off of the computer, and he took off one of the things that you put
in the side, and everyone lost their mind. And I said, well, why did you do this? He goes, I just didn't
like it. Like it was a great way to make a decision. He didn't like it. Yeah. And he goes, I have no
data. I just don't want it there. I just decided. And he goes, people can like it or not or use my
products or not. I don't care. And it was kind of like, oh, all right, that makes sense. But he didn't
base it on we should keep it there because of this and that. I think that the over-reliance on
data and feedback to make your decisions just becomes crippling really quickly. And it can
totally freeze your product, whatever it is. Because you sort of do a little test, you get a bit
negative feedback because people usually tend to react to change in sort of negative ways,
whatever it is.
And so you don't do it.
And your product just becomes ossified and stuck in ice.
And sometimes you do need to pursue that idea that veers away from that dotted line of
where the data says we're going.
Or on the should thing, it was another encounter I had one time with Bill Gates when the iPod
came out.
And I was showing it to him, it was starting to gain some traction,
and he said, what is it? It's trivial. It's a white box with a hard drive in it. That's how he described it. It's a white box with a hard drive in it. It's trivial. And I said, if it's so easy, why didn't you think of it?
Yeah. I mean, you're talking to somebody that used to work at Nokia. Yeah. So, like, I've been around that kind of confidence. And our data says this. They'll never, the market for this, the price is too high, the whatever, you know. And sure enough, we know, ask Blockbuster, ask Kodak. You know, these are companies.
that had very confident dotted lines striding off into the future.
Yeah, he did the same thing with Google.
What is it?
A box on a white page.
I was like, what's your problem with white pages and white boxes and stuff like that?
And he said it's easy.
And I'm like, again, the kids seem to like it.
We'll be back in a minute.
Support for this episode comes from Smartsheet, the intelligent work
management platform. If you attended this year's Engage conference, you were able to see this
episode live in front of a crowd. But beyond the show, you were also able to check out how
Smartsheet is leveraging AI to accelerate the velocity of work. Even with all the talk about
disrupting the status quo and new technology available to us, business is still business. And that
means the world's largest enterprises are still looking for a competitive edge, the thing that
separates them from the rest. But with AI, it's no longer about working harder. It's about working
smarter and faster. It's one of the reasons business leaders call Smartsheet, the intelligent work
management platform, and why it's trusted by 85% of the Fortune 500. Smartsheet can help your
organization move faster, adapt with confidence, and drive smarter business outcomes. This is the platform
that helps you turn big picture strategy into a tangible roadmap for profound impact. See how your
business can lead with intelligent work at smartsheet.com slash Vox.
Let's talk about might futurism.
It presents itself as a reasonable adult in the room.
It lays out possibilities, calculates the probabilities,
and you see it as think tanks, lobbyists, government agencies.
When it comes to AI, it'll call for global summits,
publish frameworks and voluntary guidelines.
Netflix just published some around ethical AI use of it.
Where could that go wrong, and what do they miss?
I mean, the way I define might futurism is sort of the opposite
of should futurism, which is looking at the future as a huge landscape of probability and
possibility, sort of from the Rand days of the Cold War scenario planning and game theory.
Like when we're playing chess, you make a move and you figure out, oh, well, we could do that
and we could do that, we could do that.
So it becomes this huge terrain of multiple stories.
And I think on a commercial level, it probably represents best-in-class futures work.
And if you were to hire a strategic foresight partner, that's the kind of work they would do
ingest tons of data and build lots of...
This might happen. This might happen. We think this is more likely. We think this is less likely.
So that terrain of mites about the future is what I define here. The challenges with these ways
of thinking are many. The first is that no matter how much data you pull in or how many
opinions or how many weak signals you draw on, you'll never have it all. Right. More importantly
than that, your adversaries, particularly in things like the Cold War, they might be deliberately
deploying false data to throw your scenarios off course. So the competitive nature
of future scenario planning becomes a problem. And I think that's sort of where I stand with
might futurism. I think, like I said, if you were to hire a company to do that kind of work,
you'd get, that's the kind of work that you would get. But it does sort of lack, and it has
the same sort of confidence. Like, we've seen this whole terrain. And it can just get very complex,
very, very quickly. Sure is. And not lead you to a kind of decision, just lots and lots of options.
It could also lock you into constant analysis, right? Analysis paralysis. Analysis paralysis. And again,
for every, when you just think, right, we've got the 50 scenarios in front of us,
somebody will walk in and say, teens in Korea or something, you're like, oh, now we need to do
another 10, you know.
So it just becomes this self-perpetuating mess of potential future scenarios.
Which doesn't lead you to a decision.
And it makes making a decision that much harder.
So finally, there's don't futurism.
You call it, quote, unwelcome guests in communities built on optimism, positivity and forward
momentum.
These are Doomers, critics, activists, and you quote one of my favorite philosophers,
Paul Virilio, who wrote, when you invent the ship, you also invent the shipwreck.
We definitely need some don't futurism in the AI conversation, but it can lead to the
dystopian scenarios, which come rather fast and furious.
So how do business leaders create space for nuance and meaningful dissent without falling
into the catastrophizing trope?
So the fourth corner that I call don't is that place of looking to the future and the futures
that you don't like or you don't want to end up in. And again, religions and science fiction
cinema does a lot of work in dystopia. Religions have places like hell and purgatory to try and
steer you in the present away from undesirable futures. I think we do have that in AI and in technology
writ large, but it tends to be extrinsic. It tends to be from a position of critique. It doesn't
happen enough within organizations. And I think we're starting to see a slight shift in that,
I think, in some sectors where people are starting to understand the negative externalities of
the things that they're creating. And they're starting to understand the second and third
order implications of the things they're building. But I think that the challenge with don't
futurism is if you spend too long in that space, or you only do it, it can become crippling,
particularly for, we're seeing this a lot in our young people, for whom there's this term ambient adolescent
apocalypticism, which is just being surrounded entirely by bad portrayals of the future or terrifying
portrayals of the future that just become crippling. And it makes it really hard to sort of be
hopeful or excited about the future. Yeah, I think at the same time, tech often never has enough of that.
I mean, it's called consequences or adult behavior, right? That you can say, wow, if I do this,
then this. And it's interesting because they tend to lack any of that. One of the things I wrote
about in my book was, I was in a meeting about Facebook Live, and they showed it to reporters before,
and I said, okay, you know, and it's always like, you know, some fun cat doing something,
like some example. In that case, it was the Chewbacca mom, remember her? And so I said,
well, what happens if someone bullies on this? What happens if someone commits suicide? What happens
if someone beats someone up or uses it or worst case scenario straps a GoPro on their head and
starts shooting people. And the whole room looked at me and one of the tech people was like,
you're a bummer. And I went, yeah, humanity's a fucking bummer. So they don't do enough of that.
I think you're right. And I think part of my job when I was working inside big tech companies
was to do that, that kind of work with people or at least try and encourage them to think about
if you put this out in the world, people will use it in these ways. And you either need to mitigate that
or be at least aware of it. And I think that the strange thing with AI
is it feels like the public or the consumers or the real world is ahead of the technology
in that regard.
There is a lot of doubt and skepticism and a lot of dubious opinions about what is this
and do I actually want it and I feel bad about it.
And I think that the people producing these technologies have to come up to meet that.
And again, as part of this sort of balance of thinking about the future from a could, should,
might and don't perspective, you need that sort of balance from all four corners.
Right.
So, all right, these are the four different mindsets that people bring to analyzing the possibilities around AI.
Let's get to your expert question.
Every episode we have an expert, send us a question for our guests.
Let's hear yours.
Hello, I'm Ethan Mollick, a professor at Wharton and author of the book Co-Intelligence.
And my big question is that with AI, for the first time, we actually have a truly unimaginable future
that seems to be the mainline forecast of the various AI labs.
They think they'll build a machine smarter than a human at every intellectual task in the next five years, maybe 10 at the outside, which would mean very large changes to how we work, live, do science, and everything else.
So the question is, how do we start thinking about the potential for an unimaginable future when we have trouble even articulating what that is?
Yeah.
Yeah, I think it's a, I mean, that's a very hard question to answer,
which is obviously why it's been asked. I think the challenge that we've got in articulating the future
at the moment is that the present is so volatile. There's that Gibson quote that well-formed
ideas about the future are difficult because we have insufficient now to stand on. And it does
feel like that. And in the conversations I've been having around this book, it does feel like
there's this sort of instability around us. And my argument to that is like, well, should we just
stop then? I think that means we need to do more thinking about the future. And it doesn't mean
to say we have to get it right, which is why I shy away from things like projections or predictions
or prognostications about the future. I just think doing the work is important, sitting people
down and finding space in the daily life to have that conversation about what are we building,
where is it going, what don't we know? And what do we need it for? What do we need it for?
What kind of world might we leave behind? You know, I think that that level of understanding and that
level of respect for thinking about and talking about and doing work about the future just doesn't
exist almost anywhere. Does that get worse with the frantic nature of the tech people in terms of
spending that they're doing, the talent stuff? It seems demented at this point. And I think most
regular people feel that or intuitively feel that, but always go, well, they're the rich people
they must know. And I'm always like, they don't know. They don't actually know. I think you've got
a strong take on this that they're rich, but they're not geniuses. Like, does this sort of position?
Yeah, of course.
That's their talent, is money.
Yes, and convincing folks.
Yeah.
And I think that that exists, and we have to be aware of that.
But I think the reason I don't call myself a futurist
is because I think a lot of people that do call themselves futurists
or show up on stages at events and give talks.
There's a lot of snake oil stuff there.
Yes.
And I feel uncomfortable with it, so I don't want to be part of that cohort.
So I think those people's work needs to get better.
And when I say better, I mean more balanced, more honest, more open, more rigorous.
But I don't think that will happen naturally because the audiences don't push back enough.
They don't raise their hands enough.
They don't maybe feel confident enough to say, wait, hang on a second.
You've been saying this for 10 years.
Where is it?
Right.
What makes you so sure in that projection?
Right.
I would love that. The reason I wrote this book, I could have written a book for other futures practitioners or an academic book.
I've written a broad-appeal book because I'm actually encouraging people to say,
just a second here.
You have a job to raise your hand and say, this isn't good enough,
tell me more.
Right, or it doesn't work for me.
But it's meant to overwhelm and, you know, stupefy you, I think, in a lot of ways.
Let me ask about artificial general intelligence, or AGI.
As a designer, what features do you want to see in an AGI beyond the obvious goal of aligning
it with human well-being?
If AGI really does arrive, what would it need to get right about human values and behavior
to actually make our lives better and not just more automated and distracted?
Without wanting to get into a taxonomic hole here...
Also, have you noticed how many people in Silicon Valley
are now really interested in philosophy
and the nature of humanity
and suddenly experts on Descartes and things like this?
I have.
It's fascinating.
They never took the courses or read the books.
Sure.
That does not stop them.
So to that point, I'm interested in machine intelligence
as opposed to artificial intelligence.
And I think AGI is a sort of weird totem
that sits out in the future
whose definition is constantly evolving,
depending on what books people have read that week
or what definition or benchmark somebody's given to it.
So I think it's a sort of, I don't know,
fool's errand, is that the right term,
to say we're aiming towards that thing?
Because the question is, what is that thing,
and then what happens the day after?
And I think that it sort of doesn't take us anywhere useful.
And I think the idea of separating machine intelligence
from human or mammalian intelligence
is that they are fundamentally different in many ways.
You know, human intelligence is bound together with experience and mortality and hormones and beliefs
and all of those other things that these systems don't have.
So I think treating them as their own thing allows them to stop being mimics.
Synthetic, is the word I would use.
Synthetic, there we go.
I think they're going for God.
They're going, they are.
And I have a weird theory that a lot of the people doing this right now, many of them are men, or most.
And I have this theory that they can't have children like women can,
and this is their version of pregnancy.
Like this...
Right.
But also, think about it.
I'm thinking.
I don't want to think about it too much.
I know, but this is it.
But I also think if you say that your business is creating gods,
you can register your company as a religion
and therefore you get some good tax breaks too.
Correct, exactly.
And many of them are becoming more religious, which is really bizarre in a wild way.
So let's talk more specifically about how it affects business,
because most people have to deal with it on a daily basis
and overspend on things they don't need,
or it gets forced on them.
And often when business people ask me, what should I do?
I go, sit still.
Don't do anything.
And the phrase a lot of people in tech use is NPCs,
which is from video games,
non-playable characters in video games.
You flip that formulation on its head
and say when we think about the future,
we should actually focus on NPCs,
regular people, how they use technology.
I think NPCs is actually a very good way
because you all don't count.
Like, they're not
thinking of you. So how should business approach designing AI for the average person, an NPC?
Yeah, I've got a long lecture I can give on this, which I won't. But when I was a junior designer,
I did a lot of the bombastic, escapist, sci-fi-inflected futures work, because I thought that's
what you were supposed to do. Then I would go home to my parents' sort of damp Midlands
house, go to the pub with my dad on the weekend, and just think, what was that I was doing in London
during the week? It doesn't make sense. And I think that it just started to not reflect
my own experience of the world. And so I leant into this way of thinking that I call the future
mundane, which is a way of thinking about either NPCs or background talent or extras. I think in
the opera, they're called supernumeraries or something. But the people that exist in the background
of scenes are the people that are also going to be existing with this technology. So I love to
think about technology as sort of a mass-adopted, ordinary and
everyday part of people's lives. And I think that helps ground the conversations. And it helps you
lean on things. Which is not thought of, actually. It's not thought about, how people are going to use
it. It's just trying to sell you something before you know what you need it for, and snake oil.
We have a habit of talking about the future, again, as some other place occupied by other people,
probably more heroic people than us. Whenever you see something like a future device or a new gadget
or whatever, it's really helpful to start to think about it, less about, you know, you always see these
videos of somebody using VR to like fix a heart or build a city. But actually people just watch
YouTube and play games on them. They do. So think about it in somebody's backpack on a bus in a wet
city, you know, Seattle is where we are today. Like think about it in those terms and suddenly
it grounds everything and it normalizes things and starts to say, actually, I have a hundred
questions now. Right. It stops it being a fantasy land. So does that create a situation for business people
feeling enormous pressure to buy, by, buy before they know what the actual, this is something I say,
I'm like, don't buy it until you know what you want to use it for.
Use it to see how it works and then ask questions.
But it's often foisted on you in this inevitability.
Yeah, there's the FOMO side of it, which every company feels like they need to put on their slides.
Like, we're doing AI, we're doing AI, and I've seen 100 people on LinkedIn
put AI designer in their bios now and whatever else.
It feels like you have to do it.
Right.
And it's because we're in a sort of, whatever you want to call it, a Cambrian explosion of technology or whatever.
It is.
I think we need to play with these things and we need to explore them in
environments that we're comfortable with, not reject them and not say no, but just play with
them, start to make sense of them and say, actually, there's something here that
either me or my team or my company or my society or my culture could benefit from, and then
start to, you know, make the big investments and run towards these technologies.
The trillions of dollars they're spending here is really breathtaking in many ways.
Your work shows that humans are messy, and designing intelligent systems that can handle a lot
of the things a worker does in a day is probably very complex. Talk
about what gets lost when machines replace people.
I mean, that's a good one.
I think there's a really weird thing going on at the moment.
People look at the world as it is and think that's what it is.
And then they apply AI to it as some sort of admin amphetamine.
Just do what we used to do, but just faster and more efficient.
But I don't think that's actually where we're going to end up.
I think that's where we'll start because people see the world and they have their list
of problems that they want to answer or address and they say,
oh, AI can do that in half the time or half the cost or twice as often.
or whatever matters to you.
The challenge for me is what new things come after that.
What new jobs, what new ways of working,
where creativity plays into all of this.
I think that isn't what's being talked about enough.
And I think we need to have that conversation.
Meaning we don't know.
We don't know.
Right, exactly.
I mean, one of the things I always say
is could you have imagined Uber
when they invented the internet,
when the internet started to become commercialized?
No.
Right.
Nobody, or when the iPhone came out,
could you've imagined Airbnb?
Could you have made it?
Someone did.
The example that I tend to use is the guitar amplifier, actually,
because it takes it away from sort of Silicon Valley Tech.
And when the guitar amplifier was created,
it was designed to reproduce the sound of a guitar or a voice louder.
Obviously, what came with that was distortion,
which is a bit like the artefacts we see with AI.
And a lot of engineers said, that's a bad thing.
We need to engineer that out.
But a lot of creative people saw that and heard that and said,
there's something interesting.
And lo and behold, they leant into it and said,
this is actually something new, and it's a different thing.
It wasn't trying to reproduce.
And so you end up with grunge and rock and roll and heavy metal,
and that birthed a whole new industry and a whole new art form.
So I'm less interested in saying, we have this problem,
let's throw AI on it, and we can do it twice as fast with half as many people.
I'm more interested in saying the world is a place filled with people
with lots of things we're trying to achieve.
We have this new set of capabilities, what new things might it birth.
It's also because they hate the word friction.
They find friction to be offensive.
They're going to make this seamless for you, whether it's AI relationships.
We'll give you a seamless relationship.
We'll give you one that's not a problem.
We'll give you answers that are easy, that kind of thing.
And I think one of the things I'm pushing back on is friction is what makes everything interesting, right?
And distortion is friction.
Yes.
Right?
So that's a really big concept because they're constantly pushing.
We're going to make it convenient, seamless,
and I am always, any time they do that, I'm like, you know, sex is friction.
Thank you.
But then I think, oh, wait, they now have chatbot girlfriends, so well, that's the end of that.
So, but you can see how easy it is to fall into the frictionless environment.
Yeah, it comes from a mindset of viewing the world as a series of problems to be solved.
Right.
And I've always found a problem with the term solution.
We always have this, like, companies name themselves, so-and-so solutions or whatever.
And I think it's a very reductive way of looking at the world.
So we can take something like the transition from petrol cars to electric cars,
and we say, well, it's a problem solved.
Far from it, particularly if you're a nine-year-old boy
being forced down a hole in Congo to dig up lithium or cobalt for these batteries.
A problem and a solution only exists as far as you're willing to look.
And I think thinking about the depth of implications
about all of the things that we're bringing about on the world
and the second and the third-order implications
of the things we're bringing about on the world
is where responsible companies start to flourish.
It's actually understanding that friction and complexity
is part of business.
You can't just streamline something and say point A to point B.
There's joy in the mess,
and there's also responsibility in the mess too.
So talk about the unintended consequences
of rapid AI adoption, besides getting it wrong.
I'm thinking of something like expanding a highway.
For example, people think more lanes will ease traffic,
but research shows it actually makes it worse.
So conventional wisdom says AI will lead to job losses
and potentially less work for those who have jobs,
but maybe the increased capacity leads to greater output,
more growth, more jobs.
I know you have an aversion to making predictions,
but as you look around,
what are the counterintuitive repercussions you can see?
Yeah, aside from the fact that we don't really know
what the labor market will look like in 20 years
because of the introduction of these technologies.
Right.
I think one of the mistakes, again, we make
is the idea of thinking of these technologies
as somehow compressive.
Like, there's a funny sort of academic term,
compressive or donative technologies.
Right.
So people go rowing for fun,
even though an outboard motor is more efficient
because it's a donative act.
I think one of the things that we're struggling with
at the moment is thinking of things like AI
as a compressive tool.
And I think it's the same mistakes we made
with labor-saving devices in the home,
in the 30s, 40s, 50s and 60s.
And the dream was that it would emancipate people,
particularly women, from hard labor in the home.
And all it did was just increase the expectations on women
to do more things with the time that they had.
They weren't off playing golf or having dinner parties,
as the advert said.
And I think we need to think about that more.
It's like, if we're going to compress all of this stuff down
and make it simpler, we don't get the afternoon off.
The expectation is then that we produce twice as much.
We'll be back in a minute.
Support for this show comes from Smartsheet, the intelligent work management platform.
It's no surprise there's a lot of talk about AI in business these days.
I mean, you're listening to an episode right now where I talk about that very subject.
We're all scrambling to figure out how AI can help our businesses.
Just like any tool, you'll need to find the AI that's right for your needs.
It's about precision.
and that's what SmartSheet is all about.
If you got to attend the Engage Conference
where we recorded this episode,
then in between the podcast taping and the food,
you also got to see SmartSheet unveil
an entire suite of new AI-powered capabilities
designed for enterprise-level use.
It's a suite purpose-built to accelerate the velocity of work.
And when you combine that with enterprise-grade security and governance,
that Smartsheet is known for,
it means you're looking at a solve for scaling responsibly
and scaling with confidence.
SmartSheet is bringing people, data, and AI together in a single system of execution,
and the results can mean big things for your company.
Discover how Smartsheet is helping businesses lead with intelligent work at smartsheet.com
slash Vox.
Now, Google, Meta, and Microsoft reported record spending on AI.
It's astonishing the amount of money they're spending.
It's clear, at least right now, it's supporting the U.S. stock market, which is dangerous,
to say the least. But could and should futurism, AI futurism, is winning.
But if we're in an AI bubble and it bursts, how do technologists who believe in the
promise create momentum for their ideas?
Because we've seen multiple AI winters before.
This is not new.
AI is not new.
So if the bubble burst, does that mean the don't futurist narrative takes over for some time?
I think it gives it oxygen for sure to say, you know, we were right, we should have been
more concerned about these things.
I think the distribution of this work is something I know you're fond of talking about.
Where this work is happening and by whom
and what their motivations are is a conversation that we need to have more broadly.
And the concentration of influence, because of the amount of money
and the amount of resource it takes to build these models, that needs addressing.
I don't necessarily have an answer to that.
At the beginning of the Internet, as they said, it was inexpensive for innovators to create a website,
create businesses.
And in this case, only the large companies can do it.
And therefore, it will be far less innovative. If a homogeneous
group of seven companies with a very non-diverse group of people create everything, we're going
to get the same chicken dinner from all of them. Like you're not going to get innovation out of
large companies making decisions for the rest of us. It just seems logical. Yes, and they're mostly
motivated by achieving the same sorts of goals too. Right. And I think that's the challenge is like
how do we build these systems that, by the way that they're built right now, require this
amount of capital and resources to create, and energy, yes, of course.
How do we take that and somehow, I won't say democratize it, but offer alternative
paths that allow smaller companies, different organizations?
Because they're all the same.
All the LLMs are the same.
I'm sorry.
Like, they're building the same thing together, and it's just one of them will win.
It's sort of like Highlander, the movie.
There can be only one at some point.
Who's got the biggest sword?
Who's got the biggest, well, it's money.
That's what it is.
And it could be money well spent, and the winner will
benefit from it, everybody else will be the loser, which, from a resource
perspective, is idiotic. So you write about the importance of resisting extremes, the uncritical
hype of could-futurism, and the paralysis of don't-futurism can lead to dead ends,
both of them in different ways, because people will either be disappointed when you don't get
the dreamy future, or you could feel paralyzed. So what does a healthy
relationship with our AI-enabled future look like for businesses and the people who are making
these decisions? Yeah, I think finding a way to encourage everyone you meet and all of your teams
to have conversations about the longer-term implications and the longer-term futures that we're
interested in. And encouraging people to think about things in the round, rather than just getting
trapped into one of these corridors of thinking, it's very easy to say, we should do this because
the data says so, or we could do this, I'm very excited about it, or we don't do this because
it's scary. But I do think a well-rounded AI strategy, for want of a better term, is one that
incorporates all of that. And also incorporates the views of everybody in the organization who's
building it. I do have a problem with this sort of othering of futures work and lab-ifying
of thinking about the future. You're here and you make the products. We're over here and we're
thinking about the future. Because I've been in those environments for 25 years. And I think that
there's sometimes a necessity to doing that, for secrecy, for privacy, for lack of distraction. But it
others futures work in an unhealthy way, and it stops it being sort of integrated into
the way of thinking. When you're in this sort of environment where the world is changing as
quickly as it is with these new technologies that have huge capabilities, I think it is
everybody's responsibility, not just somebody's job, it's everyone's responsibility to start thinking
in longer-term ways, start thinking beyond the quarterly returns, start thinking beyond the one year,
even two-year, you know, start to really think.
Well, that's hard, though, when your CEO is like, AI, we've got to do it.
Sure. Well, AI, we've got to do it. Fine. That's the sort of thing that CEOs say. But then what? After that, let's get into the detail. Let's talk about what we're actually going to do or excited about or fearful of.
But it has become very hard to say no, though, correct? I mean, I would think it's mostly FOMO at the moment. Is the CEO feels like they have to stand up and say AI, AI, because if they don't, the company's like everyone else that I go to dinner with is talking about AI. Why aren't we talking about it? But I think that represents the experimental phase that we're talking about.
I would hope that a smart CEO would say, we should look at this, it has the potential
of big transformation, I think those could look like this, let's explore that.
But if it turns out not to be true, in five years, just like moving to a different
whatever server base or whatever it is that we thought was going to be a big thing that
ended up not being, we might see a lot of companies going, do you know what, LLMs, not for
us.
It's not really how our business works, doesn't make financial sense, customers don't want it,
we might move away from it.
So I think at the moment we are in that space where it feels like you have to have some skin in the game
and explore and experiment just to see if it's for you.
But it doesn't mean to say we have to just grab the tiller of the boat and point it entirely over there.
Yeah, I had a bunch of people like, you've got to get into the Zeyo podcast, and I was like, yeah, I'll pass.
And I was like, I'm not doing it.
And they're like, well, you should try it.
I'm like, yeah.
I mean, the other option is to wait until it matures and then see if there's something there.
You don't have to be part of the experiment.
Right, exactly.
It seems like a giant waste of time and life is too short.
Ultimately, you want us to identify a set of narratives we fall into when we talk about the future
so we can think about it more rigorously.
And that means thinking about all the mundane ways, because the boring stuff is where things
actually happen, which is how we'll interact with technology in the future instead of this
sci-fi, you know, we're all going to be wearing shoes that make us float, that kind of thing.
I always use the example of electricity.
Nobody today thinks about electricity.
No one goes, oh, I'm on the electrical grid
today as I turn on the light. You don't think of it. It becomes, to me, the most successful
technologies are invisible. Yes. How do you, that's how I look at it. So what do you see for
AI going forward? Sounds remarkably like asking me for a prediction, but we'll go with it.
I think it's, like you say, when we stop talking about it, when it just becomes part and
parcel, when it becomes embedded in the software that we're using, when we're truly honest
about telling stories about the future. I use the example of ABS. When I was a kid, ABS braking on cars
was a big thing. And cars had little chrome ABS badges on the back. And now it's just sort of
standard. It's not even listed. And we certainly don't see ABS badges. So I think getting past
the badging phase of AI and just saying, that's how computers work now. They work in a slightly
different way. And it gives you all these other things. I think that we need to be honest about what
people really do with their software, which we're not. And then once we're honest about it,
we need to see it sort of disappearing into the background and stop shouting about it so much
and stop branding everything AI.
Well, their market-shared numbers depend on it.
That's why they're doing it.
Of course.
Last question, what's the most interesting deployment of AI
you've seen of existing technology?
I mean, I come at this from an art school background.
I've mentioned art school twice as some sort of caveat for my answers,
but that's where I come from.
Some friends of mine have been training AI models on images of spoons.
And they've been, yeah, spoons, you know, that we eat with every day,
and created a huge training set of images of spoons
and then asked AI to create three new spoons.
And, you know, that's sort of interesting.
And they were kind of weird and distorted and asymmetrical
and they had weird blobs because of all the distortionary forces
I was talking about.
But what's lovely is they actually made them
and they produced them and they found a silversmith in Italy
to produce these spoons.
And for me, that really epitomises that thing I was talking about
about focusing on what's different about this
rather than just allowing us to make the perfect spoon
faster or quicker. It allows us to think differently as a creative partner, as a way to
sort of stimulate new directions of thought. And it's a very simple thing, and I'm sure the market
share is zero, but it's interesting to me. And I think that is this little peek into the world.
To rethink something. To rethink something and to push back. Computers haven't really pushed
back on us very much. They've been very sort of servile. But now they're in this sort of negotiation
phase of computing, which I find really interesting. So it's a small group of designers who've been
exploring this. Was it a better spoon? Define better. I have trouble with the term better as well.
Well, you could still drink soup. Spoons work pretty well. Spoons work great. I'm not a huge
fan of spoon innovation, but you could still, you know, drink soup with it. Yeah. Thank you, Nick.
Thank you. Thank you. Thank you. Thank you. Thank you, everybody.
Eloy, Megan Burney, and Kaelin Lynch.
Nishat Kurwa is Vox Media's executive producer of podcasts.
Special thanks to Anika Robbins.
Our engineers are Fernando Arruda and Rick Kwan, and our theme music is by Trackademics.
If you're already following the show, you're an NPC.
If not, you are banished to the metaverse without legs.
Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
Thanks for listening to On with Kara Swisher from Podium Media, New York Magazine,
the Vox Media Podcast Network, and us.
We'll be back on Thursday with more.
Thank you to SmartSheet for supporting this episode.
Today's conversation about how AI will transform business
was more than just philosophical.
It reflected the challenges that IT and business leaders
are facing in their day-to-day right now.
SmartSheet offers a purpose-built platform
that unites people, data, and AI,
so you not only get work done, you accelerate the velocity of work itself.
This isn't just about being efficient.
It's about moving business forward with speed and precision.
It's about making sure your team is working smarter.
Find out more at smartsheet.com slash Vox.
That's smartsheet.com slash Vox.
