Think Like A Game Designer - Ethan Mollick — Navigating AI in Game Design, Ethical Integration, and Supporting Artists (#70)
Episode Date: August 8, 2024
Ethan Mollick joins us today to share his insights into the rapidly evolving world of artificial intelligence. Ethan is an associate professor at the Wharton School of the University of Pennsylvania, specializing in innovation and entrepreneurship. He also co-directs the Generative AI Lab at Wharton, which focuses on developing prototypes and conducting research to explore how AI can help humans thrive while reducing risks. His body of work includes the book Co-Intelligence, a New York Times bestseller that delves into AI's current state and future, as well as numerous published papers in top academic journals. In this episode, Ethan takes us through his journey from working at MIT's Media Lab with AI pioneer Marvin Minsky to becoming a leading voice on the impact of AI on work and education. He shares practical advice on how creatives, including game designers, can wield AI to enhance their work while navigating its ethical complexities. Ethan and I reflect on co-designing the Breakthrough Game, which has been used by organizations like Google and Twitter to boost innovation and creativity. There's a lot to learn from this episode, so get those notebooks out—Enjoy! This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit justingarydesign.substack.com/subscribe
Transcript
Hello and welcome to Think Like a Game Designer. I'm your host, Justin Gary. In this podcast, I'll be
having conversations with brilliant game designers from across the industry with a goal of finding
universal principles that anyone can apply in their creative life. You can find episodes and more
at thinklikeagamedesigner.com. In today's episode, I speak with Ethan Mollick. Ethan Mollick is an
associate professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation
and entrepreneurship and also examines the effects of artificial intelligence on work and education.
His papers have been published in top journals, and his book on AI, Co-Intelligence, is a New York Times
bestseller. His research is highly cited by other academics. It's been covered by CNN, the Wall Street
Journal and other leading publications. In addition to his research and teaching, Ethan is the co-director
of the Generative AI Lab at Wharton, which builds prototypes and conducts research to discover how
AI can help humans thrive while mitigating risks. Ethan is one of the most thoughtful,
and fun people that I get to talk to.
He and I actually got to work together
through multiple projects,
including the breakthrough game,
which helps to improve innovation
and creativity in groups,
which we've run for not just students
at the Wharton School of Business,
but also companies like Google,
Zillow, Twitter,
and other major companies.
We've also worked on an entrepreneurship simulation game,
which helps teach the principles of entrepreneurship.
And Ethan was actually going to be
one of my first guests on this very podcast.
He was going to be in episode number three,
but our audio recording did not
make it. So we had to re-record now five years later. And I'm really glad that we got the opportunity
to do this because AI is everywhere. If you have paid any attention at all to what is going on in
the world, you know, AI is changing the face of work, the face of creativity, the face of what
it means to be a human. And Ethan is the foremost expert on this who has a very thoughtful, very
reasoned way of approaching things. He is regularly in touch with the CEOs and the leaders at
companies like Google, OpenAI, and Anthropic, who are building the frontier models of AI. He has a
front row seat and has been doing a lot of deep research into how to interact with AI and we get
into the details. We talk about the ethical implications of AI. We talk about the changing nature of
work and how it impacts you as a creative, whether you're an artist, a writer, a game designer,
an entrepreneur. We talk about how to incorporate it into businesses. We talk about how to think about
the next six months versus the next 12 months and the next three years. We really dive
into how you can use it as a great game master and what types of game designs are possible that
weren't possible before. There is so much incredible, useful information here to help deal with a
very fast-moving, very impactful technology that I am ultimately very optimistic for, even though
there are a lot of concerns and a lot of challenges that we have to face. And so I hope you find
this as inspiring and useful and practical as I did. And without any further ado, here is Ethan Mollick.
Hello and welcome. I am here with Ethan Mollick. Ethan, it is a long time overdue to have you on the podcast.
I am so glad to be here. Thanks for having me. Yeah, you were actually going to be my third guest on the podcast five years ago, but the audio ended up being unusable, which is where I learned the lesson to have backup recordings, which, thankfully, you're now doing as well. So I'm glad to have you back. But man, things have changed a lot since, uh,
since we had that conversation.
Yeah, for both of us, lots has evolved in the world and in technology and in games.
Yeah, so I think, you know, right, we have so many areas of interesting overlap,
like the projects that we've worked on together, you know, both the breakthrough game and how,
you know, the things about creativity and teams working together, innovation generally.
But I have to start with kind of the proverbial elephant in the room, the thing that you are now most known for,
which is Co-Intelligence and the sort of understanding of how AI is affecting our lives in our modern world.
First of all, let's just, I'll have already given some of your bio at the beginning of this,
but briefly, like, how did you come to be a guru of AI?
Just walk us through a little bit why this has become an interesting topic for you.
So I was AI adjacent for a long time and nobody cared.
So at the Media Lab at MIT in the early 2000s, I worked with Marvin Minsky.
He was one of the founders of AI, and I was like the AI whisperer.
So I wasn't the technical guy, but I helped explain the technical stuff to other people and have conversations.
It was another one of the AI winters right before the latest AI boom.
And so I kind of kept my eyes on the AI systems as we were building training and teaching games,
but I'm not really a machine learning kind of expert.
But it turns out large language models are a very different kind of beast.
And just having been aware of them and using those kind of tools,
for education for a while, put me in this really interesting spot.
Yeah.
And so first, I'm going to, you know, I'm going to encourage everybody, you know, to read your book
Co-intelligence.
It's a fantastic and very clear description that helps people understand not just where
we are in AI's current development and how best to interact with and use AI, but also, as best we can,
predicting, you know, kind of plausible paths forward.
But, you know, in my industry, man, I have heard
everything from, I'm sure this is true across the board,
but this is going to be the greatest thing of all time
to this is the end of the world.
Generally speaking, very few people take the kind of reasoned middle position
that it seems like you do.
If you're speaking just generally to, let's say, creatives,
you know, and people who are, you know,
let's say most of my audience is in the gaming industry,
people who are looking to do good creative work,
what's the way that they should think about AI
on a kind of practical level nowadays?
So, I mean, right now, with the level
AI is at, whatever you're best at,
you're definitely better than AI.
But it's probably better than you at a lot of things
you're not that good at. And game designers,
entrepreneurs have to wear many hats.
And a lot of our time is spent trying to find good
people to help us out.
AI can be those good people.
Plus, the only tool we ever had that really
is proven to accelerate creativity is coffee.
It turns out, just in case you're wondering,
pot does not. It makes you think your ideas are better,
but they don't actually get better on quality scales.
But now we probably have a prosthesis that
actually improves creativity and is engaging and that we work with in this kind of way. And I think
there's a real power there. So especially in the nexus of thinking about games, this is a really
interesting space because game designers tend to be pressed for time, whether they're doing
video games or board games. They tend to not have all the skills they need to get something
done. They need everything from playtesting to creative support. And AI is really at the sweet
spot of ability right now to help with all those things. So it's an exciting time, I think.
Yeah, and I've found, and this is something that you've written about, but I found this to be very practically true,
it takes like several hours of working with AI to really find that sweet spot of utility versus, you know, it's a toy or you're wasting your time with it.
It just doesn't behave like other pieces of software where I can just sort of engage with it,
and once I get past the initial UI/UX, I can find the value right away.
Whereas here, I think what you refer to as the jagged frontier feels like it keeps changing
and knowing when AI is going to be useful for me and when it's not, it's not, it's surprisingly
non-intuitive.
Very non-intuitive, because an AI system turns out to be really bad at counting out a 25-word sentence,
but can write a pretty solid sonnet.
And that's hard for us to deal with, right?
Its abilities and its missing abilities aren't human ones.
Although I find the best analogy is actually to think of the AI like a person.
And what that gets you is the ability to think through, you know, its flaws and its BSing, almost like a person would.
So, you know, you've managed many people over your career and grown organizations.
And you have an intuitive sense of their strengths.
Like, this person is good at this.
They're bad at this.
I could throw this task and then they'll do a great job.
This one, they're going to say they did a good job, but didn't do it.
And there is that kind of aspect to the AI piece.
So I usually say about 10 hours of playing around with the model, but it is changing, right?
It's hard to get the maximum out of it.
On the day we recorded this, yesterday, Gen-3 of Runway came out,
which can now do incredibly good video, like just on prompting.
Last week, Claude 3.5 Sonnet came out, which was a really powerful model
that really changes the game on writing.
And so it is hard to stay up with stuff.
I think the thing to realize, though, is over the lifespan of game development
especially, you kind of have to keep up, because the frontier is moving pretty quickly.
And, you know, the stuff that you do and the stuff that I've
done with you, it's not fast. You know, no one makes a quickie game, outside of a,
you know, kind of jam situation, that is like ready to go and playtested and goes through all the
steps you describe in your books, which are these excellent, excellent overviews of the problem.
Yeah, and I think it's like, if you tackle it at each stage of the process, right? So the way I talk
about it with the core design loop, right, and this kind of, you know, mirrors design
thinking, where there's this sort of inspiration, framing, brainstorming, prototyping, testing, and
iterating. And I think you can approach every single piece of that, and AI can add value to that
part of the puzzle. In terms of finding inspiration and just like pure like raw idea generation,
I don't think I've ever seen a tool as powerful as AI. I mean, I'm professionally creative now.
I've studied this my whole life; I get to be creative. And I'm not saying that AI is going to come up with
the best idea that I'm going to use. But in terms of just the pure volume of reasonable ideas
and suggestions that can become a springboard for the stuff that you want to do, it's
amazing. It does in seconds what it would take, you know, a team of people hours at best to come up with.
Yeah, I think that, you know, it's very funny.
We started our co-designing career together building the breakthrough game, which is sitting
behind me right here.
And I'm super proud of that, right?
That was used in organizations all over the world to generate ideas.
And I think there's still value in it.
But at the other side of the fence, like, actually, I can give the AI the directions to the
breakthrough game.
And it does a pretty good job of just running it on its own and generating a list of
200 ideas for you.
So, I mean, I think as a creative tool,
we've got something very new in the world.
And I think that's both really exciting
and but also a little nerve-wracking
because computers aren't supposed to be creative.
Yeah, yeah.
And this is, I guess, I guess let's address some of this stuff now.
I don't want to linger too much on it
because I prefer the sort of more optimistic use case.
But this is some of the dark side that feels here, right?
There's a lot of people.
And I know this is the most common thing I see in the game industry
comes around illustrative work specifically.
People are worried that, you know, the AI is going to destroy jobs,
it's going to destroy the creative work.
You know, the classic idea of the AI robot is that it's just going to do all the grunt work.
It's going to do all the busy work.
It's going to do all the things that we don't want to do.
And the reality that we're seeing from the current level of AI technology is actually
really good at the creative work.
I mean, it can create beautiful pieces of art in seconds with a few words of prompts.
It can create sonnets, as you mentioned.
It can do, you know, I mean, it's not great at making games yet, but I have no reason to believe
it won't be.
So for the people that are out there that are afraid of this, you know, really just destroying the jobs that we actually really like,
Like how should we think about that or how should companies be thinking about whether they want to be using these tools or not using these tools?
Like where's the ethical line we should be drawing here or is there one?
So there's like three or four complicated questions you asked there.
The first is the ethical line.
That's a tough one, right?
Because these systems have ethical compromises throughout their whole process.
They're trained on, you know, stuff that you and I wrote without our permission.
How do we feel about that? Is that legal? Is that ethical? Then they're fine-tuned using labor
that is fairly exploitative, you know, sometimes using our own data, right? And then on top of that,
it produces work that can be derivative in some way or reproductions of other copyright work,
or it just displaces our labor and stuff we care about a lot. Those are a lot of ethical
constraints. What I think the problem is, is that the systems are so good. And, you know, some
systems have systematically tried to address these. So Adobe Firefly, for
example, is only trained on data that they supposedly have legal and ethical claim to,
right, where they've sourced that, they pay licensing fees. So that gets rid of the,
oh my God, it's just copying people's art. But at the same time, it can still produce displacing
art. I think it's a really difficult challenge, especially in the creative world that we're in.
Now, the positive view of this is, you know, the same kind of thing happened when the synthesizer
came out, right? It was like going to destroy professional music and it turned out to create new forms.
Living through that sucks, right?
It's not like living through the Industrial Revolution was good,
and I think there is going to be displacement.
I don't think we can say that jobs are going to be unchanged.
I do think that it enables new kinds of creative work.
I think most freelance art is going to change.
What do you do to be 10 times more productive?
How do you offer the guidance and taste that the AI doesn't have right now?
So it might start to be more that you're supervising a small firm of AIs
rather than doing your own work, or your own illustrative work
gets turned to other uses.
But that's going to require some change,
and that's not necessarily comfortable.
For the individual creator, I've been in contact with a one-man game studio.
He's built a lot of games before,
and now he's doing, one piece at a time,
you know, one game entirely on his own that wouldn't have been possible before.
How do we feel about that, right?
Is that great because there's more creative content coming out from more indie developers?
What does it mean that people who couldn't code can code,
that people who couldn't draw can produce art of a sort?
I mean, I think we're just in for a very large change.
And I think that the anxiety
and distaste is understandable.
I don't know if it's going to make a difference.
Yeah, and that's, you know,
a lot of this is where I come down,
you know, look, we spend hundreds of thousands of dollars on art.
We spend, you know, even at least that much
and more on programming and engineers and all this stuff.
And like, that's a massive, massive overhead to have to make the kinds of games
that we make.
The idea that someone can make those games at a fraction of that price is just a
reality now, and it certainly is going to only become more true over time.
And so for the people, you know, for some people, that means, you know, they're not going
to get as much work or their work has to change.
But for others, it means that they're able to create things that were just completely
implausible or impractical otherwise.
And there's, that means there's new types of games that can be made.
That means there's entirely, you know, plenty of people out there with amazing ideas that
don't have a million dollar budget to launch a massive TCG or a digital client.
And that could theoretically be done.
And so to me, I'm more excited about those possibilities.
As someone who's never learned to program, but who's had to manage a lot of engineers,
boy, would it be nice if I could build my prototypes on my own and move quickly
without having to necessarily hire engineers and spend their time and just be able to iterate cheaper and faster?
That's the name of the creative game.
And it seems like exactly the type of tool that we've been presented with, you know,
whether or not it's opening Pandora's box, it seems like one we can't close.
Yeah, I mean, I think that not closing is an important part, right?
Like, there's going to be room for artists, for people with taste.
I don't know whether the market looks the same, though.
I still hire artists to do the work that we do,
and I intend to keep doing that,
but I'm going to be competing against people
who are not hiring artists as much, right?
And one by one, the barriers fell, right?
Steam was holding out for three months,
and then afterwards it was like, okay, it's fine now.
And the tools by companies like Leonardo
that are doing stuff specifically for digital art assets,
I mean, they're there, right?
So I think we have to decide if we're doing a rear-guard action,
what that rearguard action looks like for the people doing that.
But realistically, for anyone who's running a business in this place, right,
or wants to enter the space, suddenly it's changed.
I expect to see a burst of creative energy of new game types we never saw before, new approaches.
And there's secrets to be discovered, right?
How do we use this to do digital game testing,
which is something you and I have talked about before,
having personas play the game, and can we accelerate the process of doing this?
Getting a game to market is incredibly expensive and difficult right now.
What happens in this new world?
Part of what we get to do is think about how this is what an ethical use looks like
and start to model the kind of behavior that we think is the right kind of model.
Having human art directors with AI help looks a lot different than I'm just going to cheap out
and use cheap AI art everywhere.
We have to do some thinking about this.
Yeah, yeah, I think that's right.
And I think, you know, so let's let's dig into this because I think it's fun and interesting
and immediately practical to me.
and I'm sure to much of the audience, right?
So on one level, there's the, hey, look, you're an individual creator, you're somebody
out there, you know, you should be playing with these tools and using the frontier models
and just getting a sense of like where it can support you in the things you do.
And it's worth some amount of investment to stay current on this because it's moving quickly.
Now, as a CEO and entrepreneur, I have the challenge of deciding how to integrate this
into my company, right?
I, like you, am committed to continuing to use human artists,
and I would much rather expand the capacity of my current team to do more rather than,
you know, reduce team size, right? But I have found it to be difficult to create that
encouragement and to create that culture to allow people to use AI, to encourage people to be
able to be better at using it. And there's always these practical barriers, right, where it's like
you try to use it, and it doesn't quite do what you need it to do, or it's just wrong often enough that
it's not reliable.
Like, when you're advising organizations, and, you know, in my case, sort of this, you know,
small to mid-tier organization, but whatever tier makes sense, right?
How do you, how should we be thinking about this in terms of being able to maximize for a
team of creatives that we want to be able to be more empowered collectively?
Again, I think there's a lot happening there, right?
So one thing we've seen is that there are lots of ways of accelerating this that aren't
100% clear sort of out of the box, right?
So you helped teach one of my classes at Wharton,
where I had a, just after ChatGPT came out,
when it was still GPT-3.5,
and we ran a class where we taught people to be game designers
and had them build a game.
And I had MBA students who, you know,
many of them are in creative industries.
I know there's a view of like MBAs or, you know,
but like there was, you know, hip hop promoters
and all kinds of interesting people there.
And I said to them, if you can't code,
you need working software.
If you've never built a website,
you need a working website.
You need to have built a full
game, you know, with music, 3D printing and sintering and all these other tools.
And they did it.
And some of them were actually quite interesting with an interesting design space.
And they used AI to sort of supplement their situations.
And, you know, I asked them to write about some cases where the AI helped and some cases where it hurt.
So right now, the title of the book is Co-Intelligence, partially because right now it's a supplement.
So you spend your 10 hours learning what this does.
And you'll find the shape of the frontier.
And you'll be like, listen, it's really great for some stuff and bad at other things.
And that doesn't mean it's doing the work for you.
I spoke to a Harvard quantum physicist who's like, all my best ideas come from AI.
And I'm like, does it know quantum physics as well?
And he's like, no, no, no.
But it asks me good questions, right?
And so I think part of this is like, where does it unburden you?
And you will find that there's things it can't do.
I cannot tell it, make a compelling board game, because, for example, all the AI systems hate
conflict, because they've been trained to avoid that so as not to cause controversy.
So everything ends very nicely in an AI game.
But you can kind of convince it to do other things if you want to.
It falls back on tropes quite often because there are a lot more tropes in the training set.
So you have to be the one who kind of pushes it out of its tropeiness into something interesting.
So it is a companion.
It's a person you're working with that adds resources.
It doesn't solve the problem for you.
Now, there is a secondary issue here, which is like, how good do these systems get?
And that's the big unknown question we have right now.
Yeah, yeah.
So it's like, I definitely want to get into the kind of how good does it get?
What's the future like?
but I'm really eager to linger on the like kind of short term,
like here's where we are,
this,
you know,
the next six months,
the next year,
even though sometimes that's hard to predict.
And if that's,
if that horizon is too far to be practical,
then that's worth flagging,
right?
But like to me,
there's the future world,
which is very,
very hard to assess for.
There's the current world,
which I think there are practical steps.
I think a couple things I want to highlight about what you said,
right?
One is using the AI as a tool to ask good questions.
and like be a stand-in for either your customer, your player, your reader for a book,
your customer for a new product, or even like, you know,
I want advice from Gandhi and the Buddha on what I should be doing with
my relationship. Like even those things, it's shockingly good at giving you a kind
of coach slash reflective mirror on what's happening and what's going on.
I found that to be just like this incredible tool, just for self-reflection.
So I think that's like a very powerful piece.
And then the second issue I wanted to get at is when it comes to, sort of,
you use the phrase doing what would have been impossible before
and pushing you to like just try to do things that are like way outside the scope of what you thought you could do.
For people that are like practically, I'd love to just sort of dig in more like practically speaking,
what does that mean for someone to actually use it?
So either the tools of, you know, kind of self-reflection in general like the quantum physicist or in terms of like,
okay, great. Now, you told me I can do things I couldn't otherwise do in my organization.
What would be the next steps to say, okay, let's do that? Is it a game jam where people
have to have something, a working program and 3D printed stuff at the end of the week?
Or is it, you know, some other tasks that I make unreasonable assignments for or something?
Yeah, so let's break it down. I mean, you can go through each step of your process almost and
think about what you would use it for. So I'll just give you a couple highlights.
So first of all, ideation. You should be using this for ideation, right? Come up with 45 games;
no, more like this, more like that.
It's very important to interact with the AI.
You're not just typing a command and getting something.
You're saying, no, no, more like the fifth idea, you know, like make it more diverse.
We have a paper showing that how you prompt the AI matters.
And if you prompt it to do a chain-of-thought approach where it first, you know, lists out
some ideas, then comes up with even more diverse ideas, then modifies those again, you end up with
better ideas than if you're just like, give me 30 ideas, right?
And all the techniques that you talk about in your book and that I discuss, you know, and
with you on these things.
about asking it to come up with constrained ideas.
Imagine an infinite budget, how would you do this?
Imagine a digital computer system in the future.
How would you do it?
That all helps you come up with really good ideas.
So ideation is great, and that interaction is really good.
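[Editor's note: for readers who want to try the staged, chain-of-thought-style ideation prompting described above, here is a minimal sketch assuming the OpenAI Python client. The model name, prompt wording, and three-stage structure are illustrative assumptions, not the exact prompts from the paper Ethan mentions.]

```python
# Minimal sketch: staged ideation prompting, assuming the OpenAI Python client (pip install openai).
# The prompts below are illustrative; the point is the back-and-forth steering, not the exact wording.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(history, prompt):
    """Send one turn, keeping the running conversation so later stages build on earlier ones."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

history = [{"role": "system", "content": "You are a brainstorming partner for a tabletop game designer."}]

# Stage 1: raw ideas. Stage 2: push for diversity. Stage 3: rework under a constraint.
ask(history, "List 15 ideas for a two-player card game about trade routes.")
ask(history, "Now list 15 more ideas that are as different as possible from the first batch.")
print(ask(history, "Take the five most promising ideas and rework each one assuming an unlimited production budget."))
```

[As Ethan says, the value is in the interaction: you keep steering ("more like the fifth idea") rather than accepting the first list.]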
Another thing that's very good
is prototyping, especially if you use something like Claude 3.5 Sonnet right now.
If you haven't played with it, it does artifacts.
It actually creates images and other stuff.
Mock up what the website would look like.
It'll make this dial functional.
It will actually do all of that for you.
So you can actually play with some prototyping, which is an important part of the game process very quickly.
Then it actually works reasonably well as a game tester.
So if you show it a screenshot of the game and say, what confuses you?
Here's the instruction manual.
What's weird?
Pretend to be a high school student playing this.
Pretend to be an experienced board game designer.
What would you say?
You get pretty good testing data, not as good as a human tester, right?
Because you can't see everything that emerges.
But for that first order catching the errors that would have taken you two to three rounds, it's pretty good at doing that set of stuff.
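[Editor's note: a similarly minimal sketch of the persona-based playtest read Ethan describes, again assuming the OpenAI Python client and a vision-capable model. The file names, persona wording, and the idea of pasting the rulebook in as plain text are assumptions for illustration.]

```python
# Minimal sketch: asking the model to critique a game from a specific persona, with a screenshot attached.
import base64
from openai import OpenAI

client = OpenAI()

with open("board_screenshot.png", "rb") as f:
    screenshot_b64 = base64.b64encode(f.read()).decode()

rules_text = open("rulebook_draft.txt").read()  # a plain-text export of the draft rulebook

persona = "a high school student who has never played a deck-building game"
prompt = (
    f"Pretend to be {persona}. Here is the rulebook:\n\n{rules_text}\n\n"
    "Looking at the attached screenshot of the board, what confuses you? "
    "What would you try to do on your first turn, and where would you get stuck?"
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{screenshot_b64}"}},
        ],
    }],
)
print(reply.choices[0].message.content)
```

[Swapping the persona line for "an experienced board game designer" gives the second kind of read Ethan mentions.]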
It's also very, very good at advice.
So there's this great paper showing experienced entrepreneurs in Kenya who were doing well beforehand,
got an 18% improvement in their profitability when they got advice from the AI.
Now, if they were doing badly, they actually did worse, because they couldn't even implement the ideas or didn't know what questions to ask.
But if you're an experienced designer, right?
Like, you know, what should I be thinking about in terms of how to get production outsourced on this?
What are some things to worry about in this contract?
You know, those kinds of basic questions, you get a second opinion from an experienced, if slightly, you
know, slightly overconfident, colleague.
That's a very useful voice to throw in throughout the entire process, right?
And so you have this kind of multiplier mixed in,
that's actually quite valuable in every step of the way, in a very practical way.
Then the two best ways to use this are: just get GPT-4 on your phone,
pay the $20 a month, and use the voice mode, and just talk to it.
You'll be surprised at how far you get by just like, I'm in the car, I'm on a walk.
I'm just like, let me chat with you about my design problem.
You know, at the very minimum, you get the rubber-ducking experience that QA
engineers have of talking to a rubber duck, and it turns out you come up with good ideas
just from talking it through with someone. At best, you get really great stuff. It's also good for motivation.
I spoke to the guy who invented positive psychology, and he's retired now, but he was one of the most
famous therapists on the planet. And he says he uses GPT-4, a version that has read his books,
as giving all of his psychological advice now, because it beats him. So I think that there is like,
you know, there's an emotional intelligence piece. There's a QA piece. But the
valuable thing, at least for right now, where AI is, is that it's not that great at tying together all
these pieces. You still need the human designer or the human entrepreneur to make all this
happen. Whether that's true long term or not, I don't know. So, I mean, I feel like artists are
the most directly threatened, although if you talk to any artist, they'll absolutely say,
like, look, it's not consistent enough yet. You can't quite pull the pieces together. And true,
if you really want high quality work, you're going to need humans in the loop right now.
So, you know, I think that's kind of a liberating perspective at some ways.
Yeah, yeah, in many ways, and thank you.
That's like amazingly detailed, practical, implementable right away.
So I hopefully, you know, people are taking notes or re-listening to the section
because it's exactly where I wanted to get at, like things people you can all do right now.
About half of those I've been doing and I'm happy about and half of those I have not.
I haven't played with 3.5 Sonnet yet.
So that's basically what I'm doing right after we're done here.
So thank you for that.
But now I think it is, it's worth moving into this next phase of like,
the future, the near future to medium future, because you're right, it feels like we're in this
golden age, and it may just be a few months of golden age, where you as a super talented,
as a talented creator, this thing can accelerate you enormously.
And as someone who is very new or has no skills, it can accelerate you enormously.
I think the people in the middle of the pack, if you were just okay at stuff, you're kind of
in a bad spot right now, because now everybody's just okay at everything.
But as this evolves, there's no reason to believe, right, that this is not going to keep getting better.
And there's a lot of reasons to believe it's going to keep getting better.
And that threatens a lot of the foundation of what we do for work and how we value ourselves as humans.
And so there's a lot of steps to this, right?
There's the baseline: even with the technology we have today, a lot of industries could completely get eclipsed, even things like accounting and law,
and it's giving better medical advice than a lot of doctors.
And this is just a matter of kind of regulations, I think.
It seems like it's just a matter of regulations clearing the path.
And then there's this area where, as you said, right now, the human making the disparate
connections and the expert being better than the machine is true.
But who knows if that's true a year from now?
And so, you know, there's a lot of areas we could go here.
But I'm fascinated by this kind of like how we should think about the next, you know,
not just the next six months to a year, but the next, you know, year to five years range.
And I mean, I talk to all the major AI companies on a regular basis, and I will tell you, nobody actually knows how good it's going to get.
I will say that, having had a lot of conversations over the last three months, I think that people at Microsoft, at OpenAI,
at Anthropic seem to have gone from, like, who knows how far this goes, to, like, we've got another year or two of exponential growth left.
Whether they're right or not, I don't know, right?
And where that ends up, I don't know.
You know, it was funny, you were saying if you're just average, you're not going to do great.
But you weren't going to do great in games anyway unless you got very lucky.
because the people in the space, just like in Hollywood and other places,
to get where you are sitting right now, Justin,
and the kinds of people you interview here have to be really good.
I mean, luck helps, you know, perseverance helps,
all the personality characteristics of being a successful person, help,
but, you know, they're not the only way to get ahead.
But there is some talent that has to be at the heart of this thing.
And if you are making it in games, you are, you know, a top 1% talent at the very least,
probably top 0.1% or 0.01% in something.
It may not be at everything.
Maybe you're not great at the art piece,
but you have a knowledge of systems.
Maybe you have an innate sense of what's fun.
Maybe you're just good at executing stuff and pulling stuff together in a way other people can't
and are just willing to grind through the playtesting.
Maybe it's the classic sense of taste.
But we're ways off from threatening the 0.01%.
That being said, right, that's the explicit goal of all the AI companies, right?
AGI, a common definition, is a machine better than a human in every creative task,
better than the best human in every creative task or knowledge-based task.
And if that happens, it's not just games.
Game design is the least of what gets gutted.
In some ways, you know, the bankers get replaced and the lawyers get replaced long before the game designers
because there still is a matter of taste and difference.
And having a human guiding a process still matters, right?
And so I don't know the full answer to that question, right?
It's hard to know.
But I think we need to plan for scenario where that's possible.
If I was building AAA games, you know, in a video game world with a four to five-year lead time,
I'd be spending a lot of time thinking about this problem, right?
Because I've got hundreds of coders, and first of all, they're already all using AI to help them code.
We don't know.
At this point, the code may not be good enough for a large-scale code set.
We don't know.
But I would be using it.
But then the question is, okay, not just this, but I've been seeing the first really interesting generative games come out.
A lot of people probably have heard of AI Dungeon, which was using a very early version of GPT to sort of be a DM.
The current versions are very good dungeon masters.
And you can ask it to come up with a completely novel Powered by the Apocalypse game, and you get really good results.
Even funnier, I threw at it one of the most complicated manuals for any game, a role-playing game I've ever played,
or I haven't actually played it, because no one I know actually plays this thing.
But it's like 800 pages of manuals, one of those labors of love kind of things.
And it was able to roll up a character, and doing that required understanding page 112, 274, you know, 380.
And this is Google's Gemini 1.5 Pro, which has a two-
million-token context window. It could hold the entire manuals. It's already doing some pretty
amazing things. The most fun of the actual AI powered games I've seen so far is an infinite
crafter game, you know, where you take cards and stack them on top of each other. And there's an
AI version which can just keep generating new things to craft, basically, that makes sense
in context, right? So I drop the moon on the earth and see what happens. It can keep generating
new ideas. So we're at the early days. This is a really interesting design
space being enabled by this.
So, like, you know, what does it do when you have an AI companion?
And Justin, you and I have lived through this a bit: we've got this very successful
game and we're trying to adjust it in the early days of AI.
We're like, okay, the AI will be a tool to help you out.
Now it's like, oh, no, the AI plays the game on its own.
What is the role for the human in the loop?
And I think these are things we all have to think about more as we move forward.
Right, right.
Well, and this is exactly, let's use this as the illustrative example, right?
Because, like, as you said, like, the breakthrough game is something we're both very proud of.
We've run it for, you know, Fortune 500 companies.
We've had a great success with it.
It definitely makes people more creative.
It makes teams more creative.
I still use it with my team.
And then as AI came in,
we went through this process over the period of just a few months from ChatGPT,
from 3.0 or 3.5, I forget when we started having the conversation of like,
hey, okay, let's have the AI be a seat at the table.
And then otherwise the game is the same and that's perfect.
Oh, my God, so cool.
And then now within a few months of us starting to work on that,
it was like, oh, actually, no, we just feed all the rules to the AI,
let the AI do almost all of the work.
and then it outputs a result there.
And it's basically just a, you know,
kind of a GPT, whatever the hell they call them,
terrible naming conventions for pre-built rules, right?
It's not really even a business anymore.
So if you're trying to build a business that builds on AI
or that uses AI technology or that's a game that's going to launch in three years,
like how could you possibly plan for that?
You can't.
You're taking a bet.
I mean, we're used to, entrepreneurs are used to taking bets.
Designers are used to taking bets, right?
We're betting, you know, it's creative work.
There's a lot of ways things can go wrong.
So you're making a bet.
I would want to bet at least on a little bit on the side of technology of improving, right?
I think this needs to be added to your bet equation, right?
So maybe that means you move faster than you did before and you use AI to move faster,
and we're going to iterate quickly, right?
You know, you look at, you know, like maybe there's a space I want to move into right now.
Maybe I take advantage of the fact that there's now so many more options for printing on demand
or such in the board game space and implement, you know, an AI companion piece that's useful in some interesting way.
But, you know, I think there's going to be categories of games that get more affected than others.
There's going to be categories of entrepreneurship that are more affected than others.
In some ways, I think entrepreneurs are not being ambitious enough in what they're attempting.
There's a lot of, like, little wrapper kinds of things that the AI companies are going to just take away from us.
Like, if you've got a really good workflow based around GPT-4, I don't know what to tell you, but GPT-4 is not the endpoint of OpenAI's products.
And they want their product to do everything you do and more.
That is their explicit goal.
They believe they can do it whether they can or not.
So I think part of what you want to do is bet on something where there is taste and creativity
and you're pushing something further out past the border.
Yeah, yeah.
It's, man, it's such an interesting space.
So my general philosophy here is that there's two resources that really matter in the modern
society for us, which is attention and trust, that there's this ability to earn people's
attention that they have a reason to be here and reason to pay attention to whatever you're doing
and that they're going to trust when you actually say you're going to do a thing, your product
is going to deliver a thing, your games deliver experience, they are going to, that's going to be the
case, right? And that those things you can leverage with you have AI back end helping you to do
stuff and make things. In terms of the, you've mentioned, you know, kind of taste making as a,
as a skill set that is, you know, something to lean on, I'm skeptical of that. Like, you know,
the AI algorithm for Spotify sending me, you know, or YouTube or whatever, choosing what I'm
going to like next is pretty darn good.
Like, is tastemaking really going to be something that's a sustainable advantage for humans?
Maybe not.
I mean, it's hard to point at something, right?
There is something ineffable about being really good at this stuff and anticipating
where the market's going, you know, in kind of in ways that other people can't do.
But, you know, it is a hard problem to solve, right?
I mean, for right now, we're in this great complimentary space.
I'm in a Dungeons & Dragons game that meets every week.
And it's awesome because we've got an AI listening to our entire conversations
and summarizing everything that happened.
You can query the game about what happened in the past.
So we don't have to know what this character said under these circumstances.
We could actually ask it in character what happened or not.
Everyone's generating songs and illustrations, and none of us can draw.
So this is super fun.
It's a huge enhancer.
I think part of what you need to do is thinking about enhancing stuff.
I think the other thing you didn't mention is that most of the games,
especially in the board game space that you do most of your work in, right?
Or the hybrid kind of, you know, I don't know what we call this now.
But like, there's a people element to it, and I think thinking about how AI removes drudgery
from that process is also really interesting.
I think a lot about the trend and sort of legacy board game kind of approaches,
which is all about kind of like adding.
in complexity to make it video game-like.
There's no doubt that you could more elegantly implement a big box game in a video-game format.
But people want to have these kind of experiences that are different.
So, when I talk about taste, part of it's also a taste of what humans might want in that
kind of interaction together to have really interesting human-to-human interactions in the
room.
And, you know, the board game is a way of getting people into that human-to-human interaction.
There are ways to play with that as well.
Yeah, yeah, I think that's, I think that's well said.
You know, frankly, I'm shocked that I have a job.
And just forget AI.
I'm shocked that like board games and tabletop games and RPG games are so popular nowadays.
They're more popular than they've ever been.
But when I saw like the iPhone and the iPad come out and I saw, you know, VR technology and everything getting to where it is, I was like, oh, well, people are just going to want to do that now.
They're not going to actually want to play tabletop games and not going to get together and play RPGs.
And the exact opposite was true.
And I'm very grateful for it.
And that speaks to the point that you're making,
which is like, no, people want to be with humans
and they want to have physical, tactile experiences
and be able to enjoy that connection and what that provides.
And so how does, you know, it'll be a while
before the robots take over that role, I think.
But I think that how can we enhance that is a big deal.
Yeah, but I think you have to be opinionated.
Like that is a perspective that we wouldn't have guessed, right?
Like, who would have guessed that there would be a board game
and, you know, Dungeons & Dragons renaissance
that was powered by the pandemic and Kickstarter and Stranger Things all sort of happening at the same time.
And live plays, like it was a bunch of elements that all came together to make this thing happen.
That is, you know, and once people start playing it, they're into it, right?
And I guess as millennials age out, Gen Z will find this dumb again.
But it also has to do with the failure of, like, you know, as RPGs got more expensive on computers,
it just didn't become the genre that people thought it would. Instead, you know, computers
became FPS shooters, iPads became all about, you know, free-to-play games, leaving this
huge design space available. So I think you could have an opinion about where things are heading,
and we could be wrong about this. You know, you and I have been around long enough that we've
lived through various crashes of various game systems, you know, types where it's like,
this is going to last forever. And then it's like, oh, actually, no one ever wants to play this kind
of game ever again, right? You know, mobas were supposed to be a big thing for a while. And sort of like
there's still a few players, but it wasn't like it became the dominant space in the same kind of way.
You look at, you know, video games are weirdly stagnated, even as tabletop has taken off.
And if you look at the top 10 games on Steam, it's the same top 10 as a few years ago.
Like, there's this weird kind of loopiness happening.
I don't know what AI unleashes, but it's not like this is an untroubled industry that has
never had cycles before and difficulty.
You know, there's a bet you're making in a world where, you know, this matters.
Right, yeah.
And I think, I think just emphasizing
that, you know, this is a deeper point well beyond AI, right? You know, you're a designer,
you're an entrepreneur, you are taking a bet that you are right about the way the world is going
to go and that your bet has to be, you know, both like non-consensus and correct for you to
be a success, right? You have to have an insight that, you know, other people don't all share
and then be right about it. And in creative industries, that's the roughest because most ideas suck.
So most people, your idea is generally non-consensus for the beginning because everyone's like,
I do not want to play, you know, a cheese-themed game in which I'm a cheesemonger in medieval France,
and there are 7,000 collectible cards, you know, like, you're like, yeah, look, I'm doing something
non-consensus, yes, also terrible, right? So the problem is, you know, having, like, so that's
going back to the things that you've talked about in your books, and I'm sure in other podcasts,
which is testing and learning is really important, too. Like, one of the great things about
these tools is they lower the friction for testing and learning. And I think that that is just so
vital because that's where we are right now. How do we test and learn faster than anyone else?
Becomes the most important, relevant question. And if you could do that without quitting your job,
before you know, before you invest a lot of money in the new product, like that's exciting in
itself. That's where AI can help accelerate things too. It's not just about accelerating good
ideas. It's about accelerating bad ideas. One of the things that we've noticed, you know,
when we study things like Y Combinator, these famous accelerator spaces, is they don't actually
make you necessarily more likely to win, you just win or lose much faster.
And if you lose, you're still part of a community of people that give you another chance.
And I think there's some value in that kind of approach too.
Yeah, this is, I mean, honestly, if I were going to boil down everything that I teach into one thing,
it would be pretty much that, like, accelerate the rate at which you can iterate and learn.
And like, that's it.
Like, the faster you do that, the better you are at what you're doing.
And that's like, you know, there's a lot of nuances to unpack that.
But I really do think there's nothing more important.
to creative work,
entrepreneurship, anything.
Like, how fast can I test my idea
and how well can I take the information
that I receive and actually implement it
without losing my ego and my entire sense of self-worth
the same way as my ideas.
Yes, and that's what you train people to do in your lessons
and what I try and train people to do in my entrepreneurship classes
is how do you listen for failure and interesting things
and do that in a way that you actually make progress.
So it's part of the reason why when we do
entrepreneurship, we force people to have hypotheses that they're testing. Because if they're just going
out and looking at how people like their product, they're only going to hear the good stuff and the
bad stuff's going to be from, you know, idiots who don't understand
my genius. And they don't really get that it's not going to look like this in real life. It's
going to be much better. And they, and you delude yourself, right? So part of this is about how do we get
people to listen and fail in interesting ways that they learn from pretty quickly? And again, I think
that's where the AI, you know, boundary is actually helpful. Yeah, the AI is actually really
great at this. And I think you've mentioned this in one of your, one of your, some of your writings,
but correct me if I'm wrong, but that, you know, the people, I just, I'll speak frankly for myself
anecdotally. I'm a lot less impacted when the AI critiques my stuff than I am when a human
does. It doesn't, I don't mind sharing my, you know, I mean, I've shared my entire upcoming book
with AI and gotten feedback on chapters and different things. And like, not that I agree with everything
the AI says, but the ability, the psychological impact of negative feedback is a real factor. Right. It's one of
the hardest parts of dealing with it. How do you manage and emotionally regulate and then be able to
sift out data that's worth acting on from, you know, from the noise? And AI's like, it's an incredibly
powerful, like, there's almost no, I feel almost no emotional resistance to it. And I think,
am I correct that there was some kind of study or something about people being willing to like put
their stuff up and work with it more? Or is that? Yeah, we're finding this kind of algorithmic appreciation, like,
also partially because it's so sniveling. It's like, oh, you're so great, but like maybe these ideas
would be good ones. You're like, I am so great. And thank you for recognizing my genius and giving
me these suggestions. Right? So, you know, and we found, by the way, when the kind of AI we
had was algorithmic, where it was just telling you yes or no decisions right in the app, like,
doctors hated using it because it would tell them 83% chance of cancer. And you'd be like, no,
I don't believe you. I'm better at this and you're not understanding what the real situation is.
But when you get advice from these talking large language models, they feel more like companions.
That's only going to grow more, right? Like, it's like, you know, you're a
genius, let me help you.
It's kind of, you know, like, oh, it's such a well-written piece and you're so smart,
but, you know, paragraph two could use some help.
So part of it is telling it to be more critical is actually kind of helpful too.
Yeah, yeah, you definitely have to prompt it to be more critical.
I think some of the pieces of advice, again, I've read from your book and elsewhere,
which is amazing, where you like, give it a persona, right?
You are a casual consumer of board games that doesn't really like to read rules.
Tell me what you think of this.
Or you are a hardcore player who loves these kinds of games.
and you only want the most competitive and most, you know, crunchy thing,
tell me what you think of this.
And it does a very good job of like morphing itself into that persona
and giving you at least a framework by which to be evaluating your work.
Absolutely.
And I think that idea fits into the bigger picture too,
which is we have to do R&D on this thing.
Like, I can tell you the OpenAI people have never thought of its use for games.
When I tweeted out about the RPG book,
the CEO of Google retweeted me on it.
They clearly never thought about using it for games.
So if you can nail this, you're in a place nobody else is.
And one of the things I would urge people to do is figure out what communities you're part of
that you could talk about this with.
We have to do R&D as a group to figure out the right way forward.
Otherwise, we're going to be, you know, like people are going to say you're obsolete,
but it's really that we're not discovering use cases fast enough.
Yeah, yeah.
Now, creating a culture where this is, you know, we can share discoveries where we can be open about this where we're not afraid, right?
Like for companies that are not afraid that their employees are using AI because I guarantee you they're using AI, whether they tell you or not.
Like, it's just too, too tempting of a tool. Being able to have groups where we can, and we can talk about what we're doing without also being afraid publicly.
Like I definitely have a fear around, like, saying, hey, we're going to do a game with AI art, because we're going to get lynched by people from the community, even though we spend
tons of money on art, most of it all done by real artists,
but we are considering using it for some projects that would otherwise never see the light
of day.
I think being able to create a culture that we can be open about those discussions,
I think is really important to be able to move forward in a way that's fair, of course,
to everybody and understands the tradeoffs, but, you know, that helps us actually leverage
the incredible, powerful tool that we have available.
Yeah, and I think trying to think about how we think about that,
to help people thrive in our industries is also important, just like you were talking about
there, right?
So, you know, you want to support artists because they are
your lifeblood and because you care about art and you care about what we're doing in these fields,
but we also want to be in a situation where you can build games with the kind of budget and,
you know, capabilities you have and you need to keep everyone in your organization fed too, right?
And, you know, like, costs matter. So it's this kind of logic that's hard on both sides.
And I think we have to work together to figure out what thriving looks like. What do we,
how do we make sure artists continue to do their work and don't get stuff stolen from them?
And how do we make sure that the high end has room for artists doing the kind of work that they're doing?
Maybe the kinds of work they do shifts a bit and the kinds of commissions they do shift.
Maybe instead of doing, you know, speculative art, they're doing one set of projects where they're monitoring art for many other people.
Maybe instead of having, you know, there's a lot of like what we do that came from constraints, right?
Like, you know, having a single portrait picture of each, you know, character on a character sheet, suddenly I can do 10,000 portraits.
What does that do for representativeness?
And what gains can I get from that of having an artist oversee that process
instead of overseeing, drawing a single picture?
You know, like, what do I?
There's questions we have to answer,
and we need to start answering them as a group,
or we're going to get divided up,
and it's going to be the lowest cost winner who wins,
and everyone else is going to lose.
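[Editor's note: a minimal sketch of the kind of workflow gestured at above, batch-generating many character portraits and routing them to an artist for review rather than having the artist draw each one. The image-generation API call, the model name, and the review queue are illustrative assumptions, not anything described in the episode.]

```python
# Hypothetical sketch: batch-generate portrait variants from a prompt template,
# then queue them for an artist to approve or reject. API shape, model name,
# and the review-queue structure are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

CHARACTERS = [
    {"name": "Marta", "desc": "weathered village bartender, warm smile"},
    {"name": "Orin", "desc": "young dragon-hunter, nervous eyes, patched cloak"},
]

def generate_portraits(characters: list[dict], variants: int = 3) -> list[dict]:
    """Generate several portrait variants per character for artist review."""
    queue = []
    for character in characters:
        for i in range(variants):
            response = client.images.generate(
                model="dall-e-3",
                prompt=f"Fantasy character portrait, painterly style: {character['desc']}",
                n=1,
                size="1024x1024",
            )
            queue.append({
                "character": character["name"],
                "variant": i,
                "url": response.data[0].url,
                "approved": None,  # an artist sets this during review
            })
    return queue
```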
Yeah, and I love seeing what's possible at the boundaries of technology and design, right?
Like, this is what we've done throughout our process as a company.
What we've done with SolForge Fusion as a hybrid deck game, we're using digital technology, printing technology that wasn't available a decade ago.
We're having it connect to a digital game, we're having it connect to blockchain technology, we've played in worlds of VR.
Like, AI clearly allows for the types of game design that didn't exist before. And I know a lot of companies I've spoken to are using it for, you know, NPC generation and for helping to create more free-flowing content.
But what does this mean in the tabletop world? What does this mean in other genres? What does this mean for procedural generation and different aspects that previously had to be pretty formulaic and now don't have to be?
The core of what we find fun doesn't change, but the ways in which it can be expressed and the tools that are available to designers are something I'm incredibly fascinated by,
and I can't wait to both explore it myself and see what other people do with it, right?
Just more fun games is a good thing.
Yeah, I mean, I think about this in the video game space, which I'm pretty familiar with, right?
And you think about a team like Remedy, who did Alan Wake and Control, where they can afford to make every detail matter, right?
What's on a whiteboard matters.
And if I'm doing this as an indie game, I have to have repeating textures, and I can't do what they could do.
So we make a virtue out of the problems, right?
It's like, oh, well, it's very clever because I've only used pixel graphics, not because that's my artistic choice, but because that's a much cheaper way to do the art, right?
And I think a lot of what we're thinking about is that artistic choices are choices with constraints.
And when the constraints shift, that gives us new choices for art, so that we can keep doing the same kind of labor but push to a new bleeding edge of what that looks like.
And I think changing the constraint structure is a super interesting way to think about how creativity changes.
Yeah, no, I think that's great.
Okay, I know we're running a little short on time, but I want to talk about one other topic that is of interest, which I don't know if you've covered as much.
Not only is the AI continually getting better at every kind of skill set it has and all the different pieces it can contribute to,
but it's also getting better at just feeling like a person, like feeling like someone that you talk to, someone who you can be in a relationship with.
And so, you know, in the context of obviously NPC characters, in the context of NPC social relationships, of having different kinds of things,
this feels like just the beginning of where we're headed.
Do you have a sense of, as these things become more personified, what that does to our culture, to this next chapter in society?
We have no idea, right?
I mean, the early evidence is, you know, the people who have used things like Replika so far have been fairly desperately lonely.
The early evidence, which is qualitative studies that you have to take with a grain of salt, has suggested it makes people less lonely and actually more social, because they get their anxieties out with their Replika, you know, and then they are better in real life.
But we don't know.
Like, we're kind of unleashing another experiment, like social media, that's kind of unstoppable at this point. What will we do?
My impression is that people will have relationships with AI of various kinds, both big capital-R and small-r relationships, but that those will be balanced with other kinds of human relationships.
I don't think it becomes so addictive that you only want to talk to the AI all the time, but we don't know, right?
And we don't know who's going to use it in what ways.
Now, from an NPC generation perspective, the interesting problem becomes how you constrain it, you know, so you don't end up with the problem of: I fell in love with the first bartender I met in the first village, we talked for seven hours, and I forgot about the dragons.
It's sort of an interesting problem, right?
So I think we have to think about those kinds of things.
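[Editor's note: a minimal sketch of the kind of NPC constraint being described, keeping an LLM-driven character on topic and capping how long one conversation can run. It assumes an OpenAI-style chat-completion API; the model name, system prompt, and turn limit are illustrative assumptions, not anything specified in the episode.]

```python
# Hypothetical sketch: keep the LLM-driven bartender in character, nudging the
# player back toward the quest, and cap the conversation so one NPC can't
# absorb the whole session. Model name and API shape are assumptions.
from openai import OpenAI

client = OpenAI()

BARTENDER_SYSTEM_PROMPT = """You are Marta, the bartender of the village inn.
Stay in character. You only know village gossip and rumors about the dragon
in the hills. If the player lingers more than a few exchanges, politely remind
them that the dragon attacks at nightfall and they should get moving.
Never discuss topics outside the game world."""

MAX_TURNS = 6  # hard cap on player/NPC exchanges

def bartender_reply(history: list[dict], player_line: str) -> str:
    """Return the NPC's next line, constrained by the system prompt and turn cap."""
    if len(history) >= MAX_TURNS * 2:
        return "Marta waves you off: 'Enough chatter, hero. The dragon won't wait.'"
    messages = [{"role": "system", "content": BARTENDER_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content
```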
Yeah, yeah.
I think, you know, you mentioned social media as this sort of unleashed platform where it's incredibly powerful, I'm able to stay connected with people all over the world,
but we have seen increasing rates of depression and challenges, especially amongst youth, and young girls in particular.
How this tool plays into that positively or negatively is something that, yeah, obviously nobody knows where it's headed, but it is interesting and fascinating to speculate on.
I mean, part of this is we could talk about it all we want; this is already going to happen.
I mean, the open-source tools are already out there in the world and are legal, and even if we restrain and restrict use in the U.S., which I don't think is going to happen in the near future,
they will still be available from other countries. Like, that's done already.
There's a lot of change baked into the system that we're just going to have to live through, and come up with social norms and laws to deal with the consequences.
But one thing I sometimes see in the creative world is this idea that this thing will be destroyed, like something's going to stop it.
There are negative effects; we have to be honest about that.
But I hear people talking about model collapse, the idea that the AI is going to train on its own data and then collapse.
That doesn't look like what's going to happen, at least in the next few years. Training on synthetic data the AI creates is perfectly fine.
I think people hope that the copyright cases against, you know, AI systems are going to shut it down.
Unlikely, and even if that happens, then they'll move to Japan, where there are no copyright restrictions on AI training material.
I just think we have to be clear-eyed about this. The social pressure of "don't use AI or we'll turn on you," I don't think that's a long-term feasible thing.
I don't think most people care that much, and if they do, it's still not going to matter; consumers are not going to turn on AI systems the way we expect, and governments cannot regulate the way we expect.
We can talk about these things as if we want change to happen, and I totally get it.
But I worry that there are a lot of people with a blind eye of, this will go away somehow.
And I think they view it through the early stages of some other technology moves.
Like, I think there's still room for sort of blockchain stuff in games, but the idea that everything was going to be 100% blockchain-based, forget trading card games, you're going to have a portable gun you carry between video games.
The fact that that stuff blew over, I think people might take too much of a lesson from in some ways.
Yeah, that's right.
I think there's the sort of common hype curve, right?
Where people think, oh my God, this is going to do everything, and then there's this disillusionment, and then there's, oh, okay, wait, here's the real value, right?
You know, the internet. I think the hype...
Well, so I did ask GPT-4.
I gave it all the hype curves for every other technology, and it found that there was no actual pattern of a hype curve.
I think that's absolutely wrong.
I mean, you can always find curves of people being really excited and then not finding it as useful.
But then that's another case of coping, right?
Of like, oh, you know, it's not that useful here in this case and in this case.
But there are plenty of people it's hyper useful for.
This is the fastest-adopted technology we've ever seen.
Like, this genie is not going back in the bottle.
And even if it's like, oh, look, it can't create an entire thing, our standards of what AI can't do are changing rapidly: from "it can't produce prose" to "it can't write like Hemingway," from "it can't do original character design" to "it can't do original character design as good as the best people on the planet."
Right. We never used to have to add the "yet."
Yeah, you know, I remember everybody making fun of the AI because it couldn't do hands, right?
Everyone kept making fun of the AI's ability to draw hands, and then, within three months, that problem was solved.
It's sort of like the,
I feel like, you know,
back when, you know,
kind of the,
you know, religious doctrine started getting pushed back as science
sort of explained more and more things.
It was like, well, okay,
this is going to be over here.
Now this is going to be explained over here.
And the shrinking kind of rearguard action just is doomed to failure.
The god of the gaps, it's called, right?
Where, you know, whatever you can't explain yet, that's the gap, and it's the same with the AI.
And I think that's a dangerous thing for humans.
Like, one of the things I talk about in the book is that you kind of need an existential crisis to work with AI.
It is weird.
It is upsetting.
Like, we're kind of talking like it's no big thing.
And it's creative.
Except some people who are listening are like, no, it isn't. It is not.
And I totally get that.
And I would just urge you to spend five or six hours with, you know, Claude 3.5 or with GPT-4.
If you used the free GPT six months ago, you have no idea what these systems can do.
And you're just going to have to get through the crisis.
Like, we have to figure out a way forward, because these systems are not going away.
Yeah, yeah, I think that's right.
I think we should be moving forward with open eyes, understanding that with any industry disruption, we should be trying to protect the parties that are getting hurt and be conscious of that.
And I'm all for people disclosing when they're using AI and when they're not, so people can opt in to choosing the products that they want or don't want.
But I think it's crazy to stick your head in the sand and pretend this isn't a thing that's transforming the world.
Even if the technology literally froze today and never got any better, it would still transform the world over the next couple of years as adoption spreads.
Absolutely. Absolutely. And we can feel however we want to feel about that.
And there's a lot of room for social critique and a lot of concern over what this means for our social systems, what this means about capitalism, and what we do next.
But I also think there's a practical aspect: we're living in this world now, and we can pretend we're not living in it, but that's not going to be helpful.
Yeah. And speaking back to creativity for a minute, it's hard for me to really entertain any of the arguments that AI is not creative.
I mean, when I break down creativity in general, it's taking two different ideas and combining them together in a way they weren't combined before.
You're just taking these different fragments and putting them together.
And that's all I've ever done in my career, and then occasionally filtering away the stuff that sucks to find something that's good.
And that seems like exactly the way AI works, just from a pure practical standpoint.
I can't understand any argument that says it is not a creative force, because it's an incredibly powerful tool for that.
And there may be some categories of creativity, of true breakthroughs, that it may or may not be able to do.
But as you point out, most creativity is combinatorial; we're combining stuff together.
You know, there are occasional moments running these games or experiences where it's like, how would anyone ever think of that?
But in most cases, it's like, oh yeah, this really innovative idea came, and I can see the origin of it in these other three ideas, right?
And look at how we describe games to people.
It's like chess meets Magic: The Gathering with a smattering of, you know, Monopoly or whatever.
Like, that's a recombination.
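[Editor's note: a minimal sketch of the "combine, then filter away the stuff that sucks" loop described above, using an LLM to mash up two existing game concepts and keep the strongest hybrids. It assumes an OpenAI-style chat API; the prompts, model name, and scoring step are illustrative assumptions, not part of the conversation.]

```python
# Hypothetical sketch of combinatorial ideation: generate several hybrids of two
# existing game ideas, then filter to the most promising ones.
from openai import OpenAI

client = OpenAI()

def combine_ideas(idea_a: str, idea_b: str, n: int = 5) -> list[str]:
    """Ask the model for n game concepts that recombine two existing ideas."""
    prompt = (f"Propose {n} one-sentence tabletop game concepts that combine "
              f"the core mechanics of '{idea_a}' with the theme of '{idea_b}'.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return [line.strip("-• ").strip()
            for line in response.choices[0].message.content.splitlines()
            if line.strip()]

def filter_ideas(ideas: list[str], keep: int = 2) -> list[str]:
    """Crude filter: ask the model to pick the most original concepts."""
    prompt = ("Rank these game concepts by originality and return only the top "
              f"{keep}, one per line:\n" + "\n".join(ideas))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return [line.strip() for line in response.choices[0].message.content.splitlines()
            if line.strip()]

# Example in the spirit of "chess meets Magic: The Gathering"
hybrids = combine_ideas("chess", "Magic: The Gathering deck-building")
print(filter_ideas(hybrids))
```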
All right.
Ethan, I always love our conversations.
I think it is amazing to see your brilliance and deep insight into creativity, work, and the frontiers of entrepreneurship, which you had long before this new frontier of AI pushed you into the limelight.
I think it's well deserved and exciting to see.
I encourage everybody to not just pick up your book, Co-Intelligence, which is, again, I think by far the best, most practical breakdown of how to think about AI today, but also your Substack.
Where else can people go? Or what other tips would you have for people who have been convinced that this is something they need to be paying more attention to, for both your work and AI in general?
So, you know, for better or worse, Twitter, or X, or whatever you want to call it, is where a lot of the interesting discussion in this space is happening.
But also, whatever your game design space is: I'm part of three or four different communities, and I'm sure there are others near you.
Like, you need to be talking with colleagues. That's always been important to success anyway.
And, you know, Justin, one thing you're amazing at is networking.
Part of why this podcast is such a success is that there are so many people who know and respect your work, and whom you've treated well in the past, so they are excited to be part of something you do.
For people starting off, networking is the only thing that is just guaranteed to help you succeed across any industry.
And the more you bridge gaps between sets of people and the more networking you do, the better off you are.
So my advice is the same I'd give everywhere: figure out the community you should join, and then join that community.
And talk about this.
Be someone who's helping lead the direction, share your experiments, work in public as much as you can, and you'll get a large part of the way there.
Yeah, that's fantastic.
And for anybody that wants to come join our community, we have an active Discord for Stoneblade, where we talk about some of these things.
We also will be at GenCon; you can come meet us in person there.
I love to have these kinds of conversations, both in real life and online.
Ethan, thanks so much for taking the time, man.
I know you're busy, and this was such a fantastic conversation, and I look forward to many more as we see this future unfold.
Thank you so much. And I'm thrilled to be part of the conversation as always, and hopefully the recording worked this time.
Thank you so much for listening. I hope you enjoyed today's podcast. If you want to support the podcast, please rate, comment, and share on your favorite podcast platforms, such as iTunes, Stitcher, or whatever device you're listening on.
Listener reviews and shares make a huge difference and help us grow this community, and will allow me to bring more amazing guests and insights to you.
I've taken the insights from these interviews, along with my 20 years of experience in the game industry, and compressed it all into a book with the same title as this podcast, Think Like a Game Designer.
In it, I give step-by-step instructions on how to apply the lessons from these great designers and bring your own games to life.
If you think you might be interested, you can check out the book at thinklikeagamedesigner.com or wherever fine books are sold.
