Behind The Tech with Kevin Scott - Ethan Mollick, Author and Associate Professor at the Wharton School of the University of Pennsylvania

Episode Date: June 11, 2024

Ethan Mollick is an associate professor at the Wharton School of the University of Pennsylvania, where he teaches innovation and entrepreneurship. His research focuses on the impact of AI on work and education, and he has published numerous papers and a New York Times-bestselling book on AI, "Co-Intelligence." Ethan is also behind the popular Substack "One Useful Thing," which explores the implications of AI for work, education, and life. In this episode, Kevin and Ethan discuss Ethan's background as a long-time technology enthusiast, his academic journey, and his insights on the current AI revolution. Ethan shares his experiences from running a bulletin board in the '80s to co-founding a startup company before entering academia, and the two discuss topics from the transformative potential of AI and its accessibility to a broader audience to the importance of AI as a co-intelligence tool that can enhance human capabilities.

Transcript
Starting point is 00:00:01 When I talk to companies, I always tell them two things. One thing is that something you used to do that was valuable is no longer valuable. And the sooner you realize that, the better off you are, right? And the second thing is there's something impossible you always wanted to do that you can do. What is the impossible thing? Hi, everyone. Welcome to Behind the Tech. I'm Kevin Scott, Chief Technology Officer and EVP of AI at Microsoft. Today, tech is a part of nearly every aspect of our lives.
Starting point is 00:00:26 We're in the early days of an AI revolution promising to transform our lived experiences as much as any technology ever has. On this podcast, we'll talk with the folks behind the technology and explore the motivations, passion, and curiosity driving them to create the tech shaping our world. Let's get started. Hello, and welcome to Behind the Tech. I'm co-host Christina Warren, Senior Developer Advocate at GitHub. And I'm Kevin Scott. And we have a fascinating interview today with Ethan Mollick, who is a professor at the Wharton School and who's at the forefront of studying AI and especially its impact on work and education.
Starting point is 00:01:10 Yeah, I think Ethan is an incredibly energetic person. You know, it's just sort of thinking about how these new tools can be used to solve interesting problems. And, you know, he has accumulated quite a following on social media, as well as the excellent work that he was doing in academic circles. Just talking not just rigorously about how AI has all of these incredible potential benefits,
Starting point is 00:01:43 but just really engaging on like, hey, I'm trying this, you all should have a look and just giving people that provocation to be a little bit more ambitious with how they're thinking about the tools. He's one of, in my opinions, the most interesting voices out there right now talking about AI. No, I totally agree. And I'm really, really looking forward to hearing this conversation.
Starting point is 00:02:15 Ethan Wallach is an associate professor at the Wharton School of the University of Pennsylvania. He teaches innovation and entrepreneurship, and his research examines AI's impact on work and education, among other things. His papers have been published and cited in top journals, and his book on AI, Co-Intelligence, is a New York Times bestseller. Ethan writes a newsletter on Substack called One Useful Thing, where he shares his understanding of the
Starting point is 00:02:38 implications of AI for work, education, and life with almost 150,000 subscribers. Ethan also leads Wharton's Interactive, an effort to democratize education through using games, simulations, and AI. Prior to his time in academia, Ethan co-founded a startup company, and he advises numerous organizations. Ethan received his PhD and MBA from MIT's Sloan School of Management and his bachelor's degree from Harvard University.
Starting point is 00:03:04 Ethan, we're so happy to have you on the podcast today. Thanks for being here. I'm thrilled to be here. Thanks for having me. So the way these conversations on my podcast always go is we sort of start at the very beginning. You know, how did you get interested in technology in the first place? And what was your childhood like, presuming that the interest in tech started that early? I know this is going to turn out to be a complete shock to you, but I have been a longtime nerd of the old school category. So I had my MS-DOS machines and we were trading games among ourselves. I ran a Bolton board for a little while back in the, in the eighties and then have been sort of tech adjacent
Starting point is 00:03:52 ever since, but yes, video games and as in the form they existed in the eighties programming stuff, I'm not actually a coder, but I've been coder adjacent my entire life. So that's so definitely the background. And so it is an interesting thing. So, you know, you and I are probably like I presume I'm older than you. But, you know, we were sort of coming up during that ascent of personal computing into basically just ubiquity in the public. So we basically went from, you know, computers were things that set in rooms were horrendously expensive, you know, capital intensive devices, and only, you know, like very strange people with a very high degree of focus, use them to like, everyone had one and like the transition happened really quickly. So, you know, and I want to talk about the parallels from then and now a little bit later, but what,
Starting point is 00:04:52 what was it, you know, in your family that even gave you the exposure because, you know, running BBSs and doing all of this stuff was still, you know, back in the eighties, like that was a pretty advanced technical set of things to be doing. So, I mean, part of this was, I still remember when we had a neighborhood computer club. So everyone got together and bought Apple IIe's. And then we shared software because it was expensive. So you'd get a disc and you could borrow a print shop or whatever. So there was, they kind of gave me a built-in community, right?
Starting point is 00:05:29 So this is before the days of the internet, but all the other kids in my neighborhood were all trading the same discs back and forth and doing that. So there was this kind of like built-in community piece among nerdy kids, right? Which is suddenly all the weird nerd hobbies you have, like unlock your ability to play video games and do all this kind of other fun stuff.
Starting point is 00:05:44 So it was, I grew up in Milwaukee, Wisconsin, like not a coastal person at all, even though I sound like a New Yorker. Um, but, uh, you know, it, it, so it was, it was interesting cause it was just something that was local and you buy your computer, you know, your computer magazines. And it felt like there was a community among, among fellow, fellow high school and, and, you know, middle school students that kind of drove it. And so you, you had all of these nerd interests and what did you choose to do? Like what was sort of, you know, your path, like what you were super interested in high school, like, you know, you went to college, like what did you choose to major in and why? So I made up my own majors in undergrad in science technology policy. So the
Starting point is 00:06:26 thing about the horrible secret about me is people think I'm really, really good at sort of the math and science. I got a PhD from MIT. I am actually, you know, but I'm not actually a natural math person, right? What I am pretty good at is understanding systems and, you know, and how they relate together and very, you know, understand science deeply, but I'm not like, I'm not, I'm not the kind of cutting-edge math person out there, right? So I've always been very interested in what does science mean and how to use it and understanding deeply how it works. And so I made up a major in science technology policy, history of science. My original paper I ever wrote was my undergraduate thesis was on Moore's law and interviewed Gordon Moore to try and figure out. That's awesome. Actually, my first academic paper that was ever published was sort of why this happened the way it was.
Starting point is 00:07:08 And that's sort of – and a lot of – I did work with a media lab at MIT at the same time. So very interesting kind of what it all means rather than trying to think about coding myself. Yeah, well, I mean, maybe we can just sort of jump right into what it all means because some of the things, you know that we've already talked about like this uh the pc revolution like this point in time where a brand new technology went from uh inaccessible and unusable to like highly usable and ubiquitous uh seems like it's playing out again right now and like one of the things that drove that earlier round of innovation is Moore's Law, the fact that the compute got cheap so quickly that you could do increasingly interesting things with computers
Starting point is 00:07:57 and even go through the process of making them more useful, like you could burn part of the CPU's resources on things like user interface. And it's sort of the same thing right now. Like we've got AI, like it's very, you know, it's made the phase shift from, you know, really challenging technology to go leverage to very easy technology to leverage. And it's being largely driven by compute. So, you know, like, I love your take on what the parallels are. There's a second dimension besides computer. I mean, compute is the underlying driver of almost everything, right?
Starting point is 00:08:31 Like, you look behind, you pick up any, you know, you rub the Scooby-Doo mask of almost anything, and it's Moore's Law behind it, right? Which is, like, that's the driving force of the last century. And so, you know, three quarters of a century at this point. And, you know, always an interesting topic about how that kind of existed. But the interesting thing that also drives this is the kind of unexpected fact that LLMs, especially, turn out to be a very human technology. It turns out by training them on human stuff, they just work in a very human way.
Starting point is 00:09:01 And that means that people who can't code, like we're back to that kind of interesting world where you don't have to know magical coding secrets. If you're a good manager of people, you're probably going to be good with LLMs. So there's kind of this leapfrog effect also, which is most of the technologies start with like, how do you hack the command line? And then it takes a long time to get to the point where it's like, okay, there's a GUI and you don't even have to know what a file system is anymore. That's not happening with LLMs, right? The interesting skip is the most useful way to do this. You know, there's some early evidence that coders are actually the worst people to use AI because it doesn't do loops. It doesn't do any of the things you'd expect it to do in code. Your computer system shouldn't be probabilistic. It shouldn't argue with you and
Starting point is 00:09:37 try and diagnose your psychological issues. AI does those things when you're in drag with it. So I think part of what makes the skip interesting is not just its accessibility, but the fact that it's kind of a new modality by accident of working with machines yeah it's sort of interesting that uh you point that out it's a thing i've been saying for a handful of years now we we for the first two centuries really of programming, basically the only way that you could get a computing device to do your bidding was either become an expert programmer, which was this very complicated task of learning the intricacies of the machine and then learning all of the ways in which you can map human understanding
Starting point is 00:10:22 of problems onto things that the machine can actually go execute. Or you have to rely on a programmer to have anticipated your needs and built a program that you can then go run on one of these computing devices. And that's sort of the way it was since Ada Lovelace wrote the first program. And that has very dramatically changed in the past handful of years. You can now get machines to do very, very complicated things for you where you are not constrained by having to be a very skilled programmer. Which is not to say skilled programming is a thing that's going away. Maybe we need even more of them than we ever did. But it really opens the aperture up on who gets to decide
Starting point is 00:11:05 what computing devices go do. And I think that, and by the way, I think that one of the most interesting things is actually the role Microsoft is playing with Copilot and its wide availability around the world. You know, I often, when I talk to organizations, I'm like, listen, you know, Goldman Sachs or Coca-Cola or whatever, I'm like, you know, you're used to having the most advanced technology. You need a whole bunch of sort of techno priests to manage. And, you know, you hire my advanced PhDs from Wharton to run it.
Starting point is 00:11:32 And you have the consultants who can afford to build the backend systems. But the AI that you're using internally is almost certainly worse than what every kid in Mozambique has access to for free, thanks to Microsoft's co-pilot and, you know, running GPT-4. And people have not absorbed that completely yet, right? has access to for free thanks to Microsoft's co-pilot and running GPG4. People have not absorbed that completely yet. Also, the technological frontier just caught up to everybody at the same time, and there isn't really a huge advantage. Companies are shipping models almost as fast as they can get them made out.
Starting point is 00:11:57 They're generally made pretty ubiquitously available or available at relatively cheap costs most places around the world. The implications of that are going to play out in really interesting ways that we haven't anticipated yet. Yeah. And so, you know, maybe that's a good segue to your book. So you've written this book, Co-Intelligence. And, you know, what do you want people most to take away, you know, from reading this book and just in general from how they ought to be thinking about what's going on right now? So at least at the current stage, AI really works like a form of co-intelligence.
Starting point is 00:12:36 It is a booster to your activities. It is a threat to some parts of your job, but not the parts you want to do. And it is something that is usable right now. And I think a lot of people, a lot of the books about AI have tended to focus on future, and especially a sort of scary versus, you know, like, are we all saved or all doomed? And I think that that is an important conversation, but in some ways, the least interesting conversation to have about AI that's already here. And it's fascinating, because when you talk to people who are using it, they
Starting point is 00:13:01 want to talk about how to use it. It feels like that 80s again, right? Like, people want to figure out what are the tips, they're exchanging information, there's excitement in the air among users. And I think that I wanted to try and bring that conversation to people and give people ways of getting started. And also to realize like, this is kind of a big deal, right? It's a big deal in lots of ways that we would never have expected AI to be a big deal. And it's a big deal right now. Like it out innovates most innovators, it outwrites most writers, it like, you know, elite consultants, it's a big deal right now. Like it out innovates most innovators. It outwrites most writers. It like, you know, elite consultants, it does a really good job. Like this is weird stuff
Starting point is 00:13:31 that is going to have weird effects. Um, and it is accessible and that's part of why it's going to have such weird effects. Well, I, I love to talk a little bit about, um, benefits and then we can sort of talk about, you know, maybe the weird downside effects in a minute. But, you know, I've been doing machine learning work for 20 years now and yeah, and have been on the large language model train for, you know, the past five plus years, helping OpenAI build their big systems. There's a handful of things that are just super clear that I think are going to be massive beneficiaries of the technology.
Starting point is 00:14:18 They tend in my mind to be things where you have zero-sum problems that exist, where the AI comes in and transforms all of the problem or some important part of it into a non-zero-sum thing. And there are many, many examples in education and healthcare, I think. But what are your interesting benefits right now, the things that you're super excited about and think people ought to be thinking more about? Yeah, on the large-scale social side, I think you're completely right. I mean, the standard I've been applying, AI has many things it doesn't do very well. And there's
Starting point is 00:14:51 things that people worry a lot about how well it does them. But the standard I tend to apply is BAH, best available human. Is the AI better or worse than the best available human you have access to? And especially in areas like education, where most people don't have access to tutors, but that's a magical tool. That's transformative, right? For areas like medical advice, like we're getting good data out of both Microsoft and Google and Harvard Medical School and everyone else right now. They're like, look, I don't want to replace your doctor, but you probably should be asking for a second opinion from AI and getting help, and your doctor probably should be doing that at least, right? It's pretty good legal stuff. There's a lot of things where access
Starting point is 00:15:24 has been gated, where this opens the doors's pretty good legal stuff. Like there's a lot of things where access has been gated, where this opens the doors for reasons good and bad. Education is clearly a giant win. The other thing is like, look, when we survey people who use AI, we get the same results almost every time, which is they're nervous because, well, let's take my job, et cetera.
Starting point is 00:15:37 But they're actually really happy because the first things they give up are the worst things, right? AI maxes out right now at the 80th percentile of ability. A lot of things, it's much less than that. But whatever you're best at, you're probably better than an AI. But there's probably parts of your job that actually suck,
Starting point is 00:15:51 that you're not very good at, and that you don't enjoy. And so the most liberating part of this is people are bored 25% of the time at work in most surveys. The idea that you can kind of give up the parts that bundle of your job that are holding you back, and especially as an entrepreneurship professor, the stuff you're not very good at, the stuff that takes all your time and energy, even though it doesn't get very far. Like, that's fascinating.
Starting point is 00:16:10 I mean, I find it fascinating that the first use almost everyone tells me they put AI to are actually the most intimate uses, like writing children's stories for their kids, writing eulogies and wedding toasts, because these are things that cause people inordinate amounts of stress. And most people aren't that good at, right? And so all they do is spend hours and hours stressed out about it. So to me, it's this liberating aspect also on the personal level, as well as what can do society. I like that best available human framework. Because I think it's a good framing to help you think about how you could ambitiously use it yourself. So what is the thing that you want to do or that you feel constrained by that you need some help with? And can you get a human to help you or can you not because you can't afford to pay them or because they don't live near you or they're inaccessible.
Starting point is 00:17:05 Like my example of this is my 15 year old daughter who, you know, is a freshman and was taking a biochemistry class in high school this year. And she really, really wanted to read the research literature. And the research literature is not written for 15 year old kids. It's like full of jargon. It's complicated. It presumes a bunch of prior knowledge that you don't have when you're 15 years old. But conceptually, like, you know, if the research is good, there's no reason that a 15 year
Starting point is 00:17:38 old can't understand the kernel of what's in a research paper. And so like her idea is like, all right, I'm just going to feed this thing into chat GPT and just ask chat GPT to explain this thing to me. And just, you know, like I'm going to pepper a chat GPT with questions. And like, she, she did, doesn't have access to a PhD biochemist who can like personally tutor her, you know, through her million questions she's got about these random papers. But ChatGPT is like the free version. She didn't even ask me to do this, right? So it's like she's using the free tier of the product.
Starting point is 00:18:14 And it's almost like watching a kid develop a superpower. I cannot believe what she understands. I mean, it's amazing, right? I mean, part of what, you know, I think it was interesting, because we thought that, you know, going back to sort of the history of the thing, I think most of us who were nerds in the sort of 80s and 90s thought that once we had the universal network that everyone access all information, everybody would be seeking out the information they want, and the world would become wise overnight, right? And it turned out to not quite work the way I think the teenagers of the 80s and 90s nerds thought it would.
Starting point is 00:18:51 And I think part of the issue is we didn't understand society. We didn't understand that people have different styles and need different amounts of knowledge and need things to be applicable in their life and different levels of curiosity about different things. And one of the most exciting things about AI is universal translator, and not just translator of language, which it does pretty well, but translator of concepts, right? The weakest way to do that is explain it like I'm 10. The best way to do it is to tell the AI, hey, I'm a 15-year-old and here's my background and here's what I've studied and here's my interests. Could you explain this paper to me in a way that I'd understand and that it
Starting point is 00:19:23 seems relevant to me? And you'll get a good answer from that. And to me, that is a superpower for all of us, right? Because it turns out that just providing access to unfiltered knowledge is not the right way to go. We need help understanding, contextualizing, make it meaningful. And I think that's incredibly powerful. So you teach entrepreneurship to very bright, uh, young people who, and probably old people too, right? Like who, I'm guessing you have all sorts of folks in your classes. But, you know, folks who are, who have this curiosity about how the cycle of entrepreneurship works, presumably because they want to go be entrepreneurs or like participate in the entrepreneurial process. So one of the things that's most exciting to me when these big
Starting point is 00:20:12 technological platform shifts happen is like the opportunities for entrepreneurship are pretty incredible. I don't know what you are advising your students to do. Like, you know, how to, you know, like how are you helping them take advantage of the moment or advising them what to do? There's a lot to talk about there. So we'll start on the most narrow front, right? Like, I'm a former entrepreneur myself, and my startup company invented the paywall in the late 90s. I still feel bad about that. I'm trying to make up for it ever since.
Starting point is 00:20:43 But what I've been trying to think about with this is like, okay, right. First of all, we don't necessarily know what the future is and things are moving very quickly. And there's every indication they're going to keep moving quickly, even if LLM start to even out. And there's very divided opinions on when that will happen. But even if they do, there's still so much low hanging fruit of other stuff you could do with these systems that no one's even bothered to attach them to yet that we've got years of disruption ahead of us. And so my advice has actually shifted a lot. Like, I mean, I have, Worden runs a lot. I think we have the second highest number of startups of any, of any school. Like people run, people raise lots of VC out of
Starting point is 00:21:17 classes that I've taught and my colleagues have taught. And I think one of the big shifts has been the idea that like, there is an opportunity here that didn't exist before. And part of my advice has shifted from how do you help big companies succeed by providing software and services that they then buy you out for to like, how do you smash large companies has become the new model for us. Like how, you know, I'm talking to venture capitalists, increasingly, they're telling me their startups are telling them they don't want to grow past 20 people. Now, I don't know if that's going to be possible or not, right? But I think it's interesting change in ambition level from I want to grow as fast as I can to let's see what we can do at scale, where scale is AI scale, not human scale. And I think that that becomes a really
Starting point is 00:21:56 interesting set of questions. And the question becomes like, what large companies can you solve better by having 1000 great servants? Because AI, you? Because API calls are expensive at the level of like, if you're replacing software calls, they're very cheap if you're using them to provide magic to people that otherwise humans would have had to provide. And so part of that is that shift in that direction. And then one other story, just to tell you,
Starting point is 00:22:19 so I taught all my students how to build GPTs on OpenAI's GPT platform. I know Microsoft has similar efforts that are coming up too that'll make this even more broad. But they're little agents, little software packages that prompt basically. And I had like 200 students in my classes. And the job was you had to come up with a way to destroy the job you were applying for so that you could go for a job interview and hand someone your GPT and say, my job is done, give me a raise. And I had like Navy pilots and hip hop promoters. And of course, tons of like private equity folks at Wharton. And, and they did it. Like they came up with ways of automating work that people have
Starting point is 00:22:54 now adopted and used by the GPT store, like hundreds of different ways I never would have thought about before. So I think there's more low hanging fruit than we've ever seen. I think the question is, how do you stay ahead of a technology that's evolving? And I actually worry that a lot of startups are not being ambitious enough because they're fixated on today. Like, how do I use a RAG-based system using Lama 2 at the lowest possible cost? I'm like, I think you're thinking the wrong way about this, right? You can't be the person who's conservative about where this technology is going. You have to have a viewpoint about what's happening next.
Starting point is 00:23:24 I think that's super good advice. The thing I've been telling people for like inside Microsoft for several years now and any of our partners that I'm chatting with right now is being incremental in this moment is like a horrible mistake. The interesting things aren't things that went from hard to easy, but the things that went from possible to possible. You want to be right on the ragged frontier, like this thing's barely possible,
Starting point is 00:23:58 it's too expensive, it's super hard, it's a little bit fragile because all of that gets better. It gets cheaper. The fragile things become robust. It's going to be extraordinary how all of this works. And it'd be better to bet on that happening than bet on it not happening, right? Like we're on a curve right now. And so look, I'm paying attention as is the rest of the world to how good the next version of OpenAI's LLMs are going to be.
Starting point is 00:24:26 We'll see, right? There's certainly no expectation lowering happening from any of the major labs or you guys or anyone else. So I'm going to keep betting that we're going to see continued change because I know that even if the technology stops developing, we're still going to see continued change. I'd like to see people do exactly that. When I talk to companies, I always tell them two things. One thing is that something you used to do that was valuable is no longer valuable. And the sooner you realize that, the better off you are. And the second thing is there's something impossible you always wanted to do that you can do.
Starting point is 00:24:52 What is the impossible thing? And I always tell people also, have something that is barely working. I love something where the LLMs are just slightly incapable of doing it because you just wait for better brains. And then you're the first people to take advantage of that. And I think that that's really important also. Have you thought at all about like just on a macro level, what markets get disrupted by this? Because it would be kind of unusual with a big technology transition like this for big things in the market not changing in substantial ways. Like I know I've got my list of things that I'm predicting, but I'm always curious to hear what others are.
Starting point is 00:25:30 Yeah, I mean, it is, you know, it's hard, right? Because I think that a lot of the things, the shape of the disruption is going to be determined. We have agency over that, right? Like the technology is shifting, but we have a lot of agency over that. And the model I always think about is the Industrial Revolution. And in the way people don't usually think about, which is steam power came to a lot of factories in England at the same kind of time. The ones that won were not the ones that were like, hey, we could still make pots, but with less people.
Starting point is 00:25:56 Those companies got destroyed, right? The ones that succeeded were the ones that we can now use the same number of people and make 10,000 more pots and ship them all over the world. Right. And I think that that's what we're going to see happening is the disruption is going to be everywhere because look, I can make using GPT-4, right. I can make a vending machine that is perfectly persuasive to you and solves your psychological problems. And, you know, it also sells Coke products, right? Like what do we do with a world where that's happening as a sort of weird question. So I'm looking for imagination coming out of the large companies as an indicator that they'll be safe
Starting point is 00:26:30 rather than look at industries. I mean, look, I think we're going to see disruption to customer service, but a really good version of this would say, how do we extend the customer service process to fit humans into it? And I'm worried about companies not taking the visionary approach
Starting point is 00:26:43 and instead just automation is cost cutting. That's going to be dangerous. And those will be transformed in a negative way. So I think we're going to see this touch everywhere. But I think we have some agency over what happens. And obviously, I think that the industries that matter the most are the most white collar kind of analytical jobs. Everyone could use a consultant to help them out.
Starting point is 00:27:01 And so the challenge facing BCG or Accenture is not, how do I do, you know, serve our current clients with the same number of people? It's, what do I do now that I have increased capabilities and one person can manage a thousand interns? And I want to see that happen. Yeah. I think that's such a amazingly good point and a push. I think one of the things I wrote in my book is like, I don't know whether there is the history of any company that somehow or another innovated its way into the future by focusing only on cutting costs. And I do think that's an important thing for enterprises.
Starting point is 00:27:40 Like you have to figure out how to innovate and serve your customers better, and to reinvent yourself over time, and to respond to competition. And all of that requires what you just said: I've got tools and I've got human capital, and how can I put those two things together in really inventive ways to invent the future? Yeah, and, not to sound too pro-Microsoft or pro-AI-lab, and I want to make clear to the listeners, I don't take money from Microsoft or any other AI lab, but I think
Starting point is 00:28:12 now is the time not to worry so much about keeping your API costs as low as possible, and to view this stuff as R&D for the future. You need to be doing R&D because there's no instruction manual. And I think too many companies and people are kind of paralyzed, waiting for the secret recipe book. And there isn't one, right? I've talked to everyone building this stuff, and you have too. It's not like we know the full set of capabilities. If I told you, how good is this for helping chemical engineers working on long
Starting point is 00:28:40 distance pipes? We don't even know what their jobs are. But the people who can figure that out are the people at chemical companies working in pipe engineering. They will know very quickly what it's good or bad at. And what I worry about is them pausing, or waiting for a consultant to tell them the answer, because nobody has that answer. Your expertise is the key to multiplying your effect right now. Yeah. I mean, I think the other interesting thing here is, again, I do have a point of view about whether or not we're reaching the point of diminishing marginal returns on model power. I don't think we are. And the structure of
Starting point is 00:29:14 how these systems get built is you go build a gigantic supercomputer, and then you go train a model on it, and then you release the model to the world, and then you start the whole cycle over again. And they're actually overlapped at this point, but it takes a while to go build the supercomputers and then train the next model. And so that doesn't mean that you don't have an exponentially improving process.
Starting point is 00:29:39 It just means you're sampling the exponential like once every couple of years. Yes, I love that. And you just can't get confused that, you know, you got the last sample and now everything is linear. And I mean, I worry about that. I really do. I get that people are worn out from tech cycles, right? I think the NFT world did a lot of people in, because it illustrated everything of a tech cycle. But the truth is, we haven't had a bust, right?
Starting point is 00:30:13 Like the companies I talk to are being transformed. Like I haven't spoken to anyone who's done internal implementations of AI systems who's like, you know, this didn't do anything for us. Like I'm sure there's plenty out there, but in terms of actually embracing it, building it into their systems and thinking about it,
Starting point is 00:30:29 they don't find benefit. I'm just not hearing that very often. Most companies are just getting started, and many are barely thinking about it. And, you know, we're not going to see evidence in productivity numbers for a while. Systems take much longer to change than technologies, but that doesn't give you a license to ignore it.
Starting point is 00:30:43 And I think there's almost no one who uses these systems for 10-plus hours and stays unconvinced. That's my borderline in my book: 10 hours with a frontier model, you know, Copilot in GPT-4 mode, GPT-4, Claude 3, Gemini 1.5, whatever it is. You need to put 10 hours in with that. And if you are still unconvinced, I get it, right? But I don't meet many people who still are. So how do you get people to be optimistic enough? Like, the last thing that I would ever encourage anyone to do is to be blindly optimistic about anything, especially about things where they're investing a lot of their money or a lot of their precious time. But this is one of those areas where you need just
Starting point is 00:31:30 enough optimism and curiosity that you are pushing. Otherwise, I think you get yourself into a little bit of trouble, just in the sense that people who lean all the way in and use the tool to its best effect are going to have big advantages over people who don't. And the only thing, really, given how cheap the tools are, that prevents you from leaning in is your own psychology, your own willingness to go experiment and try. And so I'll tell you, just speaking about Microsoft, it was an interesting thing when
Starting point is 00:32:16 GPT-4 was first available, trying to get people to go use it as ambitiously as they should. Because they had all of the perfectly reasonable reasons for why they shouldn't. And the longer everyone waited, the less well off they were relative to the thing that they were trying to accomplish. Yeah. I mean, so listen, I actually think you need to have a crisis in some ways. I am very optimistic, but I think you have to get through a crisis one way or another. I call it the three sleepless nights. You have to have a moment where you're like, oh my God, this is not a person, it's not alive, but it feels alive.
Starting point is 00:33:07 It can do parts of my job I didn't expect. You have to kind of push past that, because there's a lot of psychological resistance. There's some weirded-outness. There's natural skepticism over hype. There is anxiety. And I find a lot of really smart people that I talk to are like, I kind of walked away from it, right? There's also the fact that most people use one of the free models, and those aren't as good, right? If you're going to use a free model, it has to be Copilot in Creative mode with GPT-4. You need to use a frontier model to figure out what's going on. And you need to do your 10 hours, and you're going to have a crisis. It is weird. We've never kind of contacted another form of intelligence before.
Starting point is 00:33:40 It's a weird form of intelligence. It's not human. It's not sentient. It doesn't have goals, but it will fake all that for you initially. And then once you get through it, you'll be fine, but you have to get there. And I think it's very uncomfortable and hard for people to push through. And then afterwards you start to find liberating uses for this. That's what I tend to find: very few people seem to be continually upset afterwards, because they start to find places where they're like, this sucks, I don't want to do this part of my job, and the AI will handle it for me. And if you start with the high-friction parts, you're in good shape. Now, there are places where that's bad. With my students, too many people are cheating everywhere. They were already cheating,
Starting point is 00:34:11 but now AI makes cheating even easier, right? And there are things we have to do that are hard things, that are human things. But I think the pushing through, getting those 10 hours, getting to the crisis, is just something you have to go through. There's a change happening, and it's exciting, but it's also unnerving in a deep way. And, you know, one of the things I've talked to people at Microsoft about is, how do we help people both go through the crisis and then feel positive afterwards? And I don't think we have an easy answer to that set of questions, but I do think that the discomfort is part of the process. I totally agree. I mean, almost everyone that I know, both optimists and pessimists, has had the crisis
Starting point is 00:34:53 that you are talking about. And funny enough, the optimist's crisis is really that the fundamental mechanism of these transformer-based models isn't that complicated. I mean, there's a huge amount of complicated stuff underlying it. And I've seen people have the flavor of crisis which is like, all right,
Starting point is 00:35:16 like I actually understand the mechanism of this thing. How on God's earth is it getting in the ballpark of things that I thought I was good at? Does that mean that I'm a stochastic parrot? It's really interesting watching that.
Starting point is 00:35:35 That is 100% one of the most common forms of this: oh my God, are we just this, right? And I think you come to the conclusion that we're not just this, but something like this is part of what we do. But it's sort of like saying that being a biochemist means you know everything about how a person works, because, you know, biochemistry. But there is a crisis there, right? Nothing was ever close to us this way. It's like finding out that the dolphins really can talk, and they've been talking about you behind your back the whole time. There is something startling about this. But I think that it is reconstructable afterward.
Starting point is 00:36:05 I think there's lots of open questions about why is it so good? How does it compare to the human brain? But I think the kind of naive view is probably wrong, right? We've found something that works kind of like it. I've always liked Stephen Wolfram's argument that there is a statistical pattern to language, which means a statistical pattern to thought, and you can reproduce that without necessarily thinking. But it's tough. We don't really have the answers to these questions yet.
Starting point is 00:36:25 We don't. These systems are ill understood; our own cognition is ill understood. It's an interesting set of vagaries here. But if you can look at the thing as, it's just a tool, I'm going to try to figure out what the tool is good for, then you can do interesting things with it. And I think that tool analogy is important, right? It's weird, because you have to keep thinking about it as a tool, but you also have to treat it like a person to get the best results
Starting point is 00:36:58 from it. So it's hard for people to keep both those things in their head at the same time. But the problem is that both analogies are kind of bad, right? I hear privacy concerns from companies all the time that don't make a lot of sense, because they're privacy concerns you wouldn't have about any other tool in the cloud that everyone's using all the time, right? So it's very weird to see people being panicked, like, but it knows everything. I'm like, well, are you worried that Dropbox is stealing all of your data? You're not, right? So part of this is this kind of conversation where they're thinking of it like an entity that's learning stuff, so if you show it something,
Starting point is 00:37:29 it's learning the thing you're showing it. And it's not working that way either. So it's a little tough to keep both things in your mind at one time, but you kind of have to. And I think once you use it, a little bit of the magic wears off, but at the same time the usefulness increases. And that's a good place to be.
Starting point is 00:37:45 So what are some of your concerns about the technology? So, I mean, there is a bunch of baked-in bad stuff that's going to happen one way or another, because I don't think it can be stopped at this point. We have a perfect phishing machine. The world is going to get filled up with what now has a name, AI slop, right? Anything since 2022 is going to be contaminated with AI information. And I use contaminated in both good and bad ways. We have a different information ecosystem than we had before. No matter what we do with watermarking or other approaches,
Starting point is 00:38:20 open source solutions are out there that are powerful enough that they're going to get around whatever barriers are put in place, and that die is cast, right? So the nature of IT security is going to have to change fairly dramatically. The nature of how we treat online information is going to have to change fairly dramatically. There is going to be some escalation of scams. I mean, there's bad stuff that is almost 100% certain to happen. And there are social changes coming: the job disruptions that I think we will see one way or another, and the fact that people are going to make AI friends, right? This is going to be something we're going to have to deal with. And we don't know whether or not it becomes one of these things where at some point people are like,
Starting point is 00:38:55 okay, I've had enough, I want to have real friends too, or whether this is going to be too compelling. AI is very persuasive. These are already baked-in changes, and I think we need to start mitigating the negative effects. What I'm worried about is that sometimes we focus too much on AI will wake up and murder us all, and not so much on how we regulate a thing we've seen before, which is a machine that can do a lot of things, good and bad. How do we mitigate the bad effects while allowing the good effects to happen? So I think that's my real concern. Yeah. And, you know, I think the encouraging thing from the historical record is we've got plenty of expertise at figuring out how to regulate technologies that are ubiquitous. One of my favorite examples is electricity.
Starting point is 00:39:38 So electricity, in many different ways, is a potentially lethal technology. And we regulate how it's generated. We regulate how it's transmitted. We regulate how it is terminated and connected to your house. We regulate the appliances that consume it. We regulate the people who work on all of the different systems. And still, even then, a small number of tens of thousands of people in the United States die from accidental electrocutions every year. And we have all of that, and it is completely unimaginable to everyone how you
Starting point is 00:40:20 would live your life without electricity. And there are many other technologies like that. Yeah, I mean, I think we'll figure it out, right? Part of this is we have to figure it out. And one of the things that I like is the sort of policy implication that Josh Gans, who is a professor at the University of Toronto, has been describing, which is: look, when you don't know the benefits or risks, what you need is fast, smart, responsive regulation, where you look for emerging harms and then you regulate, right? And so we know there's emerging harms around identity, and there's emerging harms
Starting point is 00:40:49 around fakes. We should be thinking about how we're solving those policy concerns. I think it's worthwhile to think about the sort of dangerous scenario of what happens if we do have a sentient machine that's smarter than all humans. But compared to the other kinds of risks that we know are happening, and the other benefits, micro-regulations are actually really needed. When I talk to industries, right, one of the major problems financial services has is they're not clear what the regulation is. So they can't act as much as they want to, because the regulation was written for algorithmic forms of AI; it was about AI decision making,
Starting point is 00:41:22 you know, in FICO scores and things like that. And that's not really relevant to LLMs. So we need positive regulation, too, that carves out room to experiment. How does this work in medicine? What are our HIPAA requirements? So for the people who are sort of anti-regulation: we also live in a system that's already regulated. There are already choices that have been made.
Starting point is 00:41:41 And, you know, it's a thing worth reminding folks: there's plenty of regulation already that applies to AI. Like, AI can't write prescriptions, only doctors can. And even there, someone will have to figure out whether, and to what extent, a doctor can take a recommendation from an AI and use it to write a prescription for a medicine that goes to a patient. Yeah, AI models are not FDA 510(k) cleared, so you can't use them as a medical device. So there's just a bunch of regulation right now, and some of it you may actually need to relax, because the benefits are going to be so glaringly obvious for the public. It will help us solve so many problems that you're probably going to want to change the regulation in the other direction, to make things more permissive than they are right now. Fast experimentation is going to be key, so we need
Starting point is 00:42:40 to figure out ways to allow fast experimentation to happen. But it's also fair that there are going to be downside risks everywhere. It's a general purpose technology, right? Those come along once every generation or so, and computers plus the internet was the last one. And it turns out we needed to put a lot of regulation in place about how that operates, but we also had to allow experimentation. And there are enough smart people who think there's substantial world-ending risk that I want
Starting point is 00:43:08 to take that seriously. But in the short term, I don't think there's enough discussion about how we allow the good stuff to thrive and the bad stuff to stop, and what that looks like. How do we let experiments happen the right way, but regulate quickly? That feels to me like a much more important question to have answers to than the attention we're giving it would suggest. Cool. So looking forward to the next handful of years, what are you most excited about? So, I have been trying to do education at scale for a long time. I started working at the Media Lab
Starting point is 00:43:49 on this back in the early-to-mid 2000s. I worked on DARPA projects using games for training at scale. I've been building tools to teach business skills. It turns out there's stuff we don't teach enough: you teach someone how to network,
Starting point is 00:43:59 or how to pitch, or any of this stuff, and their life actually changes, in controlled studies. It's amazing: small amounts of skill, because we don't actually teach those things to most people, and they're very useful. And so I've been building games and simulations to teach at scale for a really long time, and watching what happens. I've got these MOOCs, massive open online courses, that a couple hundred thousand people have taken. And you know, as much as I like them,
Starting point is 00:44:20 MOOCs are a terrible way to teach. You're watching a bunch of passive videos. So to me, the education stuff is such a low-hanging piece of fruit here. You told me your daughter is already moving ahead of her class; we're going to have to start relaxing some of our ideas of teaching. But it's going to be great for teachers, too, because, I mean, it's very funny when I hear teachers feel threatened in some way, because I'm a teacher too. By the way, business school professor is going to be one of the most
Starting point is 00:44:41 disrupted jobs on the whole O*NET list. It's number 22 out of 1,016 jobs. But disruption doesn't mean destruction. I have to teach hundreds of kids in my classes. I don't get to know all of them, because I can't, right? I can't reach all of them. When I teach a classroom, I'm a pretty good teacher, but I kind of have to teach to the middle or the high end of the class. What happens if you're stuck? Suddenly we have systems that can help bring everyone up to speed, that can help us dynamically develop learning environments that work, that can be individual tutors. To me, this is undeniably exciting, and it's ready.
Starting point is 00:45:10 Like we can get there with today's technology. And to me, like that's a burning need. Yeah, I totally agree. And like I think it's up and down the stack. Like you look at the number of kids who don't have access to the full set of educational resources that could help them flourish. It's in the developing world, it's in school systems in part of the country that are
Starting point is 00:45:37 just under-invested in for a whole variety of reasons. And honestly, it's even the elite institutions. I look at some of these elite schools where they're like, oh, we only admit 5% of our applicants, or 4% of our applicants. And I'm like, that's just an appalling thing, because everybody who applies to MIT, I mean, not everybody, but some crazy proportion of them, are outstandingly talented. And the fact that you've got to draw the line somewhere, it's kind of nutty.
Starting point is 00:46:13 Talent is much more evenly distributed than opportunity, and our biggest cost is that, right? There are talented people all over the world who don't have access to opportunity, and the fact that we can close any of that gap will be a net benefit to everybody. And listen, I've been working in education for a long time, and there was a shift away from utopian technological solutions, right? Like One Laptop per Child and all these: we just give people technology and everyone thrives. Which kind of comes from our 80s nerddom we mentioned earlier, which was like,
Starting point is 00:46:40 oh, we learned how to code by just being given a laptop, so everybody should. Turns out that's not true. There are social systems in place. There are problems at all levels of society. There's been a big shift to thinking about those things, which I think is really important. But I actually think we've lost a little bit of the technological optimism of, hey, maybe there are tools that can help solve these problems, and it doesn't all have to be the most grinding social change. And that's what excites me the most: restoring a bit of that optimism while still worrying about downside risks. For example, all the models
Starting point is 00:47:12 right now, if you ask them to teach, they all want to use learning styles, which 95% of teachers believe are real, but which are debunked as a learning method. You don't actually learn better when you're taught in your preferred style; if you're a visual learner, you should not just be taught visually. Actually, it makes you a worse student, because you should be learning in multiple modes, and learning styles don't really work. But the AIs absorb this, because that's what most people think. So we have to tell the AI, don't use learning styles, when we build a tutor. So it's not like you can use this stuff completely unsupervised. But at the same time, it's pretty good out of the box, and think best available human: is it better than what you have access to right now? Yeah, 100%. Maybe this isn't even an AI question, but I'm sort of interested in what's on the minds of your
Starting point is 00:47:53 students. They're entering a really interesting world right now. And, AI notwithstanding, what's on their mind as they're approaching the future? What are they anxious about, and what are they optimistic for? I mean, I'm seeing entrepreneurship continue to be really exciting for a lot of them. The question everyone asks me all the time is, well, how will jobs change, and what should I do? And I think we've all learned: don't try and predict an individual job. There are some fields that I think are riskier than others, but they're already having issues, right? Journalism is in decline, sadly, for a variety of reasons that have nothing to do with AI. But, you know, I don't
Starting point is 00:48:35 tell people to switch jobs because of this, right? I think the question is: figure out what you're good at more than anything else, and double down on that. So part of this is the uncertainty. But it's been uncertain times for a very long time, right? I started teaching right after the Great Recession started, right after the collapse of the housing market. There are not really that many times where everyone's like, yes, everything is super stable right now. So my students have panicked from time immemorial, and they seem to do really well. The thing I also try to tell them is that the one thing that's definitely happened is careers are now long. You'll do many things over your career. You shouldn't over-index on your first job, the first thing you're doing.
Starting point is 00:49:11 Maybe you stick with it, right? Maybe you have a long career there, but people move, jobs change, the world changes, and you're going to be okay. Stay flexible, make luck work on your side as best you can. Everyone has setbacks, but there are ways forward in most cases. All right, last question. So you're just super busy right now. Before we hit record, both of us were talking about all of the many things that we're off doing
Starting point is 00:49:48 that are supplements to, I guess, our day jobs. But given how busy you are, and how interesting the work is that you're doing right now on AI, what do you do in your free time? What's your passion? So, you know, our family is really close. We do all kinds of fun trips and stuff together. My solo activity is, I'm a gamer, and that's my sort of stress relief and fun. I do a lot of these very hard kinds of games called roguelikes, which I happen to really enjoy, where you lose most of the time. And I find that really interesting: randomness plus trial plus figuring out how you solve things. So I do
Starting point is 00:50:35 a lot of gaming. And it runs right into AI, because what I end up doing is combining the two a lot. I like to figure out how the AI could do something. Thinking about AI as a game designer is a really interesting approach to working with it. That's super, super cool. Well, thank you so much for taking time out of your schedule to chat with us, and thank you for being an optimistic voice. I think the thing that you are doing is genuinely helpful right now. I think getting people excited enough to give this stuff a try is going to help them do super interesting things in the future. So that encouragement is really essential, I think. That's great. And the funny thing is, I feel like I'm a pragmatist,
Starting point is 00:51:25 right? I just think that the voices of non-optimism kind of get drowned out. And I'm never saying, oh, do this at all costs; we've just talked about regulation. I don't think we need to push every lever forward. But how can we look at some of these changes and not be excited for what they can bring? Especially because we have agency now: this is the time to shape what these things look like. And that's an exciting time to be in, right? We don't get to choose the moment we're born, right? This is an interesting moment.
Starting point is 00:51:52 And I hope people see that, and see that they can shape their world a little bit in their own image. Well, that is a beautiful note to end this on. Thank you very much for being with us today. Thank you for having me. What a fantastic conversation with Ethan Mollick. So I just have to say, you mentioned his energy before you interviewed him, and you're dead on. The energy that comes across through his newsletter and his social media posts is even more obvious when you hear him talk and see how excited he gets about this stuff. And I'm very, very envious of his students, because I would love to take classes from someone like him. I loved the idea that he had with one of his classes of saying,
Starting point is 00:52:46 okay, write a resume, and now write something that is going to make your job go extinct. And what a great thought exercise on so many levels. I love so many things about that. Yeah, it really is. And honestly, funny enough, some of the most successful people I know, that's exactly how they have run their entire career.
Starting point is 00:53:07 It's like everything they do, they are trying to figure out, all right, well, how can I obsolete myself? Because it's a great thing: if you're going to have career progression, you need to figure out how to do the thing that you were already doing well enough that you can set it aside and go do the next thing. No, I completely agree. It gets you out of your comfort zone. I think it helps people realize, what other problems can I solve, and what other things can I take on? And, to misquote Clayton Christensen, disrupting yourself, I think, is really important. And so that's what I loved about that idea. There's so much discussion happening around AI, especially generative AI right now, and some of the downsides.
Starting point is 00:53:50 And you two talked about that a little bit. But I think that sometimes what's lost in that is the opportunities, not just when we think about how much more efficient we'll be, but how we can force ourselves to think differently about the things we ourselves do. If this is not my career, then what else would I do? I think that's a really interesting thing that this technology opens up. Yeah. And he also pushed on this very real point that we're quite a ways away from having AIs that can do what we understand right now as full jobs.
Starting point is 00:54:28 Like they're really good at doing tasks. And the thing that's changing over the years is that the scope of the tasks that they are able to do or assist with is getting broader and broader. But it does tend to be that the things they're useful for at first are not the most exciting stuff you were doing in the first place. Everything that I use AI for is sort of the most annoying parts of my job. Yes. Yeah. No, same. And as somebody who's always loved automation and things like that, that's why this wave has been so exciting. But to your point, yeah, it's usually not the exciting stuff. And yet, I think almost paradoxically, and you see this with Ethan's work and the things that he shares, that's what unlocks the potential. Because it's like, okay, if we can do this, then we can start imagining other things that this can help enhance, which is really powerful. For sure. And just in general, I think the energy that he's radiating right now feels to me a lot like the energy that lots and lots of people were radiating during the PC revolution and the internet wave. It was such an exciting time, and I feel like we're back there again. This is yet another one of those uniquely exciting times where a new thing has come on the scene. It has so much
Starting point is 00:55:53 possibility and potential in it. And like, if you are curious and ambitious and, you know, are willing to go build on top of the thing, like you're going to be able to do some interesting stuff. And some of it will fail, and we'll learn from failure. And like some of it will really make a difference, I think. I completely agree. And one of the things that I appreciate very much about Ethan is that, as you mentioned before, he's trying these things out. He's encouraging other people to try these things out. So that, I think, also helps, you know, make people more comfortable. Okay, well, let me play with this.
Starting point is 00:56:27 Let me try this out, which is, I think, you can correct me if I'm wrong on this, but that's a little bit different than what we've had in previous revolutions because the barrier to entry is lower now than it's been before. Oh, 100%. Which, you know, you talked about your daughter and how she's using things, right? Like this is, everyone has access to, well, not everyone, but we're working on making these tools accessible
Starting point is 00:56:50 to as many people as possible. And that, I think, you know, means that the excitement that can come around this and the education, as Ethan was talking about, that can come around this, has so much potential separate from the technological advancements. Yeah. Yeah. I mean, look, I do think the power of the things that people have
Starting point is 00:57:17 access to that are literally freely available as long as you've got an internet connection and access to a computer. And like, those are two big things that we shouldn't take as given. We still live in a world where a lot of us probably do take them for granted, but internet access and having a computer, whether it's a smartphone or a PC, is not something everyone has by a long shot. But if you do have those things, like if you're connected, like the power of the tools that you have free access to
Starting point is 00:57:51 is just extraordinary. Like they have never been more powerful or given you more capability to do more things. And like that, to me, is exciting. So yeah, more people have these superpowers now than have ever had them before. Absolutely. And I'm glad that we have leaders and educators like Ethan who are out there helping encourage and unpack what these things mean and, you know, help propel things forward. Yep. Agree. All right. Well, that is all the time that we have for today. Huge thanks to Ethan Mollick for joining us. If there's anything that you'd like to share with us, you can email us at
Starting point is 00:58:28 behindthetech at microsoft.com. And you can follow Behind the Tech on your favorite podcast platform, or you can check out our full video experience on YouTube. Thanks for listening. See you next time.
