In Good Company with Nicolai Tangen - HIGHLIGHTS: Ethan Mollick

Episode Date: June 13, 2025

We've curated a special 10-minute version of the podcast for those in a hurry.

Here you can listen to the full episode: https://podcasts.apple.com/no/podcast/ethan-mollick-ai-urg...ency-leadership-responsibility/id1614211565?i=1000712377483&l=nb

Which companies will lead and which will be left behind as AI transforms the way we work? Nicolai Tangen connects with Ethan Mollick, Wharton professor and author of 'Co-Intelligence: Living and Working with AI,' to explore how organizations can harness AI's revolutionary potential. They discuss the growing adoption of AI tools across workforces, proven tactics for driving company-wide implementation, the rise of autonomous AI agents, and why traditional training approaches may be missing the mark. Ethan reveals insights from his research showing that AI works best as a collaborative teammate rather than a replacement. With AI capabilities advancing faster than expected, organizations face increasing urgency to act.

In Good Company is hosted by Nicolai Tangen, CEO of Norges Bank Investment Management. New full episodes every Wednesday, and don't miss our Highlight episodes every Friday.

The production team for this episode includes Isabelle Karlsson and PLAN-B's Niklas Figenschau Johansen, Sebastian Langvik-Hansen and Pål Huuse. Background research was conducted by David Høysæter and Yohanna Akladious.

Watch the episode on YouTube: Norges Bank Investment Management - YouTube
Want to learn more about the fund? The fund | Norges Bank Investment Management (nbim.no)
Follow Nicolai Tangen on LinkedIn: Nicolai Tangen | LinkedIn
Follow NBIM on LinkedIn: Norges Bank Investment Management | LinkedIn
Follow NBIM on Instagram: Explore Norges Bank Investment Management on Instagram

Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 Hi, everybody. Tune into this short version of the podcast, which we do every Friday. For the long version, tune in on Wednesdays. Hi, everybody. Nicolai Tangen from the Norwegian Sovereign Wealth Fund. And today I am here with Ethan Mollick, one of my favorite professors, a professor at Wharton, who not long ago came out with a book called Co-Intelligence: Living and Working with AI. And actually you can see it behind Ethan there, down to the right.
Starting point is 00:00:31 If you haven't got it, run and buy it. Ethan, if you were a chief AI officer in a company for the next three months, what kind of top actions would you take straight away? So I think that the most important thing is to get people actually aware of where the state of the art in AI is. I talk to companies all the time,
Starting point is 00:00:52 and I think that a lot of executive level people may have tried AI a while ago or didn't use it personally and don't realize how potentially transformative it is. And I think that I have a sort of general idea that you need to involve your team leadership, you need to involve a set up a lab that's doing research and you need to think about how to roll this out to the crowd to everybody in the organization. So you've got to kind of bring the whole company with you which is not always an easy thing to do.
Starting point is 00:01:17 How are the best companies going about this? So I've seen some really interesting examples of how this works. I can tell a few of the stories, I can't tell all of them. One example is radically changing incentives. So I've seen companies that offer a $10,000 bonus at the end of every week to the employee who best uses AI to automate their job, right? And they think they're saving money versus other approaches. I've seen people build this into their hiring process. So before you hire somebody, your team
Starting point is 00:01:45 has to try and use AI to do their job. And then you adjust your request for hiring based on that experience. Or before you request money, you need to show how you're using AI to do it. Moderna has this really great example that they put together. They use AI for everything.
Starting point is 00:02:01 And what they did was build around the process of annual reviews. So basically, they built a whole series of GPTs that help people uncover their own performances, improvement needs, and what they've done over the year, and talk to the right people about their jobs and things so they can write a really good yearly update about themselves. And they said, well, if you don't use these GPTs, you're probably not going to do as well on your performance reviews, and that will hurt your internal salary.
Starting point is 00:02:27 And everybody ended up using this series of things which introduced them to AI. So putting these bottlenecks in place where people have to use it, thinking about building into internal processes in a way that encourages positive use rather than negative use, those tend to be really effective methods. So you need a combination of the stick and the character.
Starting point is 00:02:46 I think you do, and I think you also need role modeling, right? A leader who uses AI will make sure AI seems critical. Someone who doesn't use AI and says use it is kind of be a problem. Will the top performers, well, will the people who are top performers without AI be the top performers with AI?
Starting point is 00:03:05 So this is one of the biggest questions we're facing. If you think about it, there's four possibilities for what happens in an AI world on skills. So the first effect that we saw, and we saw this in our study, the Boston Consulting Group study I did with my friends at Harvard and MIT and University of Warwick, where we found big performance gains, and this came out like a year and a half ago, kind of made a big stir, 40% improvement in quality for people who use GPT-4 versus not, big speed improvements. And a lot of other studies like that have shown a leveling effect. So bottom performers get the biggest boost. When you really look at what's happening, it's actually the AI doing the work with the bottom performers, right?
Starting point is 00:03:43 So the AI is pretty good, so it moves everybody up to the eighth percentile. So one option is it boosts the bottom performers. A second option that could exist simultaneously is the idea that top performers get some sort of massive returns. We have a couple of studies show that, but not that many. It's hard to study. There's actually a one of the best pieces of evidence for that was actually turned out to be a fraudulent paper from MIT that didn't exist, right? But I think there's a lot of suspicion that top performers using AI can get a huge boost, just harder to measure. So there's a possibility that maybe there's a hundred times return if you're already a good coder. And we'll know more about that in the near future.
Starting point is 00:04:19 There's also a possibility that, you know, that AI lifts everybody up. So everybody's performance goes up by a similar amount. And then there is this sort of other possibility that there's AI whispers who are just good at AI, and they're the ones who get all the returns. So we don't know whether it's concentrated in the lower end, on the top end, whether it lifts everybody up,
Starting point is 00:04:37 or whether there's just sort of magical AI whispers who are just built to do this. And then agents are coming to replace everybody if you listen to the AI labs. To which extent is it now being used for CFOs trying to cut costs and to which extent is just amplifying power and helping us to do things better? So this is where companies get to make choices. And one of the things I worry about with AI is if the leadership isn't well informed in companies about how they work, they view this as another normal technology in the sense
Starting point is 00:05:08 of like, this is a cost cutting measure. So I can increase productivity by 20% so I can fire 20% of my staff. I think there's two things that worry me about that approach outside of any sort of moral or other kinds of concerns you might have, which is that first of all, no one knows how to use this, right? There is no off the shelf product that just does things for you with AI yet. They'll come, but you have to figure out how to use it inside your own company.
Starting point is 00:05:30 And doing that requires you to actually have experts figure out how it's used and the experts of your own organization. Your HR department, your R&D. So if you start firing people for using, you know, because AI makes it more efficient, everyone just stops showing they're using AI and you're going to be in trouble. So I think there's some danger in making a cost cutting move right away. That doesn't mean people aren't doing it.
Starting point is 00:05:50 The second big danger for me, cost cutting is if you believe we're on the edge of a real revolution in how work gets done, which I do, then the idea that you're going to slim yourself down. So if I get 20% performance improvement, I'll cut 20% of people. It feels like a really bad solution in a world where everybody else is going to have 20% performance gain overnight. And so I think that organizations that are in growth mode will tend to outperform those who are using this as a cost cutting technology. But we don't have all the models yet. People are still figuring this out.
Starting point is 00:06:21 It's interesting because there are very few cases where a compliance officer can kill a company. I mean, here, if you hold back the usage, you kill your company because competition is just, you know, pulling apart by 20% a year. And within two years you're dead. I mean, I think that that urgency you feel is, it's really interesting. I talked to lots of executives and you see this light switch go off for them. A lot of them are treating this as like they put this down seven levels of their organization or they've hired a consultancy
Starting point is 00:06:52 who's gonna produce a report on their AI readiness. And then you see the executives who kind of get it and there's just night and day because once you get what's happening here, it's very hard to not feel urgency and to not be anxious about resistance everywhere. In our previous conversation, that's one of the things that struck me was that feeling like, oh, this is the big one and we need to figure this out.
Starting point is 00:07:14 And organizations that haven't put that on the list aren't going to be in trouble. What proportion of companies have got it now, you think? I am surprised by how quickly the religion is spreading, but not as many as you think. I talked to a lot of top executives. I would say it's gone from two or three percent of people getting it to 20 percent of executives in a lot of the firms that should feel urgency feeling it. That's a pretty big increase in a short time. This technology is remarkably rapidly adopted.
Starting point is 00:07:44 People move from one company to the other, right? And it seems like these models are not ahead for a long period of time. They are being overtaken all the time by other things, right? Is this something that will continue? So a lot of questions of the future are unclear. I think, you know, so the frontier models, the best models at one point, there's only a few companies that can afford to make them at this point. And so generally, you want to stay close to one
Starting point is 00:08:08 of the model makers. So the people who make frontier closed source models are OpenAI and Anthropic and Google by and large. There are some other options out there, but those are the three big closed source ones. Generally, if you go with one of those, they're going to stay in the frontier for the foreseeable future.
Starting point is 00:08:25 There's not a reason to suspect they're going to, they might fall four months behind for a little while. If that matters to you, that's what your lab is supposed to be doing in your company is like, how good is the new model? Should we switch over? Somebody else has to be doing that testing 24 seven.
Starting point is 00:08:37 Another thing that always surprised me in companies is how few of them have people assigned 24 seven to just working with AI. Like it just, there's lots of other departments that work on things, but there's very few people whose job is to stay on top of these things. So you're in the lead, you get all this stuff, you're invited to pre-releases. When you now look into the future here, what's been the biggest surprise for you lately? So I think the biggest surprise for a lot of us has been this idea of reasoner models
Starting point is 00:09:09 that you kind of see here, right? So I showed you a little bit of this as an example earlier, but it turns out that models that sort of think out loud outperform those that don't. And this very kind of simple trick has increased the ability of AI by a tremendous amount. So I think the capability curve is coming faster than I expected it to. And that's been a big surprise. And then the other side of it that's been a big surprise is how fast adoption has occurred. So this is a very fast adoptive technology, according to any historical precedent.
Starting point is 00:09:39 We're probably up to a billion people using, you know, chat GPT at this point. The last numbers they released were somewhere between 500 million and a billion people. There's another few hundred million using other models. Like this is an insanely high adoption rate for technology that sometimes doesn't work or you know is weird to use and where we don't quite know what it's good or bad for yet. And so I think that the speed of adoption and the speed of capability gain are both faster than I thought.
