TED Talks Daily - 3 possible futures for AI — which will we choose? | Alvin W. Graylin, Manoush Zomorodi

Episode Date: January 20, 2026

After decades working in technology across both the US and China, Alvin W. Graylin sees three possible paths for the future of AI: one where tech giants create a class of trillionaires, one where competition escalates into war, or one where humanity builds and shares this technology for the common good. In conversation with TED Radio Hour host Manoush Zomorodi, Graylin cuts through the hype to clarify how we choose the right path. Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:07 You're listening to TED Talks Daily, where we bring you new ideas and conversations to spark your curiosity every day. I'm your host, Elise Hu. After 35 years working in technology across both the U.S. and China, Alvin W. Graylin sees three possible paths for the future of AI. One where tech giants create a class of trillionaires, one where competition escalates into war, or one where humanity builds and shares this technology for the common good. In this conversation with journalist and TED Radio Hour host Manoush Zomorodi, Graylin cuts through the hype to clarify how to make sure we choose the right path. Alvin, you have been in this field, AI, cybersecurity, VR, semiconductors, 35 years you've been doing this. But what makes you very different is that it's been both in the United States as a U.S. citizen and in China a lot of the time.
Starting point is 00:01:10 I think a lot of people feel ambivalent about AI. They feel like, what is actually really happening? What is hype, and what is transforming our existence? Where are we right now, according to you? I mean, this is one of the biggest questions that we have as a society today, and unfortunately, there's just a lot of misinformation. And my answer to you is probably going to be a little different
Starting point is 00:01:34 than the Silicon Valley consensus, even though I work at Stanford, and it's going to be probably a little scary to a lot of you. But hopefully by the end of this, it will convince you to take action, just like the little note I saw at TED said: what action are you going to take after this event?
Starting point is 00:01:52 We are really at this inflection point, and not the traditional inflection point that just keeps going up. We are essentially at a fork in the road between three possible futures right now. One where the big labs essentially take control of the government by growing their power and resources as much as possible,
Starting point is 00:02:15 then creating essentially a class of trillionaires and everybody else. This is kind of the Elysium future that's ahead of us. The second option is that we are actually heading towards a Mad Max future, where we intensify the conflict between countries, going from AI race to AI war to kinetic war and potentially to nuclear war. And I've talked to people in D.C. who actually see that as inevitable, which is a little scary.
Starting point is 00:02:43 And the third option that we have right now is potentially the Star Trek option, the option where technology is used and shared. In the Star Trek stories, essentially the Vulcans, a peaceful, rational species, bring us advanced technology, save us from ourselves, and bring on this century of discovery,
Starting point is 00:03:06 or millennia of discovery. We have the potential to get there. Unfortunately, today we are heading towards the first two, and given the forces driving that, it's actually going to take a lot of work for us to move from the first two towards that last one. Can we get into that a little bit more?
Starting point is 00:03:27 Because I think the narrative we've all been told, at least certainly by Sam Altman and maybe some other AI executives, is that we've got to lock this technology down, we've got to grow it, we've got to grow it fast, because if we don't, China will. Would you agree with that? That's actually one of the biggest myths out there, and actually one of the most scary things out there.
Starting point is 00:03:48 In fact, two days ago, I just came back from China. I've worked there half my career, and I think essentially the AI industry today is using the same tools that the military-industrial complex has used over the last century, in terms of you have to create an enemy. Once you do that, then you get funding,
Starting point is 00:04:06 you get support, you get deregulation, you get to move faster, and then you get to make money. And what the AI labs are actually trying to do is not to save the world. It is actually to create billions, actually trillions of dollars. In fact, they specifically said AI is worth trillions of dollars, and they want to be the first one to create AGI, artificial general intelligence,
Starting point is 00:04:27 and it's defined actually by Sam as a technology that can replace the average worker. And what that means is he wants to create a technology that can take everybody's jobs here. Now, on the surface, that actually may be scary, but I think if it's coming from the right place, it actually could be an amazing thing because that means we get liberated
Starting point is 00:04:47 so that we can spend time doing art and music and coming to TED. But unfortunately, I think right now, the other side of the story isn't being told, which is: how do we protect the people who are going to be displaced by it? Okay, so I mean, despite what we've just talked about so far, Alvin is actually an optimist.
Starting point is 00:05:09 He is, I promise. Explain the vision that you have come up with about how we take the right track, that we take this moment of inflection, and we actually pivot in a good way. Yeah, so I actually just turned in the paper to Stanford, which is an AI policy paper about what we need to do going forward
Starting point is 00:05:30 and how we move from today's trajectory into something better. And it's a three-part story, which sounds simple, but it's actually very hard to execute. One is we actually have to decide that instead of competing over resources and creating hundreds of labs around the world, duplicating, actually,
Starting point is 00:05:48 the same work, and having an undersupply of chips and memory and talent. Rather than doing that, we need to come together and create what some people call the CERN of AI. Essentially a single lab that aggregates all of the talent around the world. Like the space station. Like the space station, like CERN, like the ITER labs that we've done for other types of technologies. It is very doable.
Starting point is 00:06:12 And then whatever comes out of it, rather than hoarding it for one company or one country, we take it and share it with the world, which is the whole idea of open science. This is what's made progress in this world happen. For open science. Yes, TED crowd. All right.
Starting point is 00:06:28 Nerds, I love it. Yes. And then two is that we need to put together everybody's data from around the world so that we're not creating, in fact, the thing that a lot of people want to do today, which is something called sovereign AI, which means an AI that works for your country, your culture, and represents you. And it essentially has a subset of data feeding into it. And it sounds like, okay, that's good, because I have something on my side. But what research is showing right now is that the less data you give,
Starting point is 00:07:00 the more biased these AI become. And what we really need to do is to make sure that the entire world's data is represented, all of our history, all of our languages, all of the culture, because then the AI can come in and create an optimum for everyone, can find a way to balance everybody's needs without taking other people down. So how are we going to convince people to do this,
Starting point is 00:07:21 technologists, governments, to go along with this? That's the hard part. I think the thing is we need to understand, or we need them to understand, that the world is not zero-sum, and that working together is not weakness; working together is enlightened self-interest. Because when you work together, you actually raise everybody
Starting point is 00:07:41 up. And when you raise everybody up, there's a lot less reason to have conflict, a lot less reason to have my children fly 10,000 miles around the world to kill your children. Why would I need that when this technology is going to give me everything around me? Because this is amazing technology. It's going to solve cancer. It is going to bring us better energy sources, it's going to solve hunger, all these things. But we have to choose to share it with the world, and we have to choose to use it for humanity's good, not for one country's good. The third part of the plan is something called the GI Bill for the AI age.
Starting point is 00:08:16 So why did I say that? Because in 1944, '45, there were about 15 million American service people coming back from World War II, and they were going to create a giant employment shock, because they were going to come home and be unemployed. What did America decide to do? The government said, hey, we're going to give you free education.
Starting point is 00:08:36 We're going to give you free medical, and then we're going to help you essentially buy homes because that's what's needed for people to have secure lives. And it created the American middle class. It created a boom in our economy and turned us into what we are today, which is the most successful and most powerful nation in the world. We can do that again, but not for 15 million people,
Starting point is 00:08:57 maybe for 150 million people, maybe for 1.5 billion people, because America has 170 million workers. And the displacement we are seeing could reach the proportions people are predicting: 100-plus million people affected just in this country. And globally, it will be billions of people. And we have to take care of them,
Starting point is 00:09:17 because if we don't, this world is not going to be a very good place for us to hang out in. Okay, that's a lot to take in. I do want to give us something actionable, right? Because it can feel like, oh, this AI thing is happening to us and that it's inevitable. But what can we do? Like, when we walk out of here?
Starting point is 00:09:36 I think what you need to do is actually start to change your mindset, to start to understand that the world is not zero-sum, and you actually have a responsibility as business owners. Most of you own businesses or work in very senior positions in businesses. You need to think about how your company integrates AI, not in a way to replace people, but in a way to make things more efficient. And rather than saying I'm going to lay off 30 percent of my staff,
Starting point is 00:10:00 which some companies are doing, because recently I've talked to 50 companies in the last two months about how they were implementing AI, and a lot of them are saying, I'm going to just replace my people. Instead, give them four-day work weeks, give them reskilling into other roles. We need to reduce the shock of what this technology is going to do to our society. The prior industrial revolutions took 80, 60, and 40 years to play out.
Starting point is 00:10:19 This one is going to happen in the next five to 10 years, maybe shorter. And our society is not equipped to move at that speed. So play with the models, see what they're like, know what these companies are talking about? Do you recommend that? Oh, you have to do it. You have to actually use these models
Starting point is 00:10:37 because you'll hear people say, oh, this thing is not that scary, these things will never replace humans. The reality is, the more you use it, the more you understand how powerful they are and how quickly they're changing every day. And if you don't use it, you won't understand it. Alvin Graylin, thanks for giving us a glimpse into our future.
Starting point is 00:10:54 Thank you, Mnich. That was Alvin W. Graylin in conversation with Manushe Someroti at TED Next in 2025. If you're curious about Ted's curation, find out more at TED.com slash curation guidelines. And that's it for today. Ted Talks Daily is part of the TED Audio Collective. This talk was fact-checked by the TED Research Team and produced and edited by our team,
Starting point is 00:11:22 Martha Estefanos, Oliver Friedman, Brian Green, Lucy Little, and Tonica Sung Marnivong. This episode was mixed by Lucy Little. Additional support from Emma Tobner and Danielle. Bella Baleroza. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.
