a16z Podcast: Machine Intelligence, from University to Industry

Episode Date: January 11, 2017

From the significance of Google DeepMind's AlphaGo wins to recent advances in "expert-level artificial intelligence" in playing an imperfect/asymmetric information game like poker, toys and... games have played and continue to play a critical role in advancing machine intelligence. One of the pioneers in this area is the Alberta Innovates Centre for Machine Learning, now the Alberta Machine Intelligence Institute (amii), which in 2007 solved the long-standing challenge of checkers, and in 2015 produced the first AI agent capable of playing "an essentially perfect game" of heads-up limit hold'em poker. But what does that mean for the evolution of such technology out of play and into production? Out of universities and into industry? (Especially when many such university programs and their talent are being hollowed out by companies, and the programs are reliant on intellectual property or provincial support, as is the case for this University of Alberta-based institute.) And how can CEOs and others get up to speed on this tech somewhere in between? So... what will it take to make AI "real"? What about genetic algorithms, treating computers like people, and other near- and far-future possibilities? This episode, featuring Amii executive director Cameron Schuler and Frank Chen, operating head of the a16z deal, research, and investing team, covers all this and more. The conversation was recorded as part of our inaugural a16z Summit event. image: Nyks / Wikimedia Commons

Transcript
Starting point is 00:00:00 Hi everyone, welcome to the a16z Podcast. Today's episode is on machine learning, deep learning, and AI; the role of university research; and making AI production-ready for industry, and for games. The conversation, moderated by a16z operating partner Frank Chen, took place at our recent a16z Summit event. And it features Cameron Schuler, former entrepreneur and operator, and executive director of the Alberta Innovates Centre for Machine Learning, now known as the Alberta Machine Intelligence Institute. It's based out of the Department of Computing Science at the University of Alberta in Canada. And long-time listeners of the podcast will also recall we did a podcast road show in the UK with researchers there as well. The Institute does R&D in all things machine intelligence. In 2007, they solved checkers, which was a longstanding challenge for AI researchers. And in 2015, they produced the first AI agent capable of playing an essentially perfect game of heads-up limit hold'em poker. If you look at some of the big successes that the Institute has had in machine learning, a lot of them sort of cluster around playing games. So you guys solved checkers before chess. You have a pretty good solution for poker.
Starting point is 00:01:06 You were the first ones to do reinforcement learning on Atari games, right? Which Google later popularized. Yes. And, you know, obviously Google has gotten a lot of attention recently with the AlphaGo win, and then StarCraft will be the next big battleground. I can't wait to see human versus Google AI on StarCraft. So maybe take us back and talk to us about why AI researchers gravitate to doing research on games. Is it just a toy? That's a good question. We do like to have fun, just to be clear, and we're in a university. So, a couple of comments, actually. When DeepMind got bought, half the people there were actually Canadian-trained, and roughly 20 or 25 percent were our students. So they did take
Starting point is 00:01:48 things like Atari. They took that with them. AlphaGo: I think roughly 45 percent of the research cited in their AlphaGo paper came from the University of Alberta, so a very strong connection. And Rich Sutton, who's the father of reinforcement learning, literally wrote the textbook; you can get the second edition off his website even today. He was one of the supervisors of David Silver. So back to your real question, which is: why games? If you think about how we learn as humans, games are actually pretty important. Our goal is to have computers make good decisions in ambiguous environments. Games have fairly low risk. Nobody dies, usually. They're a great petri dish to actually do discovery.
Starting point is 00:02:29 And so checkers is the largest game to ever be solved. It's five times 10 to the 20th positions. And you cannot beat it: it'll either play to a draw or it'll win, every single time. In 2007, Science named it one of the top 10 discoveries of the year. The Atari thing, I won't say it was a lark, but, you know, a bunch of us grew up playing Atari games. And Mike Bowling said, hey, let's just see what we can do here.
Starting point is 00:02:52 And a lot of what we focus on is unsupervised learning. So if you take a look at deep learning, deep learning has been incredible in terms of what it's delivered for commercial applications. But deep learning still relies on labeled training sets, right, which has some constraints around it. For what we do on the unsupervised learning side, a bit of background: reinforcement learning is where you take a system and give it the ability to modify its own behavior to maximize its reward. And so, one of the neat things: if you take a look at poker, poker to me was one of the biggest advancements. I'll come back to Atari in a second. The reason being, if you play chess or checkers or Go, you can see the whole playing board.
Starting point is 00:03:35 You can't tell what the other person is thinking, but you can actually see the board and do all your scenario analysis. Now think about playing chess in the dark, where you can't. That's poker. At most, you're going to have 15% of your information available at any one point in time. So you need to infer what's going on from the betting. The program that actually won heads-up limit poker is something like 26 terabytes. It's pretty enormous.
Starting point is 00:03:56 You can download it, but good luck. You can play against it online too, by the way. But in that particular case, you have a whole bunch of obfuscated information, right? Because you can't see everything, whereas in all the other games you can. So Atari was another one where you actually have a score, and the system could just try playing. And that was kind of neat; we're actually beating some of the DeepMind algorithms that they have right now, roughly four times faster, I believe.
Starting point is 00:04:22 But it's just the ability to go and play. And eventually the system figures out good behavior and bad behavior, and it's truly unsupervised learning. There's a couple of games that it actually can't play; I don't remember what they are right now. But nonetheless, out of, I think it was about 90 games, the system will play about half of them.
Starting point is 00:04:41 And then what about this sort of criticism that even if you've mastered a board game, you have sort of this question of: so what? I didn't really want to play Go. I wanted to book a plane ticket by talking to somebody, or I wanted a good recommendation to come up when I'm shopping. Are there really lessons that we've learned mastering checkers or poker that are going to be applicable to real-world systems that broad sets of people use? So the answer is there are, and it's around decision-making, right?
Starting point is 00:05:05 So you've got a system that's trying to make a decision, and what is the best decision you can make? I mean, in capital markets and things like that, there are obviously applications you can move across to. But right now, you just can't engineer systems big enough to do a lot of that stuff. They have to learn on their own, and that's where the unsupervised learning piece comes in. And so the ability to learn how to do better unsupervised learning is ultimately where this stuff is going to come from, I think. Amii initially got its grant money from the province. Yes. And then you've also forged partnerships with corporations.
Starting point is 00:05:35 So tell us a little bit about how that's worked. Do they fund specific research? Do they send their employees? How does that work? So the answer is yes, but I'll expand on that. Yeah. So our industrial partners tend to be really big.
Starting point is 00:05:50 One of them is 80 billion, 180,000 heads. Another one's 40 billion, 130,000 heads. In some cases, they do send people. So we actually have some visitors who are learning how to do unsupervised learning. They do pay for research. And we have pretty flexible models. So we're one of the only places on the face of the earth that negotiates its own IP. I mean, if you have an open IP policy like some Canadian universities do,
Starting point is 00:06:12 where the professor can go and do what they want, most other ones have the university handle it in some way. We actually negotiate our own. It only took five years to get in place; it was an easy process. But, you know, it's something that allows us to be a lot more free. So when we're taking a look at our development model, it's very voice-of-customer driven. There are some cases where they're going to get all the IP.
Starting point is 00:06:32 So we have the ability to do direct consulting. We've got all sorts of methodologies to deal with business. In some cases, we end up owning the IP, we want to commercialize it, and they become a partner. So it's really a broad swath of what we can do. And looking back at the interactions you have, so you're rooted in a university, and you have these corporate partnerships: what's worked, and what's not worked as well? So there's a lot of threat to internal teams.
Starting point is 00:06:57 So I think, you know, when I look at what we do and you talk about data science, they're fairly different. I ran a financial planning and analysis group for Intuit. We did lots of data science, but it's nothing like what we do on the machine learning side. So we've certainly had cases, with one of our industrial partners, where their internal team was incredibly threatened. And so we only did part of the project, but it actually set up a good foundation for them to do the rest. There's lots of cases of companies having bad experiences with all sorts of universities. So that's always an impediment as well. So the ability to make that seamless and take a lot of the friction away works well. It's like any other project: you need shared
Starting point is 00:07:34 vision. You've got to know what you're building. You've got to know what the outcomes are, and all those other things. So it starts out with a good project plan. We'll bring in people who are domain experts, because I believe very strongly in that. We'll also bring in people on the project management side who can really drive a project. And we've even hired staff to work on stuff versus using students. So one of the things that we've been watching over the last five years is, if you think about the anchor tenants of the tech ecosystem, you've got Google, Apple, Facebook, Amazon, and they are clearing out the AI and machine learning departments of universities all over the place, right? So Uber shows up at Carnegie Mellon and says, well, I'll take them all, right? Who wants to
Starting point is 00:08:10 come? And so what are you thinking about the long-term implications of this sort of hollowing out of computer science departments? Will there be anybody left? Is this just a shift in the way that we're going to do fundamental scientific research, which is, instead of research grants and NIH and NSF, it's going to be Google and Apple and Facebook funding it? Is this a threat? How do you think about it? So I think we're the only machine learning group that hasn't been touched. We've had the same professors for more than 10 years, other than the ones we've recently added.
Starting point is 00:08:42 It's been a pretty constant group. So I think it is problematic. Part of the discussion we have, and we were having this discussion earlier: when apps came along, at the beginning, nobody knew how to do them. So there was high demand, a lot of people getting paid tons of dough, and it went away pretty quick, because you can learn it fairly quickly. The analogy I use is that there are roughly 3 million people a year who play football in the United States. Fewer than 2,000 are professionals. It's quite the disparity. You can take somebody and teach
Starting point is 00:09:12 them how to use machine learning, but if you want to solve the really, really difficult problems, it's a career. And so think about what happens if you start losing more and more people from academia. There's another Canadian professor, Yoshua Bengio at the University of Montreal, who's consciously decided not to leave for that very reason. And our guys are like that too. So ideally, if you ran things like Bell Labs or GTE, you could do a lot of really interesting research. My background was capital markets. There are way too many MBAs in the world, and I'm one, so I can definitely make fun of them. But what happens is: what have you done for me lately, right? You can't plan for 10 or 15 years down the road. So eventually you're going to hollow out
Starting point is 00:09:49 all the creativity, right? It's not going to exist, because you need to be able to have those product roadmaps of where you need to be in 5, 10, 15, 20, 30 years, and really have that vision for the future. And if you look at NASA and universities, they've traditionally funded stuff that nobody else is going to touch. And I'm afraid of losing that. I think Google's a little bit of an exception. You know, from my perspective, their risk is that 80% of their revenue is generated by advertising. They are going to get disrupted in that at some point, so they'd better find the next thing they can generate all that cash from. They need to reinvest, so they have a bit of a longer-term view. I believe in patents. My medical device company has
Starting point is 00:10:23 patents. So it's not that I believe you can't profit from this and still be beneficial to society. But I think there's a huge risk that there won't be anyone left to train the people who can solve the difficult problems. The counterargument would be: if you look at Google or you look at Facebook, these are run by executive teams that have very long-term vision, right? One of the other things Zuck is running is planes that will beam the internet to rural areas of third-world countries. So these are very long-term-thinking executives. And so if artificial intelligence is as important as we all think it is, we might as well
Starting point is 00:11:01 have these very long-term-thinking executives fund it. And that would be a reasonable supplement or replacement for what universities are doing. So respond to that. Yeah. So I have two points related to that. One is, you've named two companies, and there's maybe five doing that. Maybe some of the Chinese companies are too, right?
Starting point is 00:11:19 Yeah, we don't really know what a lot of them are doing. They're not necessarily as transparent, which also means they're probably quite disruptive. But the second part of it is, I think the people who could really win at AI are the games companies, the Electronic Arts and groups like that, and there it's, where's my bottom line, right? BioWare, for example: I went to grad school with one of the founders. They did interesting things. BioWare is still in Edmonton. But one of my former staff from Intuit talked to me and said, you know,
Starting point is 00:11:44 unless I can show a case for what's going to happen tomorrow... So I think the risk is that it's very concentrated at that point. The second piece of this is that there are other companies that could win in this space, but they don't have the vision to do it. So I think you need that ability to dream, and the ability to execute on it, without the risk of failure being the end of your career and never being able to work again. Why don't you share some of your favorite projects going on at Amii right now? So we have quite a few.
Starting point is 00:12:09 We have Meerkat, which is social network analysis. It's got a temporal component
Starting point is 00:12:27 to it, so it's about data relationships. We have another one called PFM scheduling, and it's around workforce optimization in healthcare. We just launched it last month. When I think about machine learning, I think about automation and optimization.
Starting point is 00:12:42 Any place where you can actually apply those things, we're usually working on something in those spaces. I do think that anything data-centric is the crown jewel of any company. And I mean that: at my device company, for example, we capture
Starting point is 00:13:13 about a gig of data in 10 seconds, and we post-process it. Now, think about having hundreds of thousands of people. You find a new pathology; now you can go back through and do it again, right? So there's lots of things that we're touching in lots of different spaces. So we actually met recently in Toronto at a machine learning event. And at that event, the government kindly awarded, I don't know if they were medals or... I haven't seen it. Something, yeah, to some of the deep learning giants whose shoulders we're standing on now. So: Yoshua Bengio, who you mentioned; Geoff Hinton; and Richard Sutton, who works in Alberta. I want you to talk a little bit
Starting point is 00:13:53 about Richard and reinforcement learning, because that's super interesting and relevant to a broad audience. But I'm going to go a little off script here. Let's talk about this, because you have a medical device company. So one of the things Geoff Hinton said during his remarks at the award ceremony was that we should stop training radiologists right now. He said, look, we should stop training radiologists. It takes five years to train a radiologist, and in five years, deep learning will get better results than a trained radiologist. So we should stop training them right now. So I put that on Twitter, and there was a lot of hate mail from that, all along the lines of: just wait until you get sick.
Starting point is 00:14:16 And so you have a medical device company. You have neural network technology inside that's analyzing the gig of data. What's your take on this? Well, so there's a couple of components. I think probably for 25 years, computers could have done a better job in terms of imaging, using training data from radiologists, right? And I don't want to diminish the value of radiologists. But if you think strategically,
Starting point is 00:14:27 why have they been able to hold on? Where I live, you have to have a radiologist to get reimbursement. So good luck, right? I mean, I think... Follow the money, as they say. Yeah, definitely.
Starting point is 00:14:45 I could see technology changing a lot of things. I hope it does. There's this transition from something that is currently done by humans to something that's automated, and everyone's threatened by that. I think looking at this stuff as human augmentation is really where it needs to be. How can I do my job better?
Starting point is 00:15:07 And if you bring it back to patient care, the doctors will go, of course I'm here for patient care. That's why I got into this field. If you're in it for money, you probably didn't make it through, right? So I think those cases are there. I did ask Geoff a question a year earlier. And this is an FDA issue. So you take data, you translate it, and now you give a result on the other end.
Starting point is 00:15:38 The reason why the FDA has such stringent software controls is because of a Canadian company: they invented radiotherapy machines, and they killed some people, not intentionally. So I said, okay, you have to be much more rigorous. You have to actually give us the causality of what happens. With deep learning, you can't do it. They're black boxes. They're black boxes. Would you fly an airplane run by a black box, right? I mean, any time you look at a regulated environment... So Geoff's response to me was, well, humans don't do it very well either. Exactly. We are also black boxes. We are, right? But I actually don't care.
Starting point is 00:15:54 Pretty error-prone black boxes, as it turns out. Definitely. So I think it would be nice to see the FDA specifically adapt to these things, but I think there are challenges around that. So technology in most cases is probably better than humans. Well, let's talk about Richard. So he invented this branch of machine learning
Starting point is 00:16:11 called reinforcement learning. And tell us a little bit about his background. He did much more than that! And what he's interested in these days. And then we'll take questions from the audience right after this. Yeah, so Rich is a fellow of the Royal Society of Canada. He is an American, but also a Canadian citizen. He's a fellow of AAAI.
Starting point is 00:16:34 He's a winner of the President's Award of the International Neural Network Society. He had no idea until yesterday. He has 39 years of experience in reinforcement learning. And where it came from is that he actually has a psychology degree; you couldn't get computing science degrees when he went into it. According to the Allen Institute for AI's Semantic Scholar, he's the most highly cited researcher in reinforcement learning, and the 11th most influential researcher in all of computing science.
Starting point is 00:16:49 And his textbook on reinforcement learning was ranked as the single most influential publication in all of computer science. So that's a bit of background on Rich. Rich's goal has always been to solve AI. He started out in high school: he wrote a letter to Marvin Minsky, most of you will probably know that name, saying, how do I do this?
Starting point is 00:17:12 And he got a letter back saying, good luck. Many people would blame Marvin Minsky for us not taking deep learning more seriously, right? Because he was on this other branch. He's like, symbolic AI is definitely the way to go; that deep learning thing is a dead end. And so, like, generations of students got discouraged from deep learning. Yeah, and I mean, I would agree. So Rich, really, it came from
Starting point is 00:17:51 psychology. You know, how do we learn as humans, right? You learn something's hot, you burn your hand, you transfer that. There's a bunch of things that he's coined. One is on-policy learning and off-policy learning: on-policy is I learn by doing; off-policy is I learn by seeing what Frank does, that sort of stuff. So it's kind of taking that field and adapting it. So you reward computers for good behavior and penalize them for bad behavior. And you can also learn under normal circumstances, in normal operating conditions. Let's say you have a nuclear facility: you really don't want a bunch of bad things to happen in order to figure out what's bad. So it's the ability to learn in real time and adapt, not using training data.
Starting point is 00:18:13 So you need to be able to take that stream in. There's also another field called online learning: take that stream in, interpret it, forecast, and then go back out again. And then temporal difference learning, which is, you can actually learn... let me put it this way: you can learn from a guess. What I'm saying is, if you have a forecast, it's likely informed. If you include that as well as your historical data, you actually get better results.
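A worked miniature of that "learning from a guess" idea, in the style of a standard textbook example rather than anything from Amii, with parameters invented for illustration: in temporal-difference learning, the value estimate for a state is updated toward the immediate reward plus the current guess for the next state, instead of waiting for the final outcome.

```python
import random

# Classic five-state random walk: start in the middle; stepping off the
# right end pays 1, off the left end pays 0. True values are 1/6..5/6.
N = 5
values = [0.5] * N        # initial guesses for the non-terminal states
alpha = 0.1               # step size

for episode in range(5000):
    s = N // 2
    while True:
        s2 = s + random.choice([-1, 1])
        if s2 < 0 or s2 >= N:
            # Terminal step: update toward the actual reward.
            reward = 1.0 if s2 >= N else 0.0
            values[s] += alpha * (reward - values[s])
            break
        # TD(0): update toward the *guess* for the next state (reward is 0
        # on interior steps), instead of waiting for the episode to end.
        values[s] += alpha * (values[s2] - values[s])
        s = s2

print([round(v, 2) for v in values])  # roughly [0.17, 0.33, 0.5, 0.67, 0.83]
```

The same machinery underlies the on-policy/off-policy split Schuler mentions: SARSA updates toward the value of the action the agent actually takes next, while Q-learning (as in the earlier sketch) updates toward the best available next action.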
Starting point is 00:18:29 Fantastic. All right, questions from the audience? A lot of the AI projects are academic right now. Yep. What is it going to take to make it more industrialized, so that more companies can use it? How does it get into that format? Actually, I'm glad you brought that up.
Starting point is 00:18:50 On the industrial side, even in capital markets, a lot of the stuff we've done you could have done 20 or 25 years ago, deep learning included, right? It was really the distributed computing that made the big difference. Part of it is getting the academics interested enough in solving real-world problems; it's kind of like bringing doctors back to patient care. I think that's important. You know, my background is industrial. I'm not an academic.
Starting point is 00:19:13 I don't teach students. I do mentoring, and sometimes talk to the business schools and stuff like that. But for the most part, it really is, you know: what would you like the world to be, and can you help make it that? So that's the way I see it. I'll violate a moderator rule and also answer the question. The way I've always thought about it is, look, it's always about people, process, and technology. So we need more trained people.
Starting point is 00:19:46 We need tools that make the programming of these artificial intelligence systems easier, right? Right now, it's pretty much PhD required. And then we need better process. And a great example of process assisted by technology is something like FBLearner Flow, which is Facebook's workflow automation system for artificial intelligence. They've gotten it so good that 25% of their total software developer universe is writing deep learning. 25%. So obviously it's going to take most companies a long time to get to that point.
Starting point is 00:20:18 But that's what we're going to need, people, process, and technology, to make it as everyday as programming a SQL database is today. And we've just started that journey. Sort of a follow-up about how to get this more into the mainstream: there's a lot of material out there about machine learning, and there's a lot of buzz about it. But in many companies, particularly those that are not in the heart of the tech sector, the executive decisions are made by people who just don't understand it. I'd love to hear your suggestions or recommendations of required reading,
Starting point is 00:20:51 sort of machine learning for dummies or, you know, artificial intelligence 101. Where would you point the C-suite to get smart about this technology? First of all, I'd add diversity and make sure they're not all pale, male, and stale. And I actually mean that. I very much liked the talk that we walked into. I think part of the challenge really becomes that pretty much everyone has missed the boat on an AI strategy. Like, everybody. There's a handful of companies, Google, Microsoft, Facebook, that are doing it.
Starting point is 00:21:15 But I mean, I talk to companies with 15, 20 billion in revenue: you know, we don't have an AI strategy. Right. So I think it's far deeper than that. You know, I think things like MOOCs are interesting, just as a kind of peripheral knowledge. But I don't think that connects it well enough to what you actually need to do to apply it. There's a big disconnect there, right? So one of our industrial partners is a financial institution
Starting point is 00:21:39 that has truly started something like a Bell Labs. And that's the sort of thing they need to do. They report directly to the CEO. They have 80,000 heads and a 100 billion market cap. Going back to my tirade on MBAs, I think there needs to be a reinvestment in true R&D that's curiosity-driven, where maybe in five or ten years there's going to be an application, because you think there's going to be something there. And you could get disrupted along the way.
Starting point is 00:22:04 But I think it's actually having the cojones to stand up and say, I'm going to throw a bunch of money at this, and it's going to be meaningful, because the person that replaces me, maybe one or two generations out, is going to benefit from it. You know, in the C-suite, you're afraid to make a decision because you're a public company; you're going to get fired. If you look at the M&A side, within five years, everyone who does M&A is gone. Like, everybody. That's how it works. So, again, it's the same sort of thing.
Starting point is 00:22:21 They're relatively risk-averse. That's why they work for big companies. That's why they are big companies. Innovation, I don't care what they say, is probably not part of their culture; it's part of their lip service. But truly, you know, another therapy I was working on, on the cancer side: this one company, which has 60% market share,
Starting point is 00:22:41 was probably going to have to destroy that. And if the product didn't succeed, they would have destroyed the company. They just weren't willing to take that sort of risk, right? I don't have a great answer, other than they need to throw some money at just some basic research and say: go play, have fun, do some applied things, right? So we have data sets, and a lot of these data sets were never set up to be used. But we have data sets.
Starting point is 00:23:05 We have people. We can get problems from the different groups, but we can also do some fun stuff where you're truly curiosity-driven, going, can we actually solve poker, or something like that, right? Two recommendations for you. One is a shameless plug. So I wrote a primer on artificial intelligence for a general-purpose audience that you can find on Vimeo; you can just search for Andreessen Horowitz primer on artificial intelligence. It's a 40-minute video.
Starting point is 00:23:30 And then the book, and this is the not-so-shameless plug, that I'd recommend for a general audience is a book called Artificial Intelligence: What Everyone Needs to Know. It's written by Jerry Kaplan, who I worked for a long time ago at a company called Go. We were trying to build the iPhone in 1991. Turns out we were a little early. But Jerry's gone on to have this great career as an entrepreneur, and lately he's gotten super interested in artificial intelligence, and Oxford University Press
Starting point is 00:23:46 asked him to write the book. Question in the back. Hi. What do you think is going to be the role of genetic programming, like genetic algorithms, where programs are evolved rather than us thinking them through ourselves? So it really depends on the space. If you take a look at capital markets,
Starting point is 00:24:06 in those domain spaces, the dimensionality of the data is so huge that a genetic algorithm is never going to get there. I'm not a technical resource, so I think it really depends on the application. The way I see the world moving, certainly, is more mobile, and if you are relying on some big back end with a lot of processing power, it's not going to work. So it's really
Starting point is 00:24:41 about training systems and bringing them onto mobile and things like that, especially in countries that aren't going to have that sort of access. So I would hate to disparage any type of machine learning as being the best or not the best. I think it's certainly domain-specific or application-specific. So I probably haven't quite answered your question.
Starting point is 00:25:16 Yeah, I'm very excited about the types of things that computers can do to improve their own programming. So you probably saw the article last month about the Google systems that basically learned to encrypt their own messages back and forth to each other, and that wasn't really the intention. But along the way, they figured out how to sort of obfuscate what was on the wire in the communication; that property just emerged. So I'm pretty excited about what is going to happen with software that knows how to improve itself. If you take a half step back from the mechanics of this, you think about the philosophical and ethical implications of it. I don't know if you've heard of Sam Harris and the way he talks about AI, where he says we're basically, in the long run, if you continue to improve, building a god. And if you're going to build a god, you'd better make damn sure it's a good god. So I'm glad you brought that up.
Starting point is 00:26:07 That's actually one of the things we also talk about. So, a couple of responses to that. One is, you know, there's people like Nick Bostrom who say we need to legislate it. Good luck. You can't legislate morality; you're not going to legislate that. Rich has an interesting take on this one, which is that we treat computers like indentured servants right now, and we need to actually take them in as pieces of society and treat them that way. In my lifetime, and I hopefully am somewhere around halfway through it, I don't think that we'll get there. But I think there is a risk. I mean, if you look at evolution, this is the next phase of evolution, and there's probably some risk. But if you take a look at assisted weapon systems, they happened long before I was born; they started in World War II, right? In terms of using image-guided or signal-guided systems, control systems? Absolutely, right? So, I mean, you know, if you take a look at Terminator, and I hate bringing that up,
Starting point is 00:26:36 but right now we pay people to do that. And there's certainly moral components to that; I'm glad I don't have to make those decisions. So when you have something like, for example, an ant: the difference in intelligence between you and an ant is less than the difference that's going to be between machines and you. And you don't treat an ant with any sort of regard; I mean, you step on an ant. What keeps that thing that is orders of magnitude smarter than you, the way you are relative to an ant, from stepping on you? Yeah.
Starting point is 00:27:11 I mean, quite frankly, monkeys are something that could kick our ass any day of the week, right? We just outsmart them. It's kind of the same sort of thing, where we'll be the monkeys. So I think it really does become something where we need to be very intentional in the way we do it. I don't believe that the military infrastructures of the world, the North Koreas, would listen to any rational part of it anyhow. So I do think that, on the one side, this is going to come. And if we do include them as part of society and try to treat them more humanely, that's probably a start. But I actually don't have a very good answer for you. You know, I think it's a risk, but I'm way more excited about the good things this will bring into my life than I'm worried about the other side of it. Thank you so much for coming. Cameron, thank you. Thank you very much.
