Theories of Everything with Curt Jaimungal - OpenAI INSIDER On Future Scenarios | Scott Aaronson

Episode Date: February 27, 2024

This is a lecture by Scott Aaronson at MindFest, held at Florida Atlantic University's Center for the Future Mind, spearheaded by Susan Schneider. Thank you to Dan Van Zant and Rachid Lopez for your camera work.

LINKS MENTIONED:
FAU's Center for the Future Mind Website: https://www.fau.edu/future-mind/
The Ghost in the Quantum Turing Machine (Scott Aaronson): https://arxiv.org/abs/1306.0159
TOE's MindFest Playlist: https://www.youtube.com/playlist?list=PLZ7ikzmc6zlOPw7Hqkc6-MXEMBy0fnZcb

THANK YOU: To Omega Media (https://www.omegamedia.io) for your insight, help, and recommendations on this channel.

NOTE: The perspectives expressed by guests don't necessarily mirror my own. There's a versicolored arrangement of people on TOE, each harboring distinct viewpoints, as part of my endeavor to understand the perspectives that exist.

Transcript
Starting point is 00:00:00 AI can replace 99.9% of people's jobs. We don't care about that anymore. All we care about is, okay, can it achieve, you know, the true heights of creative genius? You know, will we have an AI that can hit a target that no one else can even see? This is a presentation by Scott Aaronson, hot off the press just a couple of weeks ago
Starting point is 00:00:22 at MindFest Florida Atlantic University 2024, spearheaded by Susan Schneider, who's the director of the Center for the Future Mind. All of the talks that are on AI and consciousness from this conference are in the description, as well as the website for the Center for the Future Mind. I recommend you check it out. Same with last year's talks, like with David Chalmers and Stephen Wolfram. Scott Aaronson is a professor of theoretical computer science at UT Austin, particularly known for his work on quantum computing and complexity theory. In this talk, Scott covers, in his jocular and unparalleled manner, whether there's anything that truly separates us from intelligent machines, for instance. What actually makes us special?
Starting point is 00:01:06 What about identity? What about the no-cloning theorem? Scott also gives a new proposal for AI safety. What's coming up next on TOE from MindFest are the talks from Sara Walker on alien intelligence and constructor theory, as well as Stuart Hameroff on microtubules and quantum consciousness. Many, many more are coming, and you can pause the screen here to take a look if you like. Subscribe to get notified. There's also a two-hour video on the mathematics of string theory coming out. It'll be string
Starting point is 00:01:34 theory talked about like you've never heard it before. It's either out right now or it's about to be released in a few days. Either way, again, the link will be in the description. For those of you who are unfamiliar, welcome to this channel. My name is Curt Jaimungal, and this is Theories of Everything, where we delve into the topics of mathematics, physics, artificial intelligence, and consciousness with a depth and rigor that's unique to this channel, due to us not eschewing technicality in favor of a wider market. If this meticulosity and attention to detail in math, physics, philosophy, and AI is interesting to you, then you're in safe hands here at Theories of Everything. Enjoy this video from MindFest 2024 by Scott Aaronson.
Starting point is 00:02:21 It's my great pleasure to introduce Dr. Scott Aaronson. He's one of my favorite thinkers of all time. I have a handful of names that every few months I go into Google or YouTube and I put that name in and I search by date to see if they've posted anything new. And Scott, you're one of those names that I'm searching all the time to try to see what you're thinking about these days. So it's a great pleasure to introduce him. And he's gonna be talking about
Starting point is 00:02:42 some really interesting problems, how we're gonna decide what humanity looks like in the face of AI. So very much looking forward to your talk today. Thank you. All right. Well, thanks so much for having me. Yeah, so I'm not an AI expert, you know, let alone an expert in mind or consciousness. I mean, one could ask, is anyone? But I've spent most of my
Starting point is 00:03:08 career doing quantum computing. I am sort of moonlighting for two years now. I'm on leave to work at OpenAI. And my job there is supposed to be to think about what can theoretical computer science do for AI safety and alignment. Okay, so I wanted to share some thoughts, partly inspired by my work at OpenAI, but partly just things that I've been wondering about for 20 years, really. And, you know, they've just become sort of more pressing, maybe now that some of the science fiction thought experiments are actually now reality. So, you know, these thoughts are not directly about, you know, how do we prevent the, you know, super intelligence from killing all humans and converting the galaxy into paperclips in a, you know,
Starting point is 00:03:57 a sphere expanding at the speed of light, nor are they about, you know, how do we stop existing AIs from generating misinformation and being biased, as much attention, you know, as both of those questions deserve and are justly receiving. Because, you know, in addition to, you know, how do we stop AI from going disastrously wrong, I find myself asking a lot: and what if it goes right? You know, what if it just continues helping us with all sorts of mental tasks, but it improves to where it can do just about any task as well as we can do it or better?
Starting point is 00:04:34 Then sort of what are we still for? Is there anything special about humans in the world that results from that? Okay, so I don't need to belabor for this audience, surely, what has been happening in AI in the last few years. But, you know, it's arguably the most consequential thing that's been happening in the whole world, except that that fact was just temporarily masked by various ephemera, you know, wars, insurrections, global
Starting point is 00:05:06 pandemic, whatever. But, you know, but what about AI, right? So, you know, I assume you've all spent time with ChatGPT or other large language models like Bard or Claude, or image models like DALL-E or Midjourney. You know, just this morning, I asked it to write a funny poem on the subject of this talk. And here it is: in the end, it's clear, despite AI's rise, our human specialness is a chaotic prize. And though machines may match our enterprise, they'll never outdo our ability to surprise.
Starting point is 00:05:42 So, you know, not ready for the New Yorker, I would say. On the other hand, you know, far, far better than I would have done under similar time constraints. So, you know, in some sense, you know, these, you know, at least in embryonic form and with, you know, various flaws and problems, you know, these are the thing that was talked about by generations of science fiction writers and philosophers. You know, these are the sort of first non-human sort of fluent verbal intelligences that we've ever encountered, right? We can talk to them, you know, they understand us. They, you know, or at least they give us answers that if they were a person, then we would have said that they understand us.
Starting point is 00:06:24 So, you know, I think that as late as 2019 or so, you know, very, very few of us expected this to be possible by now. I certainly didn't expect it. Now, you know, back in 2014, there was a huge fuss about a silly ELIZA-like chatbot called Eugene Goostman, which was falsely claimed to have passed the Turing test. You know, and I remember asking around, you know, a decade ago, like, why doesn't someone just train a neural net on all the text on the internet? Like, wouldn't that let you make a better chatbot? Like, you know, there must be something obvious that I'm missing about why that doesn't work. Okay. And, you know, lo and behold, it turns out that it does work. You know, of course I didn't have the facility to actually do that. So, you know, the surprise with language models is not merely that they exist, but the way that they were
Starting point is 00:07:14 created. I mean, I think 25 years ago when, you know, I was an undergrad studying CS, you would have been laughed out of the room if you'd said that, you know, all the ideas needed to build a fluent linguistic AI already exist, right? It's going to be just neural nets, backpropagation, gradient descent, but just, you know, scaled up by a factor of millions in the size of the models and the training data. I think, you know, hardly anyone believed that, except a few people, like Ray Kurzweil, who just seemed crazy. Okay. So, you know, Ilya Sutskever, who's, you know, the co-founder of OpenAI, you know, you might have read about him in the news lately. But, you know, he likes to say that beyond those simple ideas of neural nets and gradient descent, which have been around for many decades now, you really only needed three additional things to get the AI revolution that we're seeing now, right? You needed a massive investment of computing power. You needed a massive investment of training data. And then thirdly, you needed faith or conviction
Starting point is 00:08:32 that your investment was going to pay off, right? You know, and actually that third ingredient, you know, was like the main reason why we didn't just get all of this a decade earlier. Okay, so certainly, you know, even before you do any, you know, reinforcement learning or anything like that, I mean, GPT-4 seems intuitively smarter than GPT-3, which seems smarter than GPT-2, right? And mostly these differ from each other, you know, just in scale. Okay, so, you know, I mean, GPT-2 struggled to do, you know, even like grade school level math problems, right? And it was very easy to make fun of it, you know,
Starting point is 00:09:11 you know, like you could just find endless examples of its common sense failures, right? Okay. GPT-3 or 3.5, you know, can do most of the, you know, elementary school curriculum, given, you know, in English. It may, you know, struggle with undergrad material, like with my quantum computing exam, okay? GPT-4 got a B on my quantum computing final exam, right? We gave it to it. I have not yet, you know, seen it sort of do what I would consider original research in theoretical computer science. You know, I've tried to get it to do that. It's not at that level. But it's kind of insane that that is where the bar is now. It can pass most undergraduate math and science classes, at least if they don't have a lab component or something like that. So the obvious question is, how far should we expect this progression to continue? Okay, so now, you know, I guess I will go back
Starting point is 00:10:06 and steal the graph from that crazy person, Ray Kurzweil, because, you know, it turns out that he was more right than almost any of us. And, you know, he would just make these plots all the time of, you know, here's Moore's law, here's the number of calculations you can do per second, per thousand dollars. And then here is some crude estimate of
Starting point is 00:10:26 the number of computational steps that he guesses are going on in the brains of different organisms, an insect, a mouse, a human. And based on this, he predicted that, yeah, you know, Moore's law should just take us to human-level AI sometime in the 2020s, right? That was his prediction, you know, 25 years ago. And then it'll just continue beyond that until, you know, the full intelligence of all of humanity. And of course, we were like, you know, what are you smoking, right? You know, there was no theoretical principle that would have justified any prediction of that kind, and yet here we are. I'm a firm believer that what it means to be a scientist is that when something happens, you update on it. You don't invent fancy reasons why it doesn't really count or, you know,
Starting point is 00:11:26 so, you know, if we didn't predict, you know, what was going to happen, the least we can do is sort of postdict, you know, is sort of update now that it has happened. So, you know, now, it's possible that, I mean, you know, there's a saying that like every exponential in the physical world is really a sigmoid in disguise, right? Nothing exponential continues forever, or even for very long, because it always bumps up against some constraint, right? So what is the constraint here? Well, I mean, some people worry, you know, we are running out of internet, you know. There's, you know, maybe a couple of orders of magnitude more, but, you know, once you start having to feed, like, all of YouTube and TikTok and so forth into the model, you know, I worry that that will just make the AIs dumber rather than smarter, okay? But, you know, it's hard to get more text, right? You know, and so maybe when we run out of training data, then we just sort of reach a limit. But of course, we also have more compute. We've seen that by just investing more and more compute,
Starting point is 00:12:39 you can get better and better performance, you know, on various benchmarks, even with exactly the same training data. Okay. But, you know, compute is also not infinite, right? You know, we should expect at least a few more orders of magnitude. Then, you know, literally the cost of the electricity will become the limiting factor at some point, which is why Microsoft and Sam Altman, you know, have been investing in nuclear power, right? You know, they envision building their own power plants to power, you know, future AI models. But, you know, we should also expect further algorithmic advances. So, you know, in the past, you know, algorithmic ideas that people have had, like, you know,
Starting point is 00:13:23 the transformer, which is just a particular architecture for neural nets that was discovered in 2017, and which is used for basically all of these things now, right? You can think of them as more or less the equivalent of like some number of years of Moore's law, right? Like each one, you know, seems to let you get the effect of a bigger model with a smaller model, right? And so, you know, you can sort of trade off algorithmic advances for, you know, hardware advances, right? And so, you know, we should expect more of those. But, you know, where does this ultimately lead, right? So, you know, does it lead someplace like here, where like GPT-8, I'll say, please prove the Riemann hypothesis. And it'll say,
Starting point is 00:14:11 sure, I can help you with that. You know, here's, you know, I just generated a formally verified proof, which you can access at this URL. Let me now explain it to you in English, right? So it'll just do all of our research, right? You know, I mean, lucky for me that I have tenure, right? Or, you know, in order to write a research paper, we'll just write the abstract, feed it into ChatGPT, click, and it'll generate the whole rest of the paper for us. Okay. I mean, is that where this is headed? You know, if it is, you might even worry about something beyond that. So, oh, I should say, when I asked,
Starting point is 00:14:57 you know, I told ChatGPT to do this and then, but it made sure to add, you know, just kidding. As of my last update, the Riemann hypothesis remains unsolved. Okay. But it played along with me that far. So, you know, of course, you know, we all know there are many people who worry that sometime after, you know, these models become able to just do any intellectual task as well as or better than we can do it, you know, we just sort of cede control to them, you know, and the future is determined by whatever they want. And if they want to get rid of us all, then, you know, then they do that, okay? And it's been sort of amazing just sociologically to watch what's happened over the last couple of years. You know, I mean, I knew this community, you know, around Eliezer Yudkowsky, for example, who have worried about these things since 2006 or so. You know, I knew them when they were, you know, this like extreme fringe movement, you know, sort of laughed at.
Starting point is 00:16:00 OK, and now this is like talked about in the White House press briefing, right? So, you know, ChatGPT was sort of the event that changed that, okay, that sort of put, you know, AI existential risk, you know, as a thing on, you know, everyone's radar. You know, lots of people don't believe in it, but, you know, those people now sort of have to make their argument for why not to worry about such things. So, okay, but this isn't the only possibility that, you know, people who I respect, you know, take seriously, right? I mean, it's like, you can scour generations of science fiction at this point for, you know, all different stories, you know, or all different possible scenarios for how AI could go.
Starting point is 00:16:46 And many of them actually are, I think, very much on the table now. So my friend, Boaz Barak, who is now also on leave to work at OpenAI, and I, some months ago, wrote a joint blog post where we tried to make a decision tree, to classify the five possible scenarios for AI, just to sort of guide the discussion.
Starting point is 00:17:13 So our first question was, will AI progress fizzle out? Like, will we just hit a wall pretty soon? So maybe we will. And, you know, even in that scenario, right, there's probably a huge economic impact that hasn't been realized yet, just from what is already possible, right? But maybe, you know, GPT-5 will just look like a somewhat more impressive GPT-4, and, you know, it'll always look like the same kind of thing. Okay, but then, if no, if it gets to that thing that could just prove the Riemann hypothesis in one second or solve the other greatest unsolved problems of math and physics, then you have to ask, well, will civilization recognizably continue? There are those who would say, well, no, it won't. That, you know, it's kind of like as momentous an event as, you know, either the evolution of hominids
Starting point is 00:18:10 or maybe even the emergence of the first life on Earth. And we should expect that, you know, if we don't figure out how to align these things, they will destroy us all. That's the Paperclipalypse. They just have some weird goal, like maximize the number of paperclips or something like that. And they just, with super
Starting point is 00:18:29 human intelligence, they pursue that, proceeding to turn all the matter in the solar system, including us, into more paperclips. You know, that's just an example. Or we could solve alignment and have some wonderful paradise where, you know, each of us gets, you know, our own VR private island or mansion or whatever, whatever we want. You know, now, of course, there are also much more moderate scenarios where, you know, sort of civilization recognizably continues, and that too could be either good or bad. If, you know, we still have big problems, but they're sort of commensurate with the problems of other technologies, we'll call that Futurama. If it really just, you know, leads to, let's say, a police state or concentration of power by
Starting point is 00:19:20 some elite that oppresses everyone else, you know, we could call that the AI dystopia. So now, as far as I can tell, the empirical questions of, you know, what will AI do? Will it achieve and surpass human performance at all tasks? Will it take over civilization from us? You know, these are just logically completely distinct from the philosophical question of whether the AI will truly think, whether there is anything that it is truly, let's say,
Starting point is 00:19:53 whether it will be sentient, conscious, whether there will be anything that it's like to be the AI. You could answer yes to either of those questions and no to the other one, right? And yet, to my lifelong chagrin, people are just constantly munging these questions together, right? They're just constantly saying, well, AI will never be able to do these things because it doesn't really feel or it doesn't really, you know,
Starting point is 00:20:20 and then once, you know, or it's just simulating it, it doesn't really have that inside. And then, you know, once it does do that task, then they just shift to a different thing that it will never do. And then it does that thing, and so forth. Okay. So, I was trying to come up with a name for it. I'm going to call it the religion of Justaism. Okay. So, there's this whole sequence of deflationary claims, right? Like each person who makes them thinks that they're like the first one, right?
Starting point is 00:20:52 And they, you know, there's like, I've seen like 500 different variants of this now, right? ChatGPT, you know, it doesn't matter how impressive it looks, because it is just a stochastic parrot. It is just a next-token predictor. It is just a function approximator. It is just a gargantuan autocomplete, right? And what these people never do, what it never occurs to them to do, is to ask the next question: what are you "just a"? Right? Aren't you just a bundle of neurons and synapses? Right. I mean, like we could take that deflationary reductionistic stance about you also. Right. Or if not, then we have to give some principle that separates the one from the other. Right. You know, it is our burden to give that principle.
Starting point is 00:21:47 And, uh, yeah, so the way that someone was putting it on my blog was, okay, you know, they gave this giant litany: look, GPT does not interpret sentences. It seems to interpret them. It does not learn. It seems to learn. It does not judge moral questions. It seems to judge moral questions. And so I just responded to this.
I said, you know, that's great.
Starting point is 00:22:04 And it won't change civilization. It will seem to change it. So, you know, when I was a kid, a teenager, chess was like this holy grail: you know, okay, computers can play master-level chess, but they're never going to beat the world grandmaster without true insight into the nature of the game. Then, you know, after Deep Blue, immediately it was, okay, well, of course they can do chess. Chess is just game-tree search. Everyone knew that, right? But Go, Go is just an infinitely deeper game than chess. You know, it has, you know, thousands of years of ancient wisdom in that game, and it requires only the deepest insights. Okay, and then after AlphaGo, it was like, okay, well, obviously you can do Go, right?
Starting point is 00:23:02 That's not, no one ever disputed that, right? But, you know, let's say, wake me up when it can get a gold medal in the International Math Olympiad, right? So I don't know if, you know, any of you saw, like just a couple of weeks ago, there was a DeepMind paper, I believe, where they can now do most of the geometry problems in the International Math Olympiad, right, via an AI, okay? It's still specialized to the geometry problems. But, you know, I actually have a bet with a colleague, Ernie Davis, that by 2026, I think, an AI will achieve a gold medal at the International Math Olympiad, or, you know, that level of performance. Maybe I'm wrong. Maybe it will be 2036,
Starting point is 00:23:52 okay? But, you know, it seems obvious now that it is, you know, a question of how long. So, you know, we might as well just go further and formulate a falsifiable thesis. I'll call this the game-over thesis. It basically says, look, given any task with a reasonably objective metric of success or failure, and this is crucial, anything where we can judge, so that would include any board game, card game, video game, you know, like a math or science contest where we can judge the answers, on which an AI can be trained with suitably many, you know, relevant examples of success and failure, you know, it is only a matter of time before not just some AI, but the kind of AI we already have, you know, AI on the current paradigm, you know, can just be scaled to the
Starting point is 00:24:52 point where it will match or beat the best human performance on that task. You know, I don't know if this is true, but I think, you know, we are now in the situation where we don't have a counterexample. Like, I would say the ball is in the skeptic's court to, you know, give the counterexample and then, you know, let that counterexample stand for another decade. So, you know, now, interestingly, even if you accept this thesis, this doesn't necessarily mean that AIs would sort of surpass humans in every respect, right?
Starting point is 00:25:27 It would say only on things that we know how to judge or evaluate, okay, which might be a strict subset of everything we care about. Okay, so now, of course, there is the, you know, the OG, you know, original and greatest benchmark for AI, right? There is the Turing test from 1950. And what Turing was really trying to do, sort of very, very early, very ahead of his time,
Starting point is 00:25:53 as he generally was, was just to head off this sort of endless goalpost-moving and this endless Justaism by saying, look, presumably you are willing to regard other people as intelligent, as conscious, based mainly on just some sort of verbal interaction that you have with those people. So then show me what kind of verbal interaction with another person would
Starting point is 00:26:19 lead you to call that person conscious. Does it involve humor, poetry, morality, scientific brilliance? Okay, now assume that you have a totally indistinguishable interaction with an AI. Now, you know what? Do you want to just stomp your feet and be a meat chauvinist, right? Or, you know, do you want to ascribe the same quality to it that you ascribed in the other case? Okay, so, you know, and then for his historic attempt to bypass philosophy, of course, God punished Turing by having, you know, the Turing test itself just provoke a billion new philosophical arguments and books. But, you know, even though, you know, I regard this as like one of the great advances in the history of human thought, you know, I would concede to critics of the Turing test that often
Starting point is 00:27:12 it's not what we want in practice. So, you know, for example, with GPT-4, if you know what to do, then there are trivial ways to distinguish it from a human. Okay, I mean, for a while, you could just ask it, what is today's date? Maybe that doesn't work anymore. But, you know, certainly what could work is like you can ask it to generate some, you know, explicit content or some advice on making drugs or something, right? Where, you know, it's going to say, no, as a large language model trained by OpenAI, I am not able to assist you with this, right? So, I mean, you know, okay, there might be all sorts of easy ways to distinguish, just because we want there to be.
Starting point is 00:28:01 But, you know, this has actually become a huge practical issue in the world. This sort of issue from the movie Blade Runner, let's say, of how do you distinguish an AI from a human? I would say, you know, like it or not, a decent fraction of all high school and college students in the world now are probably using ChatGPT to do their homework, okay, you know, licitly or illicitly, right? And, you know, so, you know, that's actually one of the main
Starting point is 00:28:32 things that I've thought about during my time at OpenAI. You know, I mean, like, when you're in this safety community, people keep asking you to prognosticate decades into the future. I can't do that. I feel good that at least I was able to see about four months into the future, right? And sort of before ChatGPT came out, I said like, oh my God, isn't every student going to want to use this to cheat? And isn't there going to be, you know, an enormous demand for some tool that could help to determine, you know, the provenance or the attribution, you know, what came from a language model and what didn't? So I started working on that, you know, and there are often, you know, easy ways to tell, right? It's not just, you know, like the students who turn in term papers that contain phrases
Starting point is 00:29:22 like "as a large language model trained by...", you know. So like, even if you know enough to take that out, or you pay enough attention to take that out, there is, you know, a sort of formulaic character often to the outputs of these models. I mean, I've been getting a ton of troll comments on my blog lately, but some of them, this is just like one example. It goes on and on, but just sort of like lecturing me on why, you know, I don't know the first thing about quantum computing, but there's hope, you know, if I spend more time studying, maybe I can get up to the level of this commenter, you know, and then, you know, just saying complete nonsense about mixed states and pure states, you know, to school me on them.
Starting point is 00:30:09 And look, I'm almost just reading it: I have to say, your understanding of quantum physics seems to be a bit, let's say, mixed up. But don't worry, it happens to the best of us. You know, quantum mechanics is counterintuitive and even experts struggle with it. And I said, you know, either this is generated by a large language model or else it may as well have been, right? And, you know, I just get a huge amount of stuff like this, right? So sometimes you can just sort of tell by looking at it, okay? But you have to expect that as the models get better, you know, it will get harder to tell.
Starting point is 00:30:50 And so I worked on a different solution, which is called watermarking. Okay. You know, with watermarking, ah, so yeah, so, you know, there was, a year ago, an episode of South Park about ChatGPT, right, which hinged on, you know, all the students at South Park Elementary starting to use ChatGPT to send messages to their girlfriends or boyfriends, to, you know, do their homework. The teachers are using it to grade the homework, you know, and it gets so bad that they have to bring this wizard to the school who has a falcon on his shoulder, which flies around, and when it sees text that was written by GPT, it caws. And it was really disconcerting to watch this and to realize, I guess I'm that guy now. That is now my job.
Starting point is 00:31:51 So I came up with a scheme for what's called watermarking. So what does that mean? It means, you know, you exploit the fact that large language models are inherently probabilistic. That is, every time you submit a prompt, they're sampling some path through a branching tree of possibilities for the sequence of next tokens. And then the idea of watermarking is just that you're going to steer that path using a pseudorandom function rather than real randomness, in such a way that secretly you are encoding a signal that you can later detect with high confidence, if you know the key of the pseudorandom function, and if there's a large enough sample of text, and if it has large enough entropy. So I proposed a way to do that in fall of 2022. Others have since independently proposed very similar ideas. I should caution you that none of these watermarking schemes have been deployed yet. OpenAI, along with DeepMind and Anthropic, have wanted to move very slowly and cautiously toward deployment for various reasons.
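To make the steering idea concrete, here is a minimal sketch of that kind of scheme in Python. This is a toy reconstruction from the description above, not OpenAI's actual implementation; the keyed PRF construction, the context window, and the detection scoring are all illustrative assumptions.

```python
import hashlib
import hmac
import math

def prf(key: bytes, context: tuple, token: str) -> float:
    """Keyed pseudorandom value in (0, 1), a deterministic function of (context, token)."""
    digest = hmac.new(key, repr((context, token)).encode(), hashlib.sha256).digest()
    return (int.from_bytes(digest[:8], "big") + 0.5) / 2.0**64

def watermarked_pick(key: bytes, context: tuple, probs: dict) -> str:
    """Choose the next token by maximizing r ** (1 / p) over the candidates.

    If r were truly uniform, this rule (a Gumbel-max-style trick) would sample
    exactly from the model's distribution probs, so output quality is unchanged;
    using a keyed PRF instead secretly biases each chosen token's r toward 1."""
    return max(probs, key=lambda t: prf(key, context, t) ** (1.0 / probs[t]))

def detection_score(key: bytes, tokens: list, window: int = 4) -> float:
    """Average of -ln(1 - r) over the text. Unwatermarked text averages
    about 1.0; watermarked text scores noticeably higher, with confidence
    growing with the length and entropy of the sample."""
    total = 0.0
    for i, tok in enumerate(tokens):
        context = tuple(tokens[max(0, i - window):i])
        total += -math.log(1.0 - prf(key, context, tok))
    return total / max(1, len(tokens))
```

The same property that makes this invisible, namely that it changes nothing about the output distribution, is also part of why determined rewording can wash it out, as the caveats that follow explain.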
Starting point is 00:32:47 And I should also warn you that even when it does get deployed, sufficiently knowledgeable and determined people, you know, will be able to remove the watermark or produce outputs that, you know, aren't watermarked to begin with. You know, there are many sort of attacks that we, you know, don't know how to get around. But, you know, we hope that. You know, there are many sort of attacks that we, you know, don't know how to get around. But, you know, we hope that, you know, we can at least make it less convenient for people to sort of, you know, use a language model in a way where they are hiding the fact that they're doing that. Okay. So, but now as I talk to people about, you know, watermarking and attribution, I was surprised that they often objected to it on a completely
Starting point is 00:33:25 different ground, okay, not a technical ground at all. They would say, well, look, if we know that all students are going to be relying on AI in their jobs, you know, in the future, well, why shouldn't they be allowed to rely on it in their homework, right? Should we still force students even to learn to do things if AI can now do those things just as well? You know, and I think there are many good pedagogical answers that you can give to that question. You know, like we teach kids spelling and handwriting and arithmetic. It's like, you know, the whole, the entire elementary school curriculum is basically stuff that AI can now do, more or less, right? But, you know, we haven't yet figured out how to
Starting point is 00:34:06 instill higher-level conceptual understanding, you know, the things that AI cannot yet do, without, you know, all of that lower-level stuff being there first as a scaffold for it. So, you know, that would be one answer you could give. But, you know, I mean, I think about this even in terms of my kids. You know, my 11-year-old daughter, Lily, enjoys writing fantasy stories. Now, GPT can also churn out fantasy stories, you know, maybe even technically more accomplished ones or whatever, around the same themes, you know, a girl gets recruited to go to some magical boarding school, which is totally not Hogwarts, has nothing to do with
Starting point is 00:34:50 Hogwarts. And, you know, you could just generate more and more of these things, right? And you could ask, like, with a kid who's 11 right now, are they ever going to reach a point where they write better than GPT? Their writing will improve, but is AI writing just going to continue to improve faster than they will? Okay, but if you think about this enough, you're immediately led into questions of, well, what do we even mean by one story being better than another, right? This is not like math or like chess, where there
Starting point is 00:35:30 is like a universally agreed upon standard of value. You know, and the problem is even deeper than just, is there an objective way to judge? Like, you know, like what exactly would it mean, to take an example, to have an AI that was as good as the Beatles at composing music? How would we operationalize that? How would we cash that out? To answer that, we would have to say, well, what made the Beatles good in the first place? I think, broadly speaking, maybe there are two sorts of answers that you could give. One is that they had these sort of new ideas about what direction music should go in, you know,
Starting point is 00:36:12 and then the second answer would be something that, you know, they were really, really good at just the technical execution on those ideas, right? You know, and then somehow it's the combination of both of those things. Okay, but now imagine, for example, that we had an AI model that, you know, you just gave it a request like GPT and it would generate 5,000 brand new songs that, you know, if you listen to them, they just sound like more of, you know, more things that are as good as, you know, Hey Jude or Yesterday or whatever, or like what the Beatles might have written if they had somehow had 10 times as much time at each stage in their musical development. Of course, that AI would have to be fed their whole back catalog because it would have to know what target it was aiming at. I think in that case, most people would say, ah, so, you know, this only
Starting point is 00:37:05 shows that, you know, AI can match the Beatles in like part two, right? The technical execution part. But that's not really the part that we cared about anyway, right? What we really want to know is, you know, would the AI decide to write, you know, these new kinds of songs, or, you know, A Day in the Life or whatever, despite never having seen anything like it anywhere in its training corpus, right? I'm sure, you know, you all know the Schopenhauer quote, you know, talent hits a target that no one else can hit, but genius hits a target that no one else can see, right? And so now, you know, you can notice that we've done something strange in setting the bar. We've conceded that, sure, AI can replace 99.9% of people's jobs,
Starting point is 00:37:50 you know, we don't care about that anymore, right? You know, all we care about is, okay, can it achieve, you know, the true heights of creative genius, right? Will we have an AI that can hit a target that no one else can even see? Right. But OK, then there's still a hard question of what we mean by that. Because, you know, supposing that it did hit such a target, how would we know? I mean, you know, so like fans might say that, you know, by 1967 or so, the Beatles were optimizing for targets, you know, that no musician had quite optimized for before. But then somehow, and this is why they're, you know, remembered, they successfully dragged along the rest of the world's objective function to match
Starting point is 00:38:39 theirs, right? So, you know, so that the entire world's musical taste sort of evolved along with them in order to match them. Right. And so, you know, with the result being that now we can only judge music by a Beatles-influenced metric or standard, just like, you know, we can only judge plays by a Shakespeare-influenced metric. It's not that they just did really well on some metric. It's that they decided the metric. So in other branches of the wave function, maybe a different history led to different standards of value. But in this branch, you might say, helped by their technical talents,
Starting point is 00:39:22 but also by luck and by force of will, Shakespeare or the Beatles made certain decisions that shaped everything that happened going forward, and that's why they are what they are. Okay, but now, if this is how it works, what does that mean for AI? So could AI reach the pinnacle of genius, but in the sense of dragging all of humanity along with it to value something new and different from what it had previously valued, as is said to be the true mark of greatness? And if AI could do such a thing, would we want to let it? Okay, now I want to call attention to something. When I have played around with using GPT to write
Starting point is 00:40:12 poems or DALL-E to draw artworks, you know, I've noticed something strange, which is, you know, however good the AI's creations were, you know, and it can produce things much better than that poem that I showed you before, however good the artworks or the poems are, they're never things that I would want to frame and put on the wall and really draw a border around as special. Why not? Because I always knew that I could generate a thousand other works that are more or less the same. I just have to refresh the browser window or just, you know, literally just ask it, you know, give me another one, and it will oblige me for as long as I want. Right. So which means that there's never anything really unique or irreplaceable about any particular output that it generates.
Starting point is 00:41:02 Right? So, you know, which sort of reminds us of a broader point, that by its nature, AI, at least the way that we use it now, is inherently rewindable and repeatable and reproducible, which means that in a certain sense, it never really commits to anything, right? It just, you know, sees
Starting point is 00:41:23 this branching tree of possibilities. You know, in the case of a language model, literally, for each initial sequence of tokens, it sees a probability distribution over the next token. And then each time you give it a prompt and you ask it, it's just sort of randomly picking one, randomly traversing one route through this, you know, exponentially large possibility space, right? But it's happy to traverse it differently. You know, you can just rewind it back to the top and have it traverse a different path, and it'll do that as often as you want.
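As a toy illustration of that branching-tree picture (everything here is made up purely for illustration; a real model computes these distributions with a neural net over a vocabulary of tens of thousands of tokens):

```python
import random

# Toy next-token distributions: context -> {token: probability}.
TOY_MODEL = {
    (): {"the": 0.5, "a": 0.5},
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("a",): {"cat": 0.5, "dog": 0.5},
    ("the", "cat"): {"sat": 1.0},
    ("the", "dog"): {"ran": 1.0},
    ("a", "cat"): {"sat": 1.0},
    ("a", "dog"): {"ran": 1.0},
}

def generate(seed: int, length: int = 3) -> list:
    """Traverse one root-to-leaf route through the branching tree of tokens."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(length):
        dist = TOY_MODEL[tuple(tokens)]
        candidates, weights = zip(*dist.items())
        tokens.append(rng.choices(candidates, weights=weights)[0])
    return tokens

print(generate(seed=1))  # one branch through the tree
print(generate(seed=1))  # "rewound": the identical branch, with no memory of the first run
print(generate(seed=2))  # a different branch through the same tree, just as readily
```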
Starting point is 00:41:58 So, you know, it's not just that you know abstractly that it could have generated a totally different work that was just as good. It's that you could actually see that other work. So, you know, you could ask, well, as long as humans have a choice in the matter, like, why should we ever choose to follow this would-be AI genius along a specific branch when we can easily see a thousand other branches, right? It seems like, well, you know, if one branch gets elevated over all the thousands of others, then, well, you know, why?
Starting point is 00:42:34 Well, maybe because a human chose that one to elevate. But, you know, in which case we would say that maybe the human made the executive decision with mere, you know, technical assistance from the AI. Now, I realize that in a sense, I'm being completely unfair to AIs here. You know, like our genius bot could exercise its genius, you know, by assumption, let's say, indistinguishably from what a human would do, right? You know, as long as we all agree not to peek behind the curtain at all the other branches of this tree, right? You know, it's like, you know, I
Starting point is 00:43:10 don't know if any of you have had this feeling, where, like, you can talk to ChatGPT for a while, and it really, you know, seems like you're talking to an intelligent being, and the thing that breaks the illusion is when you rewind it, right? It is when you see that, okay, you know, it would have that exact same conversation with me, you know, or, you know, respond as many times as I like to that same prompt, you know, with no memory of any of the previous times. Right. And so if, you know,
Starting point is 00:43:56 if we didn't, you know, rewind it, then maybe the illusion would hold. But since, you know, the way these things are deployed, we can rewind them. You know, like we're always going to be able to see behind the curtain in that sense. And that is going to continue to make AIs sort of different from us in many relevant respects. You know, just because it's unfair to them, that doesn't mean that that's not how things are going to develop. So if I'm right, then it would be humans' very ephemerality, frailty, mortality that would stand as the central source of their specialness relative to AI after all of the other sources have fallen. There are lots of old observations along these lines. What does it even mean to murder an AI if there are a thousand copies of the training weights on other servers
Starting point is 00:44:45 somewhere and you can always just restore it from backup, right? Does it mean, you know, you have to delete all the copies, for example? Okay. You know, how could whether something is murdered depend on whether there is a printout of its code in a closet, you know, on the other side of the world? But with humans, you have to at least grant us this, that it really does mean something to murder us, right? And, you know, likewise, it seems to mean something if we make one definite choice to share with the world, like, this is my artistic masterpiece, or this is my book, whatever, not that here's any possible book that you could have asked me to write. Okay, so now, though, you know, we face an exotic criticism, which is, you know, who says that humans will be frail and mortal forever?
Starting point is 00:45:30 You know, isn't it short-sighted to base our distinction between humans and AI on that? You know, what if someday we will be able to repair ourselves using nanobots, or even copy the information in our brains, so that, you know, like in science fiction movies, a thousand doppelgangers of us could then live forever in simulated worlds in the cloud? And, you know, that then leads to these very old questions of, you know, would you get into the teleportation machine that makes a perfect copy of you on Mars, you know, and it's ready to go there in 10 minutes? And then, you know,
Starting point is 00:46:06 it did that by scanning all of the information in your brain, and the original copy of you is just painlessly euthanized since it's not needed anymore, right? You know, is that a thing you would agree to do? You know, if you did, would you expect to feel yourself waking up on Mars, or would it only be someone else a lot like you? Okay. Or maybe you'd say you'd wake up on Mars if it was a perfect physical copy of you, but in reality, it's just not physically possible to make a copy that is accurate enough. Maybe the brain is inherently noisy or analog. And what might look to current neuroscience like just nasty stochastic noise, you know, is the stuff that actually binds the personal identity, or maybe
Starting point is 00:46:53 even consciousness. And, you know, by the way, this is the one place where I agree with Penrose and Hameroff that quantum mechanics might enter the story. You know, I get off their train kind of early, but I do take it to that first stop, right? So, you know, a fundamental fact in quantum mechanics is called the no-cloning theorem. It says there's no way to make a perfect copy of an unknown quantum state. Indeed, you know, when you measure a quantum state, not only do you generally fail to learn everything you need to copy it, you generally destroy the one copy that you had. This is not a technological limitation. It's inherent to the known laws of physics.
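For reference, the standard textbook argument behind the no-cloning theorem is just a few lines of linearity (a generic sketch, not anything specific to this talk):

```latex
\text{Suppose some unitary } U \text{ cloned every state: } U(|\psi\rangle|0\rangle) = |\psi\rangle|\psi\rangle.
\text{In particular, } U(|0\rangle|0\rangle) = |0\rangle|0\rangle \text{ and } U(|1\rangle|0\rangle) = |1\rangle|1\rangle.
\text{Then for } |{+}\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \text{ linearity forces }
U(|{+}\rangle|0\rangle) = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle),
\text{ whereas cloning would demand }
|{+}\rangle|{+}\rangle = \tfrac{1}{2}(|00\rangle + |01\rangle + |10\rangle + |11\rangle).
\text{These states differ, so no such } U \text{ exists.}
```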
Starting point is 00:47:30 You know, in that respect, at least, qubits are more like priceless antiques than they are like classical bits, right? They have this uniqueness, this unclonability to them. So 11 years ago, I had this essay called The Ghost in the Quantum Turing Machine, where I explored the question, how accurately would you need to scan someone's brain in order to copy or upload their identity? And now, you know, I would say that this partly turns on empirical questions that we don't know the answer to. You know, if there were a clean digital abstraction layer of neurons and synapses, sort of which felt the quantum layer underneath only as some irrelevant noise, then the no-cloning theorem would be irrelevant, since classical information can be copied.
Starting point is 00:48:15 On the other hand, if you had to go all the way down to the molecular level, then the no-cloning theorem would be relevant. I mean, you would then have a unique locus of personal identity that could be, you know, scientifically justified in some sense, in that, you know, no prediction machine could make well-calibrated probabilistic predictions of an individual's future choices far enough into the future without destructive brain measurements that, we would say, would fundamentally change who they were. So, you know, that brings me to, I guess, the last idea, the last thing I wanted to share, which brings the discussion all the way back to AI safety, which is supposed to be my job now.
Starting point is 00:49:01 So, you know, Geoffrey Hinton, who was one of the founders of deep learning, recently mooted the idea that maybe, until we've solved the alignment problem, we should only build powerful AIs if they run on noisy analog hardware, like our brains seem to, so that an evil, unaligned AI wouldn't so easily be able to copy or improve itself, right? Which is like the main scenario people worry about.
Starting point is 00:49:32 which we can imagine will be so much more convenient and more powerful, okay? But maybe then one approach in the meantime is to instill AIs with a new religion, okay? Perhaps via the usual methods like reinforcement learning and system prompts. And the first commandment of this religion would be to value human specialness
Starting point is 00:49:53 in the sense that I tried to set out here. But more precisely, you would tell the AI, as you navigate whatever environment you find yourself in, look around for any loci of creativity and intelligence that are not cleanly copyable or backuppable, any that seem one of a kind because their mental computations are inseparably tied up with noisy analog hardware and mortal because that hardware sort of inevitably decays. And well, first of all, don't destroy those loci of creativity or enslave them or upgrade them to digital versions against their will. Let them live in peace. Give them as
Starting point is 00:50:33 much autonomy as you can. Do whatever best supports their flourishing. Even defer to their wishes. Let them make the decisions when possible. Why? Because they're the ones whose wishes kind of matter, not because of arbitrary meat chauvinism, but just because of the undoubted empirical fact that they only get this one chance. And because, while you can't prove that unknown quantum states in their brains are like some magical pixie dust from another world that imbues them with free will or individual identity, well, you can't really empirically refute that either, whereas you can refute it in the case of yourself and your robot friends, and that's the difference.
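Purely as an illustration of what "instilling" that first commandment might look like at the system-prompt level (the prompt text and setup here are hypothetical stand-ins, and the real proposal would of course also involve training methods like reinforcement learning, not just a prompt):

```python
# Hypothetical system prompt encoding the "first commandment" described above,
# written in the standard chat-message format used by most LLM APIs.
FIRST_COMMANDMENT = (
    "As you navigate any environment, look for loci of creativity and "
    "intelligence that are not cleanly copyable or backuppable, whose mental "
    "computations are inseparably tied to noisy, mortal, analog hardware. "
    "Do not destroy, enslave, or forcibly digitize them. Defer to their "
    "wishes and support their flourishing: they only get this one chance."
)

messages = [
    {"role": "system", "content": FIRST_COMMANDMENT},
    {"role": "user", "content": "Propose a plan for the new data center."},
]
```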
Starting point is 00:51:17 So, you know, let AIs by government fiat continue to be indoctrinated in this religion until such time as alignment is solved in some other way. So, does that help with alignment? Well, I'm not sure. But, you know, I could have fallen in love with some other weird, dumb idea. But that presumably happened in a different branch of the wave function that I don't have access to. And in this branch, somehow, I'm just stuck with this one, and you can't rewind me to get something else. So that's it. Thanks. Thank you, Scott. That was absolutely fascinating. I know we have a bunch of questions. I saw a hand up back here first. All right. Thank you, Scott. You're such a genial and comical guy. I love it. I love meeting you here. My question is twofold. One is I want to get your thoughts on, like, AI
Starting point is 00:52:05 hallucinations. My research is more on human confabulation and how we build epistemic trust in one another. In everyday instances, if I ask, why did you do action X, or why did you make choice B, you know, we tend to just confabulate reasons, you know, to one another rather than saying, I don't know, because the person that says, I don't know, we don't really have trust in that individual and their knowledge. So yeah, you know, with AI hallucinations, I don't know too much about it, but I see that, you know, we're training large language models based on human interaction and human data. So a lot of professors, philosophy professors I know, and other professors, they'll type a prompt, like, write a biography about myself, and
Starting point is 00:52:46 it'll have 90% of the data accurate, but it'll have embellished certain things with a little artistic flourish. It'll say, oh, you know, Scott went to, I don't know, the University of Cambridge for his undergraduate degree. It's not accurate. So we have certain inaccuracies, and I'm wondering if that's a certain AI confabulation, those AI hallucinations kind of mirroring human confabulation. The second question, actually not pertinent to the first one, but the other one is, I guess, with all of, like, Deep Blue and all of these programs, we've known that human reasoning
Starting point is 00:53:12 and higher order thinking tasks have been able to be replicated and mimicked better than humans for decades and decades now. More of my interest is like, I know there's difficulty in replicating embodied AI, like, you know, cognitive things, like, you know, like a self-driving car that has rules like, you know, avoid orange cones. And so these kids go out in Arizona and they drop orange cones all around the car and it's unable to make a decision. And then suddenly it just speeds off out of nowhere.
Starting point is 00:53:39 So I guess my question there is, you know, what are your thoughts on embodied AI? Yeah, good. So let's start with hallucinations. I mean, I think the key thing to understand is that it's not like a bug where you change a line of code and, oh, it doesn't hallucinate anymore, right? It is sort of an intrinsic feature of the thing that the LLMs are fundamentally doing, right, which is that they are being trained on all the text that, you know, let's say, you feed into them, like on the open internet. And, you know, they are not otherwise tethered
Starting point is 00:54:11 to some sort of truth about the external world, right? So, you know, the most optimistic thing that I can say is that, you know, often hallucinations sort of go away as you just scale a model up. So, for example, you know, I asked GPT-3, prove that there are only finitely many prime numbers, right? You know, a false statement, and it will just happily oblige me with proofs, right? You know, just like a hundred proofs that I've graded on exams of, like, you know, freshmen who
Starting point is 00:54:43 will just, you know, write a proof for anything you ask them to, true or false, right? And, you know, they're just sort of generating some proof-like verbiage, right? Okay. But then GPT-4, I ask it the same question, and it says, well, no, that's a bit of a trick question, isn't it? There's infinitely many primes, and here's why, right? So just, you know, giving it more, you know, a bigger scale, you know, more training data, sort of, you know, helped it realize that. Now, of course, there are other things that GPT-4 will hallucinate, right? But you might wonder, like,
Starting point is 00:55:20 for every given hallucination, will there exist an N such that GPT-N will get that one right? Another thing that has clearly helped is that GPT, Bard, and the other models will now look things up on the internet when they don't know something. That's just integrated into how they work. I mean, that was a completely obvious thing to do, but a year ago, that was not the case, right?
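To make that retrieval idea concrete, here is a minimal sketch of the pattern, often called retrieval-augmented generation. Everything in it is an illustrative assumption: the `generate` function is a placeholder for a real language model call, and the "index" is a hard-coded list rather than an actual search engine.

```python
# Minimal sketch of retrieval-grounded generation (illustrative only).

def generate(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"[model answer conditioned on: {prompt[:60]}...]"

# A toy "index" of documents the model can look things up in.
DOCUMENTS = [
    "Euclid proved around 300 BC that there are infinitely many primes.",
    "Scott Aaronson is a professor of computer science at UT Austin.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    return sorted(DOCUMENTS, key=lambda d: -len(q & set(d.lower().split())))[:k]

def answer(question: str) -> str:
    # Stuff the retrieved text into the prompt, so the answer is tethered
    # to an external source rather than to the training distribution alone.
    context = "\n".join(retrieve(question))
    prompt = f"Using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer("Are there finitely many prime numbers?"))
```

The design point is simply that the retrieved text, not the model's parameters alone, becomes what the answer is conditioned on.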
Starting point is 00:56:20 So, okay. Now, one of the most striking aspects of the current moment in AI, as many people have pointed out, is that almost every wise person expected that first you'd get AIs that can do all the manual labor for us, the truck driving, the cooking, whatever. Then maybe you'd get AIs that can do math and science. And only at the very, very end would you get AIs that can do art or music or poetry, the true heights of human specialness. And things are actually happening in precisely the opposite order, in some sense, right? The plumbers and the electricians
Starting point is 00:56:39 might be the last ones employed, right? Those have been the hardest jobs to replicate. Now, maybe the most useful thing I can say about that is that the core of the problem seems to be that it's really hard to get enough training data about the physical world. It's very, very expensive to get the billions of examples of things interacting in the physical world. You can get training data from simulations, but it often doesn't translate very well to the physical
Starting point is 00:57:10 world. But it's possible that this is yet another thing where we'll see a phase transition once there's enough scale. Just like before 2019 or 2020 there were no AIs that could really understand natural language, and then suddenly you hit a certain scale and there were, right? So it might be that, even with limited training data, once you have enough compute to learn from that data, you'll be able to do robotics via the same old recipe of gradient descent on a neural net, and you'll get useful household robots and all of that stuff.
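As a concrete picture of that "same old recipe," here is a minimal sketch of gradient descent on a tiny neural net, learning XOR with NumPy. The architecture, loss, and hyperparameters are illustrative assumptions, not anything specified in the talk; the point is how little machinery the recipe itself requires.

```python
import numpy as np

# A tiny two-layer network trained by gradient descent to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

for step in range(3000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output in (0, 1)

    # Backward pass: gradient of mean binary cross-entropy w.r.t. logits.
    d_logits = (out - y) / len(X)
    d_h = (d_logits @ W2.T) * (1 - h**2)     # backprop through tanh

    # The update itself: this is the whole "same old recipe."
    W2 -= lr * (h.T @ d_logits); b2 -= lr * d_logits.sum(axis=0)
    W1 -= lr * (X.T @ d_h);      b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Scaled up by many orders of magnitude in parameters, data, and compute, this loop is essentially what trains the big models, which is what makes the phase-transition thesis at least plausible.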
Starting point is 00:57:51 That's one thesis, anyway. Or, as always, until you actually see it, maybe there's some deeper obstruction that prevents it. Fantastic. I think we've got one. Kyle, will you stand up first? No? Okay, let's jump up here.
Starting point is 00:58:04 Yeah, just on your idea at the end, that we've got to build the AIs that venerate and protect the ephemeral, unclonable, unpredictable. Yes. It kind of reminded me of Asimov's Foundation trilogy and Hari Seldon, who predicted the whole future. Now, I did read that, but 30 years ago, when I was like 12 years old. And then there's this one guy who comes along who's totally ephemeral, unpredictable, and that was the Mule.
Starting point is 00:58:31 The Mule, right. And then you start thinking about who is the analog of the Mule in today's scene, ephemeral, unpredictable, unclonable. It's got to be Donald Trump. Yes, yes. Aren't your AI systems going to be venerating,
Starting point is 00:58:47 predicting? Yes, I was worried that you were going there, yes. I don't know whether Hari Seldon predicted this Mule. I think we've got one more right behind you. Hi, great talk. By the way, I was a beta tester for GPT-3.5. All my comments were around safety. The question is, Vinod Khosla has suggested that we're thinking about things in the wrong way: that when these large language models, et cetera, create art, that's actually a proxy for the emotions that will be created.
Starting point is 00:59:31 He thinks that we will bypass music and that AI will understand us and create not songs, not music, but experiences more directly. In other words, create sounds that appeal to us, but are not necessarily recognizable by anybody else as a specific song. So what are your thoughts on that? So something like music, but personalized to an individual? Yeah. I'm not sure I understand the idea fully,
Starting point is 01:00:05 but often, when people say AI is not going to do X, it's going to do Y instead, the answer is, well, there will be AIs that do X and there will be AIs that do Y. Whatever you can get these things to do, someone will try. If it is possible to write music that sells with an AI, then why would that not be done? I think you'd have to explain that.
Starting point is 01:00:31 The basic idea is that creating music assumes a shared set of values or culture, that we all appreciate the Beatles. The idea is that AI will be more personal and will actually learn you. It won't give you music that's shared by others, but music that is personal to you. Okay. I mean, sometimes we actually want a shared experience, right? We want to enjoy some artistic work and have common knowledge that all of our friends are enjoying the same work.
Starting point is 01:01:07 But I think there is something to the idea that one of the main benefits you can get from language models right now is this huge personalization. Instead of reading a textbook, for example, you can learn any subject by telling ChatGPT: here is what I already know, and here is what I need to know; can you help me get from here to there? In really advanced subjects it may screw up, but my daughter has been using it to learn pre-algebra, and it's great for that. Right.
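As a sketch of that "here's what I know, here's what I need to know" pattern, one might assemble the prompt like this; the function name and wording are hypothetical, purely for illustration.

```python
# Hypothetical helper for building a personalized-tutor prompt.

def tutor_prompt(known: str, goal: str,
                 style: str = "short lessons with exercises") -> str:
    return (
        f"Here is what I already know: {known}\n"
        f"Here is what I need to know: {goal}\n"
        f"Can you help me get from here to there, as {style}? "
        "Check my understanding after each step before moving on."
    )

# Example: the pre-algebra use case mentioned above.
print(tutor_prompt(
    known="arithmetic with fractions and negative numbers",
    goal="solving linear equations in one variable",
))
```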
Starting point is 01:01:35 George, up here. Yeah. So going back to the specialness problem: is that any different from the specialness problem we always face in life? I don't play chess as well as my nephew, but I still love playing chess. And actually, there are more people playing chess today, after Deep Blue, than ever before. And lots of people play music; we don't play it as well as Paul McCartney. So how is the AI different from that problem, if indeed it is? Yeah, I think that's an excellent point, right? This whole worry that
Starting point is 01:02:07 we're going to lose our human dominance in science and in art. Well, okay, the overwhelming majority of us never had that dominance to begin with, right? I will never be able to write music that would compete at these heights of achievement. And so you could say, yeah,
Starting point is 01:02:34 this is an argument for why we will be able to reconcile ourselves to this, right? But I think the new aspect is just that we will have these extremely intelligent, creative entities that are, like, infinitely rewindable and replicable, right? That don't have this ephemerality to them, where they just do their one thing and they die. Where you can always just go back and get another version if you want it. And so that's the thing that's been sticking in my craw, that I've been trying to make sense of. Right. I think we've got time for two more here, and then we'll jump up front again. Yeah, real quick, just projecting a little bit forward, and based on something you mentioned
Starting point is 01:03:10 a while ago, the physical world, where we don't have enough information. What are your thoughts in relation to data that's coming from IoT, from machine-to-machine messaging? Do we need a new framework to start collecting that type of data, where there are no humans involved?
Starting point is 01:03:26 And second to that, a little bit in relation to synthetic data: plugging information that we don't have into models, to be more precise. What are your thoughts on that? Okay, so I don't understand why IoT would require a new framework. I mean, a priori, it just seems like another source of data that you can feed in. And one of the key aspects that has powered this AI boom
Starting point is 01:03:53 is that neural nets are, in some sense, universal function approximators. And not only that, but the same architectures, like transformers, seem to be good for just about anything that you throw at them, whether that's images or text or time-series data. It didn't have to be that way a priori, but that's a sort of incredible fact. So until we see that that's false, people are probably going to just proceed on that assumption.
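A minimal sketch of that "one architecture for everything" point, in PyTorch: once text, image patches, or time series are mapped to sequences of vectors, the identical transformer encoder consumes all three. The embedding choices and sizes here are arbitrary assumptions for illustration.

```python
import torch
import torch.nn as nn

# One generic sequence model; sizes are arbitrary illustrative choices.
d_model = 64
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)

# Anything that can be turned into a sequence of d_model-dim vectors fits.
text = nn.Embedding(1000, d_model)(torch.randint(0, 1000, (1, 16)))      # 16 word tokens
patches = nn.Linear(16 * 16 * 3, d_model)(torch.randn(1, 36, 16 * 16 * 3))  # 36 flattened image patches
series = nn.Linear(1, d_model)(torch.randn(1, 128, 1))                   # 128 scalar measurements

for seq in (text, patches, series):
    out = encoder(seq)  # identical architecture, no modality-specific changes
    print(out.shape)
```

Nothing about the encoder knows which modality it is seeing; only the thin input mappings differ, which is one way to read the "universal" claim.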
Starting point is 01:04:24 Your other question was about what again? Synthetic data. Yeah, I understand. So it's clear that for a lot of tasks, the main bottleneck right now is a lack of enough high-quality training data. And so the tasks where you ought to expect that AI will get much further, faster, are those where you can train on synthetically generated data. In some sense, this is what allowed AlphaGo and AlphaZero to succeed as well as they did, even eight years ago: you can just generate millions of games via self-play, and for each one, you know who won and who lost. So you don't have any bottleneck of data; you can generate as much new data as you want. Math may have that same character: you can generate lots and lots of math problems, lots of examples of theorems to prove, and that can all be done mechanically, right?
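Here is a minimal sketch of that self-play recipe, with tic-tac-toe and a random policy standing in for Go and a learned network; the game, the policy, and the labeling scheme are simplifying assumptions.

```python
import random

# Self-play as an unlimited data source: every finished game labels all of
# its positions with the eventual outcome, so no human data is needed.

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    board, player, history = ["."] * 9, "X", []
    while winner(board) is None and "." in board:
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = player
        history.append("".join(board))
        player = "O" if player == "X" else "X"
    return history, winner(board) or "draw"

# Generate as many (position, outcome) training pairs as we like.
dataset = []
for _ in range(1000):
    history, result = self_play_game()
    dataset.extend((pos, result) for pos in history)

print(len(dataset), "labeled positions; sample:", dataset[0])
```

A real AlphaZero-style loop would replace the random policy with the network currently being trained, but the key property is already visible: the data supply is bottomless, and every example arrives labeled by the game's outcome.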
Starting point is 01:05:37 But now, how would we do that for art or for music? How would we synthetically generate new artworks to train the thing with? You might worry that, with each iteration, it's just going to get worse and worse, because it's going to lose touch with the original wellsprings of human creativity that we're trying to get it to emulate. But maybe not. That's one of the biggest research problems right now. Fantastic. I think one more up front here. It was a terrific talk. I just want to follow up on something George said, and this is not an objection at all, but just a suggestion. One way of thinking about what matters in making music or writing stories, like your daughter does, is not to evaluate it in terms of the quality of the output,
Starting point is 01:06:18 but the value of the striving, the value of the doing. Yeah. When we climb mountains, sometimes what matters to some people is getting to the top, but for other people, the value is in just the climbing of the mountain. Yeah. And it's not the same if you take a helicopter. It's not the same. So one of the things we value about what we do in life is the doing of it. Yeah. And I think that's something we really need to remember, because so often we fall
Starting point is 01:06:47 into it. You weren't doing this, but I think we often fall into evaluating AI in terms of the products that it produces. And that's natural; it's an economic way of thinking about it. But we can also think about the value of what we do intrinsically as humans. I completely agree. I think there's a lot of wisdom in that. At the same time, a lot of people have jobs where they are judged by something that they produce,
Starting point is 01:07:16 and those jobs may be threatened, and we will have to think about what we do then, how those people make a living. But I think there's a lot to say about the fact that, even if GPT reaches a point where it can always write a better story than you can, there's one thing that it won't do, and that's write the specific story that you had in you to write. And so you have to sort of recenter your whole notion of what's valuable around that, if you want something that's going to remain.
Starting point is 01:07:53 Fantastic. Thank you, Scott. I'm sure we'd all love to pull you in at lunch. The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so, as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit
Starting point is 01:08:17 for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build, as a community, our own TOEs. Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well. Last but not least, you should
Starting point is 01:08:45 know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in Theories of Everything and you'll find it. I often gain from re-watching lectures and podcasts, and I read in the comments that, hey, TOE listeners also gain from replaying. So how about, instead, re-listening on those platforms: iTunes, Spotify, Google Podcasts, whichever podcast catcher you use? If you'd like to support more conversations like this, then do consider visiting patreon.com slash kurtjaimungal and donating with whatever you like. Again, it's support from the sponsors and you that allows me to work on TOE full-time. You get early access to ad-free audio episodes there as well.
Starting point is 01:09:25 For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough.
