OpenAI Podcast - Episode 17 - What happens now that AI is good at math?

Episode Date: April 28, 2026

Math is one of the clearest ways to see how far AI has come in a short span. OpenAI researchers Sébastien Bubeck and Ernest Ryu join host Andrew Mayne to explain what changed and what it could mean for the future of research. They reflect on how Ernest used ChatGPT to help solve a 42-year-old open problem, the difference between deep literature search and original mathematical discovery, and what changes when AI can work over longer timelines.

Chapters:
01:27 The surprising progress of AI's math capabilities
03:01 Solving an open problem with ChatGPT
06:57 How models went from basic math to research level
11:32 Why math matters for AGI
14:26 AI and the Erdős problems
21:26 Building an automated researcher
28:19 The role of humans as models improve
33:52 Verifying proofs with AI
36:00 The risk of shallow understanding
41:19 Advice for learning math with ChatGPT

Hosted on Acast. See acast.com/privacy for more information.

Transcript
[00:00:00] Hello, I'm Andrew Mayne, and this is the OpenAI Podcast. Today, our guests are researchers Sébastien Bubeck and Ernest Ryu, and we're going to talk about math: how it went from almost laughable to Olympiad level, and why you need math to reach AGI. The progress of the last few years has been nothing short of miraculous. We will be able to have LLMs solve problems that require more than 50 pages of thinking.
[00:00:25] Mathematics was just the perfect benchmark to see the models making progress during the last four years. Sébastien, Ernest, I'd love to know more about you. So how would you explain your roles? Yeah, sure. So I have been working in mathematics for almost 20 years now. I used to work in optimization and the theory of machine learning. I was a professor at Princeton for a few years before moving to Microsoft. And now I'm a researcher at OpenAI. And in the last few years I have been really trying to understand how AI can help mathematics, and to really evaluate the progress that we're making in terms of solving difficult math problems with AI.
Starting point is 00:01:09 Ernest, how about you? Yeah. So I've recently joined OpenAI as a researcher, but before that, I was an applied mathematician working on optimization and machine learning theory. And in my previous job, I worked as a professor of mathematics at the UCLA Math Department. So I think a lot of people have this perception that these models aren't good at math, literally called language models. And how has that changed? What's gone on? Yeah, I think, you know, the progress of the last few years has been nothing short of miraculous.
Starting point is 00:01:41 It's important to remember that two years ago, we didn't even have reasoning models, let alone models that could prove, you know, difficult mathematical theorems. Today, two years later, the models they are able to help Fields Medalist in the world. their day-to-day work. So really, the jump is just simply astounding. And maybe if I can build a little bit more on that, something which is important to understand is that everybody has been surprised by this progress, including us. So to tell you a story, a year and a half ago, I was at a workshop at a conference with other fellow mathematician, and there was a debate that I participated in on whether LLM, scaling LLMs will help us resolve major open problems. So this was a debate, you know, a year and a half ago. And the room was
Starting point is 00:02:31 very divided. In fact, they did a poll at the beginning. And I think it was like 80% said, no, impossible that this would happen. So then the debate unfolded. And, you know, by the end of the debate, it was more like 50-50. So, you know, pretty good progress during that hour. This obviously was just so wrong in hindsight. Like just mere eight months later, the model were starting to be able to do research level mathematics. What was the breakthrough moment for you? realizing that there was a really good intersection between AI and mathematics. So summer of 25, the big news was Chad Chip-T was able to achieve a top human-level performance at the International Math Olympia, a gold medal performance.
Starting point is 00:03:12 So that was amazing news, and that demonstrated that, well, at least for the competition-level mathematics, the models are very highly capable, only on par with the top human high school contestants. But, well, competition problems are canned problems. They have relatively short solutions because they are meant to be solved within a few hours. And they're not novel because, well, somebody came up with it. There's a solution. So it's not research level math. So then I got curious, and a lot of people got curious, can chat GPT do research level mathematics? And there was a lot of debate online. And then I thought to myself, I should try it on my own problems. Maybe I'll try it for myself and make up my own minds as opposed to, you know, listening to what other people say,
Starting point is 00:03:58 because I'm a mathematician myself. So I took a classical open problem in, in optimization theory, which is a branch of applied mathematics that I work in. And the question specifically is there's a famous algorithm called the Nestrov Accelerated Gradient Method. And does this have this convergent behavior or is it possible that the, for, you know, in certain bad cases, can there be a certain diversion behavior? This question was, was genuinely open in the sense that people know that in most cases the algorithm behaves well. It's convergent, but people really did not know, like, is there a bad instance? Does it, in the worst case, could it diverge? The answer turned out to be yes. And the way I discovered it is, I remember it distinctly. So, so, so,
Starting point is 00:04:48 So my bedtime for my son is 8 p.m. And then I try not to stay awake after midnight. So I had four hours of usually evening hours to myself if I want to focus on something. So I decided, okay, I'm going to spend a few days working on this. So over the course of three days, so that's 12 hours total. I interacted with Chad Cipiti on this question. It wasn't as simple as me just putting in the prompt and getting a solution. I played the role of the verifier.
Starting point is 00:05:14 I told whenever the model made a mistake, I corrected it. I also tried to point the conversation into areas that I felt, approaches that I felt were novel. And after a while, the proof, there was a proof, and I checked it. I also asked ChatchipT to double check it, and it was correct. And that's how this 42-year-old open problem got resolved. And once I got this solution, I thought to myself, what would be the most fun thing for me,
Starting point is 00:05:49 fun way for me to publicize this? Because I could just write a paper and that would be, but that would be less fun. So I decided, let me go to Twitter and talk about this. Dangerous, but yeah. Yeah. But, well, I had a lot of fun. Yeah, so people, it was,
Starting point is 00:06:07 I think one of the earliest instances of a genuinely open problem, mathematical open problem being solved by AI. And, yeah, I mean, people, people like the people ate it up and it was a lot of fun. It is interesting as you brought that up that we've seen sometimes people said, hey, I found something cooler novel and then sometimes it gets torn apart. Sometimes it stands up. And going into social media can be kind of scary,
Starting point is 00:06:31 but it sounds like we do need these kind of feedback cycles. I think part of the challenge for a lot of us is we hear terms, you know, we hear like the international math Olympiad and we're trying to figure out like, okay, what does that mean from like a scale of a problem? You know, I can understand addition, subtraction, multiplication, Could you give me an example of understanding, like where we went from, from like, you know, first ChatGPT, which could kind of sort of use it, then it could do math, it could use a tool, but then the model sort of implicitly understanding that. When ChatGPT, you know, just entered the scene in early 23, I started testing, I was very curious about how the model is, would perform fair on sort of common math problems.
Starting point is 00:07:12 So these would include math problems that you would see in like the high school level, but also like day-to-day like math-ish problem. So for example, imagine a scenario where like the three of us went camping together and then I paid for this, set for pay for this, and then Andrew you pay for whatever. And then we want to clear the ledger and we want to split things evenly at the end. Can chat chip BT do the calculations for us? And this is moderately complicated if you have like 17 items that we purchased. in 23, 24, and also in early 25, I remember, the models couldn't do this.
Starting point is 00:07:45 Another example would be, I'm in, let's say, in Korea, Sab's in Paris, Andrew, you're in California, and want to set up a Zoom meeting, like what would be a good hour to do so? Again, in early 25, the models couldn't do this. But then just suddenly things just changed, and I wasn't in Open AI at the time, So I'm not at all, I don't, I'm not quite privy to what exactly you did, but suddenly the models started solving IMO problems. And then furthermore, it started solving research problems. And the way I sort of calibrate this right now is that unless you are a professional mathematician trying to discover new mathematics, if you are somebody who's like, let's say, a physicist or a chemist who uses relatively complicated mathematics,
Starting point is 00:08:35 like differential equations, differential geometry, things like this, but you're not inventing new math. Then chat GPT can do all of the math that you would need. So any basically user of high-level mathematics from STEM can now use chat GPT to basically have their math taken care of. You want to exercise some degree of caution to check the check whether things are right, you know, run simulations just to double check. The models can make mistakes. But now any math problem that you would want to solve, most people for 99% of the population, the models can do it. When I worked on the release of GPD4, I used scheduling as one of those examples. And I could put three people into a schedule and have it figure out time slots.
Starting point is 00:09:17 But pushing it beyond that, that was really hard. Why did, was there a change? So Ernest just talked about noticing all of us that got better. Now, we know one thing was tool used. You could let the model use a calculator, but something else happened with the models themselves. So going back to the debate that I just told you about, like the framing was really about can scaling alone, scaling of LLMs alone bring you to, you know, solving research breakthroughs in mathematics.
Starting point is 00:09:45 And this is a wrong framing. What we do at OpenEye, we do a lot of research, innovative research. It's not just about scaling the model. So when you ask what happened or, you know, when you're asking what happened middle of last year when suddenly the model we're able to solve mass problems. Well, a lot of things happen. We do a lot of research. And all of this has to progress at the same time.
Starting point is 00:10:06 So I can't really point to a single element. But it was able to do it itself, though, without the tools. Yeah. So I think it's really, really important to, you know, just double down on what Ernest was saying about the progress and, you know, the scheduling problems that the model wasn't able to do back then. I said that two years ago we didn't have reasoning models. Well, I think about four years ago.
Starting point is 00:10:31 Four years ago, so this is pre-Chad GPT. And I remember Google came out with a mathematics model called Minerva at the time. And I fell from my chair. I was so impressed. What was I impressed by? That the model, I could give it the coordinates of points in the plane, and it would give me a line that goes through those points. Like when I say that, you know, now it's almost hard to understand what are you talking about. Obviously, a model can do that.
Starting point is 00:10:56 So I think we have kind of forgotten how quickly things have happened. And now, yeah, you know, Ernest was saying that it's basically at the point where unless you're trying to invent new mathematics, it's kind of at the right level already. I would say we're already seeing glimmers that even to invent new mathematics, it's getting there. Could you break down, though, aside from somebody who's interested in developing new fields of mathematics or just making new proofs, what does this affect everything else? What is the impact of this going to be on science? what is the impact of the rest of what you're working on? Why is this really important and not just, oh, cool, it does math? So I think the all cool, it does math part.
Starting point is 00:11:39 What did matter as we were developing those models as a good way to benchmark the progress? The nice thing about mathematics is that the question are very clear, non-ambiguous. You know, everybody agrees on what the question is asking. So that's point number one. Point number two, you can verify the answer. So once the model can give an answer,
Starting point is 00:11:57 everybody will agree, was it correct? or was it not correct? Although you can put a pin on that because we will talk about, you know, in research level, it's not that simple anymore to evaluate. But before research level, it's very easy to evaluate.
Starting point is 00:12:09 So mathematics was just the perfect benchmark to see the model making progress during the last four years. Now I would say we have kind of saturated that aspect. And you can ask, okay, now, okay, fine, the models do mathematics. We have understood. What about the next steps?
Starting point is 00:12:27 And for the next step, I would say that having our models be good at mathematics is going to be good for many, many other things, and let me explain why. A key feature of mathematics is that to resolve a problem, you have to think for a long time. Be it days, weeks, sometimes years. So this long thinking, not only do you have to think for a long time,
Starting point is 00:12:48 but you also have to think consistently for a long time. If at some point in your chain of reasoning there is a mistake, this will kill the entire argument. It doesn't matter if everything after that is correct. If there is one single failure point, the entire argument is destroyed. So this property makes it that this is what you want out of reasoning models, that if they make mistakes, they will be able to correct themselves. So we're hoping that this property that they acquire through mathematics will generalize
Starting point is 00:13:16 towards a domain, which by the way is exactly the same thing with human beings. Why do we train human beings in mathematics? I mean, it's a very fun topic. I love it. We did it professionally. Maybe we still do some of it a little bit. But why do we train humans in mathematics? Exactly for the same reason.
Starting point is 00:13:32 It gives you this kind of very logical thinking. Do we need to think about new ways to talk about these discoveries? Yeah. So I personally view it a little bit as part of my role to try to educate the research community about the recent advances because I have just dual background of both being a former mathematician and now working on the frontier of AI. And indeed, like Twitter and social media is a great place to try to explain what is a progress, in particular because this progress is so fast. So, you know, for example, maybe we can talk a little bit about the Erdos problems, you know, and some of the controversies that happen around that.
Starting point is 00:14:14 So there was a first example. So there was first, you know, earnest example and then there were a few other problems that were so. Just what explained Paul Erdos, though, too, just so I think people would love to know who he is and why his problems are sort of interesting. of course. So Paul Erdos is one of the most prolific mathematician of the last century. He has written, I think, 1,500 research paper. He was a very iconoclastic figure. You know, he didn't have a house or an apartment.
Starting point is 00:14:41 He was just traveling from one university to the next, trying to find new collaborators. And every time he would go to a place and basically ask questions. He was very, very, very gifted at asking questions. Not all the questions that he asked were interesting. Let me just say that right away. But still it was very productive and, you know, they, to research community wrote a lot of papers with him. There is even this concept of an Erdos number, which is, you know, how far away are you in the chain of collaborators from having also a paper with Erdos?
Starting point is 00:15:10 My Erdos number is two. I, of course, a paper with someone who coerced with Erdos. Wow. Yeah, I'm pretty happy about that. My number is three. The joke was, you know, you could be on a train ride with him and then by the end of the train ride, you'd maybe work. on a paper with him and have your name. Absolutely.
Starting point is 00:15:28 Absolutely. I think the two versus three basically says something about our respective age. That's essentially what it said. So anyway, so Erdos has, you know, all of this problem. And there is a very nice website by Thomas Bloom, who is keeping track of all the Erdos problems that are still open. So I think there is like a thousand problem or something like that on that website. And Thomas himself has done the work of trying to find, you know, he's an expert in Combinatorics.
Starting point is 00:15:54 So he can kind of say, okay, this is open, this is, you know, resolved. This has some complicated status, you know, for every, every problem. Of course, it doesn't necessarily know the answer to all of them. So if there is a paper which is marked open, it is not necessarily true that nobody knows how to solve it. But it is also a very interactive website where people can go on it and, you know, add comments to every problem and explain whether there is a solution, etc. So it's a very dynamic, a great website. So, of course, once we started to have GPT be able to solve research mass problem, this sounded like a treasure trove of problem to try our models on.
Starting point is 00:16:33 And we tried a couple. And to our great surprise, the model came back with answers to some of them that were marked as open. So we got really excited about this. The first one, you know, that I tweeted about, I don't remember when it was maybe it was in October or something like that last year. it was a deep literature search result. So let me explain what that means.
Starting point is 00:16:57 It means that what GPT did is that it did a vast literature search, trying to scan, you know, thousands of papers. And it found in some unrelated field the answer to the question. Now, it's really important to understand that it's not like in that, you know, unrelated field, the person said, okay, I'm solving an erdosch problem. It was written in a completely different language. It was different mathematics. You have to do work to connect.
Starting point is 00:17:21 like the two pieces, and GPT did that. So that was kind of amazing. And this was very ad hoc. Like, you know, we just tried by hand, basically, in the chat GPT interface. Once we saw that, Mark Selke, who is, you know, in our team also, decided to have a more systematic approach
Starting point is 00:17:39 of trying all of the problems. And he tried that, and the model came back with solutions to 10 Erdoss problem. And this was, you have to remember. At that point, there was still, I think, a very dynamic discussion about whether, you know, those models could go beyond the state of the art and discover, invent new mathematics. So I got very excited about this result and I tweeted about it. And, you know,
Starting point is 00:18:02 it's kind of an infamous tweet because people misunderstood it as kind of saying it really found the solution to 10 open problems that are very hard and the solution is completely new and did not exist in the literature. But that's not what happened. It was connected, of course, to the previous case where it is a deep literature search. So there was some, you know, you know, Freud with Google about, you know, endemic about whether, you know, this is the right way to talk about such results.
Starting point is 00:18:30 But now the punchline is kind of amazing, which is a few months later. So again, I said 10 solutions to open problems, and these were solutions in the literature. And then the question is, can you find solutions that are not in the literature? By now we have more than 10 actual solutions that are completely new,
Starting point is 00:18:49 that are publishable in top journal in combinatorics, completely obtained by, you know, some by CHATGPT and some by our internal models. So just within, again, this really speaks to the acceleration. In the span of just a few months, we went to, it's kind of a ridiculous statement to say that there would be 10 solutions to Erdosch problems, to it's actually happening for real and it's accelerating.
Starting point is 00:19:15 Yeah, it's interesting because it seems like that, you know, step one is have models be able to do really, literature research. And there have been major papers and awards done, given to people who've just done literature searches and found the solution was solved here and that actually applies elsewhere. So it's neat that it does that as the first step, but now that it's actually doing original. I mean, you know, the one thing that I really like about AI research is that it forces us to confront big questions about intelligence and about, you know, research and progress and how do we discover new things. In particular, there is this question of whether
Starting point is 00:19:49 the progress that we're seeing in science, is it just putting together different pieces and doing a little bit of reasoning on top of it? Or are there those brilliant sparks of insight? Everybody, of course, points to Einstein's relativity. I'm not even sure that really counts, to be honest. So I think the jury is still out on whether this process of just recombination
Starting point is 00:20:12 and a little bit of thinking, whether you can kind of increase, you know, human knowledge with no limit, or do you really need the sparks of genius that would be somehow only human? Well, even he credited, I forgot who was, but who came up with the analogy, the visualization method. He said it wasn't his. We pointed out who did it.
Starting point is 00:20:29 And he kind of took it to the next step further, obviously. And I think that we sometimes, we love these tiny little stories when it's a lot more complex than that. Yeah, absolutely. What will it mean for sciences in general if we have better mathematical tools in AI? How does it affect other things? Biology, material science. Yeah, so again, how it affects the rest of science. Well, the point is, I think it's really important for everybody to understand.
Starting point is 00:20:54 It's not like we're doing something very, very special for mathematics. Our techniques, our training techniques are very general. They are applied to everything. So our expectation is that we are seeing more progress in mathematics. Well, one reason is because it's very easy to benchmark. It's very easy to see that progress. But we have full expectation that this is going to happen in all sciences. It's not going to be limited to mathematics.
Starting point is 00:21:18 Yeah, it seems like something that's very good at going if this is true and then this is true and going through a long sequence of those kinds of statements has a lot of applications elsewhere. We've heard the term auto researcher. Do you unpack that of it? Right now, the way we work is exactly what Ernest described, which is really an interaction. It's kind of a professor's kind of a professor's a student interaction where chat GPT is a student and the professor is kind of, you know, giving a first problem and the student comes back, and then they talk a little bit, the student goes away for another week, comes back.
Starting point is 00:21:50 One point, of course, is that it's compressing those timelines greatly. In the honest story, you know, of solving this problem in 12 hours. I mean, I don't know. Without chat GPT, how long would it have taken you? Well, I have spent more than 40 hours failing without AI. And I don't know, maybe a month. Right. Yeah.
Starting point is 00:22:09 So, so exactly. So, you know, there is this thing of just compressing timelines. Now, when we talk about the automated researcher, that's a slightly different vision where the model or maybe a collection of model would work autonomously for a long period of time. This is kind of needed if we want to go beyond the current level. The current level of interaction, you know, the professor's student interaction where the student comes back after a week, is going to be very hard with that mode of interaction to do real breakthroughs to solve actually longstanding, you know, research problems or to make progress
Starting point is 00:22:43 in, you know, very difficult fields in biology where you need to interact, you know, with the wet lab and do all kinds of experiments. So once you want to go towards the real breakthrough, we will need to work over longer timelines. And this is where the automated researcher comes in. Maybe let me say it in a slightly different way. One concept that I'm a big fan of is this concept of AGI time. So you can have AGI seconds, minutes, hours, days, and so on. So that really means you have an AI and for like it can mimic human thinking, but for how long? So as Ernest was saying, you know, two years ago, maybe models were mimicking, you know, a high school student who thinks for a few minutes on a problem. Now we can mimic a researcher who can think for hours, maybe a few days.
Starting point is 00:23:29 We really want to go towards, and this progress has been going on for now, you know, very consistently for four years. where we went literally from seconds to minutes, to hours, to days, and now we are roughly at days slash one week. We want to go to weeks, if not months. This is open research. I don't think anyone on the planet knows exactly how to do it. But this goes back to we are doing a lot of research, a lot of innovation, and I think once everything will be put together,
Starting point is 00:24:00 we're just seeing this arc of progress where we keep making progress in AI time. But this is the direction of the automated researcher. So the people, the other mathematicians that I, you know, to talk to, their mode of using AI is they open up chatchip-T and then they talk to chatchip-tee within that context window. And you can have multiple sessions, but each session has a finite context length. And roughly on the order of like 50 pages of a math paper. And that's not long enough to make true like deep math ground. groundbreaking math breakthroughs because a lot of math papers are longer than 50 pages. And also the thought, the human thought that went into to produce, let's say, a 10 or 30-page
Starting point is 00:24:44 paper is usually, well, much orders of math to longer than the final output. So there's a limitation with the limited context window. But for users, but people who use codex will know that you can actually have very long work sessions with codecs. So you just keep, you know, giving instructions as to what. what kind of code you want to write. And then the code itself that you're working on, the repository of your code,
Starting point is 00:25:09 which in the math sense, the analogy would be that would be analogous to like math notes that you write down. That can be very, very, very long. But Codex is pretty good at deal with that. Once in a while, it compactifies its conversations. And it has its way of becoming this really amazing agent that can do really complex jobs over huge,
Starting point is 00:25:34 repositories of code over a really long context of conversation. And this, I believe, is going to happen with mathematics research as well. So we will be able to have LLMs be able to solve problems that are longer than just, you know, that require more than 50 pages of thinking. And that's what humans do. That's what human mathematicians do. People think for a day on a certain problem and then we kind of summarize our ideas and put it into notes the next day or the next week we come.
Starting point is 00:26:04 back to it. And then over several months, we've thought for so long, but it's sort of summarized. It's sort of organized in a way that becomes manageable. And in the end, the final output becomes a 30-page paper that summarizing the thoughts over, you know, many, many months or even years. So, yeah, I think that's going to happen. I was working on a very, very laughable problem to you guys over the weekend and using an LLM to try to do it to figure out like how to use a really small. LLM to do math. In the middle of it, I needed a benchmark. And I came across easy math, which is a benchmark for small LLMs. And the problem is just a paper on it. There wasn't really
Starting point is 00:26:42 a lot of data. And I just in the middle of codex, I go, can you create our own benchmark here and just generate the data for that? Yeah. And five minutes later, I had it. And that was magical to me because I'm in the middle of working on the tool that would have involved me. All of a sudden, okay, I got to spend a few hours, go do a generator, go produce this sort of stuff. Absolutely. And it runs in the background. I can't imagine. what it's like for you guys doing grown-up problems. Yeah, I mean, what you describe is really, you know, what we went after when we published the paper,
Starting point is 00:27:13 whose title was early experiments in science acceleration with GPT-5. Like, we, what you have experience is literal acceleration. Like, this is something that would have taken you before. I don't know, maybe a few days of work or something. I would have given up. Yeah. Yeah. Yeah.
Starting point is 00:27:29 So that's actually a great point, you know, I would have given up. this really enables scientists everywhere. Like, for example, mathematicians to be able to use code. Most of our friends, they don't code, you know. And now suddenly they have codex. They can do all the experiments that, you know, before they were trying to find a poor grad student to do the experiment for them. Now they can do all of these experiments very easily.
Starting point is 00:27:53 The flip side is, of course, like that scientists in all the disciplines, they can also use more advanced mathematics now, thanks to chat. I sat down with Bob Metcalfe. showed him how to use codex to do R. Because he was working on a project and R was new to him and he wanted to learn that. Yeah. And that was kind of a fun experience to take somebody's got a great mind and say, oh, instead of spending a lot of time having to figure this out, there's the tool for you.
Starting point is 00:28:17 But of course, now, as you alluded to before, we should talk about the role of the human in all of this. What is the place for the human, especially if we start to think about, you know, let's think a little bit about the future. I'm not a big fan of trying to predict the future. I'd like to explain what are the... But what do you think will happen? I think,
Starting point is 00:28:38 I think, you know, there is what my heart tells me and there is the rational aspect. So what my head tells me is, look, the progress has been happening, you know, very consistently for the last four years. From being able to solve mass problems that would take you seconds,
Starting point is 00:28:55 to minutes, to hours, to days. There is no reason for it to stop. Anybody who looks at the situation would say, okay, a year from now, you will have systems that can think for weeks. Two years from now, systems that can think for, you know, years. Not only that, but already today we're finding that our models are able to really surpass humans, in the sense that they can find mistakes in papers. You know, we had agents internally that have been able
Starting point is 00:29:24 to find papers and say, hey, actually this is wrong. Here is the correct answer. Not only that, but people tend to think that AI is only good at answering questions. Actually, no, it's also pretty good at asking questions. Of course, again, you need some research innovation there, which we had. And now our models are very good at asking questions.
Starting point is 00:29:46 So good, in fact, that humans are looking at those questions and saying, hey, maybe I should write a paper based on this question. So this is really, really already happening now. What I'm trying to say is that in a year, in two years, yes, models could do, basically, more or less everything that human researchers do. So now what? What is the role of humans? Well, why is it that we're doing science?
Starting point is 00:30:11 What's the point? The point shouldn't be to just solve problems for the fun of solving problems. We're solving problems because we're trying to understand something. The understanding piece is key. We're not solving problems to write papers, to show that we can write ten times more papers than our neighbor. That's not the point.
Starting point is 00:30:34 You can do competitive chess if that's your kind of deal. We're trying to really understand deeper things. And why are we trying to understand deeper things? Because we want to have better control over our environment. We want to be able to cure diseases. We want to be able to build things better, faster, more robust, more solid, all of those things. So I think there is a chance that we're looking at a very, very bright future using those tools, as long as the human stays in
Starting point is 00:31:07 control and guides what the problems are that matter. The AI doesn't care about curing disease. I mean, you know, it will not suffer from the same diseases as we do, but we do care. So we have to control them and guide them towards those problems. At the time of the advent of the first computers, when the computer went from being a person that did the math to an actual machine that did it, you saw some people saying maybe we all have to move from math to physics, because that's where the hard problems are going to be, and there aren't going to be any more hard problems in mathematics because computers will solve that. Now, that was the 1940s and 1950s, and it turned out that that's not the case; computation opened up a whole new branch of mathematics.
Starting point is 00:31:46 And that's what's going to continue. The mathematician who's in high school today is going to have a very exciting future 30 years from now because of what's happening here. I think math is going to be so much fun. So, okay, mathematicians enjoy solving problems. But, you know, pre-AI, we would think for months to solve a problem. There's enjoyment in that, but it's quite grueling. There is pain. There is a lot of pain. And there is a surge of dopamine when you actually find the solution.
Starting point is 00:32:23 That's going to be accelerated. So, you know, more solutions, more fun. But also, I think math is going to become much richer, because it's going to be much more interconnected. At the research level, a lot of math is hyper-niche. And when you write the paper, you know that there are only five living humans right now who will care about this paper. But you like the result. So you put it out. And then the five other people appreciate it.
Starting point is 00:32:50 So they read it. But then, you know, 20 years later, it's going to be on the arXiv somewhere, and nobody will read it. But now that we have AI, the AI will have read it. And if there is a useful connection, as Sebastien mentioned, it will surface it. And then people, you know, 100 years down the line will discover it and use it for whatever they want to use it for. So I would now have much more confidence that my results,
Starting point is 00:33:19 once put out there, will be used if there is a use in the future. And also, I now have access to mathematics in a much broader way. There are fields that I've not studied, and if a relevant result comes up, I would still have to study that field to be able to use that particular result in my research. But there is no way I could have found that result without the assistance of AI. Now it's accessible. The model tells me, hey, you can use this to solve your problem.
Starting point is 00:33:45 And then, well, okay, I'll go and try to use that. So math is going to be a much more interconnected enterprise. And also, verifying the correctness of mathematics is actually quite non-trivial, because imagine there's a proof written by somebody, it's 300 pages long, and it claims to solve a really important problem. And this person is a very reputable person. And the paper on the surface looks plausible. How do you know? Well, I mean, this is a process that takes years to verify. And it's also not enough that one person reads it. Many people need to read it, and then try to extend it, and look into the details.
Starting point is 00:34:31 This is a process that takes years. And sometimes fatally incorrect proofs are published. So that's also a very slow process, where the field initially accepts a result but later on discovers that it's unsalvageable. So then it needs to get filtered out. This is going to be so much more accelerated with AI. Right now, ChatGPT and our AI models are not perfect at verifying mathematics, but they're very good. And also, they have much more patience than humans.
Starting point is 00:35:02 So the truth is, so much of published mathematics has minor mistakes, and a lot of it has major mistakes. And we know because we have tested these things with our models. But now, I think the richer future of mathematics is that, through AI verification, we will have much more certainty as to which results are correct and which results are incorrect. And we'll have much faster feedback on this. A paper put out a week ago, we could get a verification on it, and then we could trust it and build on it, as opposed to waiting for five years to really ascertain its correctness. So overall, math is going to be much more fun. It's going to be much more interconnected. We'll be able to trust the results more. We'll be able to
Starting point is 00:35:50 move faster, and mathematicians will solve harder and more interesting problems. So maybe one thing that I want to add. I totally agree with everything that you just said. It's going to be a lot of fun. But I also want to look at one potential danger of the current progress, which would be that we kind of hand the keys to the castle to the AIs. That humans just start to trust the system a lot more, and they don't do the hard work that we, you know, kind of did to own our skills, to be able to
Starting point is 00:36:22 verify and to sit patiently, you know, for hours, many days in a row or many weeks in a row, to try to understand a result deeply, instead of just asking ChatGPT to explain it to us in simpler terms. So basically, I'm worried about potentially having a shallower understanding of things because we rely too much on the tool. So I think it's really important for the audience, for everyone listening to us, to understand that expertise is even more valuable than it ever was. The reason why we are able to squeeze out those results from ChatGPT is because of all of those years of training
Starting point is 00:37:00 and our deep understanding of the subject. If it wasn't for that, we would not be able to push the state of the art. And we're seeing it. It's not like we're seeing thousands of non-mathematicians suddenly being able to prove new results. In fact, if anything,
Starting point is 00:37:14 we have seen recent examples on social media where non-mathematicians have tried to use those tools to prove theorems and come up with, you know, many tens of pages of proof, and then it turns out to be just wrong. So this is a danger that we have to grapple with. It seems like that's going to be a problem in a lot of things.
Starting point is 00:37:33 You see people using current models that often just reinforce things you want to hear. And that can become your, you know, "I'm going to come up with some sort of unified theory" or whatever. Like, well, guess what? That's going to be a lot harder. Yeah. I mean, this sort of issue of mental atrophy, if you will, is also, I think, very prominent in coding. I mean, I wasn't a computer science major. I took some computer science courses and I coded myself.
Starting point is 00:37:58 I wrestled with the debugger, and most people of my age did. But nowadays, you don't have to do that in your university curriculum. And I think that's very dangerous. I've heard some people in the sciences who look at the progress and are very optimistic, like, well, we're not going to need scientists. We're not going to need this anymore. No. Yeah, no.
Starting point is 00:38:14 Wow. This is terrible. So really, I want to make sure anybody listening, please do not say that. This is the opposite of what we need. We need more scientists than ever. Those scientists are going to be more productive, more powerful. They will do better things. But we need them to be really, really good at their craft.
Starting point is 00:38:32 And I think this is where, you know, obviously, OpenAI cannot do everything, just to say it out loud. And this is where the existing institutions have a very big role to play. So academia needs to both understand the rate of progress, how fast this is going, but also to kind of reclaim their role in that process. Yeah, my hope and expectation is that we're going to see more people go into the sciences, because if you decide later on in life that you want to get into this, it's easier to catch up if you're dedicated, because you have the greatest tutor in the world. They just added it to ChatGPT; it has a visual explanation tool now that helps explain things. And I think that, you know, just because all of a sudden an AI model is able to completely top out a benchmark doesn't mean that you go, okay, we're done. We solved grade school math.
Starting point is 00:39:23 Congratulations, everybody. AI is done. It's like, no, there's a next level and the next level, and you're going to need people. No, I think it will help the young generation get up to speed in science so much more quickly. That's for sure. Like, I cannot imagine if I had ChatGPT, you know, as a teenager. I mean, I remember looking at Maxwell's equations and being like, what does it really mean?
Starting point is 00:39:45 How did they come up with this stuff? Now you can just ask it, and it will explain it to you so beautifully. It's a big deal, but you still need to do the hard work on top of it. Though with a lot more people trying to create mathematical proofs who don't know what they're doing, who maybe aren't really putting in the right scholarship, we've seen code repos and whatnot with people contributing fixes that aren't real fixes, and things like this. How do you solve for that?
Starting point is 00:40:10 If I'm somebody who's involved in mathematics or a journal right now, I'm a little bit terrified. Yeah, so I think, as Ernest said, AI can help with that too. On the other side of those systems, we can have AI agents that are also going over everything, trying to verify as much as possible. And then, again, we do not want to fully trust the AI to verify and accept a paper or accept a commit. But we can have the AI agents flagging specific potential issues, kind of bringing to the front, okay, hey, maybe this part, I'm not totally sure about it. So that will accelerate.
Starting point is 00:40:52 And I think the sort of social structure of mathematics or, you know, code, it has to change a little bit in a way that the human doing the commit or human controlling the agent takes responsibility. So in mathematics, there already is a culture of, well, if you put out an incorrect proof, then, well, that's, that it hurts your reputation. And you're putting your reputation on the line when you put out a paper with your name. And that has to, I think we need more of that. If you're mathematically curious and somebody is watching this or listening,
Starting point is 00:41:22 then maybe they have an interest in math, but maybe they didn't feel they were a math person. But they're kind of curious to get started. What would you tell them? Chat with ChatGPT. If you are interested in learning, it's so helpful. Even at the research level, when I need to learn a new concept,
Starting point is 00:41:39 I would habitually go to Wikipedia, and it's just very dense. After like 30 seconds, I go, okay, let me ask ChatGPT. And then I ask it, and I also ask follow-up questions. And when I do so, it gives me so much more helpful information, tailored to the parts of my knowledge that are missing, because I'm asking the questions tailored towards that. And you could imagine explaining to ChatGPT your mathematical background, the books that you've read, the material that you've learned, and then asking it to come up with a question that would be open and also understandable at your level of expertise.
Starting point is 00:42:21 Sebastien mentioned this. I don't think people yet appreciate that these LLMs are able to come up with good questions, but I think they can. So you have this companion that you can talk about math with, and talk about questions. You could ask the model to help you solve one. Once you have a solution, you could keep talking and come up with the next question, you know, variations of it. Even though you're still in your room alone, it feels much less like a solitary process. And that's what really makes mathematics fun, because math, I think, really is a social endeavor.
Starting point is 00:43:03 I think toy problems can be fun. And I tell people we can start with, like, how many M&Ms can you fit in your bathtub? It sounds silly, and you start to ask questions. Then you go, how many words did you read last year? How would you figure this out? And then you can start to have this real wonderful conversation and start asking these questions. Next thing you know, you're starting to do more and more complex mathematics
Starting point is 00:43:22 and realize how it can affect you. Gentlemen, this is great. Sebastien, Ernest, thank you very much. Thank you. Thank you, everyone.
