No Priors: Artificial Intelligence | Technology | Startups - O3 and the Next Leap in Reasoning with OpenAI’s Eric Mitchell and Brandon McKinzie
Episode Date: May 1, 2025
This week on No Priors, Elad and Sarah sit down with Eric Mitchell and Brandon McKinzie, two of the minds behind OpenAI’s O3 model. They discuss what makes O3 unique, including its focus on reasoning, the role of reinforcement learning, and how tool use enables more powerful interactions. The conversation explores the unification of model capabilities, what the next generation of human-AI interfaces could look like, and how models will continue to advance in the years ahead. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mckbrando | @ericmitchellai Show Notes: 0:00 What is o3? 3:21 Reinforcement learning in o3 4:44 Unification of models 8:56 Why tool use helps test time scaling 11:10 Deep research 16:00 Future ways to interact with models 22:03 General purpose vs specialized models 25:30 Simulating AI interacting with the world 29:36 How will models advance?
Transcript
Hi, listeners, and welcome back to No Priors.
Today, I'm speaking with Brandon McKinzie and Eric Mitchell, two of the minds behind OpenAI's O3 model.
O3 is the latest in the line of reasoning models from OpenAI, super powerful with the ability to figure out what tools to use and then use them across multi-step tasks.
We'll talk about how it was made, what's next, and reason a bit about reasoning.
Brandon and Eric, welcome to No Priors.
Thanks for having us.
Yeah, thanks. Very nice.
Do you mind walking us through O3, what's different about it, what it was in terms of a breakthrough
in terms of, like, you know, a focus on reasoning and adding memory and other things
versus a core foundation model LLM, and what that is.
So O3 is like our most recent model in this O series line of models that are focused on thinking
carefully before they respond.
And these models are in sort of some vaguely general sense smarter than
models that don't think before they respond, you know, similarly to humans.
It's easier to be, you know, more accurate if you think before you respond.
I think the thing that is really exciting about O3 is that not only is it just smarter
if you make like an apples-to-apples comparison to our previous O-Series models, you know,
it's just better at, like, giving you correct answers to math problems or factual questions about
the world or whatever.
This is true and it's great, and we will continue to train models that are smarter.
But it's also very cool because it uses a lot of tools
that enhance its ability to do things that are useful for you.
So, yeah, like you can train a model that's really smart,
but if it can't browse the web and get up-to-date information,
there's just a limitation on how much useful stuff
that model can do for you.
If the model can't actually write and execute code,
there's just a limitation on, you know,
the sorts of things that an LLM can do efficiently,
whereas a relatively simple Python program can solve a particular problem very easily.
So not only is the model, on its own, smarter than our previous O-Series models, which is great,
but it's also able to use all these tools that further enhance its abilities
and whether that's doing research on something where you want up-to-date information
or you want the model to do some data analysis for you or you want the model to be able to do the data analysis
and then kind of review the results and adjust course as it sees fit
instead of you having to be so prescriptive about each step along the way.
The model is sort of able to take these high-level requests,
like do some due diligence on this company
and, you know, maybe run some reasonable, like, forecasting models
on so-and-so thing.
And then, you know, write a summary for me,
like the model will kind of like infer a reasonable set of actions
to do on its own.
So it gives you kind of like a higher-level interface
to doing some of these more complicated tasks.
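As an aside for readers, here's a minimal sketch of the kind of tool loop Eric is describing: the model takes a high-level request, decides at each step whether to call a tool or give a final answer, and folds each tool result back into its context so it can review and adjust course. The interfaces and tool names here (next_action, web_search, run_python) are hypothetical placeholders, not OpenAI's actual implementation.

```python
# Hedged sketch of an agentic tool loop; all interfaces here are hypothetical.
def run_agent(model, user_request, tools, max_steps=10):
    """Let the model alternate between tool calls and a final answer."""
    context = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        step = model.next_action(context)          # hypothetical: model picks a tool call or a final answer
        if step.kind == "final_answer":
            return step.text
        tool = tools[step.tool_name]               # e.g. hypothetical "web_search" or "run_python"
        observation = str(tool(**step.arguments))  # execute the call
        context.append({                           # the result goes back into context so the model
            "role": "tool",                        # can review it and adjust course on its own
            "name": step.tool_name,
            "content": observation,
        })
    return "Stopped after max_steps without a final answer."
```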
That makes sense.
So it sounds like basically there's like a few different changes
between your core sort of GPT models
where now you have something that takes a pause
to think about something.
So at inference time, you know, there's more compute happening.
And then also it can do sequential steps, because it can kind of infer what those
steps are and then go act on them.
How did you build or train this differently from just a core foundation model or, you know,
when you all did GPT-3.5 and 4 and all the various models that have come over time?
What is different in terms of how you actually construct one of these?
I guess the short answer is reinforcement learning is the biggest one.
So, yeah, rather than just having to predict the next token on some
large pre-training corpus from, you know, everywhere, essentially. Now we have a more focused
goal of the model solving very difficult tasks and taking as long as it needs to figure out
the answers to those problems. Something that's like kind of magical from a user experience for me
was we've in the past for our reasoning models talked a lot about test time scaling. And I think
for a lot of problems, you know, without tools, test time scaling might
occasionally work,
but at some point the model is just kind of
ranting in its internal chain of thought
and especially for
like some visual perception ones. It knows that
it's not able to see the thing that
it needs, and it just kind of, like, loses
its mind and goes insane, and
I think
tool use is a really important component now to
continuing this like test time scaling
and you can feel this when you're talking to
O3. At least my impression when I
first started using it was
the longer it thinks like I really get the
impression that I'm going to get a better result and you can kind of watch it do really intuitive
things, and it's a very different experience. But being able to kind of trust that as you're waiting,
like, it's worth the wait and you're going to get a better result because of it, and the model's not
just off doing some totally irrelevant thing.
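For readers who want the contrast Brandon is drawing made explicit, here is a schematic only (not OpenAI's exact recipe): pretraining maximizes next-token likelihood over a fixed corpus, while reasoning RL samples a whole trajectory (chain of thought, tool calls, answer), scores it with a reward such as whether the task was solved, and nudges the model toward high-reward trajectories.

```latex
% Schematic only. Pretraining: next-token prediction over a corpus D.
\mathcal{L}_{\text{pretrain}}(\theta) = -\,\mathbb{E}_{x \sim \mathcal{D}} \sum_{t} \log p_\theta(x_t \mid x_{<t})

% Reasoning RL: sample a trajectory tau, score it with a reward R,
% and follow the policy gradient (REINFORCE form).
J(\theta) = \mathbb{E}_{\tau \sim p_\theta(\cdot \mid \text{prompt})}\big[ R(\tau) \big],
\qquad
\nabla_\theta J(\theta) = \mathbb{E}_{\tau}\big[ R(\tau)\, \nabla_\theta \log p_\theta(\tau \mid \text{prompt}) \big]
```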
That's cool. I think in your original post about this too,
you all had a graph which basically showed that you looked at how long it thought versus the accuracy
of result and it was a really nice relationship. So clearly, you know, thinking more deeply about something
really matters. And it seems like in the long run, do you think there's just going to be a world
where we have sort of a split or bifurcation between models, which are sort of fast, cheap,
efficient, get certain basic tasks done? And then there's another model where you upload a legal
M&A folder, and it takes a day to think. And it's slow and expensive, but then it produces, you know,
output that would take you a team of people, you know, a month to produce. Or how do you think
about the world in terms of how all this is evolving or where it's heading?
You know, I think for us, like, unification of our models is something that, you know, Sam has talked about publicly, that, you know, we have this big crazy model switcher in ChatGPT and there are a lot of choices, and, you know, we have a model that might be good at any particular thing, you know, that a user might want to do, but that's not that helpful if it's not easy for the user to figure out, well, which model should I use for that task?
And so, yeah, making the models better able, you know, making this experience more intuitive is definitely something that is valuable and something we're interested in doing.
And that applies to this, you know, question of like, you know, are we going to have like two models that, you know, people pick between or a zillion models that people pick between or do we put that decision, you know, inside the model?
I think everyone is going to try stuff and figure out what works well for the problems they're interested in and the users that they have.
But yeah, I mean, that question of like how do you make that sort of decision be like as effective, accurate, intuitive as possible is definitely top of mind.
Is there a reason from a research perspective to combine reasoning with pre-training or try to have more control of this?
Because if you just think about it from the product perspective of, like, the end consumer dealing with ChatGPT, like, you know, we won't get into the naming nonsense here.
But they don't care.
They just want like the right answer and the amount of intelligence required to get there in as little time as possible, right?
The ideal situation is it's, like, intuitive that, like, how long should you have to wait?
You should have to wait as long as it takes for the model to, like, give you a correct answer.
And I hope we can get to a place where our models have a more precise understanding of their own level of uncertainty.
Because, you know, if they already know the answer, they should just kind of tell you it.
And if it takes them a day to actually figure it out, then they should take a day.
But you should always have a sense that, like, it takes exactly as long as it needs to for the
current model's intelligence, and I feel like we're on the right path for that.
Yeah, I wonder if there isn't a bifurcation, though, between like an end user product
and a developer product, right? Because there are lots of companies that use, you know,
the APIs to all of these different models and then for very specific tasks. And then on some
of them, they might even use like open source models with really cheap inference with stuff that
they control more. I hope you could just kind of tell the model, like, hey, this is an API use
case. And yeah, you really can't be over there thinking for like 10 minutes. We got to get an
answer to the user. It'd be great if their models kind of get to be more steerable like that
as well. Yeah, I think it's just a general steerability question. Like at the end of the day,
if the model's smart, like you should be able to specify like the context of your problem
and the model should do the right thing. There's going to be some like limitations because, you know,
maybe just figuring out, given your situation, what is, like, the right thing to do might require
thinking in and of itself to figure out. So, like, it's not that you can obviously do this
perfectly, but, yeah, pushing, you know, all the right parts of this into the model
to make things easier for the user seems like a very good goal.
Can I go back to something else you said, like, so the first guest we ever had on the podcast
was actually Noam Brown.
Oh, nice. I've heard of him. I know.
Two-plus years ago, yes. Hi, Noam.
It'd be great to get some intuition from you guys
for why tool use helps, like, test-time scaling work much better.
I can give maybe very concrete cases for, like, the visual reasoning side of things.
There's a lot of cases where, and back to also the model being able to estimate its own uncertainty,
you'll give it some kind of question about an image, and the model will very transparently tell you
within its chain of thought, like, I don't know, I can't really see the thing you're talking about very well,
or, like, it almost knows that its vision is not very good. But what's kind of magical
is, like, when you give it access to a tool, it's like, okay, well, I've got to figure something out. Let's
see if I can, like, manipulate the image or crop around here or something like this. And what
that means is that it's a much more productive use of tokens as it's doing that, and
so your test time scaling slope, you know, goes from something like this to, you know, something
much steeper. And we've seen exactly that: the test time scaling slopes
without tool use and with tool use for visual reasoning specifically are very noticeably different.
Yeah, I'd also say for, like, writing code for something, like, there are a lot of things that an LLM could try to figure out on its own,
but would require a lot of attempts and self-verification, that you could write a very simple program to do in, like, a verifiable and, you know, much faster way.
So, you know, I do some research on this company and, like, use this type of, you know, valuation model to tell me, like, you know, what the valuation should be.
Like, you could have the model, like, try to crank through that and, like, fit those coefficients or whatever in its context.
Or you can literally just have it, like, write the code to just do it the right way and just know what the actual answer is.
And so, yeah, I think, like, part of this is you can just allocate compute a lot more efficiently.
because you can defer stuff that the model doesn't have a comparative advantage at doing
to a tool that is, like, really well suited to doing that thing.
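A toy illustration of that comparative-advantage point: rather than the model "cranking through" a fit token by token in its context, it can emit a few lines of code that compute the exact answer. The numbers below are made up for the example.

```python
# Toy example with made-up data: an exact least-squares fit the model can
# delegate to code instead of approximating the arithmetic in its context.
import numpy as np

revenue = np.array([10.0, 25.0, 40.0, 60.0])       # hypothetical revenue, $M
valuation = np.array([85.0, 190.0, 330.0, 470.0])  # hypothetical valuation, $M

# Fit valuation ≈ a * revenue + b in one verifiable step.
A = np.column_stack([revenue, np.ones_like(revenue)])
(a, b), *_ = np.linalg.lstsq(A, valuation, rcond=None)

print(f"valuation ≈ {a:.2f} * revenue + {b:.2f}")
print(f"predicted valuation at $50M revenue: ${a * 50 + b:.0f}M")
```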
One of the ways I've been using some form of O3 a lot is deep research, right?
I think that's basically a research analyst AI that you all have built that basically
will go out, will look up things on the web, will synthesize information, will chart things
for you.
It's pretty amazing in terms of this capability set.
Did you have to do anything special in terms of, you know,
any form of specific reinforcement learning
specifically for it to be better at that
or other things that you built against it?
Or how did you think about the data training for it,
the data that was used for training it?
I'm just sort of curious how that product,
if it all is a branch off of this,
and how you thought about building that specifically
as part of this broader effort.
I think when we think about like tool use,
I think browsing is one of the most natural places
where, you know, you think of it as a starting point. And it's not always easy.
I mean, like, the, you know, initial kind of browsing that we included in GPT-4 a few years back,
like, it was hard to make it, you know, work in a way that felt, like, reliable and, like, useful.
But, you know, in the modern era, these days, where last year, you know, two years ago is ancient history,
I think it feels like a natural place to start because it's like so widely applicable to so many types of queries.
Like anything that is, you know, requires up-to-date information, like it should help to browse for.
And so in terms of like a test bed for, hey, like, does, you know, the way we're doing RL, like, does it really work?
You know, can we really get the model to learn like longer time horizon kind of meaningful extended behaviors?
it feels like kind of a natural place to start in some ways, in that it, you know,
also is fairly likely to be like useful in a relatively short amount of time. So it's like, yeah,
let's try that. I mean, you know, in RL, like, at the end of the day, you're defining an
objective. And if you have an idea for like who is going to find this most useful, like, you know,
you might, like, want to tailor the objective, you know, to who you expect to
be using the thing, what you expect they're going to want, you know, what is their tolerance
for, do they want to sit through a 30-minute rollout of deep research, you know?
Do they, when they ask for a report, you know, do they want a page or five pages or a gazillion
pages?
So, yeah, I mean, you're definitely, you know, you want to tailor things to, like, who you think
is going to be using it.
I feel like there's a lot of almost, like, white-collar work, or knowledge work,
that you all are really capturing through this sort of tooling going forward,
and you mentioned software engineering is one potential area.
Deep research and sort of analytical jobs is another,
where there's all sorts of really interesting work to be done.
That's super helpful in terms of augmenting what people are doing.
Are there two or three other areas that you think are the most near-term interesting applications for this,
whether OpenAI is doing it or others should do it aside?
I'm just sort of curious how you think about the big application areas for this sort of technology.
I guess my very biased one that I'm excited about is coding.
And also research in general, being able to improve upon the velocity that we can do research at OpenAI and others can do research when they're using our tools.
I think our models are getting a lot better very quickly at being actually useful.
And it seems like they're kind of reaching some kind of inflection point where they are useful enough to want to reach out to and use, like, multiple times a day, for me at least, which wasn't the case before.
They were always, like, a little bit, you know, behind what I wanted them to be,
especially when it comes to, like, navigating and using our internal code base,
which is not simple.
And it's amazing to see, like, our more recent models actually really spending a lot of time,
trying to understand the questions that we ask them and coming back with things that
save me, yeah, like many hours of my own time.
People say that's the fastest potential bootstrap, right,
in terms of each model subsequently helping to make the next model better,
faster, cheaper, et cetera.
And so people often argue that that's almost like an inflection point on the exponent
towards superintelligence is basically this ability to use AI to build the next
version of AI.
Yeah.
And there's so many like different components of research too.
It's not just, you know, sitting off in the ivory tower thinking about things,
but there's, like, hardware, there's, you know, various components of training and
evaluation and stuff like this.
And each of these can be turned into some kind of
task that can be optimized and iterated over. So there's plenty of, you know, room to squeeze out
improvements. We talked about browsing the web, writing code, arguably the greatest tool of all,
right, especially if you're trying to figure out how to spend your compute, right, more efficient
code. Generating images, writing text. There are certainly, like, trajectories of action, I think,
that are not in there yet, right? Like, reliably using a sequence of business software.
I'm really excited about the computer use stuff.
It kind of drives me crazy in some sense that our models are not already just on my computer all day watching what I'm doing.
Well, I know that could be creepy for some people, and I think you should be able to opt out of that or have that opt out by default.
I hate typing also.
I wish that I could just kind of be working on something on my computer.
I hit some issue, and I'm just like, you know, what am I supposed to do with this?
And I can just kind of ask.
I think there's tons of space for being able to improve on how we interact with the models.
This goes back to them being able to use tools in a more intuitive way.
I guess using tools closer to how we use them.
It's also surprising to me how intuitively our models do use the tools we give them access to.
It's like weirdly human-like, but I guess that's not too surprising given the data they've seen before.
I think a lot of things are weirdly human-like.
Like my intuition for, like, well, why is tool use so impactful to test-time scaling?
Like, why is the combination so much better?
Take any role.
You can make a decision when you are trying to make progress against a task
as to, like, do I get external validation or do I sit and think really hard, right? And usually
you want to do, like, whichever one is more efficient. And it's not always just sit
in a vacuum and think really hard with what you know. Yeah, absolutely. You can seek out sort of new
inputs. Like, it doesn't have to be this closed system anymore. And I do feel like the closed
systemness of the models is still sort of a limitation in some ways. Like, you're not, you're not
necessarily, like, turning this... I mean, like, I think it'd be great if the model could control my
computer, for sure. But in some sense, there's a reason we don't go hog wild and say, like, oh yes, here's, like,
the keys to the kingdom, like, have at it. There are still, you know, asymmetric costs to, like,
the time you can save and the types of errors you can make. And so we're trying to, like, iteratively,
kind of, you know, deploy these things and, like, try them out and figure out, like, where are they
reliable, you know, and where are they not. Because, yeah, like, if you did just let
the model control your computer, it could do some cool stuff. Like, I have no doubt. But, you know,
do I trust it to, like, respond to all of the, you know, random emails that Brandon sends me?
Actually, maybe for that task, it doesn't require that much intelligence, but, you know, more generally,
like, do I, you know, do I trust it to do everything I'm doing? Like, you know, some things. And I'm
sure, like, that set of things will be bigger tomorrow than it was yesterday. But, yeah, I think
part of this is, like, we limit the affordances and keep it a little bit in the, like,
sandbox, just out of caution, so that, you know, you don't send some crazy email to your boss
or, you know, delete all your texts or delete your hard drive or something.
Is there some sort of, like, organizing mental model for, like, the tasks that one can do with, you know,
increasing intelligence, test-time scaling, and improved tool use, right? Because I look at this
And I'm like, okay, well, you have complexity of task and you have time scale.
Then you have like the ability to come up with these RL rewards and environments, right?
Then you have like usefulness.
Maybe you have some, of course, you have some intuition about like a diversity and generalization across the different things you can be doing.
But it seems like a very large space.
And scaling RL, like this new generation of RL, is not, it's just not obvious.
Like how, to me, it's not obvious how you do it or how you choose the path.
Is there some sort of organizing framework that, you know, you guys have that you can share?
I mean, I don't know if there's like one organizing framework.
I think there are a few, like, factors at least that I think about. In, like, the very, very grand scheme of things, it's like, in order to solve this task, how much uncertainty with the environment do I have to, like, wrestle with?
Like, for some things, where it's, like, this is purely factual, like, who was the first president of the United States, there's zero, like, environment I need to interact with to, like, reach the answer to this question correctly. I just need to remember the answer and say the answer. You know, if I want you to, like, write some code, you know, that, like, solves some problem, well, now I have to deal with a little bit of, like, not purely internal-model stuff, but also, like, okay, I need to execute the code, and, like, that
code execution environment is maybe more complicated than my model can memorize internally.
So I have to do like a little bit of like writing code and then executing it and making sure
it does what I thought it did and then testing it and then giving it to the user.
And things get like the amount of that sort of stuff outside the model that you have to like,
you know, you can't just recall the answer and give it to the user.
You have to like test something and, you know, run an experiment in the world and then wait
for the result of that experiment.
Like, the more you have to do that, and the more uncertain
the results of those experiments, like, in some sense, that's, like, one of the core, like,
attributes of, like, what makes the tasks hard. And I think another is, like, how, you know,
simulatable they are. Like, stuff that is really bottlenecked by, like, time, like, the physical
world, is also, you know, just harder than stuff that we can simulate really well. You know,
it's not a coincidence that, you know, so many people are interested in coding and, you know,
coding agents and things, and that, like, you know, robotics is hard. And it, you know, it's
slower. And, you know, I used to work on robotics. And, like, it's frustrating in a lot of ways.
I think both this, like, how much of the external environment do you have to deal with,
and then, like, how much do you have to wrestle with the unavoidable slowness of the real
world, are two, like, dimensions that I sort of think about.
It's super interesting because if you look at historically some of these models, one of the
things that I think has continued to be really impressive is the degree to which they're generalizable.
And so I think when GitHub Copilot launched, it was on Codex, which was like a specialized code model,
and then eventually that just got subsumed into these more general purpose models in terms of what a lot of people are actually using for coding-related applications.
How do you think about that in the context of things like robotics?
So, you know, there's like probably a dozen different robotics foundation model companies now?
Do you think that eventually just merges into the work you're doing in terms of there's just these big general purpose models that can do all sorts of things?
Or do you think there's a lot of room for these standalone other types of models over time?
I will say the one thing that's always struck me as kind of funny about us doing RL is that we don't yet do it on the most, like, canonical RL task of robotics.
And I personally don't see any reason why we couldn't have these be the same model.
I think there are certain challenges with, like, I don't know, do you want your RL model to be able to,
like, generate an hour-long movie for you natively as opposed to, like, a tool call. That's where
it's probably tricky, where you have more conflict between having, like, everything in the same
set of weights. But certainly, like, the things you see O3 already doing in terms of, like, you know,
exploring a picture and things like that are kind of, like, early signs of something like an
agent exploring, like, an external environment. So I don't think it sounds too far-fetched to me. Yeah, I mean,
I think the thing that came up earlier of, also, the, like, intelligence-per-cost thing, you know,
the real world is like an interesting litmus test because at the end of the day, like,
there is a, you know, frame rate in the real world you need to live on.
And it doesn't matter if you get the right answer after you think for two minutes.
Like, you know, the ball is coming at you now and you have to catch it.
Gravity's not going to wait for you.
So you, that's an extra constraint that we get to at least softly ignore when we're talking about these purely disembodied things.
It's kind of interesting, though, because really small brains are very good at that.
You know, so you look at a frog.
You start looking at different organisms and you look at sort of relative compute.
Yeah.
And, you know, very simple systems are very good at that, like ants, you know.
Like, so I think that's kind of a fascinating question in terms of what's the baseline amount of capability that's actually needed for some of these real world tasks that are reasonably responsive in nature.
It's really tricky with vision, too.
So our models have some, I think, maybe famous edge cases of where they don't do the right thing.
I think Eric probably knows where I'm going with this.
I don't know if you ever ask our models to tell you what time it is on a clock.
They really like the time 10:10.
So, yeah.
It's my favorite time, too.
So that's usually what I tell people.
It's like over 90% or something like that of all clocks on the internet show 10:10.
And it's because it looks like a happy face and it looks like nice.
But anyways, like, what I'm getting at is, like, our visual system was developed by interacting with, you know, the external world and having to be good at, like, navigating things, you know, avoiding predators.
And our models have learned vision in a very different type of way.
And I think it'll, we'll see, like, a lot of really interesting things if we can get them to be kind of closing the loop by, you know, reducing their uncertainty by taking actions in the real world, just as opposed to, like, thinking about stuff.
Hey, Eric, you brought up the idea of, like, how what in the environment can be simulated, right, as an input to, like, how difficult it will be to improve on this.
As you get to long-running tasks, like, let's just take software engineering.
Like, there is a lot of interaction that is not just me committing code continually.
It's like, I'm going to talk to other people about the project, in which case you then need to deal with the problem of, like, can you
reasonably simulate how other people are going to interact with you on the project in an environment.
That seems really tricky, right? I'm not saying that, you know, O3 or whatever set of foundation
models now doesn't have the intelligence to respond reasonably. But like, how do you think
about that simulation being true to life, true to the real world, as you
involve human beings in an environment, in theory?
My spicy, I guess, take on that is, like, I don't know how spicy it is, but O3 in some sense is already kind of simulating what it'd be like for a single person to do something with, like, a browser or something like that. And, I don't know, train two of them together so that you have, you know, you have two people interacting with each other. And there's no reason you can't scale all this up so that models are trained to be really good at cooperating with each other. I mean, there's a lot of already existing literature on multi-agent
RL and, yeah, if you want the model to be good at something, like collaborating with a bunch of people, like, maybe a not too bad starting point is making it good with collaborating with other models.
And someone should do that.
Yeah, yeah.
Yeah, we should really start thinking about that, Eric.
I think it is a little bit spicy because, yes, the work is going on.
It is interesting to hear you think that is a useful direction.
I think lots of people would still like to believe, not me, like, my comment was extra good on this pull request or whatever it is, right?
Okay, I can sympathize with that.
Sometimes I see our models training, and I'm like, oh, what are you doing?
You know, like, you're taking forever to figure this out.
And I actually think it would be really fun if you could actually train models in an interactive way.
You know, forget about just, like, at test time.
But I think it would be really neat to train them to do something like that,
be able to, like, intervene when it makes sense.
Yeah, just more me being able to tell the model to cut it out in the middle of its kind of chain of thought
and it being able to learn from that on the fly, I think would be great.
Yeah, I do think this is like the intersection of these two things where it's both like a point of contact with the external environment that is like can be very high uncertainty.
Like humans can be very unpredictable in some cases.
And it's sort of limited by the tick of time in the real world if you want to like, you know, deal with actual humans.
Like humans have a fixed, you know, clock cycle, you know, in their head.
So, yeah, I mean, this is, if you, you know, if you want to, like, do this in the literal sense, it's hard.
And so, you know, scaling it up and, you know, making it work well is, you know, it's not obvious how to do this.
Yeah, we are a super expensive tool call.
You know, if you're a model, you can either ask me, you know, meatbag over here to, you know, help with something.
And I'll try to think really slowly.
In the meantime, it could have, like, used the browser and read, like, 100 papers on the topic or something like that.
So it's, yeah, how do you model the tradeoff there?
But the human part's important.
I mean, I think in any research project, like, my interactions with Brandon are the hardest part of the project.
You know, like writing the code is, that's the easy part.
Well, and there's some analog from self-driving.
Elad's going to say, you know, hanging out with me every week is the hardest part of doing this podcast.
It's my favorite part.
Look at how healthy their relationship is, Eric.
We need to learn from this.
No, we're honest.
It's okay.
We've got to work through it.
Okay.
In self-driving, one of the, like, classically hard things to do was, like, predicting the human and the child and
the dog, like, agents in the environment versus, like, what the environment was. And so I think
there's, like, some analogy to be drawn there. Going back to just, like, how you progress the
O series of models from here, is it a reasonable, like, assessment that some people have that the
capabilities of the models are likely to advance in a spikier way, because you're relying to some
degree more on the creativity of research teams and, like, making these environments and
deciding, you know, how to create these evals versus, like, we're scaling up on an existing
dataset in pre-training. Is that a fair contrast? Spikier, like, what's the plot here? What's
the, like, the X axis and the Y? Domain is the X axis and Y is capability? Yes, because you're,
like, choosing what domains you are really creating this RL loop
in. I mean, I think this is a very reasonable hypothesis to hold. I think there is some like
counter evidence that I think should, you know, be factored into people's intuitions. Like,
you know, Sam tweeted an example of some creative writing from one of our models that, I think, was,
I'm not an expert and I'm not going to say this is like, you know,
publishable or like groundbreaking, but I think it probably updated some people's intuitions
on, like, what, you know, you can train a model to do really well. And so I think there are some
structural reasons why you'll have some spikiness just because like as an organization, you have
to decide like, hey, we're going to prioritize, you know, X, Y, Z stuff. And like, as the models get
better, the surface area of stuff you could do with them grows faster than, you know, you can
potentially, like, say, hey, this is the niche, you know, we're going to carve out,
we're going to try to do this really well.
So, like, I think there's some reason for spikiness, but I think some people will
probably go too far with this in saying, like, oh, yes, these models will only be
really good at math and code and, like, you know,
you can't get better at everything else.
And I think that is probably not the right intuition to have.
Yeah, and I think probably all, like, major AI labs right now have some split
between, let's just define a bunch of data distributions we want our models to be good at and then just, like, throw data at them, and then another set of people at those same companies are probably thinking about how can you kind of lift all boats at once with some, like, algorithmic change. And I think, yeah, we definitely have both of those types of efforts at OpenAI. And I think especially on the data side, like, there are going to naturally be things that we have
a lot more data of than others, but ideally, yeah, we have plenty of efforts that will not
be so reliant on the exact, like, subset of data we did RL on and it'll generalize better.
I get pitched every week, and I bet Elad does too, a company that wants to generate data
for the labs in some way. And or it's, you know, access to human experts or whatever it is,
but, like, you know, there's infinite variations of this. If you could wave a magic wand,
and have, like, a perfect set of data, like, what would it be that you know would advance model quality today?
This is a dodge, but, like, uncontaminated evals are always super valuable.
And that's data.
And, I mean, yeah, like, you want, you know, good data to train on, and that's, of course, valuable for making the model better.
But I think it is often neglected how important it is to also have high-quality
data, which is, like, a different definition of high quality when it comes to an eval.
But yeah, the eval side is, like, often just as important, because you need to
measure stuff. And, like, as you know from, you know, trying to hire people or whatever,
like, evaluating the capabilities of, like, a generally capable agent is really hard
to do in, like, a rigorous, you know, way. So yeah, I think evals are a little underappreciated.
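For context on what "uncontaminated" means in practice, one common, simple check, a generic technique rather than a description of OpenAI's pipeline, is to flag eval items whose long word n-grams also appear verbatim in the training corpus:

```python
# Minimal n-gram overlap check for eval contamination (generic sketch).
def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contaminated_items(train_docs, eval_items, n=8):
    train_set = set()
    for doc in train_docs:
        train_set |= ngrams(doc, n)
    # An eval item is suspect if any of its n-grams appeared in training data.
    return [item for item in eval_items if ngrams(item, n) & train_set]
```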
But it's true.
Evals are, I mean, especially with some of our recent models, where we've kind of run out of reliable evals to track because they kind of just solved a few of those.
But on the training side, I think it's always valuable to have training data that is kind of at the next frontier of model capabilities.
I mean, I think for a lot of the things that O3 and O4-mini can already do, those types of tasks, like basic tool use, we probably aren't, you know,
super in need of new data like that. But I think it'd be hard to say no to a dataset that's
like a bunch of like multi-turn user interactions and some code base that's like a million
lines of code that, you know, is like a two-week research task of like adding some new feature
to it that requires, like, multiple pull requests. I mean, like, something that was, like,
super high quality and has a ton of supervision signals for us to learn from it. Yeah, I think that
would be awesome to have. You know, I definitely wouldn't turn that down. You play with the models all the
time. I assume a lot more than average humans do. What do you do with reasoning models that you think
other people don't do enough of yet? Send the same prompt many, many, many times to the model
and get an intuition for the distribution of responses you can get. I have seen, it drives me
absolutely mad when people do these comparisons on Twitter or wherever,
and they're like, oh, I put the same prompt into blah, blah, and blah, blah, and this one was so much better.
It's like, dude, you, like, I mean, something we talked about a bit when we were launching is like, yeah, O3 can do really cool things, like when it chains together a lot of tool calls.
And then, like, sometimes for the same prompt, it won't have that, you know, moment of magic or it will, you know, just take a little, it'll do a little less work for you.
And so, yeah, though, like, the peak performance is really impressive, but there is a distribution of behavior.
And I think people often don't appreciate that there is this distribution of outcomes when you put the same prompt in.
And getting intuition about that is useful.
So as an end user, I do this and I also have a feature request for your friends in the product org.
I'll ask, you know, Oliver or something.
But it's just, I want a button where, like, assuming my rate limits or whatever
support it, I want to run the prompt automatically, like, a hundred times every time, even if it was
really expensive. And then I want the model to rank them and just give me the top one and two.
Interesting. And just let it be expensive. Or a synthesis across it, right? You could also
synthesize the output and just see if there's some, although maybe you're then reverting to the
mean in some sense relative to that distribution or something. But it seems kind of interesting,
yeah. Maybe there's a good infrastructure reason you guys aren't giving us that button.
Well, it's expensive. But there are, I think it's a great suggestion.
Yeah, I think it's a great suggestion.
How much would you pay for that?
A lot, but I'm a price-insensitive user of AI.
I see.
Perfect.
But maybe there's a favorite tier.
You should have a Sarah tier as one of your tiers.
Exactly, exactly.
At the end.
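For the curious, the button Sarah is asking for can be approximated client-side today with the OpenAI Python SDK. This is a rough, unofficial sketch: the model name is a placeholder, and asking the model to rank its own samples is just one simple way to pick a winner.

```python
# Rough sketch of "run the same prompt N times and pick the best" (unofficial).
from openai import OpenAI

client = OpenAI()
MODEL = "o3"      # placeholder; substitute whichever model you have access to
PROMPT = "Draft a one-paragraph summary of this company's competitive moat: ..."
N = 8             # bump to 100 if you are a price-insensitive user of AI

# Sample the same prompt N times to see the distribution of responses.
candidates = []
for _ in range(N):
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": PROMPT}],
    )
    candidates.append(resp.choices[0].message.content)

# Ask the model to rank its own samples and return the best one.
numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
judge = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": f"Here are {N} candidate answers to the same prompt:\n\n"
                   f"{numbered}\n\nReply with only the index of the best one.",
    }],
)
best_index = int(judge.choices[0].message.content.strip().strip("[]"))
print(candidates[best_index])
```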
I really like sending prompts to our models that are kind of at the edge of what I expect them to be able to do, just kind of for funzies.
Like a lot of the times before I'm about to do some, like, programming tasks, I will
just kind of ask the model to go see if it can figure it out, a lot of the time with, like, no hope
of it being able to do it. And indeed, sometimes it comes back and I just am pretty, like, I'm like a
disappointed father. But other times it does it and it's amazing and it saves me like tons of time.
So I kind of use our models almost like a background queue of work where I just, I'll just like
shoot off tasks to them and sometimes those will stick and sometimes they won't. But in either case,
like it's always a good outcome if something good happens. That's cool. Yeah, I do that just to feel
better about myself when it doesn't work.
Oh yeah.
I'm still providing value.
Yeah, yeah, exactly.
When it works, I feel even worse about myself.
So that's very hit or miss.
Yeah.
There are some differences in terms of how some of these models are trained or rolled out or, you know, effectively produced.
What are some of the differences in terms of process in terms of how you approach
this series of models versus other things that have been done at OpenAI in the past?
The tool stuff was, it was quite the experience getting it working in a large-scale
setting. You can imagine if you're doing async RL with a bunch of tools, you're just
adding more and more failure points to your infrastructure. And what you do when things
fail is a pretty interesting engineering problem, but also, like, an RL, like, ML problem too,
because, you know, if you're, I don't know, if your Python tool, like, you know, it goes down in the
middle of the run and you're like, what do you do? Do you stop the run? Probably not. That's probably
not like the most sane thing to do with like that much compute. So the question is like how do
you handle that gracefully and not, you know, hurt the capabilities of the model like as an unintended
consequence. So there's been a lot of learnings like that; dealing with huge asynchronous infrastructure for RL is hard.
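To make that concrete, here's a hedged sketch, with hypothetical interfaces throughout rather than OpenAI's infrastructure, of the graceful-degradation idea: if a tool backend dies mid-rollout, record the failure, keep the run alive, and drop the affected trajectories from the update so the outage doesn't get misread as a property of the model.

```python
# Hypothetical sketch of tolerating tool failures during async RL rollouts.
class ToolBackendError(Exception):
    """Raised when a tool backend (e.g. a Python sandbox) is unavailable."""

def collect_rollout(policy, env, tools, max_steps=64):
    trajectory, degraded = [], False
    obs = env.reset()
    for _ in range(max_steps):
        action = policy.sample_action(obs)          # hypothetical policy interface
        if action.is_tool_call:
            try:
                obs = tools[action.name](**action.arguments)
            except ToolBackendError:
                # The tool went down mid-run: note it, flag the trajectory,
                # and keep going rather than stopping the whole training run.
                obs = {"tool_error": action.name}
                degraded = True
        else:
            obs = env.step(action)                  # hypothetical environment interface
        trajectory.append((obs, action))
        if env.done:
            break
    return trajectory, degraded

def training_step(trainer, rollouts):
    # Learn only from clean rollouts so a flaky tool backend doesn't quietly
    # hurt the model's capabilities as an unintended consequence.
    clean = [traj for traj, degraded in rollouts if not degraded]
    trainer.update(clean)
```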
This has been great, guys. Thank you.
Yeah, thanks so much for coming. Yeah, thanks. That was fun. Thanks for having us.
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces,
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no dash priors.com.