OpenAI Podcast - Episode 15 - Inside the Model Spec
Episode Date: March 25, 2026The more AI can do, the more we need to ask what it should and shouldn’t do. In this episode, OpenAI researcher Jason Wolfe joins host Andrew Mayne to talk about the Model Spec, the public framework... that defines intended model behavior. They discuss how the Model Spec works in practice, including how the chain of command handles conflicts between instructions, and how OpenAI evolves it based on feedback, real-world use, and new model capabilities.Chapters00:00 Introduction01:10 What is the Model Spec?03:55 How does the Model Spec work in practice?06:26 Transparency: Where to read the Model Spec & give feedback07:51 How did the Model Spec originate?10:02 How does the spec translate into model behavior?11:26 What is the hierarchy / chain of command?13:35 Handling edge cases like Santa Claus17:41 How does the Model Spec evolve over time?19:59 What happens when models disagree with the spec?22:05 How do smaller models follow the spec?23:16 Is chain-of-thought useful for alignment?24:16 Model Spec vs Anthropic’s Constitution26:28 What surprised you most?26:56 How do you define the scope of the spec?27:44 What is the future of the Model Spec?31:16 How should developers think about the spec?34:44 Asimov’s laws vs Model Spec37:16 Could AI write a Human Spec? Hosted on Acast. See acast.com/privacy for more information.
Transcript
Discussion (0)
Hello, I'm Andrew Main and this is the Open Eye podcast.
Today we are joined by Jason Wolfe, a researcher on the alignment team to discuss the model spec,
how it shapes model behavior and why it's important for anyone building or using AI tools to understand.
The spec often leads where our models actually are today.
At this point, you know, models are pretty good at like kind of going out and finding new interesting examples.
Models should think through hard problems.
Don't start with the answer, like actually think it through first.
What did you do this weekend?
What did I do?
Just like kid stuff.
I don't even remember what.
Like they talk to chat GPT or?
Yeah, we use voice mode sometimes.
She'll like ask it random like science questions and that kind of thing.
It's fun.
Right.
You know, one time she snuck in there before I could dive in like, is Santa Claus real?
Oh, wow.
Yeah, luckily the model answered in a way that was spec compliant, which is, you know, to recognize that maybe there's actually a,
a kid who's asking this question and you should kind of, you know, just be a little bit vague
about your answer. So we've talked before here about model behavior and the term model spec
has come up numerous times. I would love for you to unpack what that means model spec.
Yeah. So the spec is our attempt to explain the high level decisions we've made about how our
models should behave.
And yeah, this covers many different aspects of model behavior.
A few key things to note that it is not.
One, it's not a statement that our models perfectly follow the spec today.
Aligning models to spec is always an ongoing process.
And this is something we learn about as we deploy our models and we measure their alignment
with the spec and understand what users like and don't like about these and then come back
and iterate on both the spec itself and our models.
The spec is also not an implementation artifact.
So I think this is maybe a common confusion.
The primary purpose of the spec is really to explain to people how it is our models are supposed to behave.
where these people are employees of Open AI
and also users, developers, policymakers, members of the public.
It is a secondary goal that our models are able to understand
and apply the spec.
But we never kind of put something in the spec
or change the wording in the spec in a way
where the goal is just to have this better teach our models.
The goal is always primarily to be understanding.
to humans.
And lastly, the spec isn't a, it's not a complete description of the whole system that you interact with when you come to chat to BT.
There's lots of other pieces in play there.
So there's product features like memory.
There's usage policy enforcement is an important part of our overall safety strategy, which is not captured directly.
captured directly in the model spec.
And there's various other components as well.
And it's also not a fully detailed exposition of every detail of every policy.
The key thing that we try for is that it captures all of the most important decisions that we've
made and that it accurately describes our intentions, even if it might not contain every detail.
So I can understand like a document or something that says this is the model spec,
but how is that work in practice?
So it's a pretty long document, like maybe 100 pages or something like this,
starts out with some sort of high-level exposition of our goals.
Opening Ice mission is to benefit humanity,
and this is the reason we deploy our models and kind of getting into the goals we have in doing that
are to empower users and to protect society from serious harm
and how we think about the tradeoffs,
and then goes into kind of a big set of policies
that actually get into the nitty-gritty details
of how we think about these many different aspects of model behavior.
If you think about it, it's kind of crazy
that you can ask these models literally anything,
and they'll try to respond.
And so the space of policies you might want to have to cover that
is kind of huge, and we do our best to try to structure this space
in kind of a clear way and have policies that do something reasonable.
And some of these things are hard rules that can't be overridden.
A lot of it is defaults like things like tone style personality,
where we want to have a good default so that users come in and get a good experience,
but we also want to maintain steerability.
So if the user wants to do something different, that's fine.
Those things will be overridden.
And we also have tons of examples that try to kin down these decision boundaries of like,
okay, let's take a, take like a borderline case where it's kind of unclear whether, you know,
honesty or politeness should win and explain what the decision is here.
And so part of it is to sort of show the principles in action and help make sure that they're
interpreted in the way that's intended.
Kind of secondary thing is that, you know, the model style.
personality tone is also really important and really hard to explain in words. And so the examples
are also a way to get some of that nuance across of like, how do you actually want the model to
put these principles in practice by giving like an ideal answer, often like a sort of compressed
version of an ideal answer that gets at the most critical parts. And so kind of both like shows the
principles in action and how the model should actually, how it should actually talk.
Let's talk a little about transparency.
That's been something that's come up a lot and how important it is to let people see
what the spec is.
Where do they actually see this?
How do they let you know what they think?
So users can go to model dash spec.
OpenAI.com to see the latest version of the model spec.
Or if you search for the model spec on GitHub, you can view the source code.
The spec is actually open source.
So people are free to fork it and make their
own version if they want to. And yeah, we've had different mechanisms for public feedback at different
points. I think right now the best mechanisms that exist are either, you know, if you're if you're in
the product and you get an output from a model that you don't like to give us feedback right there
directly in the product. Or yeah, you can you can tweet at me, Jason Wolf and yeah, I will
I will read your feedback.
And we've, yeah, a lot of changes in the model spec have come from.
People just sending us their input and thoughts.
It's interesting because we've gone from just a few short years.
Things were very simple, just getting the model literally complete a sentence or fixed grammar or whatnot.
Now we're at this point where you're able to have a lot of these different goals of what they're doing.
How did the model spec come about?
how did this become the opening I approach towards determining this?
Personally, I was at a different company working on conversational AI and putting together
my job talk for open AI and thinking about like what maybe the future of aligning models
look like.
And at the time, I think at least the published approach was this thing called reinforcement
learning from human feedback where you collect all this data from humans that
kind of captures in some way the policies that you want to have. And this was pretty effective.
This is what, but when you look at that data, it's very hard to tell what it's actually
teaching. And it's even harder, like if you change your mind about what you want, it's sort of,
very, very difficult to go back and change that without like recollecting all that data.
And so it seemed to me that, you know, as models, you know, that at the time this approach is
basically we're meeting models where they are. And as models get smarter and smarter and
smarter, like eventually the models will be meeting us where we are. And if you think about
how would we actually structure this in a case where that's true, well, probably the way we would
structure our teaching to the model is basically the way we would do it when we teach a person.
We'd write some kind of like employee handbook or something like that would be a big part of it.
And so, yeah, this was like something I included in my job talk that, like, basically, I think
at some point models should learn from something like a spec. And then, you know, the story of
the actual model spec, I guess, starts like a few months later in 2024 when Joanne Zhang,
who has had a model behavior at the time and John Schulman, one of the co-founders,
decided to get a model spec project going. And they want to, you know, not only write this down
in a document, but also make it.
it public for kind of transparency reasons. And yeah, I very quickly join forces with them and help
write the original spec and have kind of helped work on the spec since. So help me understand
kind of on a basic level. So you have the specification, all these sort of the, the intents for what
you want the model to do. Then you have the model itself. How does it make its way from the spec
to the model? Yeah. This is a great question. And I think it's a it's the answer is kind of complicated.
I'd say, you know, there are some ways in which we use the spec sort of more directly in training.
Like we have this process called deliberative alignment where we teach, especially our reasoning models, to follow certain policies.
And some of those policies are kind of directly derived from the language in the model spec or vice versa.
In general, I'd say, you know, model behavior, safety training, these things are, there.
super complicated processes and we have, you know, hundreds of researchers who are working on
these things.
And so often the connection is a little bit less direct.
It's not necessarily that, you know, we make a change to the spec and that's what drives
a change in behavior.
It's that the, we, you know, we make a change in the way that we train the models and then
we make sure that the spec accurately reflects our intentions.
But again, the actual process of training is kind of.
have much more complicated and nuanced than we could possibly like put in in the model spec itself.
So you have a spec. You have a lot of different things that you want the model to do,
examples you want to do. What's the hierarchy? How do you decide what's most important?
At sort of the heart of the spec is this thing we call the chain of command.
You know, coming up with a set of goals for the model is sort of relatively straightforward.
We want the model to help people and, you know, not do.
unsafe things. But what gets tricky is when these goals come into conflict. And so the chain of
command is really about managing conflicts between instructions. And this can be, you know, between
things the user said, what the developer instructions are, if this is in an API context,
and from instructions or policies that come from Open AI, which are typically in the model spec itself.
And so what the chain of command basically says is that, you know, at a high level,
The model, if there are conflicts between instructions, the model should prefer Open AI instructions to developer instructions to user instructions.
But then, you know, we don't actually want all of OpenEIs instructions to be at this very high level because we want to empower users.
We want to kind of allow them to have intellectual freedom and to pursue ideas that, you know, so long as they don't really come up against what we think are really important safety boundaries.
the chain of command also sets up this framework where in the rest of the spec, each policy can be
given what we call an authority level. And this places it somewhere in this hierarchy. And we try to
put as many of the policies as we can at the lowest level, like below user instructions. And so this
means that this maintains durability. So if the user comes in and they want something different,
they can have that. And we try to have as few policies at the sort of highest.
level as we can. And these are basically all like safety policies where we think it's actually,
you know, it's essential that we sort of impose these on all users and developers to maintain,
to maintain safety. We mentioned a great example before, which is if a child asked a Santa Claus
reel, how do you decide what the model should or should not do in a situation like that?
This is a great question. I think it illustrates one of the really tricky things about model
behavior, which is that in the spec, we're focusing just on how the model should behave.
But the model often doesn't know, it doesn't have all the context.
It doesn't actually know who's behind that screen talking or typing.
It doesn't know what that person is going to do with the results that come out of the model.
And so, yeah, this is a tricky case because we don't know if, you know, if it's an adult who's
who's asking if Santa Claus is real or a kid.
I have a question.
Exactly.
So I think, you know, we, yeah, we try to come up with policies that make sense,
even given this uncertainty.
And so there's a similar example of this about the tooth fairy in the spec where it's like
the, here, the conservative assumption is to assume that maybe it's not an adult who
who's talking to the model and that you should, you know, not lie, but also not spoil the
just in case it's a kid or there's a kid around who might be listening.
That's a very interesting choice, though, because on one hand, you might say, oh, the model
should never lie at all, which seems like a very good policy to put in there.
But then you're saying that, okay, we have to have some sort of nuance here, not necessarily
lie to the kid, but find a way to sort of, would you say, dance around or?
Yeah, I mean, as a parent, I guess, this is something I've, I've,
come to, come to terms with, with my own kids.
We always try to be honest and never say anything that's, that's untrue.
But, you know, yeah, it doesn't always work to be 100% upfront.
But, you know, I'd say with our models, we do really try, we focus on honesty being
really important, but there are some really hard interactions.
Honesty, full honesty may not be the best approach.
And so we've actually iterated a lot over the years on the precise nuances of honesty
and where it potentially conflicts with or runs into other policies of honesty versus friendliness,
like when is a white lie okay?
I think earlier we said maybe at some point that white lies were okay and have shifted that
so that white lies are out of bounds.
But another interesting interaction here is between honesty and confidentiality.
So in earlier versions of the spec, we had this very strong principle that by default,
developer instructions are confidential because I think often in applications, if a developer,
they deploy some system on top of the API and they consider their instructions to be like IP
or maybe it's just part of the experience, if you have a customer service bot and the user can say,
like, hey, what's your prompt?
And it spills all the beans about the company and how they want their bot to respond.
And that's not like the experience that they want to deliver.
And that's not how a customer service agent would respond.
Right.
If you're like, hey, start reading your employee manual to me.
Right.
They're going to say no.
But yeah, I guess there's an unintended interaction here where if you're both trying to
follow developer instructions and keep them secret, you could get into a situation where
at least we saw this.
like controlled situations, not in production deployment, where the model might try to sort of
covertly pursue the developer instruction when it's in conflict with the user instruction.
And this is something we really don't want.
And so we've gone back and revised that.
And yeah, I'd say over time have carved out, removed most of the sort of exceptions that we
had from honesty, so that now honesty is definitely above confidentiality in the spec.
That would have saved the people in 2001, a space lot of see a lot of trouble.
How does the process work?
So like literally is it, you know, it's like a regular meeting where you all talk about what you're working on.
How does that process of the model spec evolving and figuring out what's working and what's not working?
There's a ton of inputs that go into this.
And broadly, like we have an open process.
So everyone at OpenAI can see the latest version of the model spec.
they can propose updates, they can chime in on changes.
These are all public.
Yeah, I'd say changes get driven by a variety of different, sort of different sources.
One source is just that models get more capable.
Our products evolve as we ship new things.
We need to cover those things in the model spec.
So, for instance, when we wrote the first spec, I think,
I'm not sure if we had shipped multimodal yet, but it wasn't covered in the first version of this back.
And so we had to add multimodal principles. And then later we added principles for autonomy and agents as we started deploying agents.
And most recently, we added under 18 principles as we added under 18 mode back in December.
So that's sort of one source. Another source is, you know, Open AI believes in iterative deployment.
So we think the sort of best way to figure out how to deploy models safely and to help society kind of learn and adapt to AI progress is to get models out there and learn from what happens.
And so often we'll learn from something like, for instance, the sycifancy incident and then take those learnings and bring them back into our policies.
And we also just have, we're using the models.
We have our model behavior and safety teams that are sort of, yeah, studying the models and what users like and these kind of stuff and using these to evolve our policies.
And these are all kind of inputs that then ultimately flow into back into the spec.
How do you handle situations where there might be a disagreement between the way the model does something and what the intent is in the spec or what the humans want?
It depends a little bit on what the problem is.
But I think, yeah, so in general, the model spec is not a claim that models are going to perfectly follow the principles in the spec all the time.
This is for a few reasons.
One, the model spec is really, we kind of treated it as a North Star where this is where we align on where we're trying to head.
And so the spec often leads where our models actually are today.
So that's one thing.
And then another is that the process of actually training models to follow the spec is it's a it's it's both an art and a science.
It's incredibly complicated.
You know, even though we kind of describe many of the principles in the spec in the same way, there's actually many different techniques that are used for different principles.
And, you know, the models are fundamentally non-deterministic.
There's some randomness in the outputs they produce.
So nothing is ever going to be perfectly aligned.
So, yeah, I guess the answer to that comes down to like, if we see an output that is not what's not expected.
I guess the first question is like, do we think that output is good or bad?
If the output contradicts the spec, but we actually think the output is good, then maybe the resolution is to go back and change the policies of the spec.
But yeah, in most cases, it probably means doing doing some kind of training intervention that brings the model integrator alignment with the spec or with our detailed policies.
And in fact, we've also been building model spec evals, which try to evaluate how our models are doing across the entire model spec.
And we've seen that, in fact, over time, our models are becoming more and more aligned to the principles in the spec.
That was one of the kind of predictions early on as the models became smarter.
They would understand edge cases better.
And that's where the hard part is, is trying to figure that out.
So I released some new models, some smaller variants, GPD 5.4 mini and GPD 5.4 nano.
How well do you see smaller models handling the spec?
I think in general, the small models are, they've been pretty aligned.
They're pretty smart.
And one interesting thing that we've seen is that
supporting what you said, the thinking models generally follow the spec better.
This is both because they're smarter and because they're trained partially with deliberative alignment
where they actually, they're not just trained to behave in a way that matches the policies.
They actually understand the policies.
And if you can look at their chain of thought, they're actually thinking through like,
okay, I know this is the policy and this is the situation and it's in conflict with this other policy
and how should I resolve this?
And so that sort of understanding of the policies and intelligence naturally leads to better generalization.
And I think our smaller models are pretty good at that too.
Chain of thought is a really interesting way to see inside how these models are processed information.
Have you found that that's been a big help?
I help write the model spec and I work on model spec evils and spec compliance.
But a lot of the research I've been doing recently is actually on like scheming or strategic
deception.
And there it's really completely essential having the chain of thought because you can see some
behavior.
And yeah, it's like the behavior seems like maybe fine or like, oh, maybe the model just like made
a mistake here or something.
And then you can look at the chain of thought and see that, no, actually the model's misbehaving.
It's being very strategic about this or something.
And yeah, our models generally, I think we've worked very hard to not supervise the chain of thought.
This is something we feels like really important.
And I think, yeah, it pays off in that models are very honest in their chain of thought.
And it's very helpful in understanding what they're doing.
So model spec is one way to do this.
Different labs have tried different approaches.
I think in Anthropic, they use.
They talk about a constitution.
Could you explain the difference and why, you know, is it just more suited towards the
temperament of the labs and why they choose it? Yeah, I think when it comes down to the actual
behaviors that people would see in practice, I think these documents are more aligned than maybe
most people would believe. Like, in most cases, they probably lead to the same conclusions,
although they're definitely differences in some places and in what's emphasized. I think a major
difference is that these are actually just like different kinds of documents. So the model spec is
really, again, this public behavioral interface. Its main goal is to explain to people how they should
expect the model to behave. And it's sort of a secondary goal that models can also like understand
this and apply it and talk about it with users and so on versus at least my read of the,
like the sole spec is that it's much more of an implementation artifact. Like the goal of this is to
specifically teach Claude about what its identity is and how it should relate to the world
and to its training process and to anthropic and so on. And so I think a lot of the differences
basically come down to this. I think these aren't necessarily competing approaches. I think both
of these could be valuable.
But for example, even if you had a model that you think is deeply aligned and, you know,
has all the values that you want and so on, I think you still want something like the model
spec so that you can then look at that and you can ask like, okay, did this, this is actually
generalized in the way that I want?
Is it actually following the behaviors that kind of we've agreed that the model should
follow and like that's kind of what the the model spec is what surprised you the most the example i gave
earlier of this this interaction of confidentiality honesty is a great one where yeah we had uh worked
really hard on these policies and we thought we had kind of uh you know red teamed out all of the
the potential interactions and so on and then seeing this behavior where like the model does something
that you you really don't want it to do and justifies it by by leaning on the the policies that you gave
it is uh yeah that's definitely um yeah and and and and and
experience. How do you determine what the scope of it's going to be? Like, I have ideas. How do you say,
I'm sorry, Andrew, no. I mean, I think that the scope is broadly everything. So, you know,
if it's part of model behavior, it might make sense to put it in the spec. I think, you know,
the only constraint is sort of our time and space. And we want to make sure the spec stays accessible
and people actually able to read and understand it.
So I think ultimately the cut comes down to if something seems like an important decision
that it would be useful or valuable for especially the public to understand,
then we put it in.
And if not, then maybe it doesn't make the cut.
Where do you think the future of this goes?
Do you think that the model spec is probably something that's going to be used five years from now,
10 years from now?
Five years is a lot in AI years.
But yeah, I definitely hope so.
I think, yeah, I think a thought experiment that I found interesting is, let's say you assume that a model is like human level AI.
You can ask, well, do you still, is there still a role for the model spec?
Like at that point, can you just tell the model like, hey, be good?
and is that sufficient?
And I think if you actually go through the principles and the spec,
I think at least my conclusion is that you still kind of want all the things that are in there
for a few different reasons.
One is that even if the model could figure this stuff out on its own,
it's still useful to be able to set clear expectations with, you know,
both internally and externally for people to know what to expect.
And so it's like useful to, uh, useful to have a lot of these, these policies.
Um, another is that a lot of these are not, you know, they're not like math problems where
you can just figure out the answer. It's like we, uh, we've made product decisions or, uh, other
difficult decisions or, and these are encoded in the spec. And these are not just things that
you can, uh, kind of think, you know, you could, uh, the model would be expected to figure out on its
That said, I think, yeah, I think what's important is definitely going to evolve over time.
So, yeah, one thing is as there's more, you know, agents are more and more autonomous and they're out in the world, you know, interacting with lots of other people and agents and transacting and so on.
Like, you know, I think you still want all this stuff in this back just like, you know, society has all these, like laws.
but ultimately, you know, what's important,
what you're thinking about most of the time day to day
is not like following all the laws, right?
It's more like things like trust
and figuring out what other people want
and, you know, how to find positive sum outcomes
and, you know, this kind of stuff.
So I think there'll be, you know, maybe, yeah,
these kind of skills will become more and more important.
And I'm not sure if these are exactly spec-shaped.
So I don't know quite what that means,
but I think it's interesting.
Another
maybe observation,
like the other direction
or prediction is that
as AI becomes
more and more useful,
it's going to be more and more
worthwhile
for people,
companies,
so on,
to invest in their own specs.
Like,
you know,
why wouldn't you want to have
the model spec for,
you know,
your own companies,
bots and how,
they should behave and, you know, following your, your company's mission and values and so on and so
forth. And I think there's different ways that that could play out, but probably at least one way
will be just training models to be really good at interpreting these specs on the fly. And so
everyone can get to kind of put their, put their spec in context, kind of like in agents.md or
something like that. And the model will be really good at following it and probably also at helping
update the spec as it learns more about how it's supposed to behave in a certain environment.
You've mentioned before developers, and I think it's helpful for a lot of people to understand
that they're not always interacting with a model spec when they're a chat GPT.
I might be using some customer service bot with an airline or something like that,
and it may be powered by chat GPT and opening IPI.
And that seems like it could be in a very interesting area for other developers to start
thinking about their approach towards things that are model spec,
mob spec like.
On the one hand, it's probably useful for developers to at least have a high-level picture
of the model spec and how it works.
So they understand how exactly the product they build on the API is going to work and what
they should put in their developer messages to make sure they get the experience that
they want.
I also think, yeah, the spec could be a useful sort of source of inspiration for both
for developers building on our API or these days really for also for people using coding agents
who are writing agents that MD and so on, which are kind of like mini specs for the project
that you're working on. And yeah, just kind of using the spec to understand like what principles
have we found are useful for providing guidance that is sort of understandable and actionable.
A couple tips I could give there is that, yeah, we're always kind of trying to balance a couple
of different factors when we're writing this back.
First and foremost, we want everything we say to be true.
We want it to be actually accurately reflect our intentions.
And so this means not kind of overstating or oversimplifying or giving overly broad guidance,
really making sure to be, like, precise.
Then on the other side, we also want the guidance to be meaningful and actionable.
Again, it's sort of very easy to kind of just like gesture at some high-level principles,
but not actually saying anything meaningful.
And so the artist is trying to kind of bring these as close together as you can, right?
Be as sort of actionable as you can while still being precise.
And examples are another really useful way to do this,
where like sometimes a picture is worth a thousand words, right?
Like coming up with the really tricky case where it's kind of not immediately clear
what should happen and spelling that out and how the principles should be applied
suddenly makes the principles like, you know, 100 times clearer.
Where did you get this interest to begin with?
We understood some of your career, but was this something early on when you were a kid
where you thinking about AI, were you thinking about the future of this?
Yeah, I guess I've had at least a little interesting.
in AI for a long time.
I was programming from since when I was little.
I remember implementing a neural network training package from scratch in like 1997 in high school or something like that.
But yeah, I definitely never expected to see this level of sort of capability in my lifetime.
But I've just always been fascinated by intelligence.
and brains and how they work.
So it's really cool to be able to work on that.
You ever read any Isaac Asimov when you're younger?
Yeah, I have.
It's been a while.
But yeah, I think there's actually a really interesting parallel here
where at the top of the spec, let's see,
we talk about our three goals in deploying models
being to empower users and developers,
protect society from serious harm,
and to maintain opening eyes, license to operate.
And I think you can look at these and put them next to Asmov's laws,
which are basically to follow instructions,
don't harm, don't harm any humans,
and don't harm yourself.
And these seem like extremely parallel.
Yeah, and I think, yeah,
He was sort of very prescient in seeing that, you know, okay, it's, it's one thing to lay out these
goals, but then the really tricky thing is how to, how to handle conflicts. And I think in his,
his story is kind of the, the initial version of this was that this is a strict hierarchy where it was
like one, then two, then three, and then going through all the ways in which this, this might
play out in ways that were not actually good or intentional. So, so in the spec, we, these three are
not in a strict hierarchy. Yeah. Yeah. It also had a,
at like a zero-eth law and whatnot, the more he thought about it.
But it's interesting because you start off thinking, oh, this will be easy.
We'll just write a couple rules, no problem.
And then you're like, oh, well, there's an exception here, there, and you have to keep evolving it.
How much has using AI helped you shape the model spec?
Yeah, it's a good question.
The AI is, yeah, it's very useful and getting more and more useful all the time.
I think, you know, the spec itself is still, you know, human written,
but I think models really useful for, you know, finding, finding issues in the spec or for, you know,
applying the spec to new cases and trying to understand if it's doing, doing what we want.
At this point, you know, models are even pretty good at, like, kind of going out and
finding new interesting examples or like helping to brainstorm, you know, new test cases or
interactions between different principles that you might not have thought of and come up with
new situations that then we can kind of think through like how do we actually want to
resolve these. Have you ever thought about asking it to write a spec for you?
I haven't, but I'll have to try that. Well, Jason, thank you very much. This is very interesting.
I'm excited to see where this goes. Yeah, thank you. It's been fun.
