No Priors: Artificial Intelligence | Technology | Startups - National Security Strategy and AI Evals on the Eve of Superintelligence with Dan Hendrycks
Episode Date: March 5, 2025. This week on No Priors, Sarah is joined by Dan Hendrycks, director of the Center for AI Safety. Dan serves as an advisor to xAI and Scale AI. He is a longtime AI researcher, publisher of interesting AI evals such as "Humanity's Last Exam," and co-author of a new paper on national security, "Superintelligence Strategy," along with Scale founder-CEO Alex Wang and former Google CEO Eric Schmidt. They explore AI safety, geopolitical implications, the potential weaponization of AI, along with policy recommendations. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @DanHendrycks Show Notes: 0:00 Introduction 0:36 Dan’s path to focusing on AI Safety 1:25 Safety efforts in large labs 3:12 Distinguishing alignment and safety 4:48 AI’s impact on national security 9:59 How might AI be weaponized? 14:43 Immigration policies for AI talent 17:50 Mutually assured AI malfunction 22:54 Policy suggestions for current administration 25:34 Compute security 30:37 Current state of evals
Transcript
Hi, listeners, and welcome back to No Priors.
Today, I'm with Dan Hendrycks, AI researcher and director of the Center for AI Safety.
He's published papers and widely used evals, such as MMLU and, most recently, Humanity's Last Exam.
He's also published Superintelligence Strategy, alongside co-authors including former Google CEO Eric Schmidt and Scale founder Alex Wang.
We talk about AI safety and geopolitical implications, analogies to nuclear, compute security, and the state of evals.
Dan, thanks for doing this.
Glad to be here.
How'd you end up working on AI safety?
AI was pretty clearly going to be a big deal if you just thought it through to its conclusion.
So early on, it seemed like other people were ignoring it because it was weird or not that pleasant to think about.
It's hard to wrap your head around, but it seemed like the most important thing during this century.
So I thought that that would be a good place to devote my career toward.
And so that's why I started on it early on.
And then, since it'd be such a big deal, we'd need to make sure that we can think about it properly, channel it in a productive direction, and take care of some of the tail risks, which are generally systematically under-addressed.
So that's why I got into it.
It's a big deal, and people weren't really doing much about it at the time.
And what do you think of as the Center's role versus safety efforts within the large labs?
Well, there aren't that many safety efforts in the labs even now.
I mean, I think the labs can just focus on doing some very basic measures to refuse queries related to, like, "help me make a virus" and things like that. But I don't think labs have an extremely large role in safety overall or in making this go well. They're kind of predetermined to race.
They can't really choose not to unless they'd no longer be a relevant company in the arena.
I think they can reduce, like, terrorism risks or some accidents. But beyond that, I don't think they can dramatically change the outcomes in too substantial of a way, because a lot of this is geopolitically determined. If companies decide to act very differently, there's still the prospect of competing with China, or maybe Russia will become relevant later. And as that happens, this constrains their behavior substantially. So I've been
interested in tackling AI at multiple levels. There's things companies can do to have some very
basic anti-terrorism safeguards, which are pretty easy to implement. There's also the economic
effects that will need to be managed well, and companies can't really change how that goes either.
It's going to cause mass disruptions to labor and automate a lot of digital labor. If they, you know, tinker with a design choice or add some different refusal data, it doesn't change that fact.
Safety, or making AI go well, and the risk management around it is just a much broader problem.
It's got some technical aspects, but I think that's a small part of it.
I don't know that the leaders of the labs would say, like, we can do nothing about this.
But maybe it's also a question of, you know, everybody also has, like, equity in this equation, right?
Maybe it's also a question of semantics. Like, can you describe how you think of the difference between, like, alignment and
safety as you think about it?
I'm just using safety as a sort of catch-all for, like, dealing with risks. There are other risks. Like, if you never get really intelligent AI systems, that poses some risks in itself. There are other sorts of risks that are not necessarily as technical, like concentration of power. So I view the distinction between alignment and safety as alignment being a sort of subset of safety. Obviously, you want the value systems of the AIs to be in keeping with or compatible with, say, the U.S. public for U.S. AIs, or with you as an individual.
But even if you have an AI that's reliably obedient or aligned to you, this doesn't necessarily make everything work totally well.
China can have AIs that are totally aligned with them. The U.S. can have AIs that are totally aligned with them.
You still are going to have a strategic competition between the two. This is going to, they're going to need to integrate it in their militaries.
They're probably going to need to integrate it really quickly.
Competition is going to force them to have a high risk tolerance in the process.
So even if the AIs are doing their principals' bidding reliably, this doesn't necessarily make the overall situation perfectly fine.
I think it's not just a question of reliability or whether they do what you want.
There are other structural pressures that cause this to be riskier like the geopolitics.
At the highest level, like, as this bundle of weights gets increasingly capable, why do we care about AI from a national security perspective?
Like, what's the most practical way it matters in geopolitics or gets used as a weapon?
I think that AI isn't that powerful currently in many respects. So in many ways, it's not
actually that relevant for national security currently. This could well change within a year's time.
I think generally I've been focused on the trajectory that it's on, as opposed to saying right now it is extremely concerning. That said, there are some areas. For instance, for cyber,
I don't think AIs are that relevant for being able to pull off a devastating cyber attack on the grid by a malicious actor currently.
That said, we should look at cyber and be prepared and think about its strategic implications.
There are other capabilities like virology.
The AIs are getting very good at STEM, Ph.D.-level types of topics, and that includes virology.
So I think that they are sort of rounding the corner on being able to provide expert-level capabilities in terms of their knowledge of the literature, or even helping in practical wet lab situations.
So I do think on the virology aspect, they do already have national security implications, but that's only very recently with the reasoning models.
But in many other respects, they're not as relevant.
It's more prospective that it could well become the way in which a nation might try and dominate another nation, and the backbone for not just war, but also economics and security. The amount of chips that the U.S. has versus China might be the determinant of which country is the most prosperous and which one falls behind. But this is all
prospective. I don't think it's just speculative. It's speculative in the same way that like
Nvidia's valuation is speculative or the valuations behind AI companies are speculative. It's
something that I think a lot of people are expecting and expecting fairly soon. Yeah, it's quite
hard to think about time horizons in AI. We invest in things that I think of as, like, medium-term speculative, but they get pulled in quite quickly. You know, just because you mentioned both cyber and bio, we're investors in companies like Culminate or Sybil on the defensive cybersecurity side, or Chai and Somite on the biotech discovery side or, you know, modeling different systems in biology
that will help us with treatments. How do you think about the balance of like competition and
benefits and safety, because some of these things I think are, you know, we think they're working
effectively in the near term on the positive side as well.
Yeah, I mean, I don't see this big tradeoff between safety and benefits. I mean, you're just taking care of a few tail risks.
For bio, if you want to expose those capabilities, just, like, talk to sales, get the enterprise account. Here you can have the little refusal thing for virology, but if you just created an account a second ago and you're asking it how to culture this virus, and here's a picture of my Petri dish, what's the next step that you should do? Yeah, if you want to access those capabilities, you can speak to sales. So that's basically in xAI's risk management framework: we're just not exposing those expert-level capabilities to people when we don't know who they are. But if we do, then sure, have them. So I think you can, and likewise with cyber,
I think you can just very easily capture the benefits while taking care of some of these pretty
avoidable tail risks. But then once you have that, you've basically taken care of malicious use
for the models behind your API.
And that's about the best that you can do as a company.
You could try and influence policy by using your voice or something.
But I don't see a substantial amount that they could do.
They could do some research for trying to make the models more controllable
or try and make policymakers be more aware of the situation more broadly in terms of where we're going.
Because I don't think policymakers have internalized what's happening at all. They still think it's, like, they're just selling hype, and that the companies, the employees, don't actually believe that this stuff could, you know, that we could get AGI, so to speak, in the next few years. So I don't know, I don't see, like, really substantial tradeoffs there. I think the complications really come about when we're dealing with, like, what's the right stringency in export controls, for instance. That's complicated.
If you turn the pain dial all the way up for China and export controls, and if AI chips are the currency of economic power in the future, then this increases the probability that they want to invade Taiwan.
They already want to.
This would give them all the more reason if AI chips are the main thing and they're not getting any of it and they're not even getting the latest semiconductor manufacturing tools for even making cutting edge CPUs, let alone GPUs.
So those are some other types of complicated problems that we have to address and think about and calibrate appropriately.
But in terms of just mitigating virology stuff, just speak to sales if you're Genentech or a bio startup, and then you have access to those capabilities. Problem solved.
What is a way you actually expect that AI gets used as a weapon?
Beyond virology and security, yeah.
I wouldn't expect a bioweapon from a state actor; from a non-state actor, that would make a lot more sense.
I think cyber makes sense from both state actors and non-state actors.
Then there's drone applications.
These could disrupt other things.
These could help with other types of weapons research, like help explore exotic EMPs, could help create better types of drones, could substantially help with situational awareness so that one might know where all the nuclear submarines are.
Some advancement in AI might be able to help with that, and that could disrupt our second-strike capabilities and mutually assured destruction. So those are some geopolitical implications. It could potentially bear on nuclear deterrence, and that's not even a weapon; the example of just heightened situational awareness and being able to pinpoint where hardened nuclear launchers are or where nuclear submarines are is just informational, but could nonetheless be extremely disruptive or destabilizing.
Outside of that, the default conventional AI weapon would be drones,
which is, I don't know, that makes sense that countries would compete on that.
And I think that it would be a mistake if the U.S. weren't trying to do more in manufacturing
drones.
Yeah, I started working recently with an electronic warfare company.
I think there's a massive lack of understanding of just like the basic concept
of, you know, we have autonomous systems. They all have communication systems. Our missile systems
have targeting communication systems. And from a battlefield awareness and control perspective,
like a lot of that fight will be won with radio and radar and related systems, right? And so I think there's an area where AI is going to be very relevant, and is already very relevant, in Ukraine.
Speaking about AI as assisting with, like, command and control, I mean, I was hearing some story about how on Wall Street, you always had a human in the loop for each decision.
loop for each decision. So at a later stage, before they removed that requirement on Wall Street,
you just had rows of people just clicking the accept, accept, accept, accept, check button.
And we're kind of getting to a similar state in some context with AI. It wouldn't surprise me
if we'd end up automating some more of that decision making. But so this just turns into questions
of reliability. And doing some reliability research seems useful. To return to that larger question
of where are the sort of safety tradeoffs. I think people are largely thinking that the push
for risk management is to do some sort of pausing or something like that. An issue is you need
teeth behind an agreement. If you do it voluntarily, you just make yourself less powerful and
you let the worst actors get ahead of you. You could say, well, we'll sign a treaty, but we shouldn't just assume that the treaty will be followed.
Like, that would be very imprudent.
You would actually need some sort of threat of force or something to back it up, some
verification mechanism.
But absent that, if it's entirely voluntary, then this doesn't seem like a useful thing
at all.
So I think people's conflation of safety with "what we must do is voluntarily slow it down" just doesn't make as much geopolitical sense unless you have some threat of force to back it up
or some very strong verification mechanism.
But absent that, as a proxy, there's clearly been very little compliance to either treaties
or norms around cyber attacks and around corporate espionage, right?
Yeah.
I mean, corporate espionage, for instance. That was one strategy, this sort of voluntary pause strategy, believing that that equals safety.
And then maybe last year, there was that paper, Situational Awareness, by Leopold Aschenbrenner, and he's sort of a safety person.
So his idea was, let's instead try and beat China to superintelligence as much as possible. But that has some weaknesses, because it assumes that corporate espionage will not be a thing at all, which is very difficult to do. I mean, we have, you know, some places where, you know, 30 percent plus of the employees at these top AI companies are, like, Chinese nationals. I mean, this is not feasible. If you're going to get rid of them, they're going to go to China, and then they're probably going to beat you, because they're extremely important for the U.S.'s success. So you're going to want to keep them here.
But that's going to expose you to some information security types of issues, but that's just too bad.
Do you have a point of view on how we should change immigration policy, if at all, given these risks?
So I would, of course, claim that the policy on this should be totally separate from southern border policy and other, broader policy.
But if we're talking about AI researchers, if they're very talented, then I think you'd want to make it easier.
And I think that it's probably too difficult for many of them to stay currently.
And I think that that discussion should be kept totally separate from southern border policy.
Just in terms of broad strokes, like things that you think won't work, voluntary compliance, and assuming that'll happen, or just straight race.
So we want to be competitive.
And I think racing in other sorts of spheres, say drones or AI chips, seems fine.
But if you're saying, let's race to superintelligence to try and turn it into a weapon to crush them, and they're not going to do the same, or they're not going to have access to it, or they're not going to prevent that from happening, that seems like quite a tall claim.
I mean, if we did have a substantially better AI, they could just co-opt it.
They could just steal it.
Unless you had really, really strong information security, like you move the AI researchers out to the desert, but then you're reducing your probability of actually beating them, because a lot of your best scientists ended up going back to China.
Even then, if there were signs that the U.S. was really pulling ahead and going to be able to get some powerful AI that would crush China, or that would enable the U.S. to crush China, they would then try to deter them from doing something like that. They're not going to sit idly by and say, you know what? Yeah, go ahead. Develop your superintelligence or whatever. And then you can boss us around and we'll just
accept your dictates until the end of time. So I think there is kind of a failure of some sort of second-order reasoning going on there, which is: well, how would China respond to this sort of maneuver, if we're building a trillion-dollar compute cluster in the desert, totally visible from space? Basically the only plausible read on this is that it's a bid for dominance or a sort of monopoly on superintelligence. It reminds me of the nuclear era: there was a brief period where some people were saying, you know, we've got to just, like, preemptively destroy or preventively destroy the USSR. We've got to nuke them. Even pacifists, or people who are normally pacifists, like Bertrand Russell, were advocating for this. The opportunity window for that maybe didn't ever exist, but there was a prospect of it for some time. But I don't think that the opportunity window really exists here, because of the complex interdependence and the multinational talent dependence in the United States. I don't think you can have China be totally severed from any awareness or any ability to gain insight into or imitate what we're doing here.
We're clearly nowhere close to that as a real environment right now, right?
No, it would take years.
It would take years to do well.
And, like, given the timelines for some very powerful AI systems, there might not even be enough time to do that securitization anyway.
So, okay, in reaction, you propose, along with some, you know, other esteemed authors
and friends, Eric Schmidt and Alex Wang, a new deterrence regime, mutually assured
AI malfunction.
I think that's the right name.
MAIM, a bit of a scary acronym, and also a nod to mutually assured destruction. Can you explain MAIM in plain language?
Let's think of what happened in nuclear strategy. Basically, a lot of states deterred each other from doing a first strike because they could then retaliate.
So they were like, we're not going to do this really aggressive action of trying to make a bid to wipe you out, because that will end up causing us to be damaged. And we have a somewhat similar
situation later on, when AI is more salient, when it is viewed as pivotal to the future of a nation, when people are on the verge of making a superintelligence, or when they can, say, automate, you know, pretty much all AI research. I think states would try to deter each other from trying to leverage that to develop it into something like a superweapon that would allow the other countries to be crushed, or to use those AIs to do some really rapid automated AI research and development loop that could have it bootstrap from its current levels to something that's superintelligent, vastly more capable than any other system out there. I think that later on, it becomes so destabilizing that China just says, we're going
to do something preemptive like do a cyber attack on your data center. And the U.S. might do that to
China. And Russia, coming out of Ukraine, will reassess the situation, get situationally aware, and think, oh, what's going on with the U.S. and China? Oh, my goodness, they're so ahead on AI. AI is looking like a big deal. Let's say it's later in the year when, you know, a big chunk of software engineering is starting to be impacted by AI. Oh, wow, this is looking pretty relevant. Hey, if you try and use this to crush us, we will prevent that by doing a cyber attack on you. And we will keep tabs on your projects, because it's pretty easy for them to do that espionage. All they need to do is do a zero-day on Slack, and then they can know what DeepMind is up to in very high fidelity, and OpenAI and xAI and others. So it's pretty easy for them to do
espionage and sabotage. Right now, they wouldn't be threatening that because it's not at the
level of severity. It's not actually that potentially destabilizing. The capabilities are still too distant. A lot of decision makers still aren't taking this AI stuff that seriously,
relatively speaking. But I think that'll change as it gets more powerful. And then I think that this is
how they would end up responding. And this makes us not wind up in a situation where we are doing
something extremely destabilizing, like trying to create some weapon that enables one country
to, like, totally wipe out the other, as was proposed by people like Leopold.
What are the parallels here that you think makes sense to nuclear and don't?
I think, more broadly, that it's a dual-use technology; dual-use meaning it has civilian applications and it has military applications. Its economic applications are still, you know, in some ways limited, and likewise its military applications are still limited, but I think that will keep changing rapidly. Like chemicals: it was important for the economy, it had some military use, but they kind of coordinated not to go down the chemical route. And bio as well can be used as a weapon and has enormous economic applications. And likewise with nuclear, too. So I think it has some of those properties. For each of those technologies, countries did eventually coordinate to make sure
that it didn't wind up in the hands of rogue actors like terrorists.
There have been a lot of efforts taken to make sure it doesn't, that rogue actors don't get
access to it and use it against them because it's in neither of their interests.
Basically, like bio-weapons, for instance, and chemical weapons are a poor man's atom bomb,
and this is why we have the Chemical Weapons Convention and Bio-Weapons Convention.
That's where there's some shared interests.
So they might be rivals in other senses, the way that the U.S. and the Soviet Union were rivals, but there's still coordination on that
because it was incentive compatible. It doesn't benefit them in any way if terrorists have access
to these sorts of things. It's just inherently destabilizing. So I think that's an opportunity
for coordination. That isn't to say that they have an incentive to both pause all forms
of AI development, but it may mean that they would be deterred from some particular forms of
AI development, in particular ones that have a very plausible prospect of enabling one country
to get a decisive edge over another and crush them. So no superweapon type of stuff. But on more conventional types of warfare, like drones and things like that, I expect that they'll continue to race and probably not, maybe not even, coordinate on anything like that. But that's just how things will go. That's just, you know, like bows and arrows and nuclear: it just made sense for them to develop those sorts of weapons and threaten each other with them.
If you all could propose the magical adoption, tactically, of some policy or action to the current administration,
What is the first step here?
It is the, you know, we will not build a superweapon, and we're going to be watching for other people building them too.
And so, I've sort of been alluding to this throughout the whole conversation.
Like, what would the companies do?
Like, not that much.
I mean, add some basic anti-terrorism safeguards, but I think this is like pretty technically easy.
This is unlike refusal for other things.
Refusal robustness for other things is harder.
Like, if you're trying to get at, like, crime and torts, that's harder, because it's a lot messier. It overlaps with typical everyday interaction.
I think likewise here, the asks for states are not that challenging either. It's just a matter
of them doing it. So one would be the CIA has a cell that's doing more espionage of other
states' AI programs. So that way they have a better sense of what's going on and aren't caught by
surprise. And then secondly, maybe some part of government, like, let's say, CYBERCOM, which has a lot of cyber offensive capabilities, gets some cyber attacks ready to disable data centers in other countries if they're looking like they're running or creating a destabilizing AI project.
That's it for the deterrence part. For nonproliferation of AI chips to rogue actors in particular, I think there'd be some adjustments to export controls, in particular just knowing where the AI chips are at, reliably.
We want to know where the AI chips are at for the same reason we want to know where our fissile material is at, for the same reason that we want Russia to know where its fissile material is at.
Like, that's just generally a good bit of information to collect.
And that can be done with some very basic statecraft of having a licensing regime.
And for allies, they just notify you whenever it's being shipped to a different location, and they get a license exemption on that basis.
And then you have enforcement officers prioritize doing some basic inspections and end-use checks for AI chips. So I think, like, all of these are a few texts away or a basic
document away. And I think that kind of 80-20 is a lot of it. Of course, this is always a changing situation. Safety, as I've been trying to reinforce, isn't really that much of a technical problem. This is more of a complex geopolitical problem with technical aspects.
Later on, maybe we'll need to do more. There might be some new risk sources that we need to take care of and adjust to. But I think, right now, espionage with the CIA and sabotage with CYBERCOM, building up those capabilities, buying those options, seems like that takes care of a lot of the risk.
Let's talk about compute security.
If we're talking about 100,000 networked state-of-the-art chips, you can tell where that is.
How do DeepSeek and the recent releases they've had factor into your view of compute security, given export controls have clearly led to innovation toward highly compute-efficient pre-training that works on chips that China can import, at what one might consider, like, an irrelevant scale, a much smaller scale, today? It's hard for me to see, directionally, that training becoming less efficient, even if, you know, people want to scale it up. And so, like, does that change your
view at all?
No, I think it just sort of undermines other types of strategies, like this Manhattan Project type of strategy of, let's, you know, move people out to the desert and do a big cluster there. And what it shows is that you can't rely as much on restricting another superpower's capabilities, their ability to make models. So you can restrict their intent, which is what deterrence does. But I don't think you can reliably or robustly restrict their capabilities. You can restrict the capabilities of rogue actors. And that's what I would want things like compute security and export controls to facilitate: make sure it doesn't, you know, wind up in the hands of Iran or something.
China will probably keep getting some fraction of these chips, but we should basically just try and know where they're at more and we can tighten things up.
But I would primarily, you could even coordinate with China to make sure that the chips aren't winding up in rogue actors' hands.
I should also say that the export controls weren't actually a substantial priority among leadership at BIS, to my understanding. The AI chips were a priority for some people, but for the enforcement officers, like, did any of them go to Singapore to see where the 10% of Nvidia's chips were going? I think they would have very quickly found, oh, they were going to China. So some basic end-use check would have taken care of that. I don't think this means that export controls don't work. We've done nonproliferation
of lots of other things like chemical agents and fissile material. So it can be done if people
care. But even so, I still think if you really tighten the export controls, so that China can't get any of those chips at all, and this is one of your biggest priorities, they're just going to steal the weights anyway. I think it'll be too difficult to totally
restrict their capabilities. But I think you can restrict their intent through deterrence.
It also seems like, either this stuff is as powerful as we think or it's not. It seems infeasible to me, given the economic opportunity, that China will say, we don't need the capability.
Yeah. Yeah.
I fail to see a version of the world where leadership in another great power that believes that there is value here says, we don't need that, from an economic value perspective.
Yeah, that's right.
Yeah.
For a lot of these, maybe it would be nicer if everything went, you know, 3x slower.
And maybe there'd be fewer, like, mess-ups.
If there were, like, some magic button that would do that.
I don't know whether that's true or not, actually.
I don't have a position on that.
Given the structural constraints and the competitive pressures between these companies, between these states, it just makes a lot of these things infeasible, a lot of these other gestures that could be useful for risk mitigation.
When you consider them or when you think about the structural realities of it, it just becomes a lot less tractable.
That said, there still would be, in some way, some pausing or halting of development of particular projects that you could potentially lose control of, or that, if controlled, would be very destabilizing because it would enable one country to crush the other. I think people's conception of what risk management looks like is, people think it's a peacenik thing or something like that. Like, it's all kumbaya, and we just have to ignore structural realities in operating in this space. I think, instead, the right approach toward this is that it's sort of
like nuclear strategy, like it is an evolving situation. It depends. There's some basic things
you can do. Like, you're probably going to need to stockpile nuclear weapons. You're going to need
to secure a second strike. You're going to need to keep an eye on what they're doing. You're going
to need to make sure that there isn't proliferation to rogue actors when the capabilities are extremely hazardous. And this is a continual battle, but it's not, you know, it's not going to be clearly an extremely positive thing no matter what, and it's not going to be doomsday no matter what. For nuclear strategy, it was obviously risky business; the Cuban Missile Crisis came pretty close to an all-out nuclear war. It depends on what we do. And I think there are some basic interventions, and some very basic statecraft can take care of a lot of these sorts of risks
and make it manageable. I imagine then we're left with more domestic type of problems, like
what to do about automation and things like that. But I think maybe we'll be able to get a handle
on some of the geopolitics here.
I want to change tack for our last couple minutes
and talk about evals.
And it's obviously very related
to safety and understanding
where we are in terms of capability.
Can you just contextualize where you think we are?
You came out with the triggeringly named
Humanity's Last Exam eval
and then also Enigma.
Like, why are these relevant
and where are we in evals?
Yeah, yeah.
So for context,
I've been making evaluations
to try and understand
where we're at in AI
for, I don't know,
about as long as I've been doing
research. So previously I've done some datasets like MMLU and the MATH dataset. Before that, before ChatGPT, there were things like ImageNet-C and other sorts of things. So Humanity's Last Exam was basically an attempt at getting at what would be the end of the road for the evaluations and benchmarks that are based on exam-like questions, ones that test some sort of academic type of
knowledge. So for this, we asked professors and researchers around the world to submit a really
challenging question, and then we would add that to the data set. So it's a big collection of
what professors, for instance, would encounter as challenging problems in their research that have
a definitive closed-ended objective answer. With that, I think the genre of "here's a closed-ended question" where it's, you know, multiple choice or a simple short answer, I think that genre will be expired when performance on this dataset is near the ceiling. And when performance is near the ceiling, I think that would basically be an indication that, like, you have something like a superhuman mathematician or a superhuman STEM scientist, in many ways, for the cases where closed-ended questions are very useful, such as math. But it doesn't get at
other things to measure, such as what's its ability to perform open-ended tasks. So that's
more agent-type evaluations. And I think that will take more time. So we'll, you know, try and measure just directly what's its ability to automate various digital tasks: like, collect various digital tasks, have it work on them for a few hours, see if it successfully completed them, something like that, coming out soon. We have a test for closed-ended questions, things that test knowledge in the academy, like mathematics. But they're still very bad at agent stuff. This could possibly change overnight, but it's still near the floor.
I think they're still extremely defective as agents. So there will need to be more evaluations for that. But the overall approach is just to try and understand what's going on, what the rate of development is, so that the public can at least understand what's happening.
Because if all the evaluations are saturated, it's difficult to even have a conversation about the state of AI. Nobody really knows exactly where it's at, where it's going, or what the rate of improvement is.
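To make the closed-ended genre Dan describes concrete, here is a minimal sketch of how such a benchmark is typically scored against a fixed answer key; the `Question` shape and the `query_model` call are hypothetical placeholders for illustration, not the actual Humanity's Last Exam harness.

```python
# Minimal sketch of scoring a closed-ended eval (multiple choice / exact short answer).
# Hypothetical illustration only: query_model stands in for whatever model API is used.
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str   # full question text, including answer choices if multiple choice
    answer: str   # the single correct, objective answer, e.g. "C" or "42"

def query_model(prompt: str) -> str:
    """Placeholder for a call to the model being evaluated."""
    raise NotImplementedError

def normalize(text: str) -> str:
    """Normalize answers so trivial formatting differences don't count as wrong."""
    return text.strip().lower().rstrip(".")

def accuracy(questions: list[Question]) -> float:
    """Fraction of questions answered exactly correctly."""
    correct = sum(
        normalize(query_model(q.prompt)) == normalize(q.answer) for q in questions
    )
    return correct / len(questions)

# When accuracy on a hard benchmark like this approaches the ceiling, the closed-ended
# genre stops being informative -- the saturation point Dan describes -- and open-ended
# agent tasks need a different kind of evaluation.
```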
Is there anything that qualitatively changes when, let's say, these models and model systems are just better than humans? Like, exceeding human capability, and how we do evals.
Does it change our ability to evaluate them? So I think the intelligence frontier is just so jagged.
What things they can do and can't do is often surprising. They still can't fold clothes. They can answer a lot of tough physics problems, though. Why that is, you know, there are complicated reasons. So it's not all uniform. And so in some ways, they'll be better than humans. It seems totally plausible that they'll be better than humans at mathematics not too long from now, but still not able to book a flight.
The implication of that is, when you have them being better, they just might be better in some limited ways. And that just might have kind of limited influence in its domain, but not necessarily generalize to other sorts of things. But I do think
it's possible that they'll be better at reasoning skills than us. We still could have humans checking
because they can still verify. If an AI mathematician is better than a human, humans can still
run the proof through a proof checker and then confirm that it was correct. So in that way,
humans can still understand what's going on in some ways. But in other ways, like if they're getting
better taste in things, if that makes any sense, maybe it doesn't make any philosophical sense, that would be pretty difficult for people to confirm.
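The proof-checker point can be made concrete: whether a proof was written by a human or generated by an AI, the kernel verifies it the same way. A tiny illustrative example in Lean 4, using only the core `Nat.add_comm` lemma (this is an assumption-free toy, not anything from the conversation itself):

```lean
-- An example of the verification step described above: the Lean kernel checks this
-- proof term regardless of who (or what) wrote it. If the term were wrong, the file
-- would simply fail to compile, so a human can trust the result without redoing the math.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```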
I think we're on track overall to have AIs that, like, have really good oracle-like skills.
Like, you can ask them things and just, wow, it totally said something insightful or very non-trivial or pushed the bounds of knowledge in some particular way, but it's not necessarily able to carry out tasks on behalf of people for some while.
So I think this is why we don't take the AIs that seriously: because they still can't do, like, a lot of very trivial stuff. But when they get some of the agent skills, then I don't think that there are many barriers for their economic impacts, or for people going from thinking that this is kind of an interesting thing to this being the most important thing. I think that's an emergent property with agent skills, that the vibes really shift. And it's pretty clear that this is much bigger than, you know, some prior technology like the App Store or social media. It's in a category of itself.
Well, Dan, thanks for doing this. It was a great conversation.
Yeah, glad to. Thank you for having me.
Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.