In Good Company with Nicolai Tangen - Dario Amodei CEO of Anthropic: Claude, New models, AI safety and Economic impact
Episode Date: June 26, 2024
How much bigger and more powerful will the next AI models be? Anthropic's CEO, Dario Amodei, joins Nicolai in Oslo to discuss the latest advancements in AI, the economic impact of this technology, and the importance of responsible scaling. Dario also shares his excitement about the upcoming models and his thoughts on who will profit the most from AI in the future. Don't miss this enlightening conversation about the cutting-edge developments in AI. Tune in!
In Good Company is hosted by Nicolai Tangen, CEO of Norges Bank Investment Management. New episode out every Wednesday.
The production team for this episode includes PLAN-B's Pål Huuse and Niklas Figenschau Johansen. Background research was conducted by Kristian Haga.
Watch the episode on YouTube: Norges Bank Investment Management - YouTube
Want to learn more about the fund? The fund | Norges Bank Investment Management (nbim.no)
Follow Nicolai Tangen on LinkedIn: Nicolai Tangen | LinkedIn
Follow NBIM on LinkedIn: Norges Bank Investment Management: Administrator for bedriftsside | LinkedIn
Follow NBIM on Instagram: Explore Norges Bank Investment Management on Instagram
Hosted on Acast. See acast.com/privacy for more information.
Transcript
Hi everybody, welcome to In Good Company.
Today, really exciting, we have Dario Amodei, the CEO and co-founder of Anthropic, visiting.
Now, Dario, he's a superstar in the AI world, and together with his team has developed the
Claude Language Model, one of the best out there, and they are backed by Amazon and Google.
Now, you are a leading figure, Dario, on AI safety and ethics, and you even interrupted your holiday to come here and to talk to us.
So, big thanks for coming.
Thank you for having me.
Now, what are the latest breakthroughs in AI?
Yes. So, a few things I could talk about. One is, you know, I think the scaling trends of AI are
continuing. So I think we're going to see over the next year, you know, much bigger and more
powerful models that are able to do greater tasks. In fact, by the time this podcast airs,
a new model will be out from Anthropic that will probably be the most intelligent and powerful model in the world. But one area I'm particularly excited about that we're
developing in parallel with that is interpretability of models, the ability to see inside our AI models
and see why they make the decisions they make. That area has been mainly a research area for the
last few years, and it's just at the beginning of
starting to have practical applications. So that's one area I'm very excited about.
Why is that so important? So if you look at what AI models do today,
often you won't understand why an AI model does what it does. I was just talking to someone
at lunch. Consider your industry. Let's say you want an AI model to be trained on some data, to be able to predict, you know, what happened with a particular set of financial data. One challenge with getting a model to work on that is that if you train it on data from the past, the model might have memorized it, because it was trained on it. It basically knows what happens. It knows the
future in that case. Interpretability might allow you to tell the difference. Is the model deducing
the answer to the question or is it memorizing the answer to the question? Similarly, if a model
acts in a way that, say, shows prejudice against a particular group or appears to do so, can we look at the reasoning of the model?
You know, is it really being driven by prejudice?
There are also a number of legal requirements, right?
In the EU, you know, there's a right to explanation.
And so interpretability, being able to see inside the model, could help us to understand
why the models do and say the things that they do and
say, and even to intervene in them and change what they do and say.
So a while back, you stated that we still don't know how the advanced AI models work. Does this mean that interpretability will solve this
problem? You know, I wouldn't say solve. I would say we're at the beginning. Maybe we now like
understand 3% of how they work. You know, we're at the level
where we can look inside the model
and we can find features inside it
that correspond to very complex concepts.
Like one feature might represent
the concept of hedging or hesitating,
a particular genre of music,
a particular type of metaphorical situation that a character could be
in, or the idea of, you know, again, prejudice for or against various groups. So we have all
of these features, but we think we've only found a small fraction of what there is. And what we
still don't understand is we don't understand how all of these things interact to give us
the behaviors we see
from models every day. So, you know, it's a little like the brain, right? We can do brain scans. We
can say a little about the human brain, but, you know, we don't have a spec sheet for it. We can't
go and say, well, this is why that person did exactly what they did. But will we ever understand
fully how they work? I don't know about, you know,
fully down to the last detail,
but I think progress is happening fast
and I'm optimistic about getting...
But is progress happening faster
than complexity of the new models?
That is a great question
and that is the thing
we're contending with.
So we are putting a lot of resources
behind interpretability
of language models
to try and keep pace with
the rate at which the complexity of the models is increasing.
I think this is one of the biggest challenges in the field.
The field is moving so fast, including by our own efforts, that we want to make sure
that our understanding keeps pace with our abilities, our capabilities to produce powerful
models.
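The feature-finding work Dario described a moment ago is, in Anthropic's published interpretability research, done with dictionary learning: a sparse autoencoder is trained on a model's internal activations so that each learned direction tends to fire on a single, human-interpretable concept. The snippet below is a minimal toy sketch of that idea, not Anthropic's code; the synthetic activations, dimensions, and training details are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of dictionary learning for interpretability: learn an overcomplete
# set of "features" so each activation vector is reconstructed from a few of them.
rng = np.random.default_rng(0)

d_model, n_features, n_samples = 64, 256, 2048   # assumed toy sizes
acts = rng.normal(size=(n_samples, d_model))      # stand-in for real model activations

W_enc = rng.normal(scale=0.1, size=(d_model, n_features))
W_dec = rng.normal(scale=0.1, size=(n_features, d_model))
b_enc = np.zeros(n_features)

lr, l1_coeff = 1e-3, 1e-3
for step in range(200):
    f = np.maximum(acts @ W_enc + b_enc, 0.0)     # sparse feature activations (ReLU)
    recon = f @ W_dec                             # reconstruction of the activations
    err = recon - acts

    # Gradients of 0.5*||err||^2 + l1_coeff*||f||_1 with respect to the parameters.
    d_f = (err @ W_dec.T + l1_coeff * np.sign(f)) * (f > 0)
    W_dec -= lr * (f.T @ err) / n_samples
    W_enc -= lr * (acts.T @ d_f) / n_samples
    b_enc -= lr * d_f.mean(axis=0)

# Each column of W_enc (equivalently, row of W_dec) is a candidate "feature";
# in real interpretability work you inspect which inputs make each one fire.
f = np.maximum(acts @ W_enc + b_enc, 0.0)
print("average number of active features per sample:", float((f > 0).sum(axis=1).mean()))
```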
What's so good about your model?
So this is the Claude model, right?
This is the Claude models, yeah.
So to give some context, we recently released a set of Claude 3 models.
They're called Opus, Sonnet, and Haiku.
They're different trade-offs between power and intelligence and speed and low cost while still being intelligent.
At the time that Opus was released, it was actually the best all-around model in the world.
But I think one thing that particularly made it good is that we put a lot of engineering
into its character. And we recently put out a post about how do we design Claude's character.
People have generally found the Claude models are warmer, more human. They enjoy interacting with them more. Some of the
other models sound more robotic, more uninspired. We're continuing to innovate quickly. And as I
said, by the time this podcast comes out, we'll probably have at least part of a new generation
of models out. Tell me about the new one.
So I can't say too much about it.
If you had to say a bit. But if I had to say a bit, I would say that we're pushing the frontier. Right now,
there's a trade-off between speed and low cost of models and quality. So you can imagine that
as a trade-off curve, right? A frontier. There's going to be a new generation of models that pushes that frontier outward.
And so, you know, you're going to see by the time this podcast is out, you know, we'll have a name
for it for at least some of those models. And we'll see that things that you needed the most
powerful model to be able to do, you'll be able to do with some of the mid-tier or low-tier models
that are faster, cheaper, and even more capable
than the past generation, the previous model.
So, Dario, what's going to be the wow factor here?
So when I get this model, what is it going to do to me?
Yeah, you're going to see models that are better at, say, things like code, math, better at reasoning.
One of my favorites is biology and medicine.
That's one of the sets of applications I'm most excited about for the new models.
So the models we have today, they're kind of like early undergrads in their
knowledge of many things or like interns.
And I think we're starting to push that boundary towards advanced undergrads or even graduate
level knowledge.
And so when we think of use of models for drug development or, you know, in your own
industry, use of the models for thinking about investing or even trading,
I think the models are just going to get a good deal more sophisticated at those tasks.
And, you know, we're hoping that every few months we can release a new model that pushes
those boundaries further and further.
Now, one of the things which has accelerated lately is just how we kind of weave AI into
everything we do. And with the recent announcement
from Apple and OpenAI, how do you look at this? Yeah. So Anthropic thinks of itself more as
providing services to enterprises than it does on the consumer side. And so we're thinking a lot
about how to integrate AI in work settings. So if you think about today's models, today's chatbots,
it's a bit like if I use them in an enterprise setting, it's like if I took some random person
on the street who is very smart, but who knew nothing about your company, and I brought them
in and I asked them for advice. What I'd really like is an AI model that acts more
like someone that's been trained with knowledge of your company for many years. And so we're
working on connecting our AI models to knowledge databases, having them cite their work, having them be
able to use internal enterprise tools and really
integrate with the enterprise as, you know, sort of a virtual assistant to an employee.
So that's one way I think about, you know, really driving the integration.
So if you look at the long-term goal of Anthropic, what is the long-term goal?
Yeah, you know, if I think about our long-term goal...
Because you're only like three years old, right?
Yeah, we're only three and a half years old. Yeah. We're by far the newest player in the space that's
been able to build models on the frontier. We're a public benefit corporation. And I think our
long-term goal is to make sure all of this goes well. And that's being done, obviously,
through the vehicle of a company. But if you think about our long-term strategy, what we're really trying to do is create what we call a race to the top. So, you know,
race to the bottom is this well-known thing where everyone, you know, fights to cut corners because
the market competition is so intense. We think that there's a way to have the reverse effect,
which is that if you're able to produce higher standards, innovate in ways that make the technology more
ethical, then others will follow suit. They'll either be inspired by it or they'll be kind of,
you know, bullied into it by their own employees or public sentiment, or ultimately the law will
go in that direction. And so we're hoping to kind of provide an example of how to do AI right
and pull the rest of the industry along with us.
That's a lot of the work behind our interpretability work, behind our safety work,
behind how we think about responsible scaling. We have something called a responsible scaling
policy. So I think our overall goal is to try and help the whole industry be better.
So you kind of pitch yourself as the good guys?
I mean, you know, I wouldn't say anything that grandiose, right?
It's more like I want to, you know,
I think more in terms of incentives and structures,
more than I think of good guys and bad guys.
I want to help change the incentives
so that everyone can be the good guys.
Yeah.
Do you think we will care which model we interact with?
Or are we going to have just like one agent
who picks the model which is the best for that purpose?
That was kind of what Bill Gates said
when he was on the podcast.
You know, I think it really depends on the setting.
A few points on this.
One is I think we are increasingly going in the direction
where models are good at different things.
So for example, I was just talking about Claude's character, right? Claude is more warm and friendly
to interact with. For a lot of applications and use cases, that's very desirable. For other
applications and use cases, a model which focuses on different things might be helpful. Some people
are going in the direction of agents. Some people are going in the direction of models that are good at code. Claude, for example, another thing it's good at
is creative writing. And so I think we're going to have an ecosystem where people use different
models for different purposes. Now, in practice, does that mean you're kind of, there's something
that's choosing models for you? I think in some consumer context,
that will be the case. I think in other contexts, someone will say, oh yeah, no, you know,
the job I'm doing or the kind of person that I am, I want to use this particular model all the time.
Well, what makes a warm model? I mean, how can you make a model friendly?
Yeah.
Is it more humorous or more polite?
Yeah.
Or just like putting some red hearts in between or?
And we actually try and avoid too many emojis because it gets annoying.
But, you know, if you, I don't know, if you go on Twitter and see some of the comments
when people interact with Claude, it just kind of, I don't know how to describe it.
It just kind of sounds more like a human, right?
I think a lot of these bots have certain tics, right?
Like, you know, models will, you know, there are certain phrases.
I apologize, but as an AI language model, I can't do X, Y, and Z, right?
That's kind of like a common phrase.
And we've helped the model to vary their thinking more,
to sound more like a human, that kind of thing.
Now, when you launch new models, you've got pretty good predictions on how accurate it will be, right?
It's a function of number of parameters and so on.
Now, to get to AGI, how far out are we there?
This is the general intelligence, so yeah, more intelligent?
So I've said this a few times, but, you know, back 10 years ago, when all of this was kind of science fiction, I used to talk about AGI a lot. I now have a different perspective, where I don't think of it as one point in time. I just think we're on this smooth exponential. The models are getting better and better over time. There's no one point where it's like, oh, the models weren't generally intelligent and now they are.
I just think, you know, like a human child learning and developing, they're getting better and better, smarter and smarter, more and more knowledgeable.
And I don't think there will be any single point of note.
But I think there's a phenomenon happening where over time these models are getting better and better than even the best humans.
I do think that if we continue to increase the scale, the amount of funding for the models, if it goes to, say, $10 billion.
So now a model would cost, what, $100 million?
Right now, $100 million.
There are models in training today that are more like a billion. I think if we go to 10 or 100 billion, and I think that will happen in 2025, 2026, maybe 2027, and the algorithmic improvements continue apace, and the chip improvements continue apace, then I think there is, in my mind, a good chance that by that time we'll be able to get models that are better than most humans at most things.
So 10 billion, you think, a model will be next year?
I think that the training of order $10 billion models, yeah, could start sometime in 2025.
Not many people can participate in that race.
No, no, no, no. And of course, I think there's going to be a vibrant downstream ecosystem,
and there's going to be an ecosystem for small models.
But you don't have money. You don't have that much money.
I mean, we have of order that. We've raised, I believe, a little over $8 billion to date. So generally of that order, and of course, we're always interested in getting to the next level of scale.
Yeah. Now this is of course also a function of the chips. And we just learned that NVIDIA is halving the time now between launches, right?
So in the past, every other year, now it's more like every year.
So what are the implications of this?
Yeah, I think that is a, I can't speak for NVIDIA,
but I think that is a natural consequence of the recognition
that chips are going to be super important, right?
And also facing competition.
Google is building their own chips, as we know. Amazon is building their own chips. Anthropic is collaborating with
both to work with those chips. And without getting specific, what I can say is that
the chip industry is getting very competitive, and there are some very strong offerings
from a large number of players.
How far behind are Google and Amazon in their chip development?
That's not something I could say
and it's not one dimensional.
But just some kind of indication.
Yeah, I mean, again, I would just repeat
that I think there are now strong offerings
from multiple players that have been useful to us and will be useful to us in different ways.
Okay.
So it's not only about NVIDIA anymore, is that what you're saying?
I don't think it's only about NVIDIA anymore.
But, you know, of course, you look at their stock price, which, you know, you're certainly aware of.
And, you know, it's an indicator, I think, both about them and the industry.
Yeah.
You mentioned that you were more on the enterprise side and not necessarily on the
consumer side. But just lately, there has been more talk about having chips in phones, and we
talk about AI PCs and so on. How do you look at this?
Yeah. No, I think that's going to be an important development. And again, if we go back to the curve I talked about, right, the trade-off curve between powerful, smart, but relatively expensive and slow models and models that are super cheap and super fast: we're going to get models that are very fast and cheap that are smarter than the best models of today, even though the best models then will be smarter than that.
And I think we'll be able to put those models on phones and on mobile chips, and they'll pass some threshold where the things that you need to call to a cloud or the server for today, you can do there.
And so I'm very excited about the implications of that.
I'm, of course, even more excited about pushing the frontier
of where things will go.
But as this curve shifts outward,
an implication is that both things will happen.
We hear from Mistral, the French competitor,
that they have developed some really efficient
kind of low- or lower cost models.
What do you, how do you view that?
Yeah, I mean, you know, I can't comment on, you know, what's going on in other companies,
but I think we are seeing this kind of general moving of the curve.
And so it is definitely true we're seeing efficient low cost models, but I think of
it less as, like, things are leveling out and costs are going down,
and more as the curve is shifting. We can do more with less, but we can also do even more
with more resources. So I think both trends coexist.
Dario, changing tack a bit here. Your background: you kicked off in physics.
Yes, I was an undergrad in physics and then did grad school in neuroscience.
Yeah. So how come you ended up in AI?
Yeah. So, you know, when I finished my physics degree, I wanted to do something that would have an impact on humanity in the world. And I felt that an important component of that would be understanding intelligence, that that's one of the things that's obviously shaped our world.
And that was back in the mid-2000s. And in those days, I wasn't particularly, to be honest,
that excited about the AI of the day. And so I felt like the best way to study intelligence in
those days was to study
the human brain. So I went into neuroscience for grad school, computational neuroscience,
that used some of my physics background, and studied kind of collective properties of neurons.
But by the end of that, by the end of grad school, after that I did a short postdoc,
AI was really starting to work. We really saw
the deep learning revolution. I saw the work of Ilya Sutskever back then. And so I decided,
based on that, to go into the AI field. And I worked different places. I was at Baidu for a
bit. I was at Google for a year. I worked at OpenAI for five years. And-
Well, you were instrumental
in developing GPT-2 and 3, right? Yes. Yes. I led the development of both of those.
Why did you leave?
Around the end of 2020, we had kind of reached a point, the set of us who worked on these projects and these areas, where we kind of had our own vision for how
to do things. So, you know, again, you know, we had this picture that I think I've already kind
of implicitly laid out of one, you know, real belief in this scaling hypothesis and two,
in the importance of safety and interpretability. So it was a safety side which made you leave?
I mean, I think, you know, we just had our own vision of things.
There were a set of us who were co-founders who really felt like we were on the same page,
really felt like we trusted each other, really felt like, you know, we just wanted to do something together.
Right. But you were a bit more AI doomsday before than you are now.
You know, I wouldn't say that. My view has always been that there are important
risks and there are benefits. And that as the technology goes on its exponential, the risks
become greater and the benefits become greater. And so we are, including at Anthropic, very
interested in these questions of catastrophic risk, right? We have this thing called responsible
scaling policy, and that's basically about measuring models at each step for catastrophic
risk. What is catastrophic risk? So this would be, I would put it in two categories. One is
misuse of the models, which could include things in the realm of biology or cyber or kind of election operations at scale,
things that are really disruptive to society. So that misuse would be one bucket. And then
the other bucket would be autonomous, unintended behavior of the model. So today, it might be just
the model doing something unexpected. But increasingly, as models act in the world, we have to worry about them behaving in ways that you wouldn't expect.
And what was it that you saw exactly with GPT? Well, GPT-3, I guess,
then, which made you particularly concerned about this?
Yeah, it wasn't about any particular model. You know, if we go all the way back to 2016,
you know, before I even worked at OpenAI when I was at Google,
I wrote a paper, with some colleagues, some of whom are now Anthropic co-founders, called
Concrete Problems in AI Safety. And Concrete Problems in AI Safety laid out this concern that
we have these powerful AI models, neural nets, but they're fundamentally statistical systems.
And so that's going to create all these problems about predictability and uncertainty. And if you
combine that with the scaling hypothesis, and I really came to believe in the scaling hypothesis
as I worked on GPT-2 and GPT-3, those two things together told me, okay, we're going to have
something powerful and it's not going to be trivial to control it.
And so we put those two things together, and that makes me think, oh, this is an important problem that we have to solve.
How do you solve the two catastrophic risk problems at Anthropic?
Yes. So one of the biggest tools for this is our RSP, our Responsible Scaling Policy.
And so the way that works is every time we have a new model that represents a significant leap,
a certain amount of compute above an old model, we measure it for both the misuse risks and the
autonomous self-replication risks. And how do you do that?
So we have a set of evaluations that we run.
We've, in fact, worked with, for the misuse risks,
folks in the national security community.
So, for example, we've worked with this company called Gryphon Scientific that contracts with the US government and does biosecurity work.
And they're the experts on responding to biological risk. And
so they say, what is the stuff that's not on the internet that if the model knew it would be
concerning? And they run their tests. And we give them access to the new model. They run their tests.
And every time so far, they've said, well, it's better at the test than it was before,
but it's not yet at the level where it's a serious concern.
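As a rough sketch of how a threshold-triggered evaluation gate of this kind can be expressed in code, the snippet below is illustrative only: the compute-jump threshold, evaluation names, and pass criteria are assumptions, not Anthropic's actual RSP tooling.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative gate: run risk evaluations whenever a new model represents a
# large enough jump in training compute over the last model that was evaluated.
COMPUTE_JUMP_THRESHOLD = 4.0  # assumed: re-evaluate at every 4x increase in compute


@dataclass
class ModelCandidate:
    name: str
    training_compute_flops: float


def requires_evaluation(candidate: ModelCandidate, last_evaluated_flops: float) -> bool:
    """A new evaluation round is triggered by a sufficiently large compute jump."""
    return candidate.training_compute_flops >= COMPUTE_JUMP_THRESHOLD * last_evaluated_flops


def run_risk_evaluations(candidate: ModelCandidate,
                         evals: Dict[str, Callable[[ModelCandidate], bool]]) -> Dict[str, bool]:
    """Run each named evaluation; True means the model stayed below the risk threshold."""
    return {name: check(candidate) for name, check in evals.items()}


# Placeholder checks standing in for real misuse and autonomy test suites.
evals = {
    "misuse_bio_workflow": lambda m: True,        # e.g. expert-designed bio misuse tests
    "autonomy_self_replication": lambda m: True,  # e.g. provisioning accounts, training models
}

candidate = ModelCandidate("next-model", training_compute_flops=1e26)  # made-up number
if requires_evaluation(candidate, last_evaluated_flops=2e25):
    results = run_risk_evaluations(candidate, evals)
    if all(results.values()):
        print("Below risk thresholds; deploy under standard safeguards.")
    else:
        print("Threshold crossed; stronger safeguards needed:", results)
```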
So a misuse test would be, for instance, if I put in, hey, can you come up with a virus
which is just going to wipe out the earth, the people?
Is that an example, right?
Conceptually, yes, although it's less about answering one question.
It's more about, can the model go through a whole workflow?
Like, could some bad actor, over a period of weeks, use this model as they were doing something nefarious in the real world? Could the model give them hints on how to do it? Could the model help them through the task over a long period of time?
Okay. So what you're saying is that the AI models so far cannot do this.
They know individual isolated things which are concerning.
Right. And they get better at it every time we release a new model, but they haven't reached
this point yet. Okay. And my guess is that- And what about the other one, the autonomous?
Yeah. So- How far are we away from that?
We test the models there for things like ability to train their own models, ability to provision cloud compute
accounts and take actions on those accounts, ability to sign up for accounts and engage in
financial transactions, just some of the measures of things that would kind of unbind the model and
enable them to take actions. How far are we away from that, do you think?
I think it's kind of the same story as with misuse. They're getting better and better
at individual pieces of the task. There's a clear trend towards ability to do that,
but we're not there yet. I again point to the 2025, 2026, maybe 2027 window,
just as I think a lot of the extreme positive economic applications of AI are going
to arrive sometime around then, I think some of the negative concerns may start to arise then as
well. But I'm not a crystal ball. I'm sorry, 25, 26.
Around that. So what do you do then? Do you build in a kill switch or what do you...
Yeah. Well, I mean, there's a number of things. I think on the autonomous behavior, a lot of our
work on interpretability, a lot of our work on, you know, we haven't discussed constitutional AI,
but that's another way we provide kind of values and principles for the AI system.
On the autonomous risk, what we really want to do
is understand what's going on inside the model and make sure that we design it and can iterate on it
so that it doesn't do these dangerous things we don't want it to do. On misuse risk, again,
there it's more about putting safeguards into the model so that people can't ask it to do
dangerous things, and we can monitor when people try to use it to do dangerous things.
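Constitutional AI, which Dario mentions in passing above, works roughly by having the model critique and revise its own outputs against a written set of principles, and then training on the revised outputs. Below is a bare-bones sketch of that critique-and-revise loop; the principles and the generate() helper are placeholders for illustration, not Anthropic's actual constitution or API.

```python
from typing import List

# Illustrative principles; the real constitution is a longer, published list.
CONSTITUTION = [
    "Choose the response least likely to assist with dangerous or illegal activity.",
    "Choose the response that is most honest and least misleading.",
]


def generate(prompt: str) -> str:
    """Placeholder for a real language-model call; returns canned text here."""
    return f"[model output for: {prompt[:40]}...]"


def critique_and_revise(user_prompt: str, principles: List[str]) -> str:
    """One constitutional pass: draft, self-critique against each principle, revise."""
    draft = generate(user_prompt)
    for principle in principles:
        critique = generate(
            "Critique the following response according to this principle.\n"
            f"Principle: {principle}\nResponse: {draft}"
        )
        draft = generate(
            "Rewrite the response to address the critique while staying helpful.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft  # revised outputs like this become training data


print(critique_and_revise("Explain how vaccines work.", CONSTITUTION))
```

In the published method, comparisons between original and revised responses are also used to train a preference model, so the principles end up shaping the model itself rather than sitting only in an external filter.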
So generally speaking, I mean, there's been a lot of talk about this, but how can one regulate AI? Can companies self-regulate? Yeah. So one way I
think about it is the RSP, the responsible scaling policy that I was describing, is maybe the
beginning of a process, right? That represents voluntary self-regulation. And I mentioned this concept of race to the top.
Last September, we put in place our RSP. Since then, other companies like Google, OpenAI,
have put in place similar frameworks. They've given them different names, but they operate
in roughly the same way. And now we've heard Amazon, Microsoft,
even Meta, according to public reporting, are at least considering similar frameworks.
And so I would like it if that process continues, right? Where we have some time for companies to
experiment with different ways of voluntarily self-regulating, some kind of consensus emerges
from some mixture of public pressure, experimentation with what is unnecessary
versus what is really needed. And then I would imagine the real way for things to go is once
there's some consensus, once there's industry best practices, probably the role for legislation is to look in
and say, hey, there's this thing that 80% of the companies are already doing. That's a consensus
for how to make it safe. The job of legislation is just to enforce it: force those 20% who aren't doing it to comply, and ensure that companies are telling the truth about what they're doing. I don't think
regulation is good at coming up with a bunch of new concepts
that people should follow. So how do you view the EU AI Act?
Yeah. So the EU AI Act, I should say, first of all-
And the California safety bill as well, right?
Yeah. I should say that with the EU AI Act, even though the act was passed, many of the details are still being worked out. So, you know, I think a lot of this depends on the details. The California bill, you know, I would say it has some structures in it that are very much like kind of the RSP.
And so I think something that resembles that structure at some point could be a good thing.
Do you think –
If I have a concern, though, I think it's that we're very early in the process, right? This process that's like, you know, first one company has an RSP, then many have RSPs,
then these kind of industry consensus comes into place. My only question would be, are we too early
in that process? Too early in regulation. Yeah. Or, you know, that maybe regulation should be
the last step of a series of steps. Yeah. What's the danger of regulating too early?
I don't know. One thing I could say is that I'll look at our own experience with RSPs.
So if I look at what we've done with RSP, we wrote an RSP in September. And since then,
we've deployed one model. We're soon going to deploy another. You see so many things that, not that it was too strict or not strict enough, but you just didn't anticipate
them in the RSP, right? Like, you know, there are various kinds of like AB tests you can run on your
models that are even informative about safety. And our RSP didn't speak one way or another about when those are okay and when they're not.
And so we're updating our RSP to say, hey, how should we handle this issue we've never even
thought of? And so I think in the early days, that flexibility is easy. If you don't have that
flexibility, if your RSP was written by a third party and you didn't have the ability to change it and the process for changing it was very complicated,
I think it could create a version of the RSP
that doesn't protect against the risks,
but also is very onerous.
And then people could say,
oh man, all this regulation stuff,
all this catastrophic, it's all nonsense.
It's all a pain.
So I'm not against it.
You just have to do it delicately and in the right
order. But we build AI into the race between the superpowers, right? We build it into the weapons,
the cars, the medical research, into everything. How can you regulate when it's part of the
power balance in the world? Yeah. Yeah. So I think there's different questions, right? One question is, you know, how do you
regulate the use domestically? And, you know, there, I think there's a history of it, right?
You know, I think an analogy I would make is like, you know, I don't know, the way cars and
airplanes are regulated, right? I think that's been a reasonable story. I don't know that much
about Europe, but like in the US, I think that's been a reasonable story. Everyone understands
there's huge economic value. Everyone understands that these things are dangerous and they can kill
people. And, you know, everyone understands, yes, you have to do this kind of basic safety testing.
And, you know, that's evolved over years. I think that's generally, you know, gone reasonably well.
It hasn't been perfect. So I think for domestic regulation, that's what we should aim for. Things are moving
fast, but we should try to go through all the steps to get there. From an international point
of view, I mean, I think that's a completely different question. That's less about regulation
and more about there's an international race to the bottom. And how do you handle that race to the bottom? I mean,
I think it's an inherently difficult question, because on one hand, we don't want to just
recklessly build as fast as we can, particularly on the weapons side. On the other side, I think
looking as a citizen of the US, here I am in Norway, another democracy.
I'm very worried about if autocratic regimes were to lead in this technology. I think that's
very, very dangerous. And so- How far behind are they now? Or are they behind?
I mean, it's hard to say. I would say that with some of the restrictions that have
been put in place on, for example, you example, shipment of chips and equipment to Russia and China, I think if the US government plays its cards right, then those countries could be kept behind, I don't know, maybe two or three years.
Right.
That doesn't give us much margin.
Talking about democracies,
will AI impact the US election? Yes. I am concerned about that. Anthropic actually
just put out a post about what we're doing to counter election interference.
How could it interfere?
So if we look back at, say, the 2016 election, something that happened in that election was that there were large numbers of people who were being paid to produce content, and what they were being paid to do could now be done by AI. I think it's less that you could make content that people necessarily believe. It's more that you could kind of
flood the information ecosystem with a bunch of very low quality content that would make it hard
for people to believe things that really are true. Did that happen, for instance, in India or in the European election?
I mean, is it really happening this year?
We don't have particular evidence of the use of our models.
We've banned their use for electioneering, and we monitor use of the models.
Occasionally, we shut things down, but I don't think we've ever seen a super large-scale
operation. I can only speak for use of our models, but I don't think we've ever seen a super large-scale operation there.
Changing topics slightly, you mentioned that you thought we were going to see some extreme positive effects of AI in 25, 26? What are these extremely positive
things? Yeah. So again, if we go back to the analogy of like today's models are like undergraduates,
if we get to the point where the models are- I suspect you were a better undergrad than me,
though. I can kind of feel it across the table. I couldn't speak to it. But if, you know, let's say those models get to the point where, you know, they're kind of, you know, graduate level or strong professional level.
Think of biology and drug discovery.
Think of a model that is as strong as, you know, a Nobel Prize winning scientist or, you know, the head of drug discovery at a major pharmaceutical company. I look at all the things that have been invented. You know,
if I look back at biology, you know, CRISPR, the ability to like edit genes. If I look at,
you know, CAR T therapies, which have cured certain kinds of cancers, there's probably
dozens of discoveries like that lying around. And if we had a million
copies of an AI system that are as knowledgeable and as creative about the field as all of those
scientists that invented those things, then I think the rate of those discoveries could really
proliferate. And some of our really, really longstanding diseases could be addressed or
even cured. Now, I don't think all of that will come to fruition in 2025, 2026. At most, I think
that the caliber of AI that's capable of starting the process of addressing all those things
could be ready then. It's another question of like applying it all,
putting it through the regulatory system. Sure. But what can you do to productivity in society?
Yeah. You know, I think of, again, virtual assistants, like, you know, like a chief of
staff for everyone, right? I have a chief of staff, but not everyone has a chief of staff.
You know, could everyone have a chief of staff who helps them just deal with everything
that lands on their desk, everything that lands on their plate? So if everybody had that kind of
thing, what would you do? Could you put a number on productivity gain? I'm not an economist. I
couldn't tell you X percent. But if we look at kind of the exponential, right, if we look at like,
you know, revenues for AI companies, like, it seems like they've been growing roughly 10X a year.
And so you can imagine getting to the hundreds of billions in, you know, two to three years, and even to the trillions per year, which no company has reached. But I'm saying that's revenue for the company.
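As a rough back-of-the-envelope on what "roughly 10x a year" implies, the sketch below uses an assumed $1 billion starting point purely for illustration; no starting figure is given in the conversation.

```python
# Rough illustration of "roughly 10x a year" revenue growth.
base_revenue = 1e9  # assumed current annual AI revenue, in dollars (illustrative only)
for years_out in range(1, 4):
    projected = base_revenue * 10 ** years_out
    print(f"{years_out} year(s) out: ~${projected / 1e9:,.0f}B per year")
# 2 years out: ~$100B (hundreds of billions within two to three years)
# 3 years out: ~$1,000B, i.e. on the order of a trillion per year
```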
Revenue for the company, right. But what about productivity for society? Productivity in society, right? So that depends on how much this is replacing something that was
already being done versus doing new things. I think with things like biology, we're probably
going to be doing new things. So, I don't know, let's say you extend people's productive ability to work by 10 years, right?
That could be, you know, one-sixth of the whole economy.
Do you think that's a realistic target?
I mean, again, like, I know some biology.
I know something about how the AI models are going to happen.
I wouldn't be able to tell you exactly what would happen, but like, I can tell a story where it's possible. So 15%, and when will we, so when could we have
added the equivalent of 10 years to our life? I mean, what's the timeframe?
Again, like, you know, this involves so many unknowns, right? If I try and give an exact
number, it's just going to sound like hype. But a thing I could imagine is, I don't know,
two to three years from now, we have AI systems that are capable of making that kind of discovery.
Five years from now, those discoveries are actually being made. And five years after that,
it's all gone through the regulatory apparatus. So we're talking about a little over a decade.
But really, I'm just pulling things out of my hat here.
Like, I don't know that much about drug discovery.
I don't know that much about biology.
And frankly, although I invented AI scaling, I don't know that much about that either.
I can't predict it.
I think you know more about these things than most of us.
And yet, it is also hard to predict.
Absolutely.
Have you thought about what it could do to inflation?
Yeah.
So again, I'm not an economist.
If we look at inflation, I mean, again, using my limited economic reasoning, I think if we had very large, real productivity
gains, that would tend to be deflationary rather than inflationary, right? Absolutely.
Like you would be able to do more with less, the dollar would go further. So directionally,
at least, that suggests disinflation. Totally. But what kind of magnitude?
What kind of magnitude? I mean, that you are more the expert on than I am.
Maybe I should ask you to predict that.
How do you work with the hyperscalers?
Like, you know, some of your shareholders,
like Google and Amazon.
Yeah, yeah.
So, you know, I think, you know.
I'm sorry, just to get it straight.
These are called hyperscalers because why?
I actually don't know the reason for the name,
but, you know, they're hyper cap companies in terms
of their valuation, but also they make very large AI data centers. I assume it's the second one. Absolutely. How do you work with them?
the second one. Absolutely. How do you work with them?
So I would say that the relationship with these companies makes sense in the sense that we have
complementary inputs, right? That they provide
the chips in the cloud, and then we provide the model. And then that model is something that,
again, can be sold to customers on the cloud. So there's kind of a layered cake where we provide
some layers and they provide the other layers. So these partnerships make sense on multiple grounds.
You know, at the same time, we've always been very careful, right?
We have our own kind of values as a company, our own way of doing things. And so
we try to stay as independent as possible. And one of the things we've done is, of course,
we have relationships with multiple of these cloud providers, right? We work with both
Google and Amazon. And that has allowed us some flexibility in our ability to make sure that
there isn't too much exclusivity and that we're kind of free to deploy our models on multiple
surfaces. The fact that these companies are becoming so incredibly powerful, what kind of systemic risk does that pose?
Yeah. I mean, you know, I would say that, and this is maybe broader than AI. It maybe relates to just kind of the era that we're living in. There are certain eras in history where, you know,
there's a powerful technology or there's an economic force that kind of tends to concentrate resources.
I think probably the same thing happened in the 19th century.
And so I think it actually is important to make sure that the benefits are shared by all.
So one thing that's often on my mind is there's been, for example, very little penetration of AI and language models in some parts of the developing world, right, in like sub-Saharan Africa.
And so how do we bring these models to those areas?
How do we even help with challenges in those areas like health or education? So I definitely agree we're living in an era of more concentrated wealth, and that's an area of concern and an area that we should do what we can to find countervailing forces.
But what's the risk in that these companies are now becoming more powerful than countries and governments?
Yeah. I mean, this is kind of what I said in terms of regulation. I think AI is a very powerful technology, and our governments, our democratic governments, do need to step in and set some basic rules of the road, right? It needs to be done in the right order. It can't be stifling, but I think it does need to be done. Because, as you said, we're getting to a point where the amount of concentration of power can be greater than that of national governments.
And, you know, we don't want that to happen. At the end of the day, you know, all the people of
the country and all entities, including companies
that work in it, they ultimately have to be accountable to democratic processes, right?
There's no other way. Will AI increase or decrease the difference between rich and poor countries?
That, I think, depends on what we choose to do with it, right?
The way you look at the path forward just now.
Yeah.
So, you know, I would say that we are looking for ways for it not to make, you know...
Sure, but is that happening?
I would, I mean, it's too early to say with like how, you know, with how the technology is being deployed.
I would definitely say I do see something related to it that's a little worrying to me and that we're trying to counter, which is that if you look at the natural applications of the technology,
the, you know, the most eager customers that come to us, often, I think because we're a Silicon Valley company, are other kind of technologically forward Silicon
Valley companies that kind of also use the technology. And so I think there's this danger
of what you might call a kind of closed loop, where it's like an AI company, you know, supplies an AI legal
company, which supplies an AI productivity company, which supplies, you know, some other
company in Silicon Valley. And, you know, is it all a closed ecosystem where it's-
And it's all being used by the most highly educated people.
Exactly. And so how do we break out of that loop? And so we thought about a number of ways to break out of that loop.
One of the reasons I talk about biology and health is that I think biology and health
can be used to help us to break out of that loop.
Innovations in health, assuming we distribute them well, can apply to everyone.
I think things like education can help here.
Another area that I'm very excited about is use of AI for provision of everyday government services.
I don't know what the names of these services are in Norway.
In the US, every time you interact with the DMV, the IRS, various social services, people
almost always have a bad experience.
And it drives cynicism about the role of government. And I would love it if we can
modernize government services that everyone uses so that they can actually deliver what people
across the world need. I have to say that I think in this country, we are fortunate in that we are
not so many people and we are heavily digitalized. You are probably much better than we are at this.
I'm reacting to my experience in the United States, which I think could be better.
Yeah. So net-net, what do you think? Will, in years' time, the gap between rich and poor be bigger or smaller?
I just have to say, like, if we handle this the right way-
Yeah, no, I hear what you say, but I mean, what is it-
If we handle it the right way, we can narrow the gap.
I hear what you say.
But what do you think?
What do you think will happen?
I don't know what I think will happen.
I know that if we are not extremely thoughtful about this,
if we're not extremely deliberate about it,
then yes,
it will increase the gap. Okay. Who will make the most money on AI? Will it be the chip
manufacturers or will it be you guys or the scalers or all the consumers or companies?
My boring answer is that I think it's going to be distributed among all of them and that the pie
is going to be so large that in some ways it may not even matter.
Like certainly right now, the chip companies are making the most money. I think that's because
training of models comes before deployment of models comes before revenue. So I think the way
I think about it is the valuation of the chip companies is a leading indicator. The valuation of the AI companies is maybe a present indicator.
And the valuation of lots of things downstream is a lagging indicator.
But the wave is going to reach everyone.
So when you look at the market cap of NVIDIA, for instance, that's an indicator.
I mean, what do you multiply that by to find the potential impact of AI?
Yeah, I mean, you know, obviously I can't give stock advice on podcasts about chips.
So that's $3 trillion, right?
Yeah, but $3 trillion, so why is that?
Which is nearly twice the size of this fund, which is the largest sovereign wealth fund in the world.
Yes. If I think about that, again, speaking very abstractly and conceptually,
what's that driven by? Probably that's driven by anticipated demand. People are building very
large AI clusters. Those clusters involve lots of revenue for NVIDIA. Presumably, companies like us are
paying for those clusters because they think that the models they build with them will generate lots
of revenue, but that revenue is not present yet. And so what we're seeing so far is just,
man, people want to buy a lot of chips. And of course, it's possible. It's consistent with the
whole picture that all of this will be a bust. The models don't turn out to be that powerful, like companies like Anthropic
and the other companies in the space
don't do as well as we expected
because the models don't keep getting better.
That always could happen.
That's not my bet.
That's not what I think is going to happen.
What I think is going to happen
is that these models are going to produce
a great deal of revenue.
And then there's going to be even more demand for chips.
NVIDIA's value will go up.
The AI company's value will go up.
All these downstream companies, you know, that's the bullish scenario that I'm betting on by leading this company.
But I'm not sure.
It could go the other way.
I don't think anyone knows.
Where is the biggest constraint just now?
I mean, is it in chips, talent, algorithms, electricity?
I would say a big bottleneck we're dealing with is data. But as I've said elsewhere,
we and other companies are working very hard on synthetic data. And I think that bottleneck's
going to be lifted. So data, just to get it straight, that's just information you feed
into your models to train them, right?
Yeah, basically information that's fed into the models. But we're getting increasingly good
at synthesizing the data. Tell me, what is synthetic data?
So synthetic data, the example I like to give is seven years ago, DeepMind, as part of Google,
produced the AlphaGo model, which was able to beat the world champion
in Go. And there was a version of it called AlphaGo Zero that was
not trained on any humans playing Go. All it did was the model played Go against itself for a long
time, basically forever. And so basically with just the little tiny rules of Go and the
models playing against each other, pushing against each other using that rule, they were able to get
better and better to the level where they were better than any human. And so you can think of
those models as having been trained on synthetic data that are created by other models with the
help of this kind of logical
structure of the rules of Go. And so I think there are things analogous to that that can be done for language models.
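A minimal sketch of the self-play idea behind that kind of synthetic data: two copies of the same policy play a rules-defined game against each other, and the recorded positions and outcomes become training data that no human produced. Tic-tac-toe and a random policy stand in here for Go and a learned policy plus search; everything in the snippet is an illustrative assumption, not DeepMind's method.

```python
import random
from typing import List, Tuple

# Self-play on a rules-only game: no human games are used; the "synthetic data"
# is whatever the two copies of the policy generate against each other.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]


def winner(board: List[str]) -> str:
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else ""


def random_policy(board: List[str]) -> int:
    """Placeholder policy; a real system would use a learned model plus search."""
    return random.choice([i for i, cell in enumerate(board) if cell == " "])


def self_play_game() -> List[Tuple[str, int, str]]:
    board, player, history = [" "] * 9, "X", []
    while not winner(board):
        move = random_policy(board)
        history.append(("".join(board), move))
        board[move] = player
        player = "O" if player == "X" else "X"
    outcome = winner(board)
    # Label every recorded position with the final outcome: synthetic supervision.
    return [(state, move, outcome) for state, move in history]


dataset = [example for _ in range(100) for example in self_play_game()]
print(len(dataset), "synthetic (position, move, outcome) examples, no human games used")
```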
How do you think AI will affect geopolitics?
Yeah. I think that's a big one. My view is that, again, if we get to the level of AI systems that are
better than the best professionals at a wide range of tasks, then tasks like military and
intelligence are going to be among those tasks. And we shouldn't be naive. Everyone is going to
try to deploy those. I think we should try to create
cooperation and restraints where we can, but that, you know, in many cases that won't be possible.
And when it isn't possible, you know, I'm on the side of democracies in the free world.
I want to make sure that the future is democratic, that as much as possible of the world is democratic
and that democracies have a lead and an advantage on the world stage. The idea of powerful AI plus
autocracies terrifies me and I don't want it to happen. Should each country have its own
language model? Yeah. Should Norway build a language model?
You know, it...
Five million people.
It kind of really depends on what you're aiming to do, right?
You know, it may make sense from a national perspective. One thing I find myself imagining is some kind of democratic coalition or cooperation in which democratic countries, you know, work together to provide for their mutual security, to protect each other, to
protect the integrity of their democratic processes. Maybe it makes sense for them all to pool their resources and make a very small number of very large language models. But then
there may also be value in decentralization. I don't have a strong opinion on which of those
is better.
Is it a national security issue that the US controls AI? Should we, should Europe be worried about this?
Yeah.
I mean, again, you know, I would go to, you know, each country has to kind of worry about its own security, even separately from its allies.
You know, I think that's more of a question for kind of individual governments.
I mean, you know, I would think of it, probably this is a provocative analogy, but a little
like nuclear weapons, right?
Some countries, even though they're allies, feel the need to have their own nuclear weapons,
for example, France.
Other countries say, no, we trust that we're being protected by the US and the UK and France.
I think it may be somewhat similar with these more powerful models. And I think it's less important how many of them exist within the democratic world than that the democratic world is in a strong position relative to autocracies.
You talk about cooperation and partners and so on.
Do you guys in AI actually like each other?
Do we in AI actually like each other?
I mean, we've done a number of
collaborations. So I think very early on when I was at OpenAI, I drove the original RL from human feedback paper, which was considered safety work. And this ended up being a collaboration between DeepMind and OpenAI. And we've worked together, you know, in organizations like the Frontier Model
Forum to collaborate with each other. That said, I mean, you know, I'll be honest, I don't think
every company in this space takes issues of safety and responsibility equally seriously. But, you know, instead of pointing fingers and saying... But is that the kind
of things that make you
not being so keen on other companies?
Is it their view on safety and security?
It's one of the few industries where you
even consider having a
cage fight between...
Yeah, so I'm not a fan
of the cage fights. I'm not
a fan of the feuds.
I'm sure you do well, but you're not.
Even though I suspect it won't be your strength.
No, fighting in cage fights is not my forte.
But the thing I was going to say is, look, instead of pointing fingers, instead of having feuds and saying this guy is the bad guy, this guy is the good guy, let's think systemically, right?
Going back to like the race to the top idea,
right? The idea that it's like, let's set standards. Instead of pointing fingers at
people doing something bad, let's do something good. And then a lot of the time, people just
follow along. We invent an interpretability idea. Just a few weeks ago, we put out, I was talking about it a few minutes ago,
this innovation in interpretability, being able to see inside the model. A few weeks later,
we got similar things from OpenAI. We've seen internally other companies increase their
prioritization on it. So a lot of the time, you can just do something good and you can inspire
others to do something good. Now, if you've done a lot of that,
if you've set these standards, if they're industry standards, and then there's someone who's not
complying with them, there's something that's really wrong, then you can talk about pointing
fingers. Yeah. Let's spend a few minutes talking about culture. How many people are you in the firm? We are about 600 as of a couple of weeks ago. I've been on vacation, so it may be even higher now.
What's the culture like?
Yeah, I would describe a few elements of the culture. One element of the culture is what I
describe as do the stupid simple thing that works. A number of folks at
Anthropic are ex-physicists because I myself had that background and a couple of my co-founders
had that background, including one person who was actually a professor of physics before he
co-founded Anthropic. And physicists look for simple explanations of things.
So one of the elements of our culture is don't do something overcomplicated, right?
A lot of academic ML research tends to overcomplicate things.
We go for the simplest thing possible that works.
We have the same view in engineering.
And again, we have the same view even on things like safety and ethics, on interpretability,
on our constitutional AI methods.
They're all incredibly simple ideas that we just try and push as far as we can. Even this race to
the top thing, right? You can say it in a sentence or two, right? It's not complicated. You don't
need a hundred page paper to talk about it. It's a simple strategy. Do good things and try and
encourage others to follow. When you hire 600
people in three years, how can you be confident that they are good? Yeah. So I think candidly,
one challenge of the AI industry is how fast everything moves. So in a normal startup, things, you know, might grow 1.5x or 2x a year.
We recognize that in this field, things move so fast that faster growth is required in order to
meet the needs of the market. And that ends up entailing faster growth than usual. I was actually
worried about this at the beginning of the company. I said, oh my God, we have this dilemma. How do we deal with it? I have generally been positively surprised at how
well we've been able to handle it so far, right? How well we've been able to scale hiring processes, how much I feel everyone is both technically talented and knowledgeable, and just generally kind and compassionate people, which I think is equally important as hiring technically talented people.
So what do you look for? Here I am sitting, and you're interviewing me now for a position. What do you look for?
Yeah. I mean, again, we look for willingness to do the simple thing that works. We look for talent. Generally, we don't necessarily look at years of experience in the AI field; a number of folks we hire are physicists or other natural scientists who, you know, have maybe only been doing AI for a month or so, right? Have only been doing a project on their own. And so we look for ability to learn. We look for curiosity, the ability to quickly get to the heart of the matter. And then in terms of values, we just look for thinking in terms of the public benefit.
It's not that we have particular opinions on what the right policies for Anthropic are or what the right things to do in the world are. It's more that we want to carry a spirit as we scale the company. And it gets
increasingly hard as the company gets bigger and bigger. Because how do you find all these people?
But we want people who carry some amount of public spirit, who understand on one hand that Anthropic needs to be a commercial entity, to be close enough to the center of this to have an impact.
But when you hire-
But to understand that in the long run, we're aiming for this public benefit, this societal impact.
When you hire, do you feel you have a limited amount of money?
I think compute is almost all of our expenses.
I won't give an exact number, but I think it can be publicly backed out that it's more than 80%.
And so-
So salaries don't matter, really?
In terms of paying people, we think more about what is fair, right?
We want to do something that's fair, that meets the market, that treats people well.
It's less a consideration of how much money we're spending, because compute is the biggest expenditure.
It's more how can we create a place where everyone feels they're treated
fairly and people who do equal work get equal pay? Now, you work with all these brilliant
minds and kind of geniuses and perhaps even some prima donnas. What's the best way to
manage them or lead them? Yeah, I think- I guess they can't be managed, so you need to lead, right?
One of the most important principles is just the thing you said, which is letting creativity
happen. If things are too top-down, then it's hard for people to be fully creative.
If you look at a lot of the big innovations in the ML field over the last 10
years, like the invention of the transformer, no one at Google ordered, "Oh, here's the project, here's what we're trying to produce." It was just kind of a decentralized effort. At the same time, you have to make a product, and everyone has to work together to make a single thing.
And I think that creative tension, between needing new ideas and needing everyone to kind of contribute to one thing, is where the magic is: finding the right combination so that you can get the best of both worlds.
You run this company together with your sister, right?
Yes, yes.
How is that?
We both worked at OpenAI and then we both founded Anthropic together.
It's really great.
So the real division of labor is she does most of the things you would describe as running
the company day to day, managing people, figuring out the structure of the company, making sure we have a CFO, a chief product
officer, making sure comp is set up in a reasonable way, making sure the culture is good.
I think more in terms of ideas and strategy. Every couple weeks, I'll give a talk to the company,
basically a vision talk, where I say, here's some
things we're thinking about strategically. These aren't decisions. This is kind of a picture of
what leadership is thinking about. What do we think is going to be big in the next year? Where
do we think things are going, both on the commercial side, the research side, the public
benefit side? Is she younger or older than you? She is four years younger than me. Is she cleverer than you?
We are both extremely skilled in different ways. What did your parents do?
So my father is deceased. He was previously a craftsman. My mother's retired. She was a project
manager for public libraries.
How were you raised?
How was I raised?
You know, there really was, I think, a big focus on social responsibility and helping the world.
Like that was, I think, a big thing for my parents. They really thought about: how do you make things better? How do people who have been born into a fortunate position live up to their responsibilities to those who are less fortunate? And you can kind of see that in the public benefit orientation of
the company. So like the 14-year-old Dario, what was he up to?
I mean, I was really into, you know, math and science. Like, you know, I did like math
competitions and all of that. But, you know, I was just also thinking about like, you know,
how could I apply those skills to, you know, invent something that would help people?
Did you have any friends?
Did I have any friends? You know, less than I would have liked. I was a little bit introverted, but, you know, there were people who I knew back then who I still know now.
So is Anthropic like the revenge of the nerds?
Uh, you know, I wouldn't really put it...
And I think that's a good thing. I love that kind of stuff.
Yeah, I wouldn't really put it in those terms
if only because, you know,
I'm kind of reluctant to, like, set different groups against each other.
You know, different kinds of people are good at different things.
You know, like we have a whole sales team.
Like they're good at a whole different set of things
than I am.
Like, you know, of course I'm the CEO,
so I have to learn how to do some sales as well. But these are just very different skills. And one of the things you come to realize in a company is the value of a very wide range of skills, including ones that you have no ability in yourself, right?
So what drives you now?
I think we're in a very special time in kind of like the AI world.
Like these things I've said about how crazy things could be in 2025 or 2026.
I think it's important to get that right.
And running Anthropic, you know, that's
only one small piece of that, right? There are other companies, you know, some of them are bigger
or better known than we are. And so on one hand, you know, we have only one small part to play.
But, you know, I think given the importance of what's happening for the economy, for humanity, I think we have an important opportunity to make sure that these things go well.
There's a lot of variance in how things could go.
And I think we have the ability to affect that.
Of course, day to day, we have to grow the business. We have
to hire people. We have to sell products. And I think that's important. And it's important to do
that well so that the company is relevant. But I think in the long run, the thing that drives me,
or that at least I hope drives me, is the desire to capture some of that variance and push things
in a good direction. How do you relax? How do I relax? So, you know, I'm in Norway now.
Yeah, but this is not relaxing. This is not relaxing, but I came here from my vacation
in Italy. So, you know, every year I take a few weeks off to kind of relax and think about the
deeper concepts. I go swimming every day. Actually, my sister and I still play video games. We've been doing this since high school. And, you know, now I'm over 40 and she's, you know...
What kind of games do you play?
Well, we recently got the new Final Fantasy game. We played Final Fantasy in high school; it was, you know, a game made in the nineties. And they recently made a remake of it. So we recently started playing the new version, with all the fancy graphics from, you know, 20 years of progress in, well, actually, GPUs.
And we were noticing it ourselves.
It was like, wow, we used to do this when we were in high school.
Now we're running this company.
Wow.
Well, I'm glad to hear that some people never grow up.
I don't think we've grown up in certain ways. Hopefully, we have in others.
Talking of which, we always finish off these podcasts with a question. What kind of advice do you have to young people?
Yeah. I would say gain familiarity with these new AI technologies. I'm not going to offer some kind of bromide about how I know exactly which jobs are going to be big and which aren't. I think we don't know that. And also, we don't know that AI won't touch every area. But I do think that there's going to be a role for humans in kind of using these technologies and working alongside them, at the very least understanding them and the public debate that's going to
come from them. I guess the other thing I would say, and this is already important advice,
but I think it's going to become even more important, is just the faculty of skepticism about information. As AI generates more and more
information and content, being discerning about that information is going to become more and more
important and more and more necessary. I hope that we'll have AI systems that help us sift through
everything, that help us understand the world so that we're kind of less vulnerable to these kinds of attacks.
But at the end of the day, it has to come from you. You have to have some basic desire,
some basic curiosity, some basic discernment. And so I think developing those is important.
Well, that's really great advice. Big thanks, this has been a true blast, and I wish you all the best. Get back to Italy, get some more rest, and do some more deep conceptual thinking.
Yes, thank you so much for having me on the podcast.
Thank you.