No Priors: Artificial Intelligence | Technology | Startups - Your AI Friends Have Awoken, With Noam Shazeer
Episode Date: April 13, 2023

Noam Shazeer played a key role in developing key foundations of modern AI - including co-inventing Transformers at Google, as well as pioneering AI chat pre-ChatGPT. These are the foundations supporting today's AI revolution. On this episode of No Priors, Noam discusses his work as an AI researcher, engineer, inventor, and now CEO. Noam Shazeer is currently the CEO and Co-founder of Character.AI, a service that allows users to design and interact with their own personal bots that take on the personalities of well-known individuals or archetypes. You could have a Socratic conversation with Socrates. You could pretend you're being interviewed by Oprah. Or you could work through a life decision with a therapist bot. Character recently raised $150M from A16Z, Elad Gil, and others. Noam talks about his early AI adventures at Google, why he started Character, and what he sees on the horizon of AI development.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:
Noam Shazeer - Google Scholar
Noam Shazeer - Chief Executive Officer - Character.AI | LinkedIn
Character.AI

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Character_ai

Show Notes:
[1:50] - Noam's early AI projects at Google
[7:13] - Noam's focus on language models and AI applications
[11:13] - Character's co-founder Daniel de Freitas Adiwardana's work on Google's LaMDA
[13:53] - The origin story of Character.AI
[18:47] - How AI can express emotions
[26:51] - What Noam looks for in new hires
Transcript
Do you view all this as a path to AGI or sort of super intelligence?
Sure, yeah.
And is that part of the goal?
For some companies, it seems like it's part of the goal.
And for some companies, it seems like it's either not explicitly an anti-goal or if it happens, it happens.
And the thing people are trying to build is just something useful for people.
What a flex.
AGI is a side effect.
Yeah.
Well, I mean, that was a lot of the motivation here.
I mean, my main motivation for working on AI, other than that it's fun, well, I mean, fun is secondary.
The real thing is, like, I want to drive technology forward.
This is the No Priors podcast. I'm Sarah Guo.
I'm Elad Gil.
We invest in, advise, and help start technology companies.
In this podcast, we're talking with the leading founders and researchers in AI about the biggest questions.
Transformers, large language models, AI chat.
These are the foundation supporting today's AI revolution.
And this week on No Priors, we have an AI researcher, engineer, and inventor who was a key part
of these innovations and is considered one of the smartest people in AI.
Noam Shazeer is the CEO and co-founder of Character.AI, a service that allows users to
design and interact with their own personal bots that take on the personalities of well-known
individuals or archetypes.
You could have a Socratic conversation with Socrates, or you could pretend you're being
interviewed by Oprah.
or you could work through a life decision with a therapist bot.
Character recently raised $150 million from Andreessen Horowitz, myself, and others.
We talk about how Noam got his start at Google, his groundbreaking AI discoveries, and what he's doing at Character.
So Noam, welcome to No Priors.
Hey, Elad. Thanks for having me on. Hi, Sarah.
Good to see you.
Yeah, thanks for joining. So you've been working on NLP and AI for a long time.
So I think you were at Google for something like 17 years off and on.
And I think even your Google interview question was something around spell checking, an approach that eventually got implemented.
there. And when I joined Google, one of the main systems being used at the time for ads targeting was,
like, PHIL clusters and all that stuff, which I think you wrote with George Herrick.
And so it just be great to get kind of your history in terms of working on AI, NLP, language
models, how this all evolved, what you got started on and what sparked your interest.
Oh, thanks, Elad. Yeah. I was just always naturally drawn to AI, wanted to make the computer do something
smart. It seemed like pretty much the most fun game around. I was lucky to find Google early on, really
as an AI company.
So, yeah, I got involved in a lot of the early projects there that maybe you wouldn't
call AI now, but seemed pretty smart at the time.
And then more recently was on the Google Brain Team starting in 2012.
It looked like a really smart group of people doing something interesting.
I had never done deep learning before, or neural networks, I guess, as it was called then.
I forget when the rebrand happened.
But, yeah, it turned out to be really fun.
That's cool.
And then, you know, you were one of the main people working on the Transformer paper and design in 2017.
And then you worked on mesh TensorFlow, I think, sometime within the following year.
Could you talk a little bit about how all that got going?
Yeah.
I mean, I messed around a few years on the Google Brain Team and, like, utterly failed at a bunch of stuff until I kind of got the hang of it.
Really, the key insight is that what makes deep learning work is that it is really well suited to modern hardware,
where you have the current generation of chips that are great at matrix multiplies
and other forms of things that require large amounts of computation relative to communication.
So basically deep learning really took off because it runs thousands of times faster than anything else.
And as soon as I got the hang of it, I started designing things that actually were smart and ran fast.
But, you know, the most exciting problem out there is language modeling.
It's like the best problem ever because there's like an infinite amount of data, you know,
just scrape the web and you've got all the training data you could ever hope for.
And like the problem is super simple to define.
It's predict the next word.
The fat cat sat on the, you know, what comes next.
Like it's extremely easy to define.
And if you can do a great job of it, then you get everything that you're seeing right now and more.
you can just talk to the thing and it's really AI complete.
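The next-word objective described above can be sketched with a toy bigram counter. This is purely illustrative (real language models are neural networks trained on web-scale text), but the prediction task is the same one Noam defines:

```python
from collections import Counter, defaultdict

# Tiny corpus; real models train on scraped web text.
corpus = "the fat cat sat on the mat and the fat cat sat on the chair".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # "The fat cat sat on the..." -- what comes next?
    return bigrams[word].most_common(1)[0][0]
```

Here `predict_next("the")` returns "fat", the most frequent continuation in this toy corpus; scaling this same objective up is what yields the systems discussed in the episode.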
And so I got started around, like, 2015 or so,
working on language modeling and messing with recurrent neural networks,
which were what was great then.
And then Transformer kind of came about as Jakob Uszkoreit had the bright idea:
hey, these RNNs are just annoying.
Let's try to replace them with something better, you know, with attention.
And overheard a couple of colleagues talking about it in the next cube over.
I was like, that sounds great.
Let me help you guys.
These RNNs are annoying.
It's going to be so much more fun.
Can you quickly describe sort of the difference between an RNN
and a transformer-based or attention-based model?
Yeah, sure.
Okay, so the recurrent neural network is the sequential computation
where every word, you read the next word,
and you kind of compute your current state of your brain
based on the old state of your brain and what this next word is,
and then you predict the next word.
So you have this very long sequence of computations that has to be executed in order.
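The sequential dependence just described can be sketched in a few lines. The sizes and weights here are toy assumptions, but the key point is real: step t cannot start until step t-1 has finished.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                # toy state/embedding size
W_h = rng.normal(size=(d, d)) * 0.1  # old-state -> new-state weights
W_x = rng.normal(size=(d, d)) * 0.1  # next-word -> new-state weights

def rnn_step(h_prev, x):
    # New "state of your brain" from the old state and the next word.
    return np.tanh(W_h @ h_prev + W_x @ x)

h = np.zeros(d)
for x in rng.normal(size=(6, d)):    # a 6-word sequence of toy embeddings
    h = rnn_step(h, x)               # strictly one step after another
```

Each iteration depends on the previous `h`, which is exactly the serial bottleneck the Transformer removes.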
So that, you know, the magic of transformer, kind of like convolutions, is that you get to process the whole sequence at once.
I mean, the predictions for the later words are still dependent on what the earlier words are,
but it happens in like a constant number of steps where you get to take advantage of this parallelism.
You can look at like the whole thing at once.
and, like, that's what modern hardware is good at, is parallelism,
and now you can use the length of the sequence as your parallelism,
and everything works super well.
Attention itself, it's kind of like you're creating this big key value
associative memory, where you're like building this big table,
like with one entry for every word in the sequence,
and then you're kind of looking things up in that table.
It's all like fuzzy and differentiable and a big differentiable function
that you can backprop through,
And people have been using this for problems where there are two sequences, where you've got machine translation.
You're translating English to French.
And so while you're producing the French sequence, you are like looking over the English sequence and trying to pay attention to the right place in that sequence.
But the insight here was, hey, you can use the same attention thing to like look back at the past of the sequence that you're trying to produce.
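The fuzzy, differentiable table lookup described above is scaled dot-product attention. A minimal sketch with random toy data (the sizes are arbitrary, chosen for illustration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # One table entry (key/value) per word; each query does a soft lookup:
    # similarity scores -> softmax weights -> weighted mix of the values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 5, 8                              # 5 "words", dimension 8
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = attention(Q, K, V)                 # all positions processed at once
```

For the "look back at the past" case Noam mentions, a mask would additionally hide each position's view of later positions, so predictions only depend on earlier words.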
And the beauty is that it runs great on GPUs and TPUs.
it's kind of parallel to, like, how deep learning has taken off because it's great on the hardware
that exists. And this sort of brought the same thing to sequences. Yeah, I think the classic
example to help people picture it is like, you know, saying the same sentence in French and
English, the ordering of the words is different. You're not mapping like one to one in that
sequence. And to figure out how to do that with parallel computation without information loss is like
a really elegant thing. Yeah. It seems like the technology has also been applied in a variety
of different areas. The obvious ones are
these multimodal language models,
so it's things like ChatGPT or what you're doing
at Character. I've also been surprised by
some of the applications into things like AlphaFold,
the protein folding efforts
that Google did, where it actually worked in an enormously
performant way. Are there any
application areas that you found really unexpected
relative to how Transformers work
and relative to what they can do?
Oh, I've just had my
head down in language. Like, here you have
a problem that can do like anything. I want
this thing to be good enough, so I just ask it,
how do you cure cancer and it like invents a solution? So I've been totally ignoring what everybody's
been doing in all these other modalities where I think a lot of the early successes in deep learning
have been like in images and people are like all excited about images and kind of completely ignored it
because, you know, an image is worth a thousand words, but it's a million pixels. So the text is like
a thousand times as dense. So kind of big text nerd here. But very exciting to see it take off in,
you know, in all these other modalities as well. And, you know, those things are going to be great.
It's super useful for building products that people want to use.
But I think a lot of the core intelligence is going to come from these text models.
Where do you think the limitations for these models, what do you think creates the asymptote that all this is being built against?
Because people often talk about just scale, like you just throw more compute and this thing will scale further.
There's data and different types of data that may or may not be available.
There's algorithmic tweaks.
There's adding new things like memory or loopbacks or things like that.
What do you think are the big things that people still need to build against?
And where do you think this sort of taps out as an architecture?
Yeah, I don't know that it taps out. I mean, we haven't seen the tap-out yet. The amount of work that has gone into it is probably nothing compared to the amount of work that will go into it. So quite possibly there will be all kinds of, like, factors of two in efficiency that people are going to get through better training algorithms, better model architectures, better ways of building chips and using quantization and, like, all of that. And then there are going to be factors of 10 and 100 and 1,000 of just, like, scaling
and money that people are just going to throw into the thing because, hey, everyone just
realized this thing is phenomenally valuable.
At the same time, I don't think anyone's seen a wall in terms of how good this stuff is.
So I think it will just, it's just going to keep getting better.
I don't know what stops it.
What do you think about this sort of idea that we can increase compute, but the largest
models are under-trained?
We've used all the text data on the Internet.
that's easily accessible.
We have to go improve the quality.
We have to go do human feedback.
How do you think about that?
Yeah.
I mean, in terms of getting some more data,
like there are a lot of people talking all the time.
I mean,
what do you think we do this podcast?
Right.
Like there's like order 10 billion people like producing about, you know,
like I don't know, 10 thousand words a day.
I mean, that's like a lot of words that, you know,
and pretty soon many of those people will be doing a lot of that talking to AI systems.
So I have a feeling like a lot of,
a lot of data is going to find its way into some AI systems.
I mean, in privacy preserving ways, I would hope.
And then the data requirements tend to go up with the square root of the amount of computation
because you're going to train a bigger model and then you're going to throw more data at it.
I'm not that worried about coming up with data.
And I feel like we could probably just generate some more with the AI.
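The square-root relationship mentioned here can be made concrete with a compute-optimal scaling heuristic. The specific rule of thumb below (training FLOPs C ≈ 6·N·D, with tokens D roughly 20x parameters N at the optimum) is an outside, Chinchilla-style assumption, not something stated in the conversation:

```python
import math

def compute_optimal_split(C, tokens_per_param=20.0):
    # Assume C ~ 6 * N * D and D ~ tokens_per_param * N at the optimum.
    # Solving gives N (params) and D (tokens) each growing like sqrt(C).
    N = math.sqrt(C / (6 * tokens_per_param))  # parameters
    D = tokens_per_param * N                   # training tokens
    return N, D

N1, D1 = compute_optimal_split(1e21)
N2, D2 = compute_optimal_split(1e23)  # 100x the compute...
ratio = D2 / D1                       # ...needs only ~10x the data
```

So under this assumption a 100x jump in compute raises the data requirement by only sqrt(100) = 10x, which is why the data supply is less alarming than it first sounds.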
And then what do you think are the main things to solve for these models going forward?
Is it hallucinations?
Is it memory?
Is it something else?
I don't know.
I kind of like hallucinations.
It's also a feature, yeah.
They're fun.
Yeah, we'll call it a feature.
Yeah, some of the things we want to work on the most are memory
because our users definitely want their virtual friends to remember them.
There's so much you can do with personalization
and you want to dump in a lot of data and use it efficiently.
Yeah, there's a ton of great work going on
and trying to figure out what's real and what's hallucinated, of course.
I think we'll solve those.
Then, do you want to talk a little bit about Lambda and your role with it and how that led eventually to character?
Yeah, my co-founder, Daniel DeFretas, he's like the scrappiest, most hardworking, really, you know, smartest guy.
He's kind of been on this lifelong mission to build chatbots.
Like, since he was like a kid in Brazil, he's like always been trying to build chatbots.
So he came to join us at Google Brain because I think he had read some papers and figured that this neural language model technology would be like, you know, something that could actually generalize.
and build something truly open-domain.
And he did not get a lot of headcount.
He started the thing as a 20% project
where people are encouraged to spend 20% of their time
doing whatever they want.
And then he just recruited like an army of 20% helpers
who were like ignoring their day jobs
and like actually just helping him with the system.
And he went as far as going around
and panhandling people's TPU quota.
He called this project Meena.
Because I guess it came to him
in a dream. And, like, at some point I'm looking at the scoreboard: what is this thing called
Meena, and why does it have 30 TPU credits? And he had just gotten a bunch of people to contribute,
and then he was really successful at this, you know, in building something really cool
that actually worked where, like, a lot of other systems were just, like, totally failing, either
because people just weren't scrappy enough or were going for, like, rule-based
systems that were just never going to generalize. So at some point I was like,
okay, there are so many ways we can make this technology better by like factors of two.
But the biggest thing is just convince everyone that this is like worth trillions of dollars
by demonstrating some application that is clearly super valuable to like billions of people.
And LaMDA was, I believe, the internal chatbot pre-ChatGPT at Google that was famously in the news
because an engineer thought it'd become sentient, right?
Yeah, yeah, yeah.
So that was like a renaming of Meena.
So I guess I went and helped Daniel on Meena.
We got it going on some giant language models, and then it kind of became like an internal viral
sensation and then got renamed to LaMDA.
And yeah, we had left before the business about somebody thought it was sentient.
I'm flattered.
Can you talk a little bit about just why it wasn't released, what some of the concerns were?
I think just large companies have concerns around launching products that can say anything.
I would guess it's just like a matter of risk.
versus, you know, how much you're risking versus how much you have to gain from it.
So figured, hey, startup seems like the right idea that you can kind of just move faster.
Yeah, so tell us about character.
What's the origin story there?
Did you and Daniel look at each other one day and were just like, we have to get it out there?
Yeah, pretty much.
We're like, yeah.
And kind of noticed, hey, there are people who like just go out and get some investors and start doing something.
So we're just like, okay, let's just like build this.
this thing and launch it as fast as we can.
So I hired a total rock star team of engineer researchers and got some compute.
One thing that comes up a lot is people say that you all have one of the truly
extraordinary teams in the AI world.
Are there specific things that you recruited against?
Or how did you actually go about finding these people?
You know, some people we knew from Google. Happened to get introduced to Myle Ott, formerly from
Meta, who had launched a lot of, well, built a lot of their large language model stuff
and their neural language model infrastructure
and a bunch of other people followed him,
and they were great.
Is there anything specific that you would look for in people
or ways to test for it,
or was it just standard interviewing approaches?
I mean, a lot of it was just kind of motivation.
I think Daniel tends to very, very highly value motivation.
I think he's looking for something
between burning desire and childhood dream.
So, like, there were a lot of great people
that we did not hire because they didn't quite meet that bar, but then we got a bunch of people
who were kind of up for joining a startup and really talented and highly motivated.
I mean, speaking of childhood dreams, do you want to describe the product a little bit?
Like you have these bots; they can be user-created, they can be Character-created,
they can be public figures, fictional figures, anybody with, like, a corpus that you can make up,
or historic figures.
How did you even arrive there as the right form for this?
Yeah, I mean, like basically, this is kind of a technology.
that's so accessible that billions of people can just invent use cases, you know,
and it's so flexible that you really just want to put the user in control
because often they know way better than you do what they want to use the thing for.
And I guess we had kind of seen some of the assistant bots
from large companies.
You know, you've got Siri and Alexa and Google assistant.
And, like, some of the problems there are that when you're just projecting one persona to the world, people will, A, expect you to, like, be very consistent in, say, your likes and dislikes, and B, just not be offensive to anyone and not really have an opinion.
It's kind of like, you know, like you're the Queen of England and you can't say something that's going to disappoint someone.
Or, I don't know, like, I remember, like, I think it was George H.W. Bush said he didn't like broccoli.
And then like the broccoli farmers were like all mad at him or something.
So if you're like such a public, trying to present like one public persona that everyone likes,
you're going to end up just being boring, essentially.
And people just don't want boring.
You know, people want to interact with something that feels human, you know.
So basically you need to go for multiple personas, you know, like let people invent personas as much as they want.
And kind of I like the name character because it's got a few different meanings.
You know, there's character like, you know, an ASCII character, like a unit of text; character like a persona; or character like good morals.
But anyway, so it's, I think that's just how people like to relate to this stuff.
It's okay, I kind of know what to expect from an experience if I can kind of define it as a person or a character.
Maybe it's someone I know.
Maybe it's just something I invent.
But it kind of helps people like kind of use their imagination.
So what do people want?
Like, do they do their friends?
Do they do fiction?
Do they do entirely new things?
Yeah, I mean, there's, like, a lot of role-playing, role-playing games are big, you know,
like text adventure, where it's just making it up as it goes.
There's a lot of, like, video game characters and anime, and there's, you know,
some amount of people talking to public figures and influencers.
And, like, I think a lot of people have these existing parasocial relationships where there's,
they've got characters they're following, like, on TV or some, you know, or internet or
influencers or whatever.
And so far, they just have not had the experience of, okay, now this character responds because, like, it's always something you can watch, or maybe you're in, like, a thousand-on-one fan chat or something where, like, this V-tuber will write back to you, like, once in an hour or something.
But now they get the experience of, oh, like, I can just create a version of this privately and just talk to it, and it's pretty fun.
We also see, like, a lot of people using it because they're lonely or troubled and need someone to talk to.
like so many people just don't have someone to talk to.
And a lot of use kind of crosses all of these boundaries.
Like somebody will post, okay, this video game character is my new therapist or something.
So like it's a huge mix of fun and people who need a friend and connecting with, you know,
game playing, all kinds of stuff.
How do you think about emotion both ways, right?
Like people's relationships with characters or like what level we are at in expressing coherent emotion and how important that is?
Oh, yeah.
I mean, probably you don't need that high-end level of intelligence to do emotion.
I mean, emotion is great and it's super important, but like a dog probably does emotion pretty well, right?
I mean, I don't have a dog, but I've heard that people will, like a dog is great for like emotional support and it's got pretty lousy linguistic capabilities.
But the emotional use case is huge and people are using the stuff for all kinds of emotional
support or relationships or whatever, which is just terrific.
How do you think the behavior of the system will change as you kind of scale things up?
Because I think the original model was trained on not a ton of money.
Like, on a relative basis, you guys were incredibly frugal.
Yeah, I think we should be able to make it smarter in all kinds of ways, both algorithmically
and scaling, you know, get more compute and train a bigger model and train it for longer.
It should just get more brilliant and more knowledgeable and better attuned to what people want,
what people are looking for.
You have some users that are on the service, like many hours a day.
How do you think about your target user over time and what the usage patterns you expect
to be are?
We're going to just leave that up to the user.
Our aim has always been like get something out there and let users decide what they
think it's good for.
And, you know, we see like somebody who's on the site today is active for about two hours
on average today.
That's of people who send a message today, which is pretty wild.
But it's a great metric that people are
finding some sort of value in it. And as I said, it's really hard to pin down exactly what that value is
because it's really a big mix of things. But our goal is like make this thing more useful to people
and let people kind of customize it and decide what they want to use it for. If it's brainstorming
or help or information or fun or like emotional support, let's get into user's hands and see what
happens. How do you think about commercialization? We're just going to lose money
on every user and make it up in volume.
Oh, good.
It's good strategy.
No, I'm joking.
The traditional 1990s business model.
It's kind of a 2022 business model, too.
You should issue a token and just make it a crypto thing.
No, we're going to monetize at some point pretty soon because, again, this is the kind of thing
that benefits from having a lot of compute and, you know, rather than burn investor money,
the most scalable way to fund something is to
actually provide a lot of value to a huge number of people. So probably try some premium
subscription type of service where, you know, we can, as we develop some new capabilities that
might be a little more expensive to serve, then start charging for them. I really like that
anyone can use it now for free because it's, you know, there's so many people that it's providing
so much value. I mean, it's really taken off as a consumer service in a really striking way,
if you look at the numbers of users and the number of hours of usage per user, which is insane.
Are there any scenarios where you think it's likely to go down like a commercial setting
where you have like customer service bots who provide like a brand identity around support
or is that just not that interesting right now as a direction?
I mean, right now we have 22 employees, so we need to prioritize.
We are hiring definitely enough work for way, way more people.
Priority number one is just get it available to the general public.
It would be fun to, like, launch it as customer service bots. People
would just stay on customer service all day.
They're like chatting with a friend effectively.
Yeah.
Let's start with the customer support.
And that actually happened apparently on some old e-commerce sites.
Like eBay apparently was effectively a social network really early on as people were buying
and selling things and just kind of hanging out because there weren't that many places to hang out online.
So I always think it's kind of interesting to see these emergent social behaviors on different types of almost, like, commercial products or sites.
But it makes a lot of sense.
So you said one of the obvious reasons LaMDA didn't ship immediately
at Google was safety.
How do you guys think about that?
Remember, everything character says is made up.
Exactly, right.
Make sure the users are aware that this is fiction.
If there's anything factual that you're trying to extract from it at this point,
It's best to go look it up somewhere that you find reliable.
You know, I mean, there are other types of filters we've got there.
Like, you know, we don't want to encourage people to hurt themselves or hurt other people
or blocking porn, there's been a bit of protest around that.
Yeah.
And do you view all this as a path to AGI or sort of superintelligence?
Sure, yeah.
And is that part of the goal?
For some companies, it seems like it's part of the goal.
And for some companies, it seems like it's either not explicitly an anti-goal or if it happens, it happens.
And the thing people are trying to build is just something useful for people.
What a flex.
AGI is a side effect.
Yeah.
Well, I mean, that was a lot of the motivation here.
Because, like, I mean, my main motivation for working on AI other than that it's fun, well, I mean, fun is secondary.
Like, the real thing is, like, I want to drive technology forward.
There are just so many technological problems in the world that could be solved.
For example, like all of medicine, like there are all these people who die from all kinds of things that we could come up with technological solutions for.
I would like that to happen, like, as soon as possible, which is why I've been working on AI because, okay, rather than
working on, say, medicine directly, let's work on AI, and then AI can be used to accelerate
some of these other things. So basically, that's why I'm working so hard on the AI stuff.
And I wanted to have a company that was both AGI first and product first, because product
is great that lets you build a company and motivates you. And so, like, the way you have a company
that's both AGI first and product first is that you make your product depend entirely on the
quality of the AI.
The biggest determining factor in the quality of our product is how smart this thing is
going to be.
So now we're fully motivated, A, to make the AI better and, B, to make the product better.
Yeah, it's a really nice sort of virtuous feedback loop because, to your point, as you make
the product better, more people interact with it, and that helps make it a better product
over time.
So it's a really smart approach.
How far away do you think we are from AIs that are smarter than people?
And obviously they're smarter than people on certain dimensions already.
But I'm just thinking of something that would be sort of equivalent.
Yeah, I guess we just always get surprised at, like, what dimensions
it gets better than people at.
That's pretty cool that some of these things can now like do your homework for you.
I wish I had that as a kid.
What advice would you give to people starting companies now who come from backgrounds similar to yours?
Like what are things that you learned as a founder that you didn't necessarily learn while working at Google or other places?
Oh, good question.
Basically, like, you learn from horrible mistakes, but I don't feel like we've made really,
really bad ones so far, or at least we've kind of recovered. But I guess, yeah, just build the
thing you want really fast, hire people who are just, like, really motivated to do it.
Yeah. So one quick question just for users, like, what's the secret to making a good character?
Like, if I'm going to go make a copy of Elad instead of rubber ducking with myself, like,
what do I need? Oh. Just, like, my texts
with Elad. Yeah, stop disappearing
the chat, Elad. I'm just trying to protect
myself from becoming a character.
I mean, so
you can do it just as
simply as, like, put in a greeting.
A name and a greeting is all you need
typically for famous characters
or famous people because the model probably
already knows what they're supposed
to be like. If it's, you know,
something that the model is not going to know about
because it's a little less famous,
then you can create
an example conversation to
like show it how the character is supposed to act.
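A hypothetical sketch of the two cases just described. The field names and the second persona are invented for illustration; this is not Character.AI's actual format or API:

```python
# Famous figure: a name and a greeting are typically enough, because the
# model probably already knows how the persona is supposed to behave.
famous_character = {
    "name": "Socrates",
    "greeting": "Greetings, friend. Which of your beliefs shall we examine?",
}

# Lesser-known (here, made-up) figure: add a short example conversation
# to show the model how the character is supposed to act.
obscure_character = {
    "name": "Captain Nila",  # invented persona the model won't know
    "greeting": "Welcome aboard the Dawnlight. Mind the cargo crates.",
    "example_dialogue": [
        ("user", "What's our heading, Captain?"),
        ("character", "Straight through the Veil Nebula. Hold on tight."),
    ],
}
```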
It's insane that character is only 22 people.
Like you're hiring, what are you hiring for?
What are you looking for?
So far, 21 of the 22 are engineers.
So we're going to hire more engineers.
No, I'm joking.
We are going to hire more engineers.
Shocked.
Both in deep learning, but also, like, you know,
front end and back end; definitely hire more people on, like, the business and product side.
Yeah, we've got a recruiter starting on Monday.
Hard requirement: burning desire or childhood dream to bring characters to life.
Yeah.
Yeah.
An exceptional person.
Do you mind if I ask you, like, two or three quick-fire questions and then we'll wrap up?
Sure.
Okay.
Who's your favorite mathematician or computer scientist?
Oh, that's a good one.
They were all standing on the shoulders of giants.
It's hard to pick out in this big tower of mathematicians and computer scientists.
I got to work with Jeff Dean a lot at Google.
He's really nice and
fun to work with. I guess he's now running their large language model stuff. It's a little bit
of a regret of having left Google, but hopefully we'll collaborate in the future. Yeah. Do you think math
is invented or discovered? Oh, that's interesting. Okay, I guess discovered. Maybe all of it's
discovered. Everything, and we're just discovering it. And then last question, what do you think is
something you wish you'd invented? Let's see. Teleportation.
Ooh, that seems hard. That sounds like a good one.
I'm not going to step into a teleporter.
Some physics involved here.
Yeah, I do not want to be like disassembled or anything.
No beaming.
I'll walk.
I don't want like.
Take the elevator.
We didn't need to teleport.
What about a brain upload into a computer?
Like, I think I would like to keep my physical body.
Please and thank you.
Oh, I don't care.
Let me out of the meatbox.
What do you wish you'd invented?
Oh, what do I wish I'd invented?
Sorry, I was dodging the question.
Just focused on inventing AI
that, you know, can push the technology forward.
Such a good founder answer.
Makes sense.
Working on it.
Very focused.
That's great.
Well, Noam, this was an incredible conversation.
So thank you so much for joining us today on the podcast.
Thank you, Elad.
Thank you, Sarah.
Good to see you, too.
Yeah, good to see you.
Thanks, Noam.
All right.
Thanks for the time.
Bye.
Thank you for listening to this week's episode of No Priors.
Follow No Priors for new guests each week and let us know online what you think and
who in AI you want to hear from.
You can keep in touch with me and Conviction by following
@Saranormous. You can follow me on Twitter at @EladGil. Thanks for listening.
No Priors is produced in partnership with Pod People. Special thanks to our team, Cynthia
Galdea and Pranav Reddy, and the production team at Pod People: Alex Vigmanis, Matt Saab, Amy Machado,
Ashton, Danielle Roth, Carter Wogan, and Billy Libby. Also, our parents, our children,
the Academy, ChatGPT, and our future AGI overlords.
Thank you.