a16z Podcast - World’s Largest Supercomputer v. Biology’s Toughest Problems
Episode Date: June 14, 2020

Proteins are molecular machines that must first assemble themselves to function. But how does a protein, which is produced as a linear string of amino acids, assume the complex three-dimensional structure needed to carry out its job? That's where Folding at Home comes in. Folding at Home is a sophisticated computer program that simulates the way atoms push and pull on each other, applied to the problem of protein dynamics, aka "folding". These simulations help researchers understand protein function and design drugs and antibodies to target them. Folding at Home is currently studying key proteins from the virus that causes COVID-19 to help therapeutic development. Given the extreme complexity of these simulations, they require an astronomical amount of compute power. Folding at Home solves this problem with a distributed computing framework: it breaks up the calculations into smaller pieces that can be run on independent computers. Users of Folding at Home - millions of them today - donate the spare compute power on their PCs to help run these simulations. This aggregate compute power represents the largest supercomputer in the world: currently 2.4 exaFLOPS!

Folding at Home was launched 20 years ago this summer in the lab of Vijay Pande at Stanford. In this episode, Vijay (now a general partner at a16z) is joined by his former student and current director of Folding at Home, Greg Bowman, an associate professor at Washington University in St. Louis, and Lauren Richardson. We discuss the origins of the Folding at Home project along with its connection to SETI@home and Napster; also the scientific and technical advances needed to solve the complex protein folding and distributed computing problems; and importantly, what does understanding protein dynamics actually achieve?
Transcript
Hello and welcome to the a16z podcast. I'm Lauren Richardson. Today's episode celebrates the 20th anniversary of Folding at Home, the distributed computing project for simulating protein dynamics. Today, Folding at Home runs on millions of devices, is the world's largest supercomputer, and tackles some of biology's toughest problems, including COVID-19. a16z general partner Vijay Pande, who founded Folding at Home in his lab at Stanford, joins this episode.
along with its current director, Greg Bowman,
an associate professor at Washington University in St. Louis.
In this conversation, we discuss the origins of the Folding at Home project,
along with its connection to SETI at Home and Napster.
Also, the scientific and technical advances needed to solve
the complex protein folding and distributed computing problems.
And importantly, what does understanding protein dynamics actually achieve?
First, some context into what protein folding is and why it matters.
Proteins are the main structural and functional molecules in a cell and are produced as a linear string of amino acids.
But to do their work, this string must first fold into a complex three-dimensional structure, aka the folding.
As proteins carry out their various jobs, they must also change their shape.
Folding at home simulates these constantly changing arrangements to help us understand how proteins function.
But these simulations require an astronomical amount of compute power, which is where splitting
the calculations into pieces that can be performed by a large group of independent computers
helps. Understanding protein folding and dynamics allows us to design drugs and antibodies
to target disease-related proteins and much more. We begin our conversation with Greg briefly sharing
how folding at home is contributing to the fight against COVID-19 and then go into the origin of
the project, which is also a story about how new innovations are developed.
One of the great things is that we're giving people around the world, millions of them now,
the opportunity to actually do something proactive about this virus that's a little more
emotionally gratifying than washing your hands and hiding inside.
People will sometimes ask, well, like, how is folding these proteins going to help?
Well, we're not actually folding them, right?
We're trying to understand how these moving parts play a role in these proteins' function.
and one of the first targets we jumped on is called the spike.
And it's actually a complex of three proteins that binds to a human cell in order to initiate infection.
So the spike actually closes up and protects itself from being recognized by immune surveillance systems;
it's only when it's opened up that it's able to actually form this interaction with a human cell.
But we really don't know for sure what the open structure looks like or what that opening process looks like.
All the stages along the way of this opening motion might be interesting therapeutic targets
because they'll present new nooks and crannies where small molecules or antibodies could potentially bind.
With all the compute power, we're really free to try a lot of things in parallel.
So I should also note that two of the other PIs involved in this, John Chodera and Vincent Voelz, have been doing a lot with setting up simulations of proteins in the presence of large sets of small molecules, to ask which of those small molecules
binds most tightly to the protein.
It might be a more advantageous starting point
for drug discovery, for example,
or could we do the same things
with helping to inform antibody design?
So COVID-19 is the problem that you guys are addressing today,
but let's go back 20 years.
And Vijay, will you tell me about the problems you were facing then
and how the Folding at Home project originated?
When I came to Stanford in '99,
I spent some time thinking about
where the sort of big opportunities could be.
One of the things that I came down on at that time was that there was this inflection point
for computation to start to make a big impact in understanding areas of structural biology
and drug design and related areas.
And that the problem wasn't necessarily that we were so far off collectively as a
community for what to do.
The challenge was that we were just maybe a factor of a thousand to a million times off in terms of the computer power we had to solve the problem. And so the idea of getting together a whole bunch of computers made sense, that you could build up a huge amount of power in aggregate. The challenge at the time, though, was that one computer that's a million times faster is not the same thing as a million computers.
And so how to develop algorithms to use all these delocalized machines and make it as powerful
as one machine was the big challenge.
But I think the insight that I think really many of us had was that if we could do this,
there could be a huge advance in terms of what we'd be able to do computationally.
I have a question for you, actually.
Yeah.
I was just curious, did SETI@home play a role in inspiring you?
I always have this image of you sitting in your office and being like, hey,
if people are willing to help hunt for aliens, I'll bet they'd help with our science.
Yeah.
You know, I certainly heard of SETI@home, actually.
The funny thing is, there was another project that seems less related, but I think got my attention also, which actually was
Napster. The thing about Napster, even though it kind of came and went, was that there was this
huge capability of the combination of lots of PCs and network power, and that could really
be paradigm shifting and game shifting. I think the thing about SETI, though, is that they, in some
ways, had a much more distributed computing sort of friendly problem. So can you explain that a bit more
why SETI was better for distributed computing as opposed to protein folding, which was what you were
looking at? Yeah, so the huge problem here is that what everyone thought
protein folding needed was just one computer that was a million times faster. And the reason why
that's not the same thing as a million individual computers is that how do you get a million
individual computers to speed up the problem? If you want to do something faster, maybe if there's two
of you, you can do it twice as fast, or if there's three of you, maybe three times as fast. But that really
hits its limits. If you had a class of 60 people and you had an hour exam and you let the
students work together on the exam, they might like the sound of that, but then if you tell them,
okay, but you're going to have to finish the exam in one minute instead of one hour. You might
think, well, you know, maybe 60 people could do 60 times faster. But the reason why you can't
is just all the communication and all the organization that has to be done. The SETI problem was that each one
of those bits was already broken down, and it's just the same repetitive analysis. And the
challenge for Folding at Home was how to take something which seemed inherently sequential,
where you can't simulate the next step until you've done the first step; you're watching
the process of something happening in time. How to break that up was, I think, the real
breakthrough that we came up with, and that allowed the protein folding problem and all related
problems, such as the things that Greg and team are doing now with COVID, to be run in a very
distributed fashion. How did you do that? What was the breakthrough that led you to figuring out
how to solve that problem? Well, first off, there are a couple of different parts. And like tackling any
really difficult problem, I think we had an initial version of it, which was really simplified.
One way to think about it is that a lot of what we're looking for is rare events. And this is sort of a
rough analogy. If you're doing a rare event, like trying to win the lottery, and let's say there's
one in a million lottery tickets are going to win, you could play the lottery like every week
and wait for a million weeks. And, you know, eventually you should win the lottery if you play enough
times. Or you could maybe buy a million lottery tickets and win that lottery in a week. And so you
could speed it up by doing things in parallel. So the idea is that, you know, you say your protein can
either be folded or unfolded, right? In each simulation, you start it off unfolded and give it a
little bit of time and see if it will fold, right? And now you can say, ah, you know, you can either
have one simulation that tries a thousand times, like Vijay said, and maybe at some point you see
it fold, or you can run a thousand independent simulations, and each of them has
some chance of folding in that smaller amount of time.
There's a lot of convenient math for how you could possibly break things up,
but the problem starts to get very complicated once you realize that it isn't as simple
as winning one lottery. There's more complexity behind it.
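The math behind the lottery analogy is simple and worth making concrete. If each independent simulation has some small probability p of capturing the rare event (a folding transition) in a given stretch of simulated time, then the chance that at least one of N simulations captures it is 1 - (1 - p)^N. A minimal illustrative sketch with toy numbers, not anything from the actual Folding at Home codebase:

```python
import random

def chance_at_least_one(p, n):
    """Probability that at least one of n independent simulations
    observes the rare event, if each has probability p per run."""
    return 1 - (1 - p) ** n

# One simulation with a 1-in-1,000 chance of folding per time window:
print(chance_at_least_one(1e-3, 1))     # ~0.001
# A thousand independent simulations of the same length:
print(chance_at_least_one(1e-3, 1000))  # ~0.63: the rare event becomes likely

# Monte Carlo check: play 1,000 "lottery tickets" at once, many times over.
random.seed(0)
trials = 10_000
hits = sum(
    any(random.random() < 1e-3 for _ in range(1000))
    for _ in range(trials)
)
print(hits / trials)  # close to the analytic ~0.63
```

The key point is that the thousand parallel simulations never need to talk to each other, which is exactly what makes this formulation suitable for millions of independent donor machines.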
And I think over time, we continue to refine our methods,
and this eventually led to the development of a whole new theory of how to do simulations
called Markov State Models.
What did a Markov model allow you to do?
What Markov models do is two things. First, they create a way to break up this problem, which looks like it could be really hard to do with more than one computer, and allow you to use hundreds of thousands or even millions of computers on it.
And second, they help us realize that we really want more than just how to get from A to B.
And A to B could be from an unfolded protein to a folded protein.
It could be from one conformation to another, like an inactive to an active one, which would be very
relevant for drug design, because for a lot of these systems there isn't going to be just one way.
There are all these different possibilities, and all the different possibilities are interesting.
And from biology's point of view, biology will take all these different possibilities. So even if
you had one computer that was a million times faster, it actually wouldn't give you the full picture;
it would just be one road. And the reason why that's important is that we don't just want to show
movies, we don't just want to sort of have pretty pictures. We want to have some deep
understanding and quantitative predictions.
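To make that a bit more concrete, here is a toy sketch of what a Markov state model does (a generic illustration of the method, not the actual Folding at Home pipeline): discretize trajectories into states, count transitions between states, normalize into a transition probability matrix, and then read off long-timescale properties, like equilibrium populations, that no single short trajectory could provide:

```python
import numpy as np

def build_msm(trajectories, n_states, lag=1):
    """Estimate a Markov state model transition matrix from
    discretized trajectories (lists of integer state labels)."""
    counts = np.zeros((n_states, n_states))
    for traj in trajectories:
        for i, j in zip(traj[:-lag], traj[lag:]):
            counts[i, j] += 1
    # Row-normalize transition counts into transition probabilities.
    return counts / counts.sum(axis=1, keepdims=True)

def stationary_distribution(T):
    """Equilibrium state populations: the eigenvector of T's transpose
    with eigenvalue 1, normalized so the populations sum to 1."""
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return pi / pi.sum()

# Several short, independent trajectories over three coarse states:
# 0 = unfolded, 1 = intermediate, 2 = folded.
trajs = [
    [0, 0, 1, 1, 2, 2, 2],
    [0, 1, 2, 2, 2, 1, 2],
    [0, 0, 0, 1, 2, 2, 2],
]
T = build_msm(trajs, n_states=3)
print(stationary_distribution(T))  # the folded state (index 2) dominates
```

Because the model only needs transition counts, those counts can come from thousands of short, independent trajectories run on different machines, which is what lets a sequential-looking simulation problem be distributed.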
I think Vijay's lottery analogy was a good one.
Like each ticket you either won or didn't,
and that was kind of an underlying assumption
of some of the original technology, right?
And then as we moved to Markov models,
you know, it became a little bit more like cartography.
So you imagine you're trying to explore the Rockies
and you don't have the capacity to take a satellite image.
You could have one hiker try and walk around,
but it's going to take a really long time
for them to visit all the different valleys.
And so it was devising ways to say, well, what if we had a thousand hikers and we could send them all off in
different directions and then collect that data to build our map at the end.
And now building off of this, you know, inspired by things that Vijay and some of his other students
did originally, we're starting to become more clever about it and, you know, send our hikers out
to explore some and then decide what's worth exploring more and focus our hikers there in order
to build these maps as efficiently as possible.
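This "focus the hikers" strategy is commonly called adaptive sampling. One simple, widely used flavor, sketched here as a least-counts policy (an assumption for illustration, not Folding at Home's exact scheduler), counts how often each state has been visited so far and seeds the next round of simulations from the least-visited states:

```python
from collections import Counter

def pick_seeds(trajectories, n_seeds):
    """Choose starting states for the next round of simulations:
    the states visited least so far (the thin parts of the map)."""
    visits = Counter()
    for traj in trajectories:
        visits.update(traj)
    # Sort discovered states by visit count, ascending, and take the rarest.
    rarest = sorted(visits, key=lambda s: visits[s])
    return rarest[:n_seeds]

# Round 1: three hikers wandered a small 4-state landscape.
round1 = [[0, 1, 1, 2], [0, 1, 2, 2], [0, 3, 1, 1]]
print(pick_seeds(round1, n_seeds=2))  # -> [3, 0]: state 3 is barely explored
```

Each round, the map gets denser exactly where it was thinnest, so the swarm of hikers covers the landscape far faster than the same number of independent random walkers would.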
And I think what's really nice about Greg's analogy is that at the heart of what
Folding at Home is doing, and at the heart of what molecular simulation is doing, is really exploring.
For COVID, for example, Greg and the Folding at Home team can start from some experimental
crystal structures and so on.
But that's unfortunately not typically the structure of the protein that's really most
relevant for drug design.
And what the relevant structure is, people don't know.
That's why we're doing the simulation.
So it is inherently a kind of exploration.
And now it's just a question of how to do the exploration most efficiently,
how to do it in a sort of statistically rigorous way.
And then something we'll probably get to is, once you've explored,
what can you do with this information and how will it change the course of learning something
about the biology or advancing new types of drugs?
So how do you know when you've hit the right simulation or when you've hit the lottery?
Yeah.
So this is something where, since you're exploring, you can have a sense of whether
you are starting to not see any new things anymore.
Kind of almost like if you were sort of blindfolded going around a room or something like that
and you're going through all the parts and you're looking at stuff eventually,
you'll repeat and repeat and repeat.
The challenge is going to be whether the timescales that we're simulating
are going to be relevant for the timescales of the motion of the proteins we're looking at.
And I think this was at the heart of Folding at Home 20 years ago,
where the typical timescales that one could simulate, just on supercomputers and so on,
would be in the sort of nanosecond range, so like billionths of a second.
And for that to really start getting interesting, we needed to be in the microsecond to millisecond range,
so millionths to thousandths of a second.
And so that's a big gap.
And in those early days of folding at home, we were able to go from nanoseconds to microseconds
to milliseconds.
And at milliseconds, it starts getting really interesting.
There starts being some more relevant conformational change, especially for disease-relevant
proteins. I think one thing going a little meta too with like, you know, how do you know when
you're done is like, well, how do we know we're done with anything, right? Like, you know, even with
our understanding of physical reality, like there's no guarantee that everything won't
change tomorrow, right? I think the question is, you know, how do you know when you've done enough
to be useful? You know, one of the exciting opportunities now is that we can really start
addressing problems that are biological problems in a way that we can tightly integrate with
experiments. So like my lab is half experimental now. And so we can run these simulations and have
some sense that we've started to converge on a consistent answer from a purely computational
perspective. And now the ultimate question isn't so much like, well, what would happen if we ran
10 times longer, as it is, can we go to the wet lab now and do something that no one would have
thought to do if it weren't for having those simulations to inform our understanding and
ability to formulate new hypotheses?
Were there any surprises when you were building the system, where you thought something was
going to work and you were never able to get it to work, or something that you
thought was never going to pan out actually ended up working?
One of the most transformative was the early days of programming GPUs.
Folding at Home, I think a lot of people don't realize, was one of the first real applications
of any compute on a GPU.
People had to make this paradigm shift from how to write algorithms for a single core
versus using multiple cores.
And so that shift from CPU to GPU was a way to take advantage of the massive parallelism
that was built into the GPU.
It's a classic example of disruption: it goes from seeming like a toy,
like, oh, isn't it cute that we could run this on a GPU, but it's not really going to be
competitive, to now dominating the compute on Folding at Home.
Circa 2010, the things that we were doing there, that Greg was doing at the sort of bleeding edge
at that time now are being done at startups.
And it's helped that technologies like GPUs bring a huge amount of computer power straight to the
desktop. But that's also in some ways just Moore's law. And maybe it's hard to talk about
Folding at Home without acknowledging the power of Moore's law, which people think in some ways
is dead. But it really isn't, in the sense that if you can break up the problem into a lot of bits,
Moore's law is still very much alive. And the aggregate power of Folding at Home is following Moore's
law. And so it is mind-blowing to me that Folding at Home is, and Greg knows the up-to-date number now,
at 2.4 exaFLOPS. Or maybe, Greg, is it more than that as of today?
That's the last estimate that I've seen, actually.
Kind of comically, the machine that puts these numbers up is so busy with other things.
But even more than an exaFLOP is still super mind-blowing for me,
because it was in 2007 that we got the Guinness World Record for reaching a petaFLOP.
And we were very quick to a petaFLOP.
So that's, you know, 13 years to get a thousand times
more performance. And so to be at an exaFLOP is very exciting. But what's mind-blowing for
me to think about is where we could be 10 to 20 years from now. And with a thousand
times more performance, I can already fantasize about the things we could be doing, because you could use
models that are at the very highest accuracies that don't make some of the compromises that we
have to do now just for performance, getting to extremely long time scales in very large
systems. You know, that thousand X, it feels like science fiction to be talking about
it. Even beyond that, it opens up lots of interesting opportunities, right? Because if it's
a thousand times faster, we can do a thousand things in parallel. And this has fun implications for
things like personalized or precision medicine where you have questions about, you know,
oh, we've got all these variants. What do they all mean? You know, or how do we target each of them
with small molecules and how do we screen large libraries of small molecules to find the winners?
Well, and it's fun to think about what those grander problems would be.
I think there's all these different directions.
There's a couple of different axes.
So one axis is the size of the system.
Another axis is the length scale, time scale of the simulation.
Another axis is sort of the resolution or accuracy of the model that you want to go after.
And that could even include studying things where we're not just talking about biophysical dynamics,
but even more quantum mechanical properties and, you know, studying enzyme behavior.
There are a lot of things that you can do in that direction.
And so we could take them one by one.
So once you start going up in system size, then you start talking about not individual
proteins, but multiple proteins, and studying protein-protein interactions,
or even how a combination of proteins works in terms of genetic circuits.
Yeah, and it opens up whole new classes of problems, right?
I think, in addition to the axes that Vijay mentioned,
there's a number-of-systems axis to bring up again,
where we can start saying, oh, well, now we can really start exploring sequence space,
taking these algorithms that we developed to explore these complicated structural spaces
and leveling them up.
And sequences can be generalized to many things, not just protein sequences or nucleic acid sequences,
but different types of lipids, different small molecules,
different variations of the small molecules, congeneric series,
whole big screens. The sort of benefit of that computer power, plus the very clever ways
to use algorithms, has led to very efficient approaches that are much beyond what you could do
by simply trying to brute-force it.
The other thing, actually: imagine the stuff that Greg and coworkers are doing with Folding
at Home on COVID, that maybe takes the most powerful computer in the world by far a month
to do.
In time, that would be something that could be done, let's say, in a day.
And so could that be done on a patient-by-patient basis, you know, by running on the cloud somewhere?
When you're thinking about sequence space or thinking about variations, the ultimate variation is how am I different?
And how are my SNPs, or my variations, going to be relevant for this drug's performance?
And, you know, one of the things that I think is poorly appreciated is that people often say, well, you know,
folding home is a model of reality, but it's not reality.
But in the end, like, you know, in vitro experiment is a model of, let's say, an animal experiment,
which is a model of a specific human, which is a model of me.
But I don't want people to do experiments on me. So the ability to simulate and to build a model of me
that is far better than any in vitro model, any animal model, and any other human model.
It's something that gets really interesting for personalized medicine.
And I think we're a little bit far from doing that right now.
But philosophically, there's no reason why that couldn't happen.
And it's really just this world where we can hopefully shift from sort of an empirical,
we'll just see what happens when we do experiment view of biology to an engineering view of biology.
That is maybe the hallmark of the shift we're seeing: from what you have
to discover to what you can engineer. And that scope of engineering is just increasing over time
and increasing essentially with Moore's Law. Let's dig into the protein folding problem and talk about
why knowing a protein's conformation is so important, that intersection with experimental methods,
how, you know, crystal structures and cryo-EM structures inform this but also don't fully inform it,
and the kind of dynamic nature of proteins.
There are really three protein folding problems, and often they get intertwined.
So one version of the protein folding problem is just what's the final structure?
And that kind of already supposes that there's a single structure, which is already a
massive assumption. But really, that version of the protein folding problem is, can you predict
what a crystallographer would measure in a crystallography experiment?
And so there's been a lot of interesting advances in using machine learning to do this.
That's meant to be essentially like a replacement of crystallography.
The second one that people talk about is the so-called inverse folding problem,
which is if you know the structure already, can you give me a sequence that would fold into that structure?
And that's really an interesting question of protein design and protein engineering.
And a lot of people were also working on that.
And you could see how those first two problems are interrelated.
What we've always been interested in for Folding at Home was not just what the final structure
would be, but what is the dynamical pathway for how you get there? And where do you go along
the way? And the reason why that was relevant for protein folding is that there is just this astounding
intellectual question of how proteins even fold at all on any sort of reasonable time scale. But the
complexity of protein folding is really high in terms of all the different possible things
that could be happening. How can you do this? And especially how can you do this without
massive amounts of misfolding and getting stuck in the wrong place, which we know is really
important even for a disease because there's plenty of cases where proteins misfold and then cause
disease, like in Alzheimer's or ALS or CJD and so on.
One of the things I like to tell people is, you know, it's as if you put a whole bunch of car
parts out on your yard and they suddenly sprang together into a functioning car, and you
shake your head and blink a couple of times. You're like, whoa, how did that happen?
It's literally a machine that is assembling itself before it does its job. And how that happens
was sort of one of the central questions we set out to study in folding at home.
And I think in simulating it, we learned a huge amount about what happens and just the very tight
balance between folding and misfolding and how these processes occur.
And that was, I think, some of the early wins that folding home helped pioneer.
Yeah, I think this is one of the cool things about the protein folding problem is it's a sort
of huge thorny problem where it's like, well, if we can even make a dent in the protein folding
problem, it also means that we can make a dent in how kinases work and how mutations cause cancer and
how drugs work and how we can better inform the design of drugs. So it's been really cool to see
some of these very basic research concepts from the beginning of the project start to have more
immediate biomedical applications and potentially industrial applications. Greg's also pioneered a lot
of work in using folding home-like methods to identify cryptic sites, like just even drug-binding
sites that you wouldn't know existed if you just looked at the crystal structures.
That's right. The structure that you might derive from an X-ray crystallography experiment
or a cryo-EM experiment is always just the tip of the iceberg. And so there's all of these
hidden conformations that we as a scientific community are typically blind to, but with our
simulations, we can go and get a sense of what's there and what new
opportunities there are. So sticking with the car idea, you know, if you took a snapshot of what
your car usually looks like, it's like parked in the parking lot, right? And you might not realize
that you can get into the car, right? Because you don't see an open door in that snapshot. But if you
can watch someone getting into the car, you immediately are like, ah, there's more to the story than
what I saw in this snapshot. And so what we're able to do with these simulations is take these
starting points and see what all the moving parts are and start saying, oh, well, maybe in the
original structure from the crystallography, for example, there weren't any places that a small
molecule drug could bind and interfere with or enhance the protein's activity. But in almost all cases that
we've looked at, we've seen as the components of these proteins are moving about, we start seeing
binding sites that weren't present in that starting structure that we have come to call cryptic
pockets. And we've now followed up and experimentally confirmed that these things exist and designed
our first small molecules that bind them and inhibit or activate protein targets. And again,
experimentally confirmed that those work as intended. So it's an exciting thing, because there are
many proteins that are currently considered undruggable, because no one's seen a drug that binds them.
And you look at the structure and you say, well, there's nowhere our drug could bind. So I'm not even
going to go hunting experimentally. And now we're able to say, well, hold on
a second. Maybe we should revisit some of these targets, because we now have the data to say that
there are handles we can tap into to manipulate their functions.
Yeah, probably 30 years ago, 40 years ago, people were not routinely doing crystal
structures. Even 30 years ago, it was a big, big deal to have a crystal structure. And now crystal
structures have become fairly routine. You can imagine a similar path for computation, which is that
once people can realize that this is a powerful technology to do things that you couldn't
do by any other means, then I think what starts to happen is that people really want to be
able to have access to that data and the game shifts. And we're starting to see this also,
you know, in early days in various startups where they're adopting these types of technologies
at a scale, I think, much smaller than what Folding at Home can do. But maybe, you know, what Folding at Home
was doing five years ago is the type of thing that you can now do in real drug design processes.
With the scale of folding at home, we really have a leading indicator of what's going to be
commonplace five years from now.
So folding at home gives us a peek into the future.
Vijay, looking back to the inception of folding at home, how do you feel about the progress made?
Part of what's exciting for me, just even having this conversation, is looking back 20 years
and thinking about the way we felt then and having that very much be validated by what's being done right now,
and then therefore being very excited to look forward 20 years and thinking about 20 more years of exponential growth,
which can happen due to a variety of technologies, even if it just means a lot more GPUs or it means a lot more dies on a GPU.
We can go on to what it could look like.
I apologize if I garble this, but was it Eric Schmidt from Google who talked
about the internet disappearing, right?
And you're like, oh, my goodness, how could the internet go away?
And that wasn't his point at all, right?
His point was that it's going to become so ubiquitous that we won't think about it,
it's such a fundamental thing.
And, you know, I really think that the long-term future of projects like Folding at Home
is that the technology disappears.
Now that Greg is running the show at Folding at Home, do you have any questions for him?
I'm curious to ask Greg some sort of more sentimental questions, especially with
Folding at Home's 20th anniversary coming up. As a junior student, you overlapped with the first generation. And then
as a senior one, you overlapped with the subsequent generation after you. And then afterwards,
you've collaborated with them a lot. And one of the things that's been gratifying for me is that a lot
of people from the lab have gone off to be leaders in pharma and biotech and in academia and have often
actually worked with each other.
One of the really fascinating things about having been around since nearish the beginning and overlapping
with lots of the starting people and being here now with, you know, COVID-19 being the center
of the world's attention and everyone trying to figure out what it is that they can do to
maximize our chance of dealing with this pandemic in a productive way, is that we're basically
having a huge family reunion. It's amazing, right? So, you know, a month ago, we had 30,000.
and active volunteers in folding at home, and there were three of us that had spent time in
Vijay's lab who were running the show. And now we've shot up to, you know, over two million
devices that are running Folding at Home. And we've got quite the cast of characters
that have come to help out. So, you know, Guha Jayachandran, who I think helped work on one of
the early versions of Folding at Home, has a startup where they're thinking about cryptocurrencies,
and they're trying to figure out how to help us bring on board more compute
and how to store all this data that we're generating.
Adam Beberg, who also helped with some of the early Folding at Home work, is at NVIDIA now.
So he's jumped on getting the Folding at Home code into Docker containers
so that people can spin them up easily without having to go through setup,
or spawn off lots of copies in the cloud.
Del Lucent is setting up a server, and Xuhui Huang has gotten re-involved from Hong Kong because he's interested in the polymerases and COVID also.
So, yeah, it's been super fun.
And some of the volunteers, too. There's a bunch of volunteers who helped with our forum,
providing user support, who were very active when I was in grad school and had shifted their attention to other things,
and who have come back to help us answer questions for a dramatically expanded user base.
So it's a super cool kind of heartwarming experience.
Yeah, and actually, Greg, you make a really good point there
because this isn't something that was ever done just by me or just by my lab.
And Greg is out working with John Chodera and with Vincent Voelz,
who are now professors at Memorial Sloan Kettering and Temple, respectively.
But the community part is really important.
It's amazing how many people have helped out and volunteered not just computer
power, but time. And so Bruce Borden has been one of the key volunteers over this journey
with us for, you know, 20 years. And Bruce has been the key point of contact between us
and the whole band of volunteers who help other people learn how to use Folding at Home
and answer questions. So their contributions are at least as significant, I think, as the
contributions of computer time, and in many ways much more significant. The social part
is actually critically important.
And I think the virality of people getting other people
to join their teams and communicating was important.
A lot of the war stories there were just understanding
that human element.
Like, you know, we have gamified it from the beginning.
And so if you run, you get a certain number of points.
And I remember in the early days,
we were giving one point for roughly one day's worth of compute.
And people were very upset at that.
And, you know, to me, I could give one point,
I could give a million points, I could give however many points you want. But I had to realize that this is a
very, very important thing because this is the currency that drives motivation. And so there was this
number in the early days. I think we came in at a hundred. I think people had this psychology that
points were like dollars, and if you donate your machine, a hundred a day feels right. It's arbitrary.
There is like that element of human psychology that wants the reward to feel substantial, but also not
be completely ridiculous. Yes, yes, yes. It blew my mind and I actually, I think I did not take it
seriously enough in the beginning until I realized, oh, no, this is the heart of human psychology
of what's going to make this work. And this is something very important. Another big surprise for me
is how many people got drawn into Folding at Home by running it, and who have come to me over
the last 20 years saying, hey, I got excited about biology or I got excited about computational biology
because they ran folding at home. And these are people that became Stanford grad students or
became startup CEOs or all these different areas.
And I think there is a generation of kids that are home from high school that are running
this right now and are seeing what the power of computation can do.
And perhaps one of the greatest contributions even beyond any of the things that we calculate
will be to empower them with the knowledge that there is such a huge potential for what
they can do.
So much of Folding at Home is looking for these surprises, these outliers,
and we have all of this computer power to find these outliers that are so rare.
And that's what's really important about being able to run the computation at scale.
One of the biggest surprises for me in the early days of Folding at Home was that we got to meet
these outliers in human nature.
That's right.
That's right.
Yeah.
And, you know, I think, speaking to human nature, the reason that Folding at Home has something
to offer in the current situation is that we already had this really vibrant community
with 30,000 people volunteering their computer power that we were able to immediately bring to bear.
I think that really speaks to the interdisciplinary nature of Folding at Home. It's computing. It's statistics. It's proteins. It's also like
a human psychology experiment and, you know, a competition. It's so many different things all wrapped up
into one program. Thank you guys. Yeah, my pleasure. Thank you. Thank you so much. This was fantastic.