Big Ideas Lab - Drug Discovery
Episode Date: February 25, 2025

For centuries, drug discovery was a slow, trial-and-error process, sometimes taking decades to develop life-saving treatments. But what if we could speed up that timeline? At Lawrence Livermore National Laboratory, scientists are using supercomputing, machine learning, and AI to revolutionize how new medicines are found, tested, and developed. In this episode, we dive into the cutting-edge technology behind faster drug discovery, how researchers are using high-performance computing to target diseases more precisely, and the potential to bring new treatments, from cancer drugs to viral countermeasures, to patients in record time.

Big Ideas Lab is a Mission.org original series. Executive Produced by Lacey Peace and Levi Hanusch. Sound Design, Music Edit and Mix by Daniel Brunelle. Story Editing by Daniel Brunelle. Audio Engineering and Editing by Matthew Powell. Narrated by Matthew Powell. Video Production by Levi Hanusch.

Guests featured in this episode (in order of appearance):
Jim Brase, Deputy Associate Director for Computing, Lead of the Bio Resilience Initiative, LLNL
Felice Lightstone, Leader, Biochemical and Biophysical Systems Group, LLNL
Jonathan Allen, Informatics Scientist, LLNL

Brought to you in partnership with Lawrence Livermore National Laboratory.
Transcript
When was the last time you had a headache?
An ear-splitting, skull-pounding headache.
What did you do?
Probably massaged your temples, maybe dimmed the lights, and then you did what any normal,
sane person would do.
Went outside, found the nearest willow tree, and chewed its bark.
Right?
If you'd asked a physician 4,000 years ago,
that's the exact prescription they would have given you.
All across ancient Greece, Sumeria, Egypt, and China,
healers recommended willow leaves and bark
to treat pain, fevers, and inflammation.
But if that doesn't sound tasty, you could brew it into a tea.
Or more like hot bark water.
This worked because willow bark contains a molecule called salicin, which causes a reaction
in your body that eases pain.
In the mid 1700s, a cleric of the Church of England, Edward Stone, rediscovered the willow's
properties.
But instead of simply gnawing on leaves, he took things a step further, and dried the
tree's bark for months, creating a powder.
With it, he treated himself and several neighbors.
To varying success.
Isolating and extracting the active ingredient, salicin,
was the next great leap forward in 1826.
After that, the development of the drug,
as we know it today, began in earnest.
But it wasn't until 1915, nearly 90 years later,
that Bayer released over-the-counter aspirin,
a small white pill that could banish a headache
in 15 minutes.
It almost seemed like magic, but it's science, backed by centuries of experimentation, research,
and testing.
Nowadays, thankfully, creating new drugs doesn't take thousands of years. But it's still a long process, generally 15 years or more.
The folks at Lawrence Livermore National Lab are hoping to change that.
Using state-of-the-art supercomputing, machine learning, and AI technology,
they hope to shorten the gap between research and discovery.
It's a tall order, but they're already making huge strides in creating new medications,
discovering cures, and saving lives.
Welcome to the Big Ideas Lab, your weekly exploration inside Lawrence Livermore National Laboratory. Hear untold stories, meet boundary-pushing pioneers,
and get unparalleled access inside the gates.
From national security challenges
to computing revolutions, discover the innovations
that are shaping tomorrow, today.
Lawrence Livermore National Laboratory is opening its doors to a new wave of talent.
If you're driven by curiosity and a desire to solve complex challenges, the lab has a
job opening for you.
Currently, there are 139 open positions.
These include opportunities in science, engineering, business, administration,
and the skilled trades. From enhancing national security to pioneering new energy sources
and advancing scientific frontiers, Lawrence Livermore National Laboratory is where you
can make your mark on the world. Today's open roles include Lead Power Grid Engineer,
Laser Modeling Physicist, Postdoctoral Researcher,
OCEC Program Leader, and Chief Data Architect.
But the list doesn't end there.
Explore all available positions at LLNL.gov forward slash
careers.
Each opportunity comes with a comprehensive benefits package
tailored to your lifestyle and future.
Join a workplace that champions professional growth,
fosters collaboration, inspires innovation,
and drives the pursuit of excellence.
If you are ready to contribute to work that matters,
visit llnl.gov forward slash careers
to explore all the current job listings.
That's llnl.gov forward slash careers.
Your expertise could very well be the highlight
of our next podcast interview.
Don't wait.
Drug discovery is the process of how we come up with a new
medicine for a given purpose. So if we have a particular pathogen like SARS-CoV-2 virus
or a medical condition of some kind, what we want to do then is to find a molecule
that could actually go into a person and restore their system to its normal operation.
That's Jim Brase.
My job title is Deputy Associate Director for Computing.
I'm the overall lead on our bio resilience initiative
at Lawrence Livermore.
I coordinate across different bio projects at the lab,
particularly in the areas where we're bringing together
high-performance computing and biology.
That's the central theme of what we're doing,
the integration of computing and biology
to really enable better predictive models,
better rapid countermeasure or drug development.
To make it into your medicine cabinet,
a drug has to travel a long, complex road.
There are five stages of development.
The first phase is discovery and development.
Here, chemists and biologists decide what malady or disease they want to target and
then design and formulate molecules to treat it.
Phases two and three are preclinical and clinical research. This is where countless
tests are run and safety for human consumption is determined. The next is FDA review. At
this stage, hopefully, the drug is effective and passes final reviews to be cleared for
market. Once each of these steps has been completed, the medicine finally goes to patients, and the
fifth stage begins, post-market safety monitoring.
If it sounds like a complicated process, that's because it is.
From start to finish, creating a new drug takes between 10 and 15 years.
One of the biggest bottlenecks is at the first stage, discovery and development.
Forming a new drug can take up to five years.
Lawrence Livermore wants to change that.
But how? As a Department of Energy laboratory, we are interested in advancing and creating new technologies.
The application of drug discovery and development is one of many areas that we can apply high
performance computing. That was Felice Lightstone, leader of the Biochemical and Biophysical
Systems Group at the lab. When you say drug discovery or drug design, you are designing
a small molecule, which is a chemical entity designed to target a protein in exactly the same way each and every time
it works. These would be drugs like you take in a pill form. You've got a headache, you
take acetaminophen or Tylenol. It's a small molecule. It goes into your body and it finds
its target, meaning where it's going to change some aspect of how your body functions.
Usually it's a protein.
You hope to change the function of that protein
to show improvement of whatever disease you might have.
It could be as simple as a headache.
It could be complicated like diabetes
or metabolic syndrome or cancer.
Felice and her team are doing some really exciting stuff
with high-performance computing and biology.
But to appreciate their work, first we need to understand the traditional method of drug development.
Historically, drug discovery has really sort of used a random search with experiments.
You build up big libraries of chemical compounds of the types that you think might be applicable to a certain drug target.
Think of the chemical compounds like puzzle pieces
and the targets as empty spaces in the puzzle.
Once you have a target,
you need to have what they call a hit molecule.
That's any molecule that will bind to your protein target.
Then you test them against
that particular molecular target.
And you see what works or what looks like it's close to working.
Then you modify your set of chemicals.
Then you do it again.
Through this process of making and testing molecules,
you eventually narrow in on molecules that could be effective
in binding and neutralizing this target.
They have to see if each puzzle piece or molecule will fit.
The problem with that approach is that it can take months for every one of those cycles
to go through.
You have to make molecules successfully.
That's complicated.
You need chemists to figure out the recipes.
Then you have to actually do the testing and interpret the data and
so on. You have to do quality control. It's a very time consuming process. That's how
drug discovery has traditionally been done. And that's why it generally takes many years.
At its core, the process now is still the same. Discover, make, and test molecules. Our institution is more in the technology
development and probably more on the discovery side of it. So our goal is to de-risk the
development of these molecules. For the actual creation of molecules, Livermore has partnered
with the National Cancer Institute's Frederick National Laboratory for Cancer Research
and BridgeBio Oncology Therapeutics, a company looking for novel treatments for cancer.
But how do they run 10,000 simulations a week and narrow it down to only 20 molecules for
BridgeBio to synthesize? This is where Livermore's supercomputers and machine learning come in.
There's really two trends in this that are important. One is our understanding of biology
is getting better. Our approaches to simulating molecular interactions are getting better.
Our computational power has grown. So we can actually do physics-based simulations.
The other one is this data-driven machine learning AI,
bringing those two together.
You've heard about ChatGPT; that's a language model,
but we're exploring different kinds of algorithms
for machine learning.
We get to use the biggest computers in the world.
We're using high-performance computing
to try to solve biology problems.
We start from the basic science,
where we're looking at new capabilities,
actually trying to use the computers
to make methods go faster.
So how do we design drugs in a faster way?
The AI can predict new molecules in new regimes
that we haven't seen,
but they generally don't have enough data to drive them.
We really get a lot of increased power in the predictive models by bringing those two
streams of work together.
In our analogy, the AI would be searching for puzzle pieces while computers run tests
to see if they fit the missing space.
The key here is that we move to a physics-based simulation model, and we can run about 10,000
simulations a week.
We have a three-dimensional model of every atom in that protein that we're targeting.
We use computers to look at small molecules in that virtual space, and then we design
it so that that small molecule will bind tightly with the protein target.
So instead of sifting through a box of millions of puzzle pieces by hand,
they're using models to narrow down which pieces are needed.
A corner? Edge? Middle?
But this way they can hone and target their experiments
and there's hopefully an acceleration in the process and/or a decrease in the cost.
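To make that workflow concrete, here is a minimal Python sketch of the screen-then-simulate idea: a cheap, approximate machine-learning score filters a huge virtual library, the expensive physics-based score is reserved for the most promising candidates, and only a short list goes to the chemists. The scoring functions and numbers below are hypothetical placeholders, not Livermore's actual models.

```python
# Toy sketch of ML-filtered, physics-scored virtual screening.
# Both scoring functions are stand-ins: ml_predict_affinity() for a fast
# trained model, physics_binding_score() for an expensive simulation.

def ml_predict_affinity(molecule: str) -> float:
    """Cheap, approximate score from a hypothetical ML model (higher = better)."""
    return (hash(molecule) % 100) / 100.0  # toy stand-in for a real predictor

def physics_binding_score(molecule: str) -> float:
    """Expensive, more trustworthy score from a hypothetical physics simulation."""
    return (hash(molecule[::-1]) % 100) / 100.0  # toy stand-in

def screen(library: list[str], n_simulate: int = 10_000, n_synthesize: int = 20) -> list[str]:
    # 1. Use the fast ML model to rank the entire virtual library.
    ranked = sorted(library, key=ml_predict_affinity, reverse=True)
    # 2. Spend supercomputer time only on the most promising candidates.
    simulated = {m: physics_binding_score(m) for m in ranked[:n_simulate]}
    # 3. Hand a short list to the chemists to actually make and test.
    return sorted(simulated, key=simulated.get, reverse=True)[:n_synthesize]

if __name__ == "__main__":
    virtual_library = [f"molecule_{i}" for i in range(100_000)]
    print(screen(virtual_library)[:5])
```

The structure is the point: most molecules never reach the expensive step, which is how thousands of simulations a week can stand in for years of make-and-test cycles.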
They saw huge success in a recent study. We only had to synthesize about 500 or 600 compounds and
it still sounds like a big number, but a traditional drug program runs anywhere from
2,000 to 5,000 compounds. That's roughly a
75% decrease, which means drugs that might have taken years to develop can
now be finished in a matter of months.
My name is Jonathan Allen.
I'm an informatics scientist at Lawrence Livermore National Laboratory.
That means he works with data and computational problems to aid in biology research.
His main focus now is small molecule drug discovery, but he's used high-powered computing
for other work as well.
I'm thinking back to a previous project I worked on where we published a study that
was a reanalysis of human microbiome samples, an instance where we were able to take advantage
of unique computing facilities.
We had access to this large cluster that allowed us to search every publicly available human
microbiome sample against a very large database of microbial genomes to try to assess what
actually was in these human microbiome samples.
Every living person has microbes living in their body.
These can be bacteria, viruses, and cells that contain DNA.
Analyzing them gets scientists closer to developing new treatments for diseases,
both genetic and otherwise.
What we actually ended up finding was there was a lot more human DNA in these samples
than was previously recognized because we were able to pick up on more of the natural
human genetic variation in the samples that hadn't been previously detected because it
hadn't been searched for.
It was on the order of like weeks that we were able to do thousands of samples which
would otherwise take maybe months or a year to do.
This acceleration is obviously an exciting prospect in drug development.
But protecting public health doesn't stop at headaches or genetic diseases.
Some of the greatest dangers to humanity lie in other threats, like bioterrorism or viral
outbreaks. Will we be ready for those?
Lawrence Livermore National Laboratory invites you to join a diverse team of professionals.
The lab is currently hiring for a lead power grid engineer, a laser modeling physicist, postdoctoral researcher,
an OCEC program leader, a chief data architect, and 139 other positions for scientists, engineers,
IT experts, administrative and business professionals, welders, and more.
At Lawrence Livermore National Laboratory, your contributions are not just jobs.
They're a chance to make an impact, from strengthening U.S. security to leading the
charge in revolutionary energy solutions and expanding the boundaries of scientific knowledge.
The lab values collaboration, innovation, and excellence, offering a supportive workspace and comprehensive
benefits to ensure your well-being and secure your future.
Seize the opportunity to help solve something monumental.
Dive into the wide variety of job openings at LLNL.gov forward slash careers.
This is your chance to join a team dedicated to a mission that matters. That's
LLNL.gov forward slash careers. Your expertise might just be the spotlight in our next podcast
interview. Don't delay.
Livermore has had a biology program since way back in the 1960s. Understanding genomics, bringing new technologies into biology, building detectors for biological defense against biothreats.
The big one was this program called BioWatch. Lawrence Livermore was responsible for contributing
to a lot of the early technology. BioWatch was introduced in 2003.
The system, managed by the Department of Homeland Security, monitors the air to detect possible
bioterrorist attacks.
The lab has this two-pronged focus in the space, which is around countermeasures, which
is developing therapeutics for something that would be a biological threat.
The flip side of that is also understanding
what do we need to be developing countermeasures for?
How can we be pre-positioning ourselves
so that we can rapidly respond
to something that's an outbreak?
From the Oval Office tonight,
the president announced his boldest steps yet to combat the nationwide spread of the novel coronavirus.
To keep new cases from entering our shores, we will be suspending all travel from Europe to the United States for the next 30 days.
The COVID-19 pandemic started. We quickly pivoted to focus on antibodies for viral infections.
And that was extremely successful.
We demonstrated that we could actually develop or redesign an antibody for a viral variant
in just a few weeks using the lab rather than taking a couple years to do this.
RNA viruses are just a naturally concerning biological threat in general, and there's no better example
of that than SARS-CoV-2.
There has been a lot of activity at the DOE labs, Lawrence Livermore, as well as the others
in terms of bringing to bear all of these computational tools to develop various responses
and countermeasures. We were able to engage in developing some small
molecule drug discovery on some of the protein targets for SARS-CoV-2 and develop some potential
candidates. We need to be ready for the next infectious disease that comes around. And I
think that this is a national mission now that Lawrence Livermore can fulfill.
Thankfully, the urgency of the pandemic has subsided, but the lab is using that
same technology in other ways to protect Americans. What we're doing now is
designing antibodies to support warfighter protection. So if the DOD
sends soldiers into a particular area of the world, they may be exposed to
a virus.
They want to have antibody therapies that they can give to those soldiers going in to
protect them.
They want to be able to develop these things rapidly.
That's going very well.
Lawrence Livermore is using computational power to speed up biology research.
But almost everyone already has an extremely powerful computer in their
pocket. A modern smartphone runs more than a thousand times faster than the Cray-2, one of the
top supercomputers of the 1980s. High-performance computing is one
of the cornerstones of Lawrence Livermore National Lab. Every three to
five years we'll get a new
number one machine. Next is El Capitan, bigger and badder than any other machine
that's on the planet Earth. El Capitan, the fastest supercomputer in the world.
About 1.7 exaflops. That means El Capitan can perform 1.7 quintillion calculations per second. Everything happens in a very short time span, but very complex physics going on.
El Capitan is about 20 times faster than the lab's previous supercomputer, Sierra, dedicated
in 2018.
What took days or weeks on Sierra now takes just hours on El Capitan.
El Capitan's unclassified companion system, Tuolumne, a 288 petaflop system using
the same components as El Capitan, is currently the world's 10th most powerful supercomputer.
It will be used for open science, including some of the team's drug discovery work.
So how does Tuolumne compare to a smartphone? The device in your pocket
operates at 2 to 2.5 teraflops, a measure of computing power. We're gonna
get a hundred petaflops of dedicated time just for biology and this probably
will be the largest computing resource for biology in the world, just for biology problems. A petaflop is one thousand times the power of one teraflop.
And with a bit of quick math, that means Felice and her team will have roughly fifty thousand
times the computing power that your cell phone has.
For dedication to biology, that's never been achieved before.
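The back-of-the-envelope arithmetic behind those comparisons, written out. The only inputs are the figures quoted above (the 2 to 2.5 teraflop phone estimate, the 100 petaflops of dedicated biology time, and El Capitan's 1.7 exaflops):

```python
# Quick check of the figures quoted above. FLOPS = floating-point operations
# per second; tera = 1e12, peta = 1e15, exa = 1e18.
PHONE_TERAFLOPS = 2.0         # low end of the 2-2.5 teraflop phone estimate
BIOLOGY_PETAFLOPS = 100       # dedicated computing time mentioned for biology
EL_CAPITAN_EXAFLOPS = 1.7

phone_flops = PHONE_TERAFLOPS * 1e12
biology_flops = BIOLOGY_PETAFLOPS * 1e15
el_capitan_flops = EL_CAPITAN_EXAFLOPS * 1e18

print(biology_flops / phone_flops)   # 50000.0 -> roughly 50,000 phones' worth
print(el_capitan_flops)              # 1.7e18 -> 1.7 quintillion operations per second
```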
But they can't simply plug in the computer and put their feet up.
These operations don't just happen automatically.
Part of the challenge of these types of problems is that it's not always easy to formulate the problem to be solved on a large computing system.
It can take time to think about how to structure the problem in such a way that it can be solved
with large computing resources.
In other words, you have to know exactly which puzzle piece you're looking for and give
the computer the parameters to find it.
And once you find it, you still have to see if it fits.
One of the current bottlenecks is that it still takes quite a long time once we actually do the computational modeling. We still have to go out and make and test
molecules with this. So what we're working on now is directly integrating
our laboratories with these computational models. So the computational
model can actually directly specify what experiment it needs to have done.
Sometimes that's to validate what we're doing or sometimes it's because the
model is uncertain on what the prediction should be. In that case what it can do is
it can say I really need more experimental data with this set of
molecules so can you go run those experiments for me? Then we can actually
have automated systems in the laboratory which actually
run those experiments, bring that data back to the computational modeling systems. That system can
update its models to improve their predictions and then we keep going. And we can have this iteration
then between computational modeling and experimental make-test cycles and have this positive feedback between those two
that allows us to actually converge
to solutions even faster.
We call that whole process active learning.
It's finding, making, and testing puzzle pieces.
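In rough code terms, that cycle looks something like the toy sketch below. The model, the lab, and every function in it are hypothetical placeholders standing in for real simulation codes, automated experiments, and the people who run them; the shape of the loop is what matters, not the details.

```python
import random

# Toy sketch of the active-learning cycle described above: the model proposes
# molecules, flags the ones it is unsure about, asks the automated lab to
# measure those, and folds the new data back in before the next round.
# Everything here is a placeholder, not Livermore's actual system.

class ToyModel:
    def __init__(self):
        self.known = {}                              # molecule -> measured value
    def propose(self, n=20):
        return [f"mol_{random.randrange(50)}" for _ in range(n)]
    def uncertainty(self, mol):
        return 0.0 if mol in self.known else 1.0     # unseen = uncertain
    def update(self, measurements):
        self.known.update(measurements)
    def score(self, mol):
        return self.known.get(mol, 0.0)

class ToyLab:
    def run_experiments(self, molecules):
        # Stand-in for robots making and assaying the requested molecules.
        return {m: random.random() for m in molecules}

def active_learning(model, lab, rounds=5):
    candidates = model.propose()
    for _ in range(rounds):
        uncertain = [m for m in candidates if model.uncertainty(m) > 0.5]
        if not uncertain:
            break                                     # confident enough to hand off
        model.update(lab.run_experiments(uncertain))  # experimental data comes back
        candidates = model.propose()                  # model re-plans with more data
    return sorted(candidates, key=model.score, reverse=True)

print(active_learning(ToyModel(), ToyLab())[:3])
```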
We still have lots of humans in that loop.
We don't have an AI system going out
and manufacturing molecules.
There are humans in the loop doing that, but the system is really specifying exactly what should be done.
It also does take expertise. I don't want to minimize my team. They are a great team,
and they have the knowledge that it takes to recognize that certain parts of the drug molecule
need interactions with certain parts of the protein molecule. And so having that intuition or education really can focus the simulations and the screening
that we do as well.
But even though Felice, Jim, and Jonathan have streamlined the machine learning process,
drug development can still come to a screeching halt during the second and third phases, clinical
trials. The longer you get down that pipeline
going from initial discovery phase
to actually clinical testing,
you end up finding failures, particularly around safety,
that can really slow down the whole pipeline.
So one of the premises of the computational platform
has been this idea that we want to try to evaluate these molecules upfront as early in the discovery process as
possible for all of these different criteria that go into making a good
drug. The idea is that if we can initially explore different parts of
chemical space that meet all of our design criteria then we will save some
time later on in terms of
having to go back and start over again.
Usually, what causes a drug to fail at the clinical level is a safety concern.
A lot of what we're trying to do is predict, number one, is whether this chemical will
bind and affect the activity of the protein target primarily, but then secondarily, is
it going to be safe
in the human body?
Small molecules can be associated with cardiotoxicity or liver toxicity, which typically
makes drug discovery fail later in the process as it gets closer to clinical trials.
So by using AI to screen these molecules before physical testing, Livermore is ensuring it doesn't
have to go back to the drawing board multiple times. That's a great advantage
and they're applying the same techniques to address other drug problems. Ones
you're probably all too familiar with. Common side effects may include headache,
dizziness, dry mouth, fatigue, upset stomach, increased heart rate...
That's what can make small molecule drug discovery so challenging.
There's a lot of things that we can ingest in the body that our body doesn't like.
It can make us have negative side effects.
If you're taking a drug and it's in your bloodstream, you want to make sure it hits
its protein target and it doesn't go everywhere else in your body.
This is where you would say you have side effects.
And side effects are where it shouldn't be going.
Like you might feel nauseous or you might have a rash
or something like that.
But you still want the drug to really work
in where it's designed to work.
We've been demonstrating this computational pipeline
with this selective histamine inhibitor,
a novel antihistamine designed not to cause drowsiness.
The idea is to design a molecule
that hits this histamine receptor target,
but doesn't hit some of these other receptors
that look like the target
that have this negative side effect of drowsiness.
That's just one example.
Often, which targets companies are working on
is kept tightly under lock and key to prevent competition.
But there's one protein that everyone in the drug development space knows about.
It's been called the undruggable target.
It's the RAS protein.
It's a protein that you actually need.
It actually initiates cell division.
But when it's mutated, it can get stuck in an on state where it
will just continually initiate cell division. And it's responsible for 30%
of all cancers. It's implicated as the triggering point in some of the worst
and most untreatable cancers like 90% of pancreatic cancer, gastrointestinal and
lung cancers. When a RAS protein mutates, it doesn't know when to stop creating more cells.
In this case, cancerous ones.
To make matters worse, some mutations can also create resistance to cancer treatments,
making the disease even harder to combat.
If we can actually find an effective drug that would target all of those proteins that are mutated,
then there's some vision that it would cure 30% of all human cancers.
That would be revolutionary.
You have to have an open mind that many things are possible.
And the beautiful thing about Lawrence Livermore is that if you really want to do something that is
within the mission of the lab you can do it. And they've partnered with other research
facilities that are just as passionate about developing a treatment. Working
with Frederick National Laboratory, Lawrence Livermore National Laboratory
has developed methods to model cancer cell interactions with proteins. Some of
that work has now led to work with a small company called BridgeBio.
They're a pharmaceutical company focused on addressing cancer.
Bridge is providing great chemists, great insight in the cancer space, and the ability
to push and get to a new drug entity.
Livermore is contributing the computational prowess of the Department of Energy labs to
show that we can develop new methods and new technologies. What that comes down to is the
people on both sides. It's the interaction at the human level. We have great experts in the computer
space. We have great experts in the chemistry space and the ability to communicate and actually
convince each other that we are able to do what we're able to do and produce new drugs.
We actually have two new molecular designs now that are coming from Felice Lightstone
and her group that are going into clinical trials right now.
And so that's very exciting.
So if this works, this will be transformational
for cancer treatment.
Shortening the time between drug discovery
and getting pills to patients is a lofty goal.
But Felice, Jonathan, Jim, and their incredible teams
are tackling it head-on.
It's something like 750 FDA approved drugs right now
that are available on the market.
What I would love to see is our ability to expand that pipeline of molecules and therapeutics
that can get towards FDA approval much more efficiently and more quickly and cheaply.
I'm hoping we will have a whole set of new molecules that we can bring to bear on different
disease targets.
By using state-of-the-art computing power and combining traditional chemistry and physics
with machine learning and AI, they're already making huge strides in achieving that goal.
In the last two years, we've had great success in being able to make a few drugs.
One was just FDA approved for the Phase I trials, and those will be starting up in the fall.
These are small molecule drugs, so they go through the standard stages of clinical trials.
That will take up to a couple years to have real feedback on those.
I'm optimistic that we'll have a lot more therapeutic tools
in the toolbox, so to speak, to treat various diseases
in the next five to 10 years.
We've come a long way from the willow tree.
And the folks at Lawrence Livermore
are confident that the lab's state-of-the-art computing
technologies will bring better, faster drugs
in the next few years.
Whether it's a headache, virus, genetic disease, or cancer, no challenge is too great.
Helping humanity. That's the priority.
Lawrence Livermore National Laboratory is opening its doors to a new wave of talent.
Whether you're a scientist, an IT professional, a welder, an administrative or business professional,
or an engineer,
Lawrence Livermore National Laboratory has an opportunity for you.
From enhancing national security to pioneering new energy sources and advancing scientific
frontiers, Lawrence Livermore National Laboratory is where you can make your mark on the world.
Lawrence Livermore National Laboratory's culture is rooted in collaboration, innovation, and the pursuit of excellence. We offer a work environment
that supports your professional growth and a benefits package that looks after
your well-being and future. Are you ready to contribute to work that matters?
Visit LLNL.gov forward slash careers to explore current job openings and learn
more about the application process.
Don't miss the chance to be a part of a mission-driven team working on projects that make the impossible
possible.
Visit LLNL.gov forward slash careers now to view the current job listings.
Remember that's LLNL.gov forward slash careers.
Your expertise could be the highlight
of our next podcast interview.
Don't wait, explore the possibilities today.
Thank you for tuning in to Big Ideas Lab.
If you loved what you heard, please let us know by leaving a rating and review.
And if you haven't already, don't forget to hit the follow or subscribe button in your
podcast app to keep up with our latest episodes.
Thanks for listening.