Big Ideas Lab - Drones
Episode Date: January 27, 2026

Drones, or autonomous sensing systems, step in where humans can't. At Lawrence Livermore National Laboratory, scientists are developing machines that can see, decide, and act in real time. In this episode, we explore how drones expand the frontier of possibility: scanning hidden pipelines underground, navigating dangerous terrain, and coordinating swarms across land, air, and water. These aren't just flying cameras. They're intelligence-gathering systems, force multipliers, and life-saving tools.

Guests featured (in order of appearance):
Brian Wihl, Associate Program Lead for Autonomous Sensors Program, LLNL
James Reimer, Mechanical Engineer for Autonomous Sensors Program, LLNL

--

Big Ideas Lab is a Mission.org original series. Executive Produced by Levi Hanusch. Script by Caroline Kidd. Sound Design, Music Edit and Mix by Matthew Powell. Story Editing by Levi Hanusch. Audio Engineering and Editing by Matthew Powell. Narrated by Matthew Powell. Video Production by Levi Hanusch.

Brought to you in partnership with Lawrence Livermore National Laboratory.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
The storm hits before anyone is ready.
Rain turns to torrents.
Torrents turn to rivers.
Floodwaters pull everything away with them.
Cars, trees, homes, entire lives swept away in minutes.
And for the rescuers who go in after them,
every second is a risk.
When disaster strikes or danger is present,
humans can't always go in first.
So another kind of responder takes the lead.
Self-driving vehicles advance slowly,
scanning for signs of life amidst the chaos
of debris-strewn streets.
Above, a self-piloted aircraft maps the damage
from the sky, spotting movement.
Beneath the surface, autonomous underwater systems
cut through currents, scanning hidden waterways
for survivors.
And together, they act
as one coordinated force, land, air, and water,
all increasing the chances of security and survival.
That's really rewarding to see how that can actually make a huge difference for a lot of lives.
This is the world of autonomous sensing at Lawrence Livermore National Laboratory,
where science advances quickly, adapts constantly, and technology navigates where humans can't go,
but drones can.
Welcome to the Big Ideas Lab,
your exploration inside Lawrence Livermore National Laboratory.
Hear untold stories, meet boundary-pushing pioneers,
and get unparalleled access inside the gates.
From national security challenges to computing revolutions,
discover the innovations that are shaping tomorrow today.
Autonomous sensing systems, or drones, gather data
and make decisions in real time without the need for manual human control.
They appear in everyday life,
hovering above baseball games, capturing aerial real estate pictures, and bringing amusement to concert
and festival crowds. But drones aren't just novelty gadgets. At Lawrence Livermore National Laboratory,
the Autonomous Sensing Program drives rapid drone innovation that pushes the limits of what technology
can achieve. The drones they create act as advanced threat detectors, scanning for natural and man-made dangers,
oftentimes at the national level.
And it all starts with sensing.
We do a lot of work with small unmanned systems, mostly on the sensing side.
That's Brian Wihl, the associate program lead for autonomous sensors.
We're marrying this capability of drones with very unique and exquisite sensors that we develop, oftentimes in-house.
Before a drone can act, it has to understand what it's seeing.
Sensors allow drones to scan beneath the ground, map terrain
through forests, and detect objects invisible to the human eye.
And to be mission-ready, they have to do this outside the lab,
under challenging conditions where failure isn't theoretical.
Lawrence Livermore tests each system to a technology readiness level, or TRL,
a measure of whether the drone is ready to perform in the real world.
It's getting these things to not only work, but work in a relatively reliable and consistent way.
You want people to trust it and you want it to be robust.
In one case, that readiness was tested far from controlled conditions.
We committed to going to this event before we even had the sensor built.
It was an advanced sensor created to detect and analyze radio frequency signals.
We went from a concept of a very complex RF sensor,
and then in about three months we had a fielded system that was at a high enough TRL to go test the thing.
The test environment?
A small boat in Palau,
halfway across the world after a monsoon.
We were literally on a small 15-foot flat-bottom boat
going across the Pacific after a monsoon had hit somewhere
and just crazy waves.
I was literally sitting on top of our Pelican cases
trying to keep them from going overboard, and I was nervous.
I was like, we just built this stuff.
I don't know if it's strong enough to survive this trip.
And I remember our program leader was with us
and he turned around and looked at me,
and I think he said to someone else,
oh, does he get boat sick?
And he's like, no, I think he's just terrified
that we're going to lose the equipment.
So we got out there, and the systems worked exactly how we wanted and expected.
It was great for the technology.
That's what readiness looks like.
And it's the standard Lawrence Livermore uses to turn complex sensing problems into working systems.
Every day we're tackling problems that haven't been solved before, which is really exciting.
James Reimer is a mechanical engineer in the autonomous sensors program, which develops sensors that scan, map, and interpret the world in real time.
A lot of the features we want to bring into the field are going to be advanced because they just haven't been done before.
That's something we really try to focus on.
It's something industry hasn't got yet that we think is valuable to the end user, and we want to deploy it into the field.
One of their most critical tools is a custom ground penetrating radar built in-house at Lawrence Livermore.
We use this for explosive hazard detection.
This is something we've tested at many different events, and we're training end users how to use that sensor as well.
This sensor is designed to see what humans can't and give operators the information they need to act quickly and safely.
The team is also working on technology that reaches underground, detecting hidden pipelines and abandoned oil wells.
We can use sensors like magnetometers.
So these search for the magnetic signature of metallic objects.
So with large pipes, we can pick these up pretty clearly.
And then with a lot of our sensor data, we can take that data in and then apply AI and machine
learning to that data and make detections where we think an object might be.
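The pipeline described here — take raw magnetometer readings, then let software flag where an object might be — can be illustrated with a toy sketch. This is not the Lab's actual software; the function name, units, and threshold are illustrative assumptions. The simplest version of the idea flags readings that deviate sharply from a robust estimate of the background field:

```python
import statistics

def detect_anomalies(readings_nt, z_thresh=5.0):
    """Return indices of magnetometer readings (in nanotesla) that stand out
    from the background field, a minimal stand-in for a learned detector.

    Uses median and median absolute deviation (MAD) so a buried pipe's
    spike doesn't distort the background estimate the way a mean would."""
    med = statistics.median(readings_nt)
    mad = statistics.median(abs(r - med) for r in readings_nt)
    scale = 1.4826 * mad or 1.0  # MAD -> stdev-equivalent; guard against zero
    return [i for i, r in enumerate(readings_nt)
            if abs(r - med) / scale > z_thresh]

# Simulated survey line: quiet background with one spike over a metal object.
survey = [50000 + (i % 3) for i in range(20)]
survey[10] = 50400  # buried pipe distorts the local field
print(detect_anomalies(survey))  # → [10]
```

A real system would replace this threshold rule with a model trained on labeled survey data, but the input/output shape — a stream of field measurements in, candidate object locations out — is the same.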
Customized drone sensors can detect threats more accurately and efficiently than
traditional methods, where human inspection may be slow, dangerous, or impossible.
The same can be true in time-critical situations such as search and rescue, using tools like
LiDAR, or light detection and ranging, which creates detailed 3D maps of terrain using laser pulses.
We can map the terrain and look for any
evidence of where there might be a man-made object, or, for search and rescue, if we're trying
to find whether somebody's car was left near a trail and they wandered off on a hiking trail
into the forest, we can maybe identify that car with the LiDAR underneath the trees.
The technology isn't limited to search and rescue. One of our jobs is to keep a finger on the
pulse of what the technology needs are for national security. For one of our sensor projects,
we had the sensor mounted onto an all-terrain ground vehicle. In just about six weeks,
we designed our whole autonomous system that could drop into a pretty full-size all-terrain vehicle
and then have a user operate the vehicle remotely, then also upload missions to that vehicle,
and the vehicle would drive by itself through different terrains, different conditions,
and it was very reliable.
It had a lot of safety features, and it just pretty much worked right away after only about six weeks of work.
We want to move these vehicles to work together collaboratively,
so almost have a swarm of these vehicles,
but then also have these vehicles operate missions
and not just be controlled by a remote controller,
but are actually moving autonomously.
Some of these operations could be dangerous,
and this takes our warfighters out of harm's way.
That's really rewarding to see
how that could actually make a huge difference
for a lot of lives of Americans.
Across land, air, and sea,
drones have become the fastest way
to put sensing where humans can't go.
But as these systems move from experiments to missions,
a new problem emerges.
One drone can sense a location.
One drone can cover a path.
One drone can respond to a threat.
But real-world missions rarely stay that small.
As the area grows, time compresses, conditions change.
Sensing stops being a single drone problem.
The solution?
A swarm.
A swarm isn't simply a collection of drones.
It's a single coordinated system.
Each drone working toward a common mission, together.
Some may scan for heat, others map terrain while another group detects signals.
Together they move as one, covering more ground and accomplishing missions no single drone could achieve.
In 2024, Lawrence Livermore's autonomous sensing team received federal approval to test swarms at a larger scale than ever.
We got the certificate of authorization from the FAA a couple of years ago and became, as far as we know, one of the first federal facilities to have the capability to fly 100 drones with one pilot in command for swarm exercises.
At this level, swarms stop being a research curiosity and start becoming an operational reality.
But testing swarms of this size requires a special environment.
We have a drone pen, a netted structure that allows us to do riskier testing and things that may not be allowed
in open airspace by FAA practices,
but are really important to what we're trying to do.
But scale changes the problem.
Once dozens or hundreds of drones are in the air,
it's no longer about what each one can do on its own.
Any one agent in that swarm might be able to do something.
But when you start putting them together
and they have complex interactions and start making decisions,
that swarm is going to have its own,
what we call, emergent behaviors
that are much more complex and sometimes unpredictable
compared to what a single agent can do.
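Emergent behavior of this kind shows up in even the simplest flocking models. The sketch below is a hypothetical one-dimensional toy, not the Lab's swarm code: each agent follows only two local rules — drift toward the others, push away when crowded — yet the group settles into a compact, spaced-out formation that neither rule specifies on its own.

```python
def step(positions, cohesion=0.1, min_sep=1.0, repulse=0.5):
    """One swarm update in 1D. Each agent applies two purely local rules:
    move toward the centroid of the other agents (cohesion) and push away
    from any neighbor closer than min_sep (separation)."""
    new_positions = []
    for i, x in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        velocity = cohesion * (sum(others) / len(others) - x)
        for p in others:
            if abs(p - x) < min_sep:
                velocity += repulse * (x - p)  # move apart when crowded
        new_positions.append(x + velocity)
    return new_positions

# Three agents start far apart; repeated local updates pull them into
# a loose cluster while separation keeps them from colliding.
swarm = [0.0, 10.0, 20.0]
for _ in range(60):
    swarm = step(swarm)
print(swarm)  # clustered near 10, a few units of spacing preserved
```

No agent is told "form a tight group with spacing": that behavior emerges from the interaction of the two rules, which is the small-scale version of why large swarms need testing — the collective behavior isn't written anywhere in the individual's code.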
Out of simple rules and shared information, something unexpected takes shape.
Instead of managing an individual aircraft, drone operators issue a mission.
And the swarm decides how to execute it.
What you really want to get to is this idea of force multiplication,
which is, okay, I can go do something else, or I can tell other things to do things at a bigger scale
and trust that they're going to go do it.
It's like an orchestra tuning itself on the fly.
Each drone follows simple rules, but together they create a symphony of motion.
You can apply it to almost any problem that could be dangerous or repetitive.
Some examples would be search and rescue.
It can be a huge force multiplier, having a lot of drones searching the same area with really high-quality sensors.
With something so time-critical, it could really help if you're trying to save
somebody's life. When you start putting them together, you start caring about how they interact
in a way that, again, allows them to do more than just one thing. What kind of information are they sharing
with each other? How are they coordinating between each other? When something happens to one,
what is the other one doing or deciding to do differently? Now it's very focused on the interaction
between those two things versus what their single capability is, because again, that's when you're
going to start getting more value of the swarm. But increasing drone numbers doesn't only multiply
impact. It multiplies complexity. And managing that complexity is part of the challenge.
When you scale up and do bigger tests, you need a lot more personnel out in the field. You need
to have more of a plan. If something goes wrong, it can take more time if you have to troubleshoot
across a lot of platforms rather than just a single platform. One of the challenges when you're
starting to build out your swarm is figuring out what kind of comms you want between the drones
and how the drones are communicating: whether each drone communicates with every other drone,
or they all talk back to a base station,
or another option would be to have a mothership drone
that's also flying in the swarm.
So that drone could be above or below the rest of your drones
and be talking to those drones with different comm links.
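The trade-off among those three options largely comes down to how the number of radio links grows with swarm size. A rough back-of-the-envelope model (the topology names are shorthand for this sketch, not Lab terminology):

```python
def link_count(n_drones, topology):
    """Radio links needed, a rough proxy for comms complexity.

    'mesh'       : every drone talks directly to every other drone
    'base'       : every drone talks only to a ground base station
    'mothership' : drones talk to one relay drone, which talks to the base
    """
    if topology == "mesh":
        return n_drones * (n_drones - 1) // 2  # grows quadratically
    if topology == "base":
        return n_drones                        # grows linearly
    if topology == "mothership":
        return (n_drones - 1) + 1  # n-1 uplinks to the relay, one downlink
    raise ValueError(f"unknown topology: {topology}")

for topo in ("mesh", "base", "mothership"):
    print(topo, link_count(100, topo))  # mesh 4950, base 100, mothership 100
```

For the 100-drone scale the team is authorized to fly, a full mesh needs 4,950 links while hub-and-spoke designs need 100; the base station and mothership come out the same in link count, and the difference is that the mothership hub flies with the swarm instead of sitting on the ground.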
At this scale, failure isn't surprising.
It's part of the equation.
The real question is whether a system and the team behind it
can absorb that failure and keep moving.
That's where the autonomous sensing program
at Lawrence Livermore thrives.
We are a very vertically integrated team
that has all the different types of science
and engineering skill sets that allows us to put a system
together from end to end.
They act as a single coordinated team
and they move at a pace
few can match.
We have people with skill sets that pretty much can do
everything we need and if we identify new ones
as new projects or new things change, we try
to find people to fill those out. And then
we also are pretty unique in the lab in what
we have from a physical resource standpoint. So our
building pretty much has everything
we need in it, including a machine shop, so that we actually cut metal and build things in our
own building. We don't farm it out to other parts of the lab, and we don't farm it out outside the
lab unless we need to. That allows them to test early and break things intentionally.
We can design it, we build it, we go out, we test it, we break it somewhat intentionally, right?
And then we come back and we do that all again. And the goal is do that as many times as you can,
as fast as you can, to start answering those unknowns. Always trying to critique your process
and improve it is huge in driving forward very fast, because you're always going to be improving
yourself. As technology continues to advance, drones may not just follow instructions. They could
collaborate, adapt, and make decisions together. They could share information instantly, respond to
unexpected dangers, and cover vast landscapes with precision. In moments like floods, fires, or conflict,
resilience keeps a mission alive when conditions collapse.
We take on things that are inherently risky.
We're really trying to take things that a lot of times we're like,
we don't know if we can do this or we don't know if we can do this with the constraints we have.
But let's start trying to do it as fast as we can.
This is what it takes to build drones that can sense, adapt, and operate together
under real conditions at scale in only a few months.
So when a mission spans land, air, and sea,
when environments are unstable,
and when people can't safely go first,
these drones are ready to go first.
Thank you for tuning in to Big Ideas Lab.
If you loved what you heard,
please let us know by leaving a rating and review.
And if you haven't already,
don't forget to hit the follow or subscribe button
in your podcast app to keep up with our latest episodes.
Thanks for listening.
