Planetary Radio: Space Exploration, Astronomy and Science - Space Policy Edition: How NASA remembers—and forgets
Episode Date: May 2, 2025
No one person knows how to build a spaceship. Dr. Janet Vertesi has seen this firsthand. She's spent years embedded in NASA science teams, not as a participant, but as an observer. She's a sociologist who studies the team dynamics of NASA missions. She is alarmed at the prospect of indiscriminate firings at the agency, and at the potential loss of institutional knowledge that won't easily be rebuilt. Discover more at: https://www.planetary.org/planetary-radio/spe-janet-vertesi-on-threats-to-nasas-group-brain
Transcript
Hello and welcome to the Space Policy edition of Planetary Radio.
I'm Casey Dreier, the chief of space policy here at the Planetary Society. Good to be here
this month. I am very excited about our guest, Dr. Janet Vertesi, associate
professor of sociology at Princeton University, who for the last two decades has studied not just NASA's robotic spacecraft,
but the teams that enable them.
She's embedded herself and watched the various sociological engagements and interactions
and what makes these teams tick as they explore the solar system
via their robotic emissaries in a sense. Her insights on teamwork
and how knowledge is passed forward are reflected in two books that she's written. The first was
called Seeing Like a Rover: How Robots, Teams, and Images Craft Knowledge of Mars, and her latest
book, which came out a few years ago, is Shaping Science: Organizations, Decisions, and Culture on NASA's Teams.
They are truly unique resources in terms of understanding
in a sense how the actual process of exploration
occurs with robotic scientific spacecraft.
It's truly fascinating.
But that's not even the exact reason
she's on the show this month. That's part of it. Why she's here is an op-ed that she published
recently, which is linked to in our show notes. It's called Invigorating the American Space Sector
Requires Working with NASA, Not Against It. The period in which I'm recording this
is a period of incredible uncertainty for NASA itself,
what its future is going to be,
what its budgets are going to look like,
and how many people it will even have.
At this point, again, in April of 2025,
NASA narrowly avoided laying off over a thousand probationary workers,
who were basically young, early-career individuals and people who had just been promoted.
NASA has, though, lost nearly 5% of its workforce through voluntary buyouts offered by the current
administration.
And while those departures are voluntary, a 5% reduction in NASA's workforce actually
places the agency at essentially its smallest civil servant workforce level since 1960.
Now, what the right level of workforce is isn't necessarily the point of this discussion.
What is the point, and what her article argues,
is that institutions themselves, particularly institutions
like NASA that are charged with doing something,
let's be honest, kind of weird, right?
Charged with exploring the universe,
with sending people into space, with pushing
the boundaries of capability of these weird one-off technologies
to land on Mars or to go to Jupiter
or to build a super cold spacecraft that
can look back towards the early stages of the universe.
These aren't things that you just pick up in a book;
you figure out how to do them by doing them.
In a sense, the larger institutional workforce of NASA
shares this broad knowledge.
It's too complex and too big for any one brain.
And Dr. Vertesi's work is basically this study
and her op-ed makes this argument
that if you don't preserve this institutional knowledge
through the act of doing things and working together in teams, that knowledge can
die.
And it does die.
We often think that once a genie is out of the bottle in knowledge
for humanity, there's no putting it back in.
But she actually argues it has happened many times.
And particularly in civilization's deep history,
we forget things all the time.
Knowledge is an active process.
And if there's this concept of a broader,
in a sense, meta-brain of group capability
and group awareness and group expertise,
then randomly losing people, driving them out of the workforce,
or applying arbitrary levels of workforce cuts
means you're poking holes in that group brain and weakening the ability to institutionalize
and to maintain this hard-won knowledge that we have for space exploration.
Thinking about this, I came away from this discussion
with the idea that NASA in a sense functions
as a national strategic reserve of knowledge
for how to get to and operate in space successfully.
The conversation I'm about to have with Dr. Vertesi
touches on her article, which I again
recommend you read, and the importance of actively maintaining institutional knowledge,
particularly for activities like going into space, because of its complexity and because of
the unforgiving nature of space itself.
And it's a really important point to consider as we are in a situation where we are trying to reduce the capabilities, or at least reduce the workforce levels, of NASA.
If that has to happen, how can you do it in a way that preserves knowledge?
And as she'll point out, it's not enough to assume that the individuals laid off will go and found new companies or bring that full knowledge with them. They'll bring bits and pieces, but unless there's some broader effort,
there's no guarantee that that knowledge will take root
and be maintained over time.
Before we get to that interview,
the Planetary Society is a nonprofit,
member-supported organization.
That means we rely on individuals to be members and support us financially so that we can do all the great work that we do here at the organization: all the outreach, all the education, obviously organizing our members to engage with their political systems and argue for space exploration and space science, and bringing you shows like this and the weekly show Planetary Radio.
Membership starts at just $4 a month.
You can pay more.
I encourage you to do so.
Again, we literally depend on our members.
We do not take corporate money, and we do not
take government money.
We exist solely by, and our independence
is enabled by, members.
Hopefully that's you, the person I'm talking to right now.
But if not, consider joining us.
It truly does make a difference. That's planetary.org slash join. I will add that, as you hear this
when this show comes out, we are also fundraising specifically for
our advocacy and policy program. That's at planetary.org slash donate. You can see a section there that directly funds the work that my colleague Jack
Kiraly and I do in Washington, D.C., and all the outreach and analysis and the unique capabilities
that we provide to this broader space industry.
That's planetary.org slash donate or planetary.org slash join.
Please consider supporting us. And now, joining me is Dr. Janet
Vertesi. Janet Vertesi, thank you for joining me this month on the Space Policy Edition.
Of course, it's a pleasure. Thanks, Casey. I have linked in the show notes to the great article
that you wrote about the threats to NASA's workforce and the institutional knowledge that could be lost with it. And something that you opened that article with really stuck out to me,
which was you, and I'll just kind of quote you here, you say,
it is possible, even easy, to systematically forget or lose entire swaths of technical or scientific prowess.
Why does that happen? Has it happened in our past, at least in modern era?
And what do we need to do to preserve, in a sense,
and sustain human knowledge over time?
It's a good question.
Human knowledge, we tend to think
of as belonging to individual people,
or as sitting somehow in our heads, as something that could be sort of
extractable from an individual.
But knowledge of the type that it takes to
do these enormous technical projects is a group endeavor and it's a group collective
phenomenon. So we talk about this as a question of distributed cognition, for instance, that
it doesn't take just one person to put together a Boeing 747. I mean, there's many, many
different people involved, many different groups, many different clusters of knowledge that have to come together.
And then a lot of knowledge that we don't ever write down, that escapes documentation.
We call that tacit knowledge in sociology.
It means the unspoken or quiet knowledge.
If you've ever tried to explain to a child how to ride a bicycle using words or how to
brush their teeth using words, you'd know that that goes awry very quickly and it's because there's so much more that
we don't express verbally.
So when you put those two things together, one, that knowledge is a property of groups
and two, that a lot of knowledge about technical know-how cannot actually be expressed in documentation,
you see what happens when you have these large technical projects that suddenly lose institutional
backing. Everybody who knows what they're doing suddenly escapes the project. And it means that
you've lost a tremendous amount of technical knowledge, potentially absolutely crippling
technical knowledge. You've lost the ability to send something into space, to put something
effectively or safely in the air, to manage a railroad, for instance. I mean, if you trouble the institutions in which individuals work collectively and transmit that tacit
knowledge from one generation to the next and are able to bring their knowledge together
across different clusters, then you do actually un-invent technologies. So this is known as the
tacit un-invention of a technology because you've lost the stories,
the know-how, the mentorship, and also the collective knowledge that it takes to make
a project work.
And have there been examples of that that we've seen in history, in recent history?
I'm trying to think about, I mean, there's a lot of knowledge that we lose, right?
Because people are not going to...
I guess we wouldn't know if we've lost it.
I guess... Yeah, it's hard to know you've lost it.
I mean...
In your article, I guess I'm thinking about Apollo.
Yeah.
And returning to the moon maybe as an example
that's relevant to this discussion.
Yeah, I think the Apollo case is a really interesting one
because America spent a ton of money getting to the moon.
I mean, it was basically like blank checks
to get Apollo off the ground
and get it to
the moon successfully over successive waves of missions.
So what was interesting about Apollo wasn't just the technology, it was the institution,
the organization.
What NASA did, and people like James Webb, the original James Webb, talked about this explicitly,
what NASA did was set up an organization in which people worked on
successive Apollo's to stepwise make it to the moon. And they transferred their knowledge by
working, like someone would work on Apollo 2, and then another group would work on Apollo 3,
and then that group from Apollo 2 would also transfer to 4 and 5. And so you had this kind of
infusion of knowledge over the long term
between one group and the next. And what that meant is each stage built successively on
the prior technology demonstration as well as with the prior groups of people. So you
had that knowledge go over time.
And then the other thing that someone like James Webb said is what was interesting about
these big projects at NASA is like, you can't do this with a small group.
You have to do this at scale.
It is such a difficult and large thing to do.
I mean, space is there to kill us, let's remind ourselves.
So if we really want to get humans to some place as inhospitable and difficult as the moon,
we have to bring lots of organizations together in ways that they hadn't really worked before. And so what you end up with is these large
organizations that extend out between the public and the private sector, but at
the same time a group of people that work together quite concertedly over, you
know, 10 or 12 years in order to make something like Apollo happen. And so as
an organizational sociologist who studies technology in high-tech spaces,
this is a really fascinating kind of organization because it means that they
maintain and build on all that tacit knowledge. They can work with
distributed cognition to make the whole, you know, apparatus come together and
then it's not lost until Nixon says, that's it, we're done.
We're spending too much on the Vietnam War, it's 1970, the Nixon Doctrine comes in and says,
okay NASA, you don't get blank checks anymore. Now you're subject to the same style of funding
as many of these other agencies. You have to go to Congress every year, every two years,
and present a budget that has to be debated. This money isn't just coming out of nowhere anymore.
And that's when, as we've seen in your work, for instance,
Casey, we've seen massive withdrawal of funds from NASA.
I mean, the budget crashed by two-thirds, right?
And that made it so the people who worked on Apollo
had to leave, and that knowledge left with them.
And I think the idea was, look, it's great,
we've built this capability,
now it'll go out into the private sector,
or it'll go out to re-infuse more NASA projects,
but it didn't.
It didn't because you can write down
how to build a spacecraft, but you need that team,
you need that longevity, you need that tacit knowledge,
and you need that group know-how.
And without that, we literally lost the ability to go to the moon.
We basically forgot how to do that collectively.
And as time went on, as those people came to the ends of their careers, as those people
passed away, that knowledge died with them and that group knowledge died with them.
And so that's a fear I have when I look at the kind of funding scenarios for NASA right
now is we're facing a similar moment as we faced in 1970.
The idea that like, oh, it's okay, we're spending too much on this right now.
It'd be much better if we sent most of these people to the private sector and they'll keep
doing the good work there.
The problem is if you de-institutionalize those people, if you break up the band, so
to speak, they don't play together again. And they might play different music, but it
will not be the same thing. And in many cases, we may lose the ability to do the things that
we now do so expertly in space and do the best in the world in space.
There's like five things I want to follow up on from that, Janet.
Do it. I'd like to hear it.
Yes.
I mean, the first is just that I want to talk a little bit more about the idea
of group knowledge, because I think it seems counterintuitive, or just maybe not intuitive,
for how humans tend to think about other humans. There's this idea of, like,
you know, the singular genius or the individual who is responsible for something, and it's all in
someone's brain, or maybe, in a more modern parlance, some captain of industry who
can come in and marshal and, through kind of some Nietzschean will to power, shape the world to
their whims. But this isn't that. This is saying that knowledge exists distributed, but also in the interactions themselves, which is
maybe some interesting epistemological challenge to my understanding of this. So is it that the dynamics
themselves, in a sense, have some emergent knowledge property come out of them?
Well, I think I'm not saying that humans don't have their own brains, or that they don't
think for themselves, or that they don't have knowledge; I mean, they do.
But we also have this other property which is that humans work collectively in groups
and in teams.
And in teams, humans can do way more than they can do alone.
I think culturally, especially in the West, especially in America, we have this fantasy
or ideal of the singular cowboy who can do all the things or the genius who's going to
be able to do it all themselves.
But when you actually look at the history of science and technology, it was never a
single person.
They often had wives who were assisting them in the laboratory.
They had whole teams of laboratory assistants.
They were interacting with other scientists. They were working with other people's tools and technologies. You're never doing
it all alone. And that's okay. I mean, humans are also an interdependent group species.
We have ways of organizing ourselves. That's another thing humans do as primates, right?
I think the other thing that's interesting is, you know, I've done a lot of work with
NASA teams for the last 20 years, and my most recent book, Shaping
Science, looks at how different teams organize themselves in order to do their science on
Mars or at Saturn. And one of the things I found that was so provocative in that book
was that I was looking at these two teams that were concurrent, these two exploration
teams that were happening concurrently, and they were happening at many of the same institutions, and they even involved the same
people.
And those people behaved differently depending on the structure and the organization of the
group that they were in.
And like literally, you could go from one meeting in the morning with somebody and they
behave one way, and then you see them at another meeting in the afternoon, they behave entirely
differently.
So humans are also really good at code switching as they move from one group to another.
But this really helps demonstrate that organizations and teams are also super fundamental to how humans operate.
And if that's kind of a bit of dark matter in the way that we usually talk about knowledge,
it needs to come to the forefront as we try to think about what knowledge needs to be sustained into the future.
There may be some knowledge that we're ready to leave behind,
but there's some knowledge that is technically bound up in, in my understanding, the continued American project
that needs to be respected for the kind of group and collective knowledge that it is. And that's my concern.
It strikes me that the knowledge is not just
distributed in a group. But as you kind of point out there, it's in
the culture of the group itself and the ongoing interactions
between those group members. So, you know, I guess
the collective brains together know something that, if they're distributed,
as you said, as if they kind of go off to the winds,
they don't fully capture.
No, it's impossible to capture that. It's impossible, absolutely.
Because it's almost like a domain error, right? You need the group to reflect it, not the individual.
Yeah, absolutely.
And even if you, you know, go off into the wilds and you try to document everything, document your code or
whatever, there will always be stuff you forget.
There's always the stories.
This is the other thing, working with spacecraft teams that have to go a really long way.
For instance, there's a group trying to build an interstellar probe.
We've talked a lot about this problem of knowledge transmission because the people who are going
to build the thing are not going to be the people who fly it or the people who do research with it because it's going to take 50 years to get there.
Those people will be retired, they may be dead, you know?
And so, we had to think carefully about building an organizational culture
that would transmit those stories.
Oh, you know, this one time this error happened and we figured out how to fix it this way.
And because it happened 25 years ago and the person who trained you happened to tell you that over lunch one
day, you're able to fix it later on down the line.
And we've seen that with projects like the Mars Exploration Rovers, we've seen that
with Cassini, we saw that with Galileo, like these longer lived missions, that transmission
of knowledge, of group knowledge over the generations and as part of the collective group is absolutely crucial.
Otherwise, you can't fly it. You can't operate it. You've inherited a thing that you do not understand.
Is it possible conceptually to capture all that information or is there something that is just fundamentally distinct about certain types of
experiential knowledge that cannot be codified into symbols? I'm not sure that there's something
specific that characterizes that type of knowledge as different. It has to do with what we think is
relevant and also with problems of interpretation. Because we all belong to the same culture, we share in communication the same cultural
substrate.
So, if I say brush your teeth, you know what I mean, right?
As opposed to, and you know to use a toothbrush and not a hairbrush.
But you could imagine that 500 years in the future someone finds instructions that are
like brush your teeth and they're like what?
Because they don't share the same cultural framing or background.
So even the way that we might write down the most considered set of instructions is just
not going to include that.
And look at also the way that Apollo did everything with the kinds of computers they had in those
days.
Right?
The people that are around building computers now actually are not
even remotely trained on anything like those systems. This is like the problem with talking
to Voyager, and it's still commanded in Fortran 77, and they're like, you know, raise your
hand if you still remember machine assembly language. Like, they don't even teach that in
computer science classes anymore. My students are just, you know, they come in, they're like,
yes, I know Python. You can't talk to it that way. So a lot of the knowledge is so situational, is local to the
organization, is not necessarily capturable. The cultural context shifts, so you don't have
an ability to interpret that without anyone to sort of serve as a guide. And then because you
don't know what's relevant, you don't totally know everything to write down. It's only in the moment that someone would be like,
oh, remember that one thing that happened 25 years ago
and it was this bit got flipped or something.
I mean, that's why we find
so much tacit knowledge: it's the stuff we can't express.
Right, I guess it's this kind of concept
of not just intuition, but knowing,
as you point out, this cultural context,
but even then you're always going to have to make some judgment
about what to write down and what not to.
Sure.
And I guess we're probably really bad
about understanding what is obvious to others
versus our own experience.
It makes me think of a great book
called On Food and Cooking by Harold McGee,
and it contains
throughout it these, like, first known recipes of various types of things, you know, going back to the Roman era or even the 1500s, and a lot of the recipes are like, add enough of
this ingredient until it looks right. Yeah, basically, yeah. Okay, they wrote it down, but I have no idea
what looks right. It probably seemed so obvious to them, they wouldn't even bother to write it down,
particularly if there's a cost to recording information,
not even necessarily in the physical medium these days, but the time of doing it.
Yeah.
If you just don't know, you can always fractalize down to details about what to include,
and then suddenly it's a pointless document, millions of pages long,
telling you how to walk slowly towards the machine
to begin with.
Exactly.
So kind of going back to groups,
this issue, or the idea, of group knowledge
that is sustained, how can that be preserved smartly, then?
I mean, I imagine it's a more complex relationship
between just add more people or keep the same number of people, right? There's some sort
of application of it that doesn't a priori exclude the idea of reducing the number of
people, I would assume, right?
Yeah, this is why we build institutions, Casey. We build institutions and organizations to build group knowledge and then sustain
it and transmit it.
And when you deeply trouble that institution, when you make massive cuts to the personnel
in that institution, then you threaten the ability of the institution to have any continuity
in keeping that knowledge alive.
That's just probably sociology of organizations 101.
So, are there best practices? For sure.
I mean, we think a lot about knowledge management in organizational studies and theory.
We think a lot about that in terms of archiving, but we also think about that in terms of mentorship
and so on.
In many ways, a lot of this is fracturing because of the way the labor force has fractured
in the last 20 years and moved more towards contracting,
but also moved more towards the kind of idea that you don't stay in an organization very long anymore.
Young people, they have this idea that you move out to move up. Many of the hierarchies are broken,
so you work in one place for a year and a half and then you get a job at the next rung up,
but at a different company and then you work there for a year and a half and then so on and you kind of put your career together across many different places and spaces
and you sign an NDA everywhere you go so you can't really share that information but you're moving
around this networked community within your field. One of the problems we've had with that is that it
challenges our ability to have knowledge sustainability within an organization.
It's hard then to mentor somebody on how a particular thing works here and then have
them leave.
Now, they may always come back.
They may come back at a different rung of the organization and use that knowledge, but
it's made for some challenges in continuity.
And then I would say also when a company is bought out or when
everybody, like the entire group of leadership retires at the same time or if you have a
competitor come in and there's mass departures at the same time or randomly, that will really
threaten the remaining threads of how knowledge is communicated and passed down.
You mentioned something that resonated with me, which is the idea of mentorship. Mm-hmm.
And again, I just keep coming back to this theme of
knowledge is basically a product of interpersonal
engagements. And again, you've studied this very closely at a, I'd say,
is it a small scale?
I guess, I don't know, like the range of teams that sociologists get.
We call it the mesoscale.
Yeah, within NASA mission teams.
How does that effectively address some of these?
Is it just purely by watching what someone does, and then through that you're taking
in all of this non-verbal, non-written-down information in a more whole way,
because we're using multiple senses? And maybe the extension of that
is: is that something that's possible, or degraded, through remote work? Does this require in-person
presence? Yeah, there's a lot of challenges associated with remote work. I've given several
talks about this as well and they're well-known challenges. So I'll mention that we've studied remote
work for 40 years. So we know what makes it work and we know what makes it not work. And
we studied it also from within tech companies that were outsourcing newly in the 1980s and
the 1990s to places like India and so on. And then we were able to implement best practices
in some of those units, right, which was great.
And then when the pandemic happened, everybody took every best practice and flipped it on
its head and did the exact opposite.
So in fact, it broke a lot of teams as opposed to being able to produce continuity.
And that doesn't mean remote work can't work.
It just means that the kind of remote work that we implemented then was dysfunctional
and was dysfunctional in ways that, like, we could have told you in advance,
and that many of us were trying to say something about in the moment.
So yes, remote work can be a challenge to this.
And we've also studied that, like how remote teams usually
get together on a cadence.
They have meetings in person.
They go visit each other's work sites to get a sense of the common ground
for communication
and the way that an organization functions and how they make decisions there.
And that's really the glue that holds these organizations together even when they work
remotely for long periods of time.
It's not even necessarily a question of quantifiable knowledge that's transferred.
It's ways to approach problems.
So the best kind of mentorship I've seen happens
in teams as they're trying to solve a problem together. That's where the tacit knowledge
really comes out. So if you're in an engineering team, you're all about solving problems. You've
got five people or eight people and you're all trying to put a system together and you've got
your different subsystems and you're trying to figure out how they're going to piece together
as like puzzle pieces. And you've got challenges, you've got problems, is there enough power, is there enough data,
is there enough, you know?
And it's in that group work that people learn not only how we solve problems around here,
but they also learn or pick up the, oh yeah, well last time we did it this way, or oh hey,
this is kind of an unusual piece of equipment, let me tell you about it, right? Those are the kinds of things that happen when you're working on shared tasks. So it isn't so
much like my mentor took me for a coffee today and he told me XYZ, and now my knowledge has
been transferred. You know, it's more that we are working together in a group and the mentorship
is in the task and the group culture is in the task. Every time we get together and we try to
build this thing or we try to fly that thing or we try to come up with a solution to a problem, we're producing and we're engaging in
that kind of group knowledge. It strikes me as the difference between a participatory versus passive
relationship to this. And that's why it's not like codifiable. It's so situated, it's so in the
moment. It's quite ephemeral and it's
really personal and it's interpersonal. So that makes it the kind of thing that we culturally
haven't spent a lot of time being attuned to. And in fact, we often pooh-pooh it. We're
like, well, that's just sort of messy social science stuff, you know. But that is actually
the human glue that enables these big projects to succeed. And I think that's also what attracts me to places like NASA.
These are people doing complicated, incredibly complex things, things that have never been
done before.
You know, JPL's motto is dare mighty things together for a reason.
They're doing mighty and difficult and, you know, first time things and they're doing
it in teams.
And the teams have to be optimized for precisely approaching those challenges.
And that means that they're not coming to it entirely afresh.
They're coming to it with the experience of 60 years of meeting similar kinds of challenges
and building on that knowledge together to approach the next challenge.
And that, I think, is what makes a place like NASA
so fascinating for someone like me,
is because that social messy stuff
is actually what makes these things fly.
Is there a flip side to this at NASA?
It seems, and I have in my head this image
of taking this big group brain
and then just poking random holes in it as people depart.
And that's like the unstrategic outcome, right? Then like the
group brain is just missing pieces whether they were important or not to that institutional transfer.
There wasn't a way to evaluate and ensure that you had a functioning brain at the end of it.
So, you said... Is that too weird of a visual?
No, I think you're... I'm trying to say... Yeah, you're right. But here's what makes it even more
devastating. Okay, so first of all, 30,000 employees at NASA.
Google has 180,000.
So let's get real about the scale we're talking about here.
I mean, NASA itself, even at the peak of Apollo, had only about 36,000.
Oh, great.
Right?
So what is it now?
So I think NASA is around 17,000, 18,000.
18,000.
So Google is 10 times larger than NASA. And how many things has it landed on
the moon? So you know, this is a small organization for the weight of the things it is doing.
Second, as you well know, Casey, NASA is not a singular entity. NASA from the beginning
had competition built into it through the distribution of various centers.
So there's Marshall that does more propulsion.
There's Houston, which does the human spaceflight.
There's launch capabilities in Florida, and there's robotics out at JPL, and there's
the wind tunnel at Ames.
I could go on, but there's local knowledge specific, talk about distributed cognition, specific to different
parts of a spacecraft system.
And that ensures, when NASA spends money, first of all, that there's competition so
that you can keep the prices relatively low, reminding us that these are all bespoke objects
that they're building, but it also enables us to spread the money from the federal government,
which is not a lot of money, to lots of different places around the country.
That was purposeful in terms of how NASA was set up.
It is already starved for people to try to do the enormity of the things that it's trying
to do.
It's also starved for money for the enormity of the things it's trying to do.
The budget we're talking about at NASA, this is like a teeny fraction of a penny
off of every, you know, US tax dollar.
And when you look at something like planetary science,
it's even smaller than that.
And the budgets are, you know,
going up and down and up and down all the time,
mostly down.
Sometimes they're up a little bit,
but they're not up a lot, right?
But you know, we're way down from where we were
under Apollo and even under Viking.
And if they do go up, maybe the money shows up six months into your fiscal year.
And it also shows up at the wrong time. So there is a well-established development curve
for the way we build new technologies. It's the technology development curve. It kind of looks
a bit like a wave where it goes down for a bit, and that's where you put a lot of money into it in R&D. And then it swoops up and that's where you
get all the revenue because it all works out in the end. And NASA can't do that because
it can't go into the red. It can't have a slush fund lying around for R&D. You know,
private companies, if they start going over on R&D, they can start moving money over from
different places. They can charge commercial satellites a little bit more than they would otherwise in
order to have the money around for a particular project. NASA as a federal agency cannot do that.
It cannot move the money that way. So when you restrict funds and you restrict personnel,
and then you start poking holes in these institutions, which are already pretty
small and fragile for the kind of work that they're trying to do, you have tremendous fragility and loss of knowledge that's basically on
the line.
And you might think, oh, well, we're just making it more efficient.
We're just making it more efficient because we're getting rid of the people.
But actually, in my research in the history of NASA and the history of these missions,
that makes it less efficient because you don't have the resources to get the job done.
And so you end up trying to sort of rely on other factors or other things or have gaps
in your knowledge, or rely on, like, you know, a free instrument from the French or something
in order to, you know, get you through these impossible budget caps.
And then it turns out, oh, wait a minute, the French are late.
So we slipped the Mars launch window. So it's going to be an extra two years and however many hundred million
dollars that is, right?
So it turns out when you lose the people and you starve these institutions of funding,
you actually cripple their capability to behave efficiently.
Well, again, it just strikes me that if the knowledge and the knowledge transfer process require
interaction between people, then if you want that knowledge transfer to happen, and
if you want them to be able to tackle these really difficult problems, you basically need
to give them the time to interact with each other.
You need to give them, yeah, people time.
Yeah.
And you can't shortcut that.
You can't make that cheaper.
And the cost of labor is just always...
People are expensive.
Yeah.
Let me give you a good example of this, Casey.
So, the Psyche mission, for instance, which had a launch delay that was not related
to this particular problem.
I sat on the Psyche IRB, and Psyche was one of those missions that decided to go with
a commercial satellite bus from a small organization, Maxar, that at the time was producing commercial satellites.
They were trying to get them ready for it.
They wanted to move into the deep space market.
This was an opportunity to work with JPL to prepare their satellites for deep space, which
is as your listeners may or may not know, outside of Earth orbit is like a really different
context.
Just because you get something into Earth orbit, it says nothing about what's going
to happen once you get farther out from Earth and once you're into deep space. So it actually
requires quite a lot of work. And the PI of that mission, Lindy Elkins-Tanton, and her project
manager spent a ton of time going back and forth between JPL in Pasadena and where Maxar was based
in the Bay Area, back and forth, making
sure that those groups came to visit JPL and the JPLers went to visit them.
They had such a tight team.
That group really understood each other.
The problem with Psyche had nothing to do with that.
When people first came in, they were like, well, we've seen this problem before that
these are two different organizations and they just aren't talking to each other. No, no, no. They put in the
time and the effort. The other thing I will mention about that is that is expensive. It
costs money to go visit. It costs money to have people that are really spending the time
to make sure the pieces fit together. Well, however, it only costs that money on the front
end. Because once you've invested in those relationships and that knowledge transfer, the knowledge
flows.
And then you're way better equipped later on down the line to fix problems as they come
up.
And they did come up for Psyche.
And because that relationship was so tight, they were able to fix those problems.
But if they hadn't spent the money upfront on that integration between those two organizations,
they wouldn't have been able to do it,
and they wouldn't have been able to solve the problems. This is important to this question about knowledge
and knowledge transfer because when we talk about wanting to use money associated with NASA to
invigorate a private space flight sector, the way to do it is to do what Psyche did with Maxar.
It's to say, okay, let's get a really capable team at a NASA center, could
be Goddard, could be Langley or whatever, and get them working really closely with a
private space provider because it's in that relationship that the knowledge will transfer
and they'll be able to do something really exciting.
Getting rid of people at NASA who know how to do the things and assuming they'll just
find their way to Blue Origin or Intuitive Machines or something and do it there, that's
not how the knowledge works.
That's not how knowledge transfer works.
In fact, you'll lose those capabilities as opposed to retaining them.
Another great example is when Langley stepped in and helped Intuitive Machines when it turned
out their landing mechanism
was failing as they were trying to land on the moon.
It was the relationship between Intuitive Machines and the NASA center that enabled
that mission to make it down to the surface.
I'm not saying it's private versus public because actually at NASA it's always been
both.
If you're trying to think about how you do a knowledge infusion from a public organization
to a private one, it's not through these random cuts to the public one in the hopes that somehow
the private will pick it up.
You have to do it through extended collaboration between two organizations and institutions
to be sure that knowledge transfers.
We'll be right back with the rest of our space policy edition of Planetary Radio after this
short break.
Greetings, Bill Nye here.
The U.S. Congress approves NASA's annual budget and with your support, we promote missions
to space by keeping every member of Congress and their staff informed about the benefits of a robust space program.
We want Congress to know that space exploration ensures our nation's goals in workforce technology,
international relations, and space science.
Unfortunately, important missions are being delayed, some indefinitely.
That's where you come in.
Join our mission as a space advocate by making a gift today.
Right now when you donate, your gift will be matched up to $75,000
thanks to a generous Planetary Society member.
With your support, we can make sure every representative and Senator in DC
understands why NASA is a critical part of US national policy.
With the challenges NASA is facing, we need to make this investment today.
So make your gift at planetary.org slash take action.
Thank you.
I'm reminded of how the Europa Clipper team discovered that they had an issue with the transistors.
Yes, with the MOSFETs.
Yeah, they were basically at a conference, randomly.
Someone just came up to the engineer and said, hey, you
know, we're having this issue.
Are you aware of it?
You may want to check this out.
Yeah.
Well, and actually the MOSFETs are a great example because, you know, that could have
sunk the mission.
I mean, there were hundreds of those things,
right? It was just like, how do we solve this problem? And it turned out
that because the project manager had friends and colleagues who worked at other NASA centers
and at private space providers and at private companies, like they worked together across
those relationships to figure out the problem.
And that solved it for far less money.
I mean, I can't even imagine how much more expensive it would have been if there had
been a launch slip accordingly.
Because there was knowledge transfer, there were relationships, there was organizational
connection there, that knowledge could flow and the problem could be solved.
And these are the stories I think are so fascinating about NASA.
It really is about how people work together to solve really hard problems.
We keep thinking it's just about the kind of whiz-bang rockets and stuff,
but actually we forget that there's people and organizations and cultures
and relationships that are behind every single one of those things.
And if those relations are going awry,
you're going to have trouble with the technology as well.
I mean, another way to think of this is:
if the government was trying to save money by preventing
their experts from going to conferences,
maybe they would have saved a few thousand dollars
on that conference travel.
And then they would have lost a $5 billion spacecraft
when it arrived at Jupiter.
Exactly.
And that relationship wouldn't have even been obvious in retrospect.
We wouldn't have even known that was the opportunity to fix it.
We had this problem on Europa Clipper where I was an embedded social scientist and member
of the team. And very early on, in order to save money, Europa Clipper was asked to like not meet in person for a year.
This is at a time when they have to be doing a ton of thoughtful integrative work thinking about
how are the pieces of the spacecraft going to fit together. They had to be doing the relationship
building. They had to be doing the knowledge transfer. That was where the mentorship needed
to happen. That was where the stories of, oh, we did it this way on Galileo, and so this is going
to be helpful.
That's where all of that knowledge transfer needed to happen.
And to be told then, oh, you don't get to meet, meant that it couldn't happen.
And we struggled to figure out how do we replace those contexts.
Because also at the time, under austerity restrictions, scientists weren't allowed to go to scientific meetings, right? Like, they were showing up at the AGU on their own dime, because they were restricted from travel, because they believed it was so important to be there to make sure that this work, this human, social, collective, interactive knowledge work, got done. And again, it's these sort of short-sighted things,
like, you know, penny-wise and pound-foolish,
where it looked like it was going to save so much money,
and in the long term,
missing that opportunity to build those relationships
cost you.
I've been really obsessed recently
about what is easily measurable and what isn't.
Yes.
And that we optimize for the easily measurable stuff.
And so it's easy to measure,
I'm gonna stop people from traveling to conferences
and I will save a couple hundred thousand dollars this year.
But you can't measure that consequence three years later of a delayed spacecraft or inefficient
design or any number of things, right?
Because there's not a clear causal link between the two of those. And you end up kind of incentivizing bad decisions
like this, or maybe not fully strategic decisions
for what you are actually trying to do in the long term.
Any kind of long-term thinking, maybe, is the essence of it.
We've talked in a sense a lot about NASA.
Is NASA distinct in your observations
for how these teams work?
Is there something unique about what NASA does? And
you kind of maybe imply this a bit, with space being this profoundly unforgiving domain to
work in, the visibility of it, or just the complexity of it. Is NASA in a sense a unique
institution to think about workforce knowledge transfer, versus others? I mean, yes and no.
One of the things that's interesting about NASA
is that people tend to work in those teams for much longer.
The space community is relatively small.
Even with the new entrants and new space,
people tend to pretty much know each other.
It's not as sprawling as, say, biotech or medicine
or bioengineering or so on.
So these are communities that actually know each other and they work together for incredibly
long periods of time.
Like, I started working with Europa Clipper in 2009.
They weren't even a mission yet.
They were selected in like 2015, started in 2016, and they didn't launch until last October.
And they're not going to get there until, what, 2030 or 2032. So, you know, it's like working on Cassini; they talked about being there
for births and deaths and marriages. Like, you're in for the long haul on some of these missions.
They take a really long time. And that also makes them very vulnerable to political shifts
and changes, which they're aware of. You know, if you're in the middle of something and suddenly
there's a new administration that comes in
and says, no, we don't want you to do that, you're in trouble.
This also happened to a very small mission, CONTOUR, which was run by APL many years ago
under Faster, Better, Cheaper, and it just so happened they were trying to put this together in like
30 months, which is nothing for spacecraft.
In the middle of it, there was a federal election, and then there was a labor dispute, and all
of a sudden all civil servant workers' salaries went up. That
completely blew through their budget, but they couldn't have predicted that.
Anyway, there's some things about space that are unique. One is that people work together
for these incredibly long periods of time. Two, they're very politically sensitive because
they have only a single point of funding, and that's
a single point of failure.
When you're talking especially about deep space, I mean, there have been changes with the
opening up of near-Earth and sublunary orbit, et cetera, to private players, but in deep
space you have to be reliant on government and multi-governmental funding agencies and
space agencies and you're subject
to different legal regulations associated with space. And then you're also subject to really
different environmental conditions in deep space. I mean, planning, it's not just landing something
on Mars, but even getting it to Mars, that takes a lot of tacit knowledge and distributed
cognition for sure. So it's highly specialized knowledge, it's highly subject
to major slings and arrows of outrageous fortune. It's also very physical work. You know, a
lot of studies have taken place of software engineering teams and when you're working
with software, like there's a materiality to software, but it's a lot easier to pivot
than when you're trying to build a thing with the kind of precision
that will get it to Jupiter and it's the size of a school bus.
That's a really different kind of phenomenon.
And then the teams will work together for decades.
So they know each other incredibly well.
And there's no big change.
I mean, there's not really big changes.
I remember talking to a friend who does organizational sociology in the private sector.
And they were just so surprised that the PIs have
the same role forever.
It's like a lifetime appointment.
There's no removal.
And we're seeing that increasingly
in the private sector, too, with certain sort
of charismatic CEOs.
But there isn't a kind of move out philosophy,
particularly among the top and particularly in science. Now, that said, there's a lot about NASA that's really similar to other organizations.
It's work. It's just a workplace.
It's a workplace where people are really mission driven though.
I mean, they're really committed to the public idea of space exploration and doing this for the glory of mankind and for humankind, for the Earth, for America.
Like, they're really committed to the mission of NASA.
But it's a workplace, you know?
You go to work every day, you've got problems you've got to solve, you've got people you're
working with, you've got too many emails and messages coming in on Slack and people tweeting
things in the middle of stuff, right?
So it's, in many ways, some of the interactions locally
are quite familiar and you'd almost forget
that what's happening here is people
are trying to go into space.
How has technology changed the transfer of knowledge
or any of these kind of team dynamics?
You said you've been doing this for 20 years,
studying NASA teams.
Lots changed, right?
I mean, things like, even things like Slack or Google Sheets, collaborative editing.
Yeah, those kinds of technologies.
Yeah.
Have any of those fundamentally changed these dynamics or reduced this need for interpersonal
in-person engagements?
No.
No.
Those technologies were only ever built to continue to support remote work in
which there was a cadence of in-person interaction. They were built coming out of that period
of time when we were studying remote teams and how to do distributed collaboration and
the idea was that still people would get together. So they can sustain for a long time, but they
won't sustain forever. I think, you know, I've seen changes. When I first started working on the Mars Exploration Rover,
everything happened on teleconference calls, and there was a Polycom system
that kind of sometimes worked, and there was a, you know, off-the-shelf camera that
sat in the corner somewhere, to, you know, now people are talking on
Slack or on Mattermost or whatever. I think what's interesting now is you have to
remember these are multi-institutional collaborations. So if you were one of the
180,000 people who work at Google, you have a whole bunch of Google tools. You have your Google
computer, you have all the Google G Suite, I presume, that you do all your work on and that's
how it is. But if you are JPL and you have a Cisco WebEx contract and a Slack contract for government
work, and then you're at APL, but APL has a Zoom government contract and Mattermost
for work, you can't talk to each other.
So ironically, in some ways, even though we have a proliferation of these collaborative
workplace tools, they've kind of created a paradoxical
situation where on the one hand, we think we can just use them and never see each other,
which is not true.
And then secondly, we can't even really talk to each other because it's like whose meeting
room are we using today, right?
So I have seen those things cause a problem.
And then of course, in the period of remote work under the pandemic, where people weren't just working remotely from, you know, different teams, they were working remotely from each other.
It was like an atomization of work.
There was a lot of isolation and a lot of knowledge that was lost and couldn't be transferred.
One of the things we found on the Psyche mission was that because people weren't coming in because of the pandemic,
information couldn't really get around. And at JPL, there's this cafeteria that everyone goes to at lunch. And you see everyone that
you've worked with on other projects. And you know, other organizations are like this
too, right? If you go to Google with their five-star chefs in the cafeteria, like, you
see everyone and you're having a really good meal, right? And that's where the knowledge
is shared. It's in these informal settings that that tacit knowledge sort of comes out and that
mentorship happens.
And the pandemic broke that.
It broke that for so many organizations, not just at NASA, but elsewhere.
There was a great paper that came out of a study team by Microsoft, for instance, and
Microsoft could do this because it was one single organization with hundreds of thousands
of people and everyone's on Microsoft Teams.
And they could analyze what happened to everyone's communication. And they were using the same tools that they'd
been using three weeks beforehand or in January of 2020. But by the time they were into March of 2020
and December of 2020, all of the informal interactions had dropped out. Everyone was
only talking to their closest ties and their closest collaborators.
So the innovation that happens when you put people together, the knowledge transfer that
happens when you put people together across units, the informal interactions that enable
mentorship and enable knowledge transfer to take place were gone, absolutely gone.
This paper is fantastic because they draw out the social network analysis and you're
like, well, that's pretty fragmented.
And we saw that at NASA as well, as we did in all kinds of organizations.
So does it require face-to-face work?
It requires a cadence of face-to-face work.
It doesn't mean that you have to be face-to-face all the time.
But it does mean there has to be a regular cadence of face-to-face
to enable the use of those tools over the long term, as opposed to thinking we can just rely on these tools.
As we're talking right now, we're
in this weird interregnum where we know that a reduction
in force at NASA, RIF, is coming.
But we don't know what it looks like.
And it seems to be dynamic even internally.
But that's been on top of this encouraged early retirement, you know,
taking anyone who... I think NASA lost 5% of its workforce just from that. Other agencies have
seen obviously much worse layoffs. And there's this idea of just reducing the number of people, it seems
like, just for the sake of reduction, as a de facto good thing.
Again, I think we've established that this is probably not
the best idea for maintaining institutional knowledge
transfer.
But I think, if pressed, I would say,
particularly from some of the more tech-minded individuals,
and we've seen this on the Silicon Valley side,
maybe they would present this as: well,
we don't need as many people now
because AI, whether now or in a few years,
will be big enough and complex enough
that, unlike a human brain,
it will be able to hold the full complexity
of landing on the moon.
It will be able to hold the full complexity
of landing on Mars.
So we don't need people in the same way.
How would you respond,
I guess, to that framing? That,
oh, these new technologies are coming anyway
that'll reduce this need, so we don't need to pay
as many people out of government taxpayer dollars?
Well, the first thing I'd say is it's not a lot
of government taxpayer dollars already.
Secondly, I would say
that, like, if you wanted to make sure
that we never visited another planet
or successfully really landed on the moon in the next 15 years or maybe 20 or maybe
more, this would be the way to do it.
So it depends what you're going for, right?
So I can think of no better way.
Where do you put this kind of, I mean, there are obviously lots of fantastical claims about
AI, but in terms of...
Oh, about AI.
Mm-hmm.
I mean, from a sociological perspective, from your field studying it, how is this seen?
I guess I'm trying to frame it this way: I see a lot of actors essentially saying,
AI will fix it.
Will AI fix it?
Well, not just that AI will fix it, but also that it'll be the stand-in for an individual at some
level.
We have this amazing mythology in our culture, and it doesn't matter how often we prove
it wrong, we still believe it:
that if you bring in machines, the work will be more productive and efficient.
And it never actually works because machines actually take more people
and they often take more skilled people or different kinds of people than you had before.
And so you see a change in your labor force composition. But in fact, in many ways, you're
paying for more people to be doing more of the work. Sometimes it's just associated with
keeping up the machinery.
And we saw this with early industrialization. We've seen this with computerization in the 1980s. There's this promise we're going to bring in this magical new tech and it will
change everything. And technology doesn't change everything. It comes into existing social context
and it will have a lensing effect on that social context.
It will reveal that context, sometimes in all the grotesqueness of its features.
But it doesn't make it more efficient.
I hate to pop your dreams here, but it just doesn't.
It never has.
The other thing that we know from-
Well, I mean, in some ways, I guess we've seen the reduction of manufacturing jobs in the United States, right? That's kind of...
But the manufacturing jobs left the United States just because you moved them and
because you replaced some things with machinery and then other people have to take care of the
machinery and you made it not your job to pay those people because it's another company that
pays those people and they're trying to pay those people less. That doesn't mean you don't have people.
You still have people.
You still have people. And I guess then the dynamics change. And this is kind of what I'm getting at a little bit with
this, which is that the Silicon Valley cohort that's kind of in the administration right now would, I think, take it at face
value that people can be replaced to a certain extent, and that you just need fewer people to manage the AIs who are going to do all this grand thinking
and so forth.
And it's a difficult thing to talk about, because at a certain level it's like, well, if you
assert magic, then what's the counterargument? I don't know how to confront that.
But I guess that's why I was getting back to my original question. Are there ways to capture some of these interactions
and this knowledge in a different type of system,
one that is then able,
through whatever complex computation it has,
to capture or regurgitate them in a way
that means you just don't need as many people?
I mean, we do have fewer jobs,
and a lot have been outsourced,
but I think it's true that we have
much more advanced robots building cars
than we did 30 years ago.
There's a productivity gain from the application of technology in a lot of sectors.
Could this be in the information sector or design sector or whatever we would call it,
in the spacecraft building sector as well?
So there's two answers to that.
One is that most of the sociological studies of AI have demonstrated that it's pretty much
people all the way down.
So there's a level of computation, and then there are people cleaning the data,
people in Nigeria writing answers, people in the Philippines,
people who are not necessarily visible to you. There are a lot of people involved
in making AI look like it's doing everything all by itself.
So the idea that AI comes in and replaces people, no, it just displaces people.
It makes those people invisible to you.
One of my favorite new books, by my colleague Benjamin Shestakofsky at the University of Pennsylvania,
is about an AI startup that became a unicorn.
And it was fascinating because it got VC funding, and as a result, it had to demonstrate growth
that was so astronomical that all their software engineers had to
interview new software engineers to grow. And so most of their software
engineers were too busy interviewing software engineers to actually build the
software. So instead they got an outsourced community in the Philippines
to basically do the matching work behind the scenes that was supposed to be the
computer program. And then because that was a little bit, you know, janky and
didn't work so well, they got people in
Nevada to be the call office where people could call in because it wasn't working.
And this somehow became the technology that was like a billion dollar unicorn.
So when we talk about AI doing everything, we black box a set of computational things
and we ignore all of that human stuff.
So first of all, let's just put an end to this idea that AI replaces humans. It displaces humans. It does
change the arrangement of humans because of how this technology is being brought in and
that might mean that you don't have to pay them as much, but it's not inhuman.
But the other thing is, you know, Casey, I've worked with artificially intelligent robots
on other planets for 20 years. And the thing that I see that's really exciting about AI is not replacement. It's
human AI teaming. It's what can an AI system do that humans can't and what can humans
do that AI can't? And then can you put those things together in a way that's smart and
in a way that's intelligent? So you're not
bringing in a system to replace people. You're bringing in something that will do some work that
they're not as good at, but that will also rely on people as well. So for instance, the Mars rovers,
you know, those robots, they look like they're doing everything by themselves. There's huge
teams of people back on earth, scientists and engineers who are taking care of them, ensuring that they have what they need, sending them commands, doing the science.
And it would be super boring if we just sent robots to Mars to just do robot things there.
The excitement is that the robots and the people are working together.
And I remember also a scientist saying to me, we don't want to go to Mars because our
eyes can't see Martian minerals.
We grew up on Earth, so Mars just looks red to us.
But if we send these robots that have these cameras that can see in multiple filters,
they can see things that humans can't see.
That's an example of the machine doing a thing the human can't, but the human is also doing
the thing the machine can't, which is the critical thinking and the science and so on.
So when I think about AI transforming space exploration, I think about how much different
it would be if we thought about, say, tacit knowledge in an organization.
Maybe what we need is the tacit knowledge bot.
What we need is, if you're going to produce a RIF and get rid of all of your most senior
people who have all of that tacit knowledge and all of that mentorship to give, some kind of way for an AI to work
with a team and transmit that knowledge. That's really different than saying we're
going to get rid of everybody and we're just going to make the AI do it.
I think it's also more interesting, it's more exciting, it's more robust, the systems
are more robust because you need people to check the AI.
The computers
don't do it perfectly. That means you need people to still have the technical expertise
to check the AI. The replacement, it doesn't work, Casey. In fact, I think it's a recipe
for more problems and more inefficiency because it doesn't work.
There are other ways to bring AI in that can work and that do work if we acknowledge that there's
ways that humans work and ways that machines work,
and it's exciting to bring them together in a different way.
Well, particularly if the way that AI works now
is it's taking in primarily written information,
recorded information, right?
That we've established earlier is
incomplete for these types of tacit knowledge transfers.
Exactly.
And so even to make this tacit knowledge AI, you would need, like, what, some
whole new suite of sensors, three-dimensional video coverage of every interaction everywhere,
right?
Not too low.
Somehow integrating all that, and knowing, as you point out,
and as I've talked about before too, that the meaning is all user-derived from what an AI
puts out. So if you're not the expert, you have no way to know if it's true or not
because the AI doesn't know if it's true.
It's just putting out a pattern match to you.
That's true.
But an AI could do things like say, hey, it looks like you're trying to...
It's like Clippy, it looks like you're writing a letter.
It looks like you're trying to solve this problem of radiation shielding.
This is how Galileo did it.
This is how Viking did it.
This is how someone else did it. Here's the technical documentation, but here's who you
should go talk to, because John actually built the radiation shielding. Also, this other
person, Gina, who passed away last year, did a NASA oral history interview. Here's
her oral history interview. You might want to look at that because she might say some things about radiation, right? You could see something like that.
That would really be thinking about knowledge management in an organization and in a team, and about continuity.
That's really different than saying we can do this faster, better, cheaper without you guys.
You know, we've been down the faster, better, cheaper path before. You have to pick two.
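As a thought experiment, and emphatically not something NASA or Vertesi has built, here is a minimal sketch of what a tacit-knowledge assistant along those lines might look like: a question gets matched against prior-mission records that point not just to documents but to the people and oral histories behind them. Every record, name, filename, and the crude keyword matching below are hypothetical placeholders.

```python
# Hypothetical sketch of a "tacit knowledge assistant": given an engineering
# question, surface not just documents but the people and oral histories
# attached to them. All records and the simple keyword scoring are invented
# placeholders, for illustration only.
from dataclasses import dataclass, field

@dataclass
class KnowledgeRecord:
    topic: str                                            # e.g. "radiation shielding"
    mission: str                                          # e.g. "Galileo"
    documents: list[str]                                  # technical documentation links
    people: list[str] = field(default_factory=list)       # who actually did the work
    oral_histories: list[str] = field(default_factory=list)  # recorded interviews

# A toy knowledge base; a real one would be curated from mission archives.
KNOWLEDGE_BASE = [
    KnowledgeRecord(
        topic="radiation shielding",
        mission="Galileo",
        documents=["galileo_rad_design_review.pdf"],
        people=["J. Doe (built the shielding)"],
        oral_histories=["doe_oral_history_2019.mp3"],
    ),
    KnowledgeRecord(
        topic="radiation shielding",
        mission="Viking",
        documents=["viking_environment_spec.pdf"],
        oral_histories=["smith_oral_history_2003.mp3"],
    ),
]

def suggest(query: str, kb: list[KnowledgeRecord]) -> list[KnowledgeRecord]:
    """Return records whose topic words overlap with the query (crude keyword match)."""
    query_words = set(query.lower().split())
    return [r for r in kb if query_words & set(r.topic.lower().split())]

if __name__ == "__main__":
    for record in suggest("How do I size radiation shielding for a Jupiter orbiter?", KNOWLEDGE_BASE):
        print(f"{record.mission}: docs={record.documents}")
        print(f"  talk to: {record.people or 'no one still on staff'}")
        print(f"  oral histories: {record.oral_histories}")
```

The point of the sketch is the shape of the output: documentation plus people to talk to, which is what separates knowledge management from a chatbot that only regurgitates text.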
Right. One more topic and then I know we have to wrap up. But
I'm struck by this idea that maybe this approach that we're seeing from the
current White House, and again, we have yet to see the actual RIF, maybe it is
very strategic and aligned and so forth.
But what we've seen so far is very much not that. And I wonder if this idea, either not considering
or being indifferent to the idea that institutions
carry knowledge in and of themselves,
that knowledge in a sense is an active process,
and also that people are not just cogs in this machine,
right, they're not interchangeable. It almost seems like some
expression of this broader, I don't know, the book Bowling Alone or this institutional
decline, of just a loneliness or separation from community. And it's not even an
outright disdain for institutions, which are basically kind of a structured community,
but more a failure to respect the value of an institution,
because the individualism
enabled by these new types of technologies
that allow us to be separate from each other so easily
makes it seem just not as relevant.
And I wonder if this type of approach
of just getting rid of these institutional structures because
there's no appreciation for the value they carry is in some way an expression of that
broader theme in culture that we're seeing right now.
I think you're putting your finger on it for sure.
We're seeing a lot of institutions crumbling.
I see this in my students who come into the classroom.
They're traumatized by the fact that in high school, or in elementary school, they had to learn to duck and cover in case of an active shooter.
The banks had collapsed and they knew people who were losing their jobs.
All these things that we would have relied on as institutions with some kind of stability are no longer stable.
There's a lot of public distrust in expertise and in institutions. This is also taking place at the same time as we
see more and more of this kind of apotheosis of the individual, the idea or ideology that it's all about
individuals. And these two things are happening at the same time, but they're also happening
against a long backdrop, Casey. Like since the 1970s, there's been, you know, a successive
withdrawal of funding from these institutions, which has made them
less capable of doing their jobs.
It's a bit counterintuitive because you'd think that if you don't pay them as much,
maybe they'll get more efficient.
But actually, if you don't pay them as much or if you withhold funding, it gets more expensive.
We see this in studies of poverty and families in poverty
that it's actually expensive to be poor because you can't afford the good stuff. If you eat
McDonald's every day for 10 years and then you have heart issues after that, the expenses
pile up. They may not be an expense in the immediate moment, but they are expenses in
the long term. You always have to make decisions that are penny-wise but pound-foolish, and they come back to bite you.
But this successive starving, since the Nixon era, of public institutions of public funding
has weakened them to the point where we don't trust them anymore because they're not able
to provide things for us and we see them as bloated as opposed to starved.
And we also see that because they're not functioning, we assume that's their problem.
The difficulty here is trying to find what's that balance.
What do we expect from people working together in groups?
What are the things that we can only do if we work together in groups?
Which groups do we want to be funding?
What knowledge do we need to continue? And then understanding that there
are some things that cannot be achieved through efficiency gains. I should say, you know,
one other thing. In the op-ed, I talk about how at NASA you bring your own cup and your own coffee.
And I also spend a lot of time with companies in Silicon Valley. It's very different. There's a way in which if you have all the money in the world to burn on a thing, you can move quickly and look efficient.
If you don't have the money to burn, you don't have the money to even expend on what's basic or
necessary, and you start cutting the people as well, then you actually lose institutional capability.
And my fear comes from this sort of obsession with the idea that if we get rid of people
and we put in AI, those people will go to the private sector and it's going to be
so much better for space. I wish I could agree with that.
All of the data shows that that's not true and it's actually the opposite of what they
want.
So if there are goals they would like to achieve, you have to match the methods to the goals
instead of working with these old mythologies
and these inherited problems.
So let's say Jared Isaacman, who has been nominated, gets through
and is approved by the Senate
and says, I read your op-ed,
I really wanna hear what you have to say.
Tell me how to approach working with NASA
and making it more efficient and making sure we
keep this knowledge process.
What are maybe the top three things
that you would tell him about how to approach his workforce?
So first, I would say, this is ironic,
but it's true that when you starve an organization like NASA
of funds, things get more expensive,
not less.
And what we've seen since the 1970s, as spacecraft have blown through their budget caps over and
over again is the result of a kind of asynchrony between how funding is allocated federally
and what's necessary on the ground.
And it's also a result of this kind of sense that we should pull the money back under austerity and not let these teams actually have the resources they need.
So first I would say we need to do a serious evaluation of how these things are funded
that enables them to achieve what they need to achieve over the long term.
Secondly, I would say this is a really exciting time in space exploration because we have
this new space sector.
Now we've always had a private sector in space.
We tend to think of like the Lockheed Martins and the Boeings as sort of cumbersome and
old but they were once the scrappy new space startups in the 60s and so on.
So it's not like we haven't seen new entrants.
But what we do see with some of these new entrants is different models of funding that
mean that they are responsive to different stakeholders.
If you need to grow so quickly that you can't make sure your product is safe, or if you
have a billionaire writing checks for you, those are really different kinds of capabilities
and technologies you're going to produce than at NASA.
We have to think about what's the stability of funding also in the private sector.
And then third, I would say, you need to get those organizations together. You cannot assume that if you cut a lot of people from NASA,
they'll just find their way to, you know, Rocket Lab or SpaceX or something, and then they'll bring their knowledge with them.
That's just not going to happen. If you want to grow a vibrant private spaceflight sector, you have to have a lot of small projects
that bring experienced NASA centers together in teams with a single private spaceflight
provider.
And you need to make sure that there's transfer of tacit knowledge between those teams, that
they're working collaboratively together on a key
product.
They have a task that they're all doing together.
And in doing that task, that knowledge can be transferred.
So I'm not saying we shouldn't have a private space sector.
I'm just saying, my goodness, you're hanging these people out to dry, because they
won't have the knowledge they need.
They won't have the tacit knowledge they need to succeed.
And space is hard. You can't just sort of make it up. Even big places like SpaceX,
they've really benefited from bringing people through from places like JPL.
Blue Origin is the same way. So we need more extensive opportunities to bring experienced
NASA centers and personnel into long-term collaborative arrangements with some of these
new entrants to enable the transfer of knowledge. So I think those are the three things. First,
that starving NASA further of money is what makes it go over budget because it's actually
expensive to be impoverished. Secondly, that you must be attentive to what these funding models
look like in the private sector, and also what expertise looks like in the public and private sectors.
If people are moving or hopping really quickly from one place to another, what's happening to the institutional knowledge there?
Do they have a knowledge base that they're drawing on or not?
And then third, to strategically through smaller missions put new entrants together with experts that have that old knowledge.
Instead of getting rid of, or making retire, a whole bunch of people
who are the ones who actually
know how to do these things, putting them in a room
together and working on a task together with a new entrant
will do more to bolster the American private spaceflight
sector than the alternative.
And those all sound super reasonable to me,
so maybe he'll listen to this.
I hope so.
Janet Vertesi is the author of the op-ed in question,
Invigorating the American Space Sector Requires
Working with NASA, Not Against It.
We will link to it in this episode.
You have a number of other books and writings.
Do you want to just quickly mention your last book,
Shaping Science, before we go?
Yeah, sure.
My most recent book, Shaping Science, is about two NASA teams and it shows that the way that
they're organized means that they end up doing really different science and building different
technologies.
And I'm currently finishing off a book project about budgets and the history of budgeting
at NASA.
And it explains why these spacecraft teams have always gone over budget and how we can
fix it.
I cannot wait to read that one.
I'm excited for that.
Solve the problem.
You have at least one person
eager to purchase that book, and I'm sure there will be many.
I'll let you know.
Dr. Janet Vertesi, thank you so much for joining us
this month.
I really enjoyed speaking with you.
It's a pleasure.
Thanks so much, Casey.
We've reached the end of this month's episode of the Space
Policy edition of Planetary Radio,
but we will be back next month with more discussions on the politics and philosophies and ideas
that power space science and exploration.
Help others in the meantime learn more about space policy and the Planetary Society by
leaving a review and rating this show on platforms
like Apple Podcasts or Spotify or wherever you listen to this show.
Your input and interactions really help us be discovered by other curious minds and that
will help them find their place in space through Planetary Radio.
You can also send us, including me, your thoughts and questions
at planetaryradio at planetary.org. Or if you're a Planetary Society member, and I hope you are,
leave me a comment in the Planetary Radio space in our online member community.
Mark Hilverda and Rae Paoletta are the associate producers of the show.
Andrew Lucas is our audio editor.
My colleague Merc Boyan and I, Casey Dreier, composed and performed our Space Policy Edition theme.
The Space Policy Edition is a production of the Planetary Society,
an independent, non-profit space outreach organization based in Pasadena, California.
We are membership-based, and anybody, even you, can become a member. Memberships start at just $4 a month.
That's nothing these days. Find out more at planetary.org/join. Until next month, ad astra.