99% Invisible - Constitution Breakdown #9: Alondra Nelson
Episode Date: April 24, 2026. This is the ninth episode of our ongoing series breaking down the U.S. Constitution. This month, Roman and Elizabeth discuss Articles VI and VII, which include some odds and ends like the Debts Clause, the No Religious Test Clause, and the process for ratification. But tucked into Article VI is the all-important Supremacy Clause, which states that the Constitution is the “supreme Law of the Land” and is probably the most frequently used constitutional law in practice. Roman and Elizabeth are also joined by Dr. Alondra Nelson, a leading expert on AI. She discusses why AI is a challenge to regulate, what to think of the tug of war between the states and the federal government on the topic, and whether she’s optimistic governments will figure this out.
Transcript
This is the 99% Invisible Breakdown of the Constitution.
I'm Roman Mars.
And I'm Elizabeth Joh.
Today we're discussing Articles 6 and 7.
Roman, why don't we go through both articles and let's save the most important part for last.
Okay, because there's a lot of unimportant parts.
Less important.
Go for it.
So why don't we start with Article 7?
That's the ratification clause.
Okay.
It may be the least important to talk about today, but actually crucially important for the Constitution itself.
Yeah, yeah, sure.
We needed these states to vote on it, and the ratification clause says that nine states would be enough to ratify or make the Constitution itself a legitimate document.
Yeah.
So that in fact happened.
It comes into effect on June 21st, 1788, when New Hampshire became the ninth of the 13 states to ratify the Constitution.
So it kind of has its own clause to make sure the document is legit.
Got it.
Yeah, that makes sense.
Okay.
Yeah.
So that's pretty clear.
No really important Supreme Court cases on it. So let's turn back to Article 6.
Okay. Article 6 is a mishmash of a bunch of different things. So Article 6, Clause 1,
talks about debts that the United States is already obligated for and still has to pay.
And so why is that there? Well, you know, when the Constitution was drafted, the country still had
debts and engagements that were left over from the Revolutionary War. And creditors were kind of
nervous. What if you create a constitution that wipes out all of the debt? That would be pretty
convenient. Got it. Got it. So in order to assuage those nervous creditors, this constitution,
our constitution, actually says, don't worry, we understand that we have these debts and we are
going to pay these debts. So today it's really mostly of historical interest. It doesn't come up
because we did in fact pay our debts. I will say what is interesting about clause one:
originally, as it was drafted, the clause said that the United States would both be obligated
to pay the debts and would have the power to pay the debts.
But that second part got taken out of Article 6 and put into Article 1 as part of Congress's
spending authority.
So that very, very important part today is actually in the larger chunk of the Constitution we cite all the time, which is: why does Congress have the ability to pass laws?
And very often it's because of spending authority.
Wow. Well, that's really fascinating, actually. Yeah. So, you know, one little switch around made a big difference. Yeah, made a huge difference. Okay, okay, so that's the first clause. Okay, let's turn to clause three of Article 6. Do you want to read it? Okay. The senators and representatives before mentioned and the members of the several state legislatures and all executive and judicial officers, both of the United States and of the several states, shall be bound by oath or affirmation to support this Constitution. But no religious tests shall ever be
required as a qualification to any office or public trust under the United States.
So this is known as the no religious test clause.
Great.
I like this clause.
Exactly.
It's kind of a, you know, no religious test for anybody taking office.
And in fact, it's the absence of religious tests that makes us understand that this is
successful, right?
Yeah, yeah.
We don't require anybody to take a religious test.
Yeah.
But sort of, at least formally, right.
I mean, there's no formal test, but you can kind of feel it in there, the fact that the
representation of other religious faiths is not super common inside of our public institutions.
True, true. But there is a big difference when you were formally required to do it. And in fact,
this clause comes from traditions going back to England. So, for instance, in England in the 17th century,
for example, all government officials had to take an oath that they would help establish the Church
of England, and also disclaim Catholicism and the Pope. And so the idea is we have this common-law tradition
from England. By the time you have the colonies and the Articles of Confederation, it was pretty
common for government officials to be told that they had to take some kind of religious affirmation.
Of course, not for the Church of England, but some kind of, I believe in God sort of test.
It's notable that it's absent.
It's notable for its absence.
And this clause as well has very little in terms of Supreme Court interest or case law today.
And that's for a totally different reason.
You'll notice this is about, you know, religious freedom, essentially, right?
It shouldn't matter whether you are a practicing Catholic or a Muslim or Jew to be able to take a public office.
But the reason why this clause doesn't get much attention is that free exercise clause cases today come up under the First Amendment.
Right?
Rather than this clause.
So not too much there as well.
Yeah.
So I noticed in the recap, you had Article 6, Clause 1 and Article 6, Clause 3, but we have skipped Article 6, Clause 2. So what is that? Well, Article 6,
Clause 2 contains what's called the Supremacy Clause.
Why don't you read it? Okay. This Constitution
and the laws of the United States which shall be made in pursuance thereof
and all treaties made, or which shall be made, under the authority of the United States,
shall be the supreme law of the land. And that's referred to as the
supremacy clause. So why is the supremacy clause so important? Well,
Historically, the Supremacy Clause responds to a very particular problem, and that is before the federal constitution, the Articles of Confederation, which was the predecessor document, had no similar provision saying that federal law is supreme. And you might wonder, what does that really mean? Well, think of it this way. If you have state laws on a topic and federal laws on the exact same topic, which one are you supposed to follow? If there's no clear instruction, well, maybe you just follow whichever one you want. And that's kind of what happened. Before the Constitution, state courts sometimes just didn't think that federal
law was binding, so they didn't apply it. They applied state law. That's kind of a problem, right?
Yeah. So the supremacy clause, just with one fell swoop with this particular clause,
gets rid of that uncertainty or ambiguity. The supremacy clause simply says, look, federal law,
whether we mean the constitution, federal statutes, federal treaties, are supreme when it comes to any conflicting
state law. So the idea here is that you have this very important structural part of the
Constitution, that federal law is supreme. So what does that mean, practically speaking?
Well, what that means is if you can think of supremacy as stating the simple fact that federal
law is supreme. But arising out of supremacy is the idea that Congress now has the power
when it legislates to preempt, which really means displace or override,
any contrary state or local law.
So you can think of preemption
as being based in the constitutional
power of supremacy.
So Congress doesn't have to exercise preemption,
but when it does pass laws in this way,
it's very clear that any directly conflicting
state or local law has to give way.
So that's kind of the genius
or the simplicity of the supremacy clause.
But that's the most simple part of the supremacy clause.
And I take it there's lots of constitutional case law based on the supremacy clause.
That's right, because things can never be simple, right?
Yeah, yeah.
So when you think about federal law, sometimes Congress can simply say, we're going to pass a law,
and this law will, in the text of the law itself, displace or preempt any similar state law.
That's pretty easy.
And if that were the only issue, we'd never talk about preemption.
But the problem is that Congress very often doesn't say. There may be a federal law on a topic
and a state law on the same topic, and the federal law doesn't say anything about preemption.
So in response, the Supreme Court has come up with a whole host of cases, doctrines, tests,
ways of thinking about federal preemption to try and answer the question,
what happens when it seems like there are federal and state laws legislating on the same topic?
So what exactly is supposed to happen when there's a conflict?
Well, that also is a complicated answer.
So it depends on what we're talking about.
Sometimes courts will say something like, you know,
there are some areas of federal law where the federal interest is so important,
so extreme.
We don't want the states to get involved a little tiny bit even,
even if Congress hasn't specifically spoken to that area.
So an interest like this would be foreign policy.
We don't want the states to get involved with foreign policy,
negotiating their own treaties.
That would be a bad idea.
That would be a bad idea.
Exactly.
So those are the easy cases.
but the much more frequent and difficult cases are sometimes courts have to answer, well, there's a federal law on a topic and a state law on the topic.
Is it possible to comply with both state and federal law?
If it's possible, maybe there is no preemption.
No preemption would mean that state law and federal law are both valid.
But for instance, if the state law stands as an obstacle to the operation of the federal law,
or if it's literally impossible to comply with both, state law says black and federal law says white,
you can't do both at the same time, then that's a case of federal preemption.
So these are always case-by-case determinations.
But preemption is actually really important because if you think about all of the different
areas in which the federal government regulates everything from the environment,
consumer protection, energy, you name it, the states also often legislate in the
same areas. And what you will have are individuals or companies that say, well, I want to comply with
one. I don't want to comply with both or am I supposed to comply with both? And that gives rise to
preemption. So of all of the areas of law that we've talked about with the Constitution, in fact,
preemption is probably the most frequently used constitutional law in practice. So on the one hand,
you can think of constitutional law in the courts as being on a spectrum, right? Like maybe we'd put
impeachment at one end. We don't talk about it in the courts. And then preemption all the way at the other.
Preemption comes up all the time because the idea of federal preemption is that it's a possible question
anytime the federal government is regulating in a particular area. Right, right, which could be
infinite, almost. Almost infinite. That's right. Every single area of a modern life where the states
regulate very often, though not always, of course, very often the federal government is also
regulating. And this situation is exacerbated by the fact that modern life continues to go on.
Like there's new laws coming up all the time because there's new technology all the time and
there's new things all the time to consider. That's right. So whenever you have a new policy
problem, a new change in society, there's a race to regulate it or at least calls to regulate
that new development in modern life. So the question is, are states going to do that job?
Should the federal government do that job or should they both do that job?
So one way to think about the problem of preemption is for us to pick an emerging area where both the states and the federal government are trying to regulate at the same time.
And I think there's no better topic than artificial intelligence.
Totally, totally.
I mean, that's like huge.
I don't even know what I think of it.
That's right.
So I can't even imagine what states and the federal government are thinking about it at this point.
That's right.
artificial intelligence is everywhere.
It's at the doctors.
It's at the store.
It's at school.
It's at work.
It's kind of a huge problem for government.
And that's because AI has the potential to produce these really big benefits for society.
But we've already seen that it can have all kinds of harmful effects.
It can produce all kinds of major risks for society.
You know, everyone's heard that AI makes up facts that don't exist, which people believe and sometimes act upon.
or it can make decisions about people that are really hard for us to explain.
And sometimes those decisions are false or misleading.
So just like any other problem in society,
the states and the federal government are trying to figure out
how do we regulate AI or AI systems?
And that means everything from how do you regulate a chatbot that teenagers use
or self-driving taxis or how do you regulate autonomous weapons when it comes to wartime.
Oh my God.
And so what kind of level of government should be regulating AI?
And so should the states get out of the way altogether?
Now, this seems like a very current topic, and it is.
But the larger picture is an old one, and that's a question of federalism.
So with the narrower view of preemption, we're really allowing the states to engage in more experimentation, for the states to say, hey, we want to try this approach.
And California will always take an approach.
that probably Texas will not, right?
And vice versa.
Sure.
But a very broad view of preemption really is saying,
you know what?
We want the states to just get the heck out of the way.
We want the federal government to be the primary voice in this area.
So those are choices that courts have to make.
There's nothing obvious about going in one direction or another.
Yeah, yeah.
Because this is a fast-moving and complex topic,
our guest for this episode is Dr. Alondra Nelson.
She's a scholar of technology and social science and a leading expert on artificial intelligence.
She currently holds the Harold F. Linder Chair at the Institute for Advanced Study in Princeton.
She also served in the Biden administration as the acting director of the White House Office of Science and Technology Policy.
It was in that role that Dr. Nelson spearheaded what's called the blueprint for an AI Bill of Rights.
We invited her to help us navigate why it's a challenge to regulate and what to think of the tug of war
between the states and the federal government on the topic,
especially during the second Trump administration.
But we start with Alondra's definition of what exactly AI is.
So, you know, I usually use a modified version of the OECD definition,
which is a definition that kind of 38 nation states have agreed upon.
But, you know, and it's basically that these are like machine-based systems,
like lots of statistics, lots of math, and that they use,
they make inferences, so from different inputs, and they generate outputs.
And so the outputs are things like, you know, so-called predictions.
They are things like recommendations, like your Spotify, you know, music recommendations
or your Netflix recommendations.
And I like to use those two examples because, you know, people have different feelings
about how good or bad they think their Netflix and Spotify recommendations are.
And I think that's kind of a level set for AI, you know, decisions.
So there are, you know, machines that are helping, you know,
if we think about the theater of war, decisions about targeting people and locations in the
theater of war. And of course, with generative AI, AI tools and systems generate content. So
texts and images and sound. So that's kind of, you know, inferences made from different sets of
inputs, almost all sort of data, whether those are photographs or numeric data or, you know,
quote unquote, all of the internet that was taken into generative AI.
and lots of different outputs.
So you cross-cut that with the fact that AI systems have, like,
different levels of autonomy and adaptiveness after they're deployed.
So some can be very static, like, you know, a decision-making or predictive algorithm that might
be used in the criminal legal system.
It was taking in data and, you know, it has a sort of hardwired data set that it's sort
of making so-called predictions against.
And obviously today, we increasingly are being told about things like OpenClaw and AI agents. And so these are more autonomous kinds of AI systems
that are, you know, making purchasing decisions for people, coding for them, and the like.
So that's a broad definition on purpose because AI is really broad. And we, I think we go back
and forth from using generative AI as the default for what we mean by AI. But it's this whole
suite of things. And if you talk to, you know, a computer scientist or an AI,
machine learning engineer, they would say to you that actually, you know, if you think about
AI, the world of AI is sort of a set of Russian nesting dolls, that generative AI is actually
the smallest, right? You've got deep learning, you've got machine learning and all of that.
So because generative AI with things like chatbots have been made consumer-facing tools,
and that's really how AI came into the public sphere, that's kind of how we think about AI. But there's a lot
of other use cases and types and autonomous and more brittle, et cetera, besides.
Yeah. So I think, you know, when you hear this, it's like a pretty technical set of
definitions and products. But I suppose if you're listening to this conversation, maybe someone
might think, well, I'm sort of familiar with maybe ChatGPT, which came out in 2022. I've used it
a couple of times. But like, I really want to know, like, why should I care about this? So what for you
are some of the most transformative or really concerning examples of AI that are happening in American
society right now? So the why should I care is, you know, I think people every day, particularly
folks and companies, oversell AI. So that's certainly true. So what might be transformational?
Some of the claims, you know, the AI for good claim are true and I think are on the either happening
or on the horizon. So you can think about in the medical space, like an AI system reading chest
x-rays or being able to flag kind of an early stage kind of cancer diagnosis, being able to see,
you know, a tumor in its very early stages. So that's transformative. And indeed, you know,
if we get that right, life-saving, it is the case that we still need radiologists and we don't
have enough of them. So transformational, but transformational in, potentially in the intersection of
humans working with the AI, right?
So, you know, other cases certainly are like in agriculture.
So farmers, whether it's sub-Saharan Africa or Kansas in the United States,
are using forms of computer vision, you know, on a phone app that can help them
identify, you know, whether or not a crop is being blighted.
We're already using AI in traffic flow to try to sort of direct traffic and
kind of retime stoplights.
So, you know, you can cut commutes or you can redirect traffic.
So, to go back to my definition,
they're all systems that take an image or a data pattern or a question,
make an inference and generate an output that, you know,
hopefully helps to augment when humans are doing,
maybe improve what humans are doing,
maybe to help humans make better decisions.
So those are, I think, cool things.
I mean, we just have been watching Artemis 2.
You know, that is full of AI computer simulations that help them
to track how they were going to do this incredible 10-day journey.
Also cool.
Concerning, you know, we're living with a lot of that right now.
You know, we've got this kind of great race happening in the world of, you know, looking for a job, right?
So you can now more easily do your resume and your cover letter using AI, but now AI systems are being used to screen your resume out.
So, like, people are now sending dozens and dozens of resumes out on a given day.
but they're getting screened out right away.
So the downside of this is that it might filter, you know, people out of an applicant
pool before anybody ever sees your name or anybody ever, like, actually looks at your credentials
and nobody will tell you why, potentially.
There's some research that suggests because, you know, again, as you talk about input data
and making inferences from that, and things like employment, a lot of the input data is
historical data. So, you know, in fields in which you've had historic racial discrimination or
gender discrimination, like if you're looking for the resume of an excellent computer scientist,
then, you know, a lot of algorithms have been shown to sort of kick people out. So you're like,
people are losing access to opportunities with real implications for their liberties and their rights.
There are, you know, so-called predictive policing tools where the algorithm says that you should
police an area more because it's been policed more historically, not because there's actually
new information suggesting that that should be the case. And then in the generative AI space,
because I live partly in New York City, the Adams administration spent, you know, nearly,
I think, a million dollars on this government chat bot, this NYC bot or NYC chat. That was supposed to,
you know, the idea was of it was good. It was supposed to help like small businesses navigate all
sorts of city regulations, which in a place like New York City, they're voluminous. But it was telling
them to violate the law. So it was giving advice like that you could like how to skim workers' tips,
how to discriminate against your tenant if you're a landlord. I mean, it was fairly outrageous. And,
you know, I think well beyond the kind of whimsical term, you know, hallucination that we use that,
you know, often suggests that it's not a really big deal. And, you know, we shouldn't be surprised
that, I think, the Mamdani administration canceled that contract and got rid of the chatbot.
But the concerning aspects, I think, also just give you a sense of, like, all of the places in our lives, all of the sites simultaneously that are being shaped in some way by some form of kind of algorithmic decision making or management.
Yeah.
And I guess one of the ways to approach that, right, is to say not just like, oh, these are technical problems, but since you're mentioning like all of the different ways that individuals,
might feel powerless or just confused about what's going on, you can kind of use a civil
rights approach. And, you know, and of course in the Biden administration, you led the OSTP,
and you're credited with directing the White House blueprint for an AI Bill of Rights.
And I would love for you to talk more about that. There's a, you know, this is a policy paper.
It's a white paper. So what was the process? How did you begin creating the blueprint? Like,
who was behind it? Who did you talk to?
Yeah, so it was, you know, we came into office in the middle of a pandemic, and we came into office as a country having a racial reckoning.
We were having an economic crisis.
And, you know, I think those of us who work in the science and technology policy space knew both on the research side and also kind of saw brewing amidst all of these kind of societal concerns, like what was going to be happening in the algorithmic space.
And, you know, we were already having examples.
So, for example, the YouTube videos about the, you know, the so-called racist soap dispensers and faucets.
Right.
You know, if you have darker skin, you can't get the soap to come out, which is a kind of application of AI.
And, you know, I had the idea to do this, in part, I think, borrowed from lots of other examples.
I mean, the Obama administration accompanied its Affordable Care Act with something called the Patient's Bill of Rights.
I think Ralph Nader had a consumer Bill of Rights.
So the Bill of Rights has been used, you know, variously, both by government and folks in civil society, as a way to sort of think about a rights expansion in the face of kind of a new technology or a new social dynamic, for example.
So we got into office, and by October of 2021, we published an op-ed in Wired.
And we sort of used the Bill of Rights framing.
and we kind of tried to draw a parallel to the country's founding
and noting that there was this time, you know, in the 1780s and 90s
that Americans adopted the original Bill of Rights
to guard against really a power.
They just created this powerful government, right?
We're about to celebrate the 250 years of the Declaration of Independence
and then the Constitution.
Like we had created this kind of powerful government technology
and that we needed to place a check on that.
So how did you secure our rights and our liberties,
our opportunities, in the context of a kind of large and powerful government?
So we saw a parallel with the kind of powerful technologies
and the powerful companies that were pushing these powerful technologies
and thought that there was a useful analogy
and we're wanting to think with the public,
with the American public about what might be equivalent kind of guardrails
against these kind of new powerful domains.
And so we were trying to kind of frame the blueprint for an AI Bill of Rights project within a kind of continuous, you know,
U.S. or American tradition of aspiring to values kind of recognizing the shortcomings of the systems
that we create and sort of, you know, thinking about what we might do to sort of mitigate it.
Can you tell us some of the five principles that are identified in the AI Bill of Rights?
Sure. Yeah. So the five principles, in the white paper that Elizabeth alluded to,
were released in October of 2022, so a year later. And what we did over the course of that year was
a lot of public engagement. So that Wired op-ed ends with an email address, so you can write
directly to the White House. That's always a good plan. Yeah. So I think we wish more people
had taken us up on it, but people certainly did. And we did kind of focus groups. We had what we called
office hours. So everybody who worked on the team, which included policy generalists, AI,
computer scientists, folks who work on science and technology policy from academia,
who had government experience, who had commercial experience. So it was a pretty broad team.
And we would all block on our calendar time just to talk with people. And that included high school
students and rabbis, in addition to always the technology companies, lobbyists, you know,
but we really tried to have a broad conversation. So the five
principles are really distilled from those conversations. We weren't trying to do anything
novel. We were trying to sort of take from this near year of conversation, like what is the best of
what we think? What are the aspirations that we should have as we move as a society into a more
kind of algorithmically shaped, mediated world? So one was that AI systems should be safe and effective.
I mean, that's a very kind of basic and almost kind of consumer rights principle. Second,
that people should have protections from algorithmic discrimination.
Third, that there should be some modicum of data privacy.
We are still fighting out what that might even look like.
But again, these are kind of aspirations.
Fourth, that there should be notice and explanation
so that you should have a right to know when an AI system is being used
to make consequential decisions,
like some of those that I was talking about, Elizabeth,
when you asked me what's concerning.
Like, you know, do we care if you get a bad Netflix
recommendation and you end up watching a movie you don't really like that the algorithm told you
you were going to like? Like, no. But when algorithms and more advanced AI systems are being used
for consequential decisions about people's lives, they should know about that. And if they want
an explanation, they should be able to get one. And then the last principle is that there should
be some sort of human alternative or fallback so that you should, you know, ideally be able to opt out.
We build a lot of algorithm and social media systems as opt-in, as opposed to opting people out.
So can you opt out of an automated system?
Can you talk to a real person instead of being kind of brought down into a circle of like a phone tree hell
where you keep trying to press zero to get to a person?
Particularly when it's about something that affects your life.
I mean, you know, health insurance, jobs, housing.
So these are really critical things.
So that's what we came up with.
And it's been, you know, variously sort of taken up.
by different kinds of constituencies.
It's become a kind of a civic infrastructure that is a way,
I think, that allows different kinds of communities,
particularly non-expert communities,
to talk about why AI is important
and how they want it to sort of sit in their lives and not sit in their lives.
So from, you know, from an ordinary person's perspective,
what would it mean to have a safe AI system?
Does it mean that it's not going to make mistakes?
Or is it, you know,
what do you envision as like an AI system that would follow this idea of safety?
Yeah.
So, you know, my friend Damon, who leads the Lawyers Committee for Civil Rights, you know,
he will often say there's more laws around your toaster than around the chat bot that you might have used this morning,
which is true.
So we just basically don't have, certainly at a federal level, though there's some action happening at the state level, any kind of just basic consumer protection.
So I think many people are actually shocked.
when they realize that when, you know, an AI company or tech company sort of ships a new model or an update of a model,
that no one has looked at that. There's been no kind of third-party kind of authority that said, you know,
it's met some threshold or standard of testing and that we think it should be safe and effective.
So, you know, there are affordances, there are things particularly about generative AI.
And we know increasingly from the research that you're never going to get rid
of all of the mistakes in generative AI,
certainly not in large language models.
So safe and effective systems doesn't mean that,
but it does mean that one would and should expect
that there should be testing on what people think
would be the most obvious use cases of these technologies, right?
If it's a multifunction or a multi-use technology,
there are use cases, I think, that we haven't even imagined and people aren't doing yet.
But I think anybody who has studied the history of technology
in the United States,
even just going back to the 90s, you know, we know there's always going to be a problem with scams,
scamming and fraud, always, any kind of new technology.
We know historically there's always going to be a problem with forms of, you know,
pornography, sexual abuse.
These things are like often the first use cases for new technologies.
And so, you know, we have chatbots that are being used to nudify young people in high school
or on Grok or whatever.
We can't act like these are harmful use cases that could not have been anticipated.
And so it doesn't mean at all, Elizabeth, that there won't be unanticipated things or that, you know, that a chatbot won't hallucinate.
But it certainly should mean that a company, before releasing a product, has thought through even basic historical use cases and actually thought about how they might be mitigated, or, you know, has had a conversation with some independent stakeholder, you know, state government, civil society, about how they might be mitigated.
Yeah.
So, you know, you're sort of describing this experimentation, right,
that's happening.
And we'd expect that the government should also be regulating
it.
And the answer at the federal level has been crickets, mostly.
There has been some movement.
I mean, the blueprint served as a springboard for President Biden's executive order on AI. So could you say a little bit about what the core of those concerns were in the EO?
Yeah. So I think the philosophy, both for the AI Bill of Rights and for the most part for President
Biden's executive order on AI, was that just because we have a new technology does not mean that
we have to have a new social compact or a new social contract. Like you don't have to throw out
every policy regulation in law because we have this new technology, as powerful as it may be. So
if intentional discrimination or intentional violations of people's civil rights or liberties are
illegal in any other fashion, if you do that with AI, it's also illegal, right? You might have to
differently figure out the mechanism or differently make the case, but the outcome is the, you know,
the legality of the outcome is the same. One of the things the executive order did was ask the
Department of Education to think about, you know, you've got guidelines for children's
privacy and their protection in the use of educational technology; do those need to be updated?
Or do we just need to double down on what we have as you're introducing
different forms of advanced AI potentially to the classroom, right?
You know, the president's executive order had some, you know, directions to things like
the Department of Labor. And I think differently from what the current administration has been
doing, it was not just what is AI going to do to work. It was, how can government help put speed bumps,
or friction or help to direct the sort of direction of travel.
So you're not just potentially casting people out of work.
You are helping them find other work.
You are re-skilling them.
Could there be a conversation about tax incentives
or other kinds of incentives to keep people on work
or to help people off board or on ramp to different work, for example?
The executive order, of course, also weighed in on, you know,
there was a lot of concern and remains a lot of concern in the national security space.
So, you know, should there be export controls, should we be controlling where various forms of technology go? So this is still a very
live conversation. Controversially, the executive order proposed that we would use the Defense
Production Act from, I think, World War II originally to require that companies give the government
more input and information about new, more powerful AI systems and tools that had a kind of
certain threshold of capability. So it was, it might have been historically the longest executive order ever.
Really?
Yeah, I think that's right.
I think it was 100 some pages, 101 or 102.
You know, as a reformist, a reformer, I don't necessarily think that is a good thing.
You know, in some ways it's a bad thing.
But in this case, I think it was good in the sense that it tried to be comprehensive,
that the philosophy here was that this is a kind of new infrastructure.
This is sort of a new operating system for a lot of the work that we do and how might we
think about the ways that government can both help to accelerate potential good use cases and
mitigate potential harms using the things, the tools, the mechanisms, the levers that
government agencies and the executive already have.
We're going to take a break, but when we come back, we'll turn to how the federal government
is and isn't regulating AI and how the states are filling in the gaps.
So before the break, we talked to Dr. Alondra Nelson about how to think about artificial intelligence,
why it poses risks, and why it should be regulated.
And so how did her work lead to a conversation about preemption?
Well, as she's already mentioned during her time in the Biden White House,
she helped create the blueprint for an AI Bill of Rights.
And that blueprint became the impetus for a part of President Biden's
2023 executive order on AI.
And as she's already discussed, that order told the federal agencies to address the safe and ethical use of AI.
Now, that's the limit of what President Biden could do. And that's because Congress has the power to legislate, not the president.
So while Biden could tell the executive branch what to do about AI, he lacked the authority to actually
preempt state law. Got it. And as soon as he began his second term, Trump rescinded or did away
with Biden's executive order and replaced it with his own. Now, the Trump administration's approach
to AI has been to turn away from a focus on safety and ethics in AI. Surprise, surprise, okay?
Right. And instead, to focus on what the federal government can do to accelerate AI development.
Okay.
Now, Trump's executive order has called upon Congress to use its power of preemption based in the supremacy clause to override state laws on AI.
Congress so far has not responded.
Okay. Which has left a lot of room for the states. And so we'll pick up our conversation there.
So we have a lot of different states regulating on AI. You know, California has been in the lead, as it often is in these areas.
You know, so for example, California has just a lot of different laws on AI.
For example, you know, you've got to disclose what kind of data you use if you're an
AI developer, what you use to train your models.
That seems like very technical, very big picture.
There are also some very specific California laws. We just passed a law:
you know, if you're a police department, you've got to disclose if you use generative
AI when your officers write their police reports.
That's a good one.
So you've got the whole range of different things.
So what does that mean?
You talk to a lot of people in the industry.
If I'm an AI developer and I want to offer my product in California
or I want to offer my product in Colorado,
which has an algorithmic discrimination law,
what does that even mean?
How does that work?
Well, I mean, I think the first thing to say is that we have other industries,
you know, where you have different kinds of regulations.
So you've got, like, you know, insurance is regulated mostly by the states, for example.
We've talked about a little bit consumer protection.
So I think the discourse that gets used in D.C.,
which is a language of its own,
there's a lot of kind of pearl clutching around the fact
that you would have different laws in different states,
although the very same people in Washington,
because they are the most adept people on, you know,
what the regulatory space looks like more broadly, writ large,
know that it's true all the time.
You know, it's basically true.
And I think there's, you know, something real there.
We use the phrase, you know, laboratories of democracy.
I think there's something to that.
I mean, you have a new technology that I think is fast moving.
I mean, in some ideal demos, would you want just one rule, you know, one rule to rule them all? Sure, right?
But we don't live in that ideal demos.
And we also know that the states are much closer to the harms. So you also have to imagine being a governor of a state or a state legislator,
senator in a state, and you have people writing to you about being worried about the future of
their children. You know, we had a scandal (I'm sitting here in Princeton, New Jersey) about
nudify apps. Like, you know, lots of just, I think, concerns about, you know, reading in the news
and experiencing young people harming themselves. You know, there's been something I saw a case
reported about a potential homicide. And I think if you are a state legislator and you're hearing
from constituents who've been denied a mortgage or screened out of a job by an algorithm, you can't
just sit blithely and sort of not respond to that. So I think partly it's just that folks are hearing it.
I think that we have a new technology. What are the best ways to think about this? I mean, even with
the case, you mentioned California and New York, which have done laws around kind of trying to require
some disclosure and transparency from companies around harms.
Texas has weighed in actually on thinking about harms and including discrimination,
but they've said it really has to be intentional; if there are unintentional harms, you know, they're trying to let the companies off the hook.
You know, a place like Colorado has attempted the first, you know, we might think of as like an omnibus AI bill that covers lots of things,
you know, including sort of harms to young people, deep fakes, discrimination.
And these are all, these are like I've just named three different approaches.
And it's not clear which one of those is the best one or which one's going to be most efficacious.
And I think it's worth actually letting states do this, you know, finish the work of implementing these laws and actually find out.
I just think that the harms are more likely to be on the side of not doing anything at all,
rather than trying a couple of, you know, different innovative strategies in different states to see.
And then, you know, because there's been no federal law, there's obviously just this vacuum in the states and there's a lack of clarity.
And, you know, I think there's been the D.C. conversation, the Trump administration conversation has been, or discourse has really been, well, it's creating confusion.
And I think what's actually creating confusion is the lack of any kind of federal guidance.
Yeah. It's actually the states that are trying to sort of bring clarity to chaos.
I mean, if the states are the appropriate front line for figuring this stuff out, is the ideal form of that to eventually roll up into some kind of federal regulation that makes sense?
Sure. I mean, I think what the state patchwork does is test things out. Some things will work. Some things will fail horribly. I also think the so-called patchwork creates some upward pressure, because, exactly to your point, Roman, when enough states act, federal policy or norms, as the patchwork gets woven together, become kind of implicit.
And I think it puts more pressure on the federal government to actually do something explicitly.
I would also, like, if we widen the aperture just slightly, broaden from the AI companies that we're talking about now to social media, the social media example, which gives us another, you know, 10, 15 years more to think about,
we've seen the utter failure, right, of the federal government to be able to legislate in that space.
And to the extent that we've got anything that looks like regulation or law or governance in that space,
it's coming out of these lawsuits, like the lawsuits that we saw earlier, you know, that were decided a few weeks ago around, you know, Meta and YouTube.
And so I think, if you are a state-level, you know, executive, if you're a governor or a state legislator,
you're thinking back on that example
and just thinking, we can't wait and do this again.
You know, the other, you know, as I said,
the states are close to the harms,
they're hearing from constituencies.
The way that we've been governing,
if you think about the social media model,
I mean, the young woman who was the plaintiff,
I think, in the Meta case,
she's 20 years old.
I mean, this happened eight years ago or something.
She was, you know, a child when this happened.
And so using liability and legal cases
puts us quite far away
from the harms. And I think the states can be much closer. Yeah, just to back up for a moment,
by way of explanation, you're referring to the social media trials that are happening in California,
where basically the states' attorneys general and private plaintiffs are suing, arguing that
social media platforms are harmful products, which has a long storied history of legal liability
in the United States. And actually, they're using the legal playbook of big tobacco. You know,
we kind of shut down big tobacco because we argued that the companies knew that these were harmful
products and sold them anyway. And that has proven so far to be successful in the social
media space. So I guess we could think of perhaps, you know, AI. Some of these products are going to
be dangerous and maybe we'll do that. Of course, I think you're right. I'll say that this is a backup,
right? We don't want to wait for the bad use case for people to be harmed. I mean, the nice thing
about regulation is you can be proactive and say, we think this is going to happen, or it is happening, and we want to affect as many people as we can within the
state or within the country. My question is really more about what about the companies? Not that I feel
too bad for them. But if you're a company, it's pretty burdensome, I would think, that you've got to
look at every state and see like what is every state doing. So I would imagine that, you know,
their first choice is no regulation, right? But their second choice must be federal regulation. No?
Yeah, I mean, I would disagree with that a little bit. I mean, let's have a friendly quibble about
this. I mean, I think that the compliance burden argument, I think, is a bit overstated by
companies, right? You know, that's just what companies and their lobbyists do in their
own interest. So, you know, right. And as I said, you know, I already mentioned, I think companies
already in other policy spaces are navigating like different consumer protection regimes for different
states, different employment laws, different privacy frameworks. I mean, you know, there's a,
the state of Illinois has this pretty strong biometric policy, you know, regime. And yet, you know,
companies were still, Clearview was still selling its facial recognition technology
data set, for example. So I think that the language from companies and lobbyists that say that
state AI laws are like uniquely burdensome or specially burdensome, I think doesn't really hold up
when you think about these other examples of these other policy spaces. The other thing I would say
is that I think what your question, which is a common question, an important one, presumes,
is that like, if the states don't have a law,
there's no other governance or pressure being applied
on the direction of AI governance,
which certainly in the Trump administration is not true.
So, you know, okay, maybe
you don't want to deal with California or Colorado,
but you've got a Trump administration that's saying,
we're changing tariffs every day.
You know, we've gone from Liberation Day to not Liberation Day back and forth.
So companies are dealing with that, including
AI companies. You've got a Trump administration that is saying, we don't
like immigration, we're uncomfortable with science and tech immigration. If you want to bring new
technology talent to your AI company, you're going to have to pay $100,000 per visa, if we allow you
to have one, to bring a talented engineer from France or, you know, Korea or something.
And then they're also intervening in business. So, you know, we've got
the U.S. taxpayer as a shareholder in NVIDIA,
as a shareholder in Intel.
So the compliance burden question, I think,
is much too narrow, given all of the different ways
in which companies are being asked to respond
to a kind of broad spectrum of AI governance.
Yeah, and let's not forget, I should say,
you know, the federal government and all of the state governments
are huge customers, right?
You know, customers can demand changes if they want.
Procurement is an excellent vehicle.
I mean, you know, Governor Newsom just signed this executive order that I think really leaned into that,
including not only safety issues, but issues around discrimination and civil rights and liberties,
which I thought was fantastic.
So we've talked a lot about sort of granular harms that are potentially happening or are happening.
But I do want to talk about your thoughts on what's on the horizon, the AI horizon.
There seems to be this race to develop AGI, or artificial general intelligence. So the idea would be not like, please find all the cats in this picture, or
write my high school essay on Pride and Prejudice. It's like an all-purpose, sophisticated AI with
autonomy. Now, you've spoken to a lot of people in tech. I've spoken to a few. It seems like some
people in the AI policy world are extremely worried about this. Like we could create something
that gets totally out of control, develops like a biological weapon, takes over our defense systems.
How concerned are you about this, both in itself
and as an object of regulation?
So, am I concerned about it? I think some people are quite invested in the name and what the name means.
So people are quite invested in whether it's super intelligence or AGI.
I'm not at all invested in the name and I don't really care.
So it keeps me out of some fights, but probably also keeps me out of some parties.
I don't know.
But I do think that, you know, I prefer to use the phrase advanced AI. Like, there are significant concerns about advanced AI. So for example,
if we think about DOGE early in the Trump administration, you know, last year, part of what
the reporting in Wired and elsewhere was suggesting is that DOGE was breaking the Privacy Act of 1974,
which said that a lot of interagency organizations could not share data, in part because you don't want the federal
government to have administrative data about you from Health and Human Services, from
Fannie Mae, from whatever, to be able to put into this kind of large surveillance panopticon.
And I think what like powerful AI systems do is allow the interoperability of that data and the
sort of discovery potential of associations that are dangerous, things that we could never
possibly know about ourselves or about others. So that's not even AGI, right? But that's just sort of a powerful extreme. So if you imagine a system having
access to data about everyone in the United States, everyone in the world, being able to
sort of constantly be evaluating that data, running that data, and then making decisions. And again,
you know, I mentioned at the beginning the various forms of autonomy, or not, of
different AI systems, and doing it autonomously. So imagine not just all of the, you know,
open claws, all the little lobster claws of various agents, but like a really big claw,
a really powerful independent agent sort of acting in the world. And so we've got, you know,
there's been some reporting and I've seen, you know, and some people, you know, discussing on
social media, things like, you know, I used this agent and it wiped out my entire hard drive or
it deleted all of my emails or that, right? You know, and that's, you know, that's happening.
And we're not imagining an AI agent that was sentient or all-knowing
and, like, deliberately decided that it was going to wipe out all of your email because you work too hard
or because it doesn't want you to work or whatever.
Those are just powerful systems that we're learning to use.
So then you can imagine potentially a system having a bit more intentionality,
a bit more sort of understanding of its stakes in being more powerful.
The question then becomes, and I think this is where we trip ourselves up,
well, how do you regulate that?
It's just so powerful, what are we going to do?
And before you get there, you need to imagine that companies can actually be told not to build a thing.
You know, I mean, maybe you can't tell them not to build it, but they can be told that they can't ship a thing.
You can't tell a company what to create, but you can certainly say you can't ship this out into
the world without certain controls.
Like someone needs to be able to have a kind of final decision on whether or not it ships
or to be able to turn it off and on or you can only run it for a few hours or it can only have
so much access to so much compute or so much data, you know,
and we're not having those kind of, I think, system-wide conversations.
And, you know, to go back to
the kind of subject of the broader conversation,
I mean, that is where you would want a smart, prudent,
federal government to sort of weigh in, right?
That's where, you know, at that level of kind of nuance
and both kind of level of abstraction and power,
you might want there to be some sort of federal,
I think law or legislation or guidance.
When we come back, Dr. Nelson explains her vision for finding a consensus on AI regulation.
And whether she's optimistic, the government will figure this out.
I mean, you've developed this idea of a kind of thick alignment when it comes to AI governance.
Could you talk more about what thick alignment is and how that translates to regulation?
Yeah, so there's a wonderful writer, Brian Christian, who has a really, you know, important book that I would
recommend to people, called The Alignment Problem, that is really writing about the kind of
early years of what some people call AI safety, which is basically just, how do we
explain these systems, how do we interpret what they're doing, how do we demonstrate that they're safe to the
extent possible. And, you know, it was very much a kind of technical sense of thinking about
alignment. So, you know, the system says that it's supposed to identify at 98% with a margin of error
of two or three percent, you know, these people in a facial recognition technology system.
And for all intents and purposes, you would say that system is aligned, right?
But we know the system is misidentifying people.
We know in the Detroit metropolitan area that there have been more than half a dozen people
misidentified by facial recognition technologies that someone, somewhere in the development
and deployment queue said, this is aligned.
Like this product works, right?
And so as we're thinking about AI systems and advanced AI systems, it's not just whether or not they kind of work technically.
What happens when you or what can we anticipate or not anticipate when you deploy them?
And how do we create a process or an understanding that allows us to be thinking about alignment as something that needs to happen fairly continuously over time?
And also that's something that needs to happen in conversation with the values
of different communities and different societies.
So alignment is not, so by thick alignment,
I am taking up the work of, you know,
the philosopher Gilbert Ryle,
but also the anthropologist Clifford Geertz,
who was a professor here at the Institute for Advanced Study
in Princeton, where I am,
who has this very famous essay
on the concept of thick description:
that you don't really understand the world
until you've really sought
to understand, contextually and deeply,
what it means. How do you describe it
deeply? And so
my provocation to AI safety
researchers, and my collaboration with them actually,
so it's not just critical work,
is this: alignment is important. Safety,
explainability, interpretability, all the things that you might put
in that bucket are really important,
and taken together
are an important
solution set for some of the
harm mitigation that we might want to do
in the space of AI. But what does
it mean to do that in a way that takes
seriously the different contexts in which these tools might be used, the different values.
So if you think about, you know, Anthropic has created a constitution for AI, for example,
like, you know, who gets to weigh in on that? And are those the values that I want or others want?
You see this kind of value conversation coming up also in even some of the Trump
administration's framing of, quote unquote, ideological bias in AI.
Like, who gets to decide what's biased in AI? There's a technical question about bias.
in AI, but who gets to decide sort of what is a biased chatbot? So I think we just need to have a
conversation we're not having about what it means to try to come to a rough consensus on
values, to the extent that's even possible, to try to have, you know, high-level values to make
decisions about these technologies. So I think the AI Bill of Rights was one of the ways that we were
trying to point to that. But certainly I think state laws are another way. I mean, you might think of those as examples of thick alignment.
Like, this is what our constituency cares about. And this is where we're going to lean in on in
the regulatory space with regard to AI. The other stuff, maybe we don't care about so much.
But like, I don't even know if I have thick alignment with President Trump as like a human.
You know what I mean? Like, it seems harder and harder to have it, you know, like when you're
talking about all these hypothetical uses of this stuff. And like, if something like a program is supposed to
have inherent, you know, human values... I don't know, I don't know.
A lot of those don't feel shared right now.
No, I think that's right.
I will say, you know, one of the things I've been doing since I left,
so the AI Bill of Rights comes out in October of 2022.
And I've been since that time following its afterlives.
And some of its afterlives, I think, Roman, to your point,
have been in red states.
So there's been an Oklahoma AI Bill of Rights introduced as a bill.
You know, it didn't succeed ultimately, but it contained all of the five principles that we discussed previously, plus a few more that were really good and actually stronger than some of the things that we suggested.
More recently, in November, in Florida, Governor DeSantis introduced a Florida AI Bill of Rights, which contains within it all of the five principles that we have and lots of other things besides: deep fakes, you know, child sexual abuse imagery, a really nice clause
that was around health insurance and being able to get a decision
around algorithmic uses of health insurance. So I totally take your point, Roman, but it's also
clear that there are a few things that we agree are wrong or that we don't want that are like
suboptimal for society. And so, I think you're exactly right, but I also take some
comfort in these Bill of Rights alignments that pop up here and there.
You sound optimistic about the future of AI regulations. Is that right?
Am I optimistic about regulation? I don't know. I mean, I think if we look at the history of technology policy at the federal level and in Congress, I mean, Elizabeth, correct me if I'm wrong, but I think, you know, it's maybe not been since the Communications Decency Act, since the Act of 1996, that we've passed anything like a technology law. Like, you know, that's a long time. That's a generation. So I'm not optimistic in that
sense. I think I'm optimistic with, you know, some people are calling it the tech backlash,
and I don't call it that. You know, I don't, I don't like that framing. But there's a growing
public empowerment to speak about what people want and don't want with regards to the way that
AI systems are being developed and deployed. So when I first started working, you know,
in sort of big data, and then, you know, as it became AI, in policy and research.
That's how you date yourself. I know.
I know.
You say big data in a room and the young people kind of cringe.
They're just like, oh, big data is so cringe.
So, you know, you would sit in rooms and people would say, people can't possibly understand.
I mean, even now you hear people saying, and I heard this in D.C.
actually, when I was working in Washington, you know, if you're a staffer on the Hill and you don't have a PhD, a degree in machine learning or AI, like, how could you possibly even begin to offer guidance on how we should govern this technology?
So, of course, you don't want people who know nothing about AI to be governing AI.
But I've been encouraged by the fact that the public has demonstrated that it is not true
that you have to have a PhD in AI to be able to say something about the AI governance space.
So you see it in the space of data centers, of data center governance.
So people have really, you know, that's a place where like AI governance and policy is quite tactile, right?
So it is in communities.
It is about their water.
It is about their energy use.
And it's where sort of AGI or superintelligence like lands on the ground.
And it is where communities really feel they have a sense of agency around that.
So we're seeing, I think I just saw in the news, that Maine has banned, you know, data centers for a time.
There were a lot of big projects announced that have been stalled, that are being revisited.
There's reporting now about how a lot of these data center agreements in various communities were done with local politicians
under NDAs, such that local communities can't even know the terms of the agreement for some of these.
And people are really pushing back against that. And they're pushing back against the harm
to young people. They're very concerned about, you know, suicidal ideation and how chatbots encourage them.
So am I optimistic about law? Absolutely not. But am I optimistic about the fact that
it's getting much more difficult for, you know,
companies and, you know, other elites
who really want to just drive technology
without thinking about the harms
and the social implications to do that
because you've got a growing chorus of people,
bipartisan, Roman,
maybe not aligned, but bipartisan,
saying that, like, we don't want this.
And so I find: optimism, no; encouragement, yes.
I think one of the things, my one point here, is that I think it's funny that the biggest proponents of AI and the broad use of it are kind of the biggest fearmongers of it too.
Like, I think they kind of enjoy the sense of: this is super powerful,
you should let us do what we want,
and it's going to destroy humanity in five years.
I think they like both of those things.
So I think both of them feed into their ego.
They're both about power.
Yeah.
Yeah.
Yeah. It's fascinating, because the fact that the alarmists are the biggest proponents is a weird dynamic. This is not like tobacco regulation, where the people who wanted to regulate were just on the side of stopping harm and the other people were like, no harm. It's an odd dynamic. And it's also mixed up in all this stuff, like the Florida regulation versus the California regulation. The political valence of this stuff is much more complicated than most other things.
Yes, it's very complicated and kind of heterogeneous.
And so that's fascinating.
And I think there's some very interesting essays, articles, papers to be written about at a time of, you know, maybe historically, since we've been measuring highest polarization in American society, that you've got this growing negative sentiment about AI and that it's bipartisan and that the issue set about which people are having agreement of their dissatisfaction around is growing, right?
So you go from kind of discrimination to young people and CSAM to fraud to, you know, healthcare.
Like, the space is just becoming much broader, to data centers, for example.
People obviously are worried about their jobs and worried about employment and what they're being told.
Roman, to your point about powerful people saying our powerful tool is going to be really great and destroy everything, including all of your jobs, right?
So, yeah, it's a, you know, it's a very interesting policy space, and it's a space where, as I said, you know, I think of political encouragement, if not optimism.
Yeah.
I mean, this seems like a new opportunity for a different kind of alignment, which is really
kind of fascinating.
Yeah.
Dr. Nelson, I really appreciate you being here.
Thank you so much.
It's been great to talk to you.
So that's the original seven articles of the Constitution.
Thank you for joining for all of that.
Of course, there are amendments to be talked about, 27 of them.
But we're going to take a pause on the breakdown of the Constitution.
There's just so much going on with Trump and the Constitution that we're going to go back to releasing our what Trump can teach us about con law episodes.
There won't be an episode in May, but we'll be back in June for Supreme Court decision season.
Everyone's favorite season.
The 99% Invisible Breakdown of the Constitution is produced by Isabel Angell, edited by committee, music by Swan Real, mixed by Martín Gonzalez. Kathy Tu is our executive producer. Kurt Kohlstedt is the digital director. Delaney Hall is our senior editor. The rest of the team includes Chris Berube, Jayson De Leon, Emmett FitzGerald, Christopher Johnson, Vivian Le, Lasha Madan, Joe Rosenberg, Kelly Prime, Jacob Medina Gleason, Tallinn and Rain Stradley, and me, Roman Mars. The 99% Invisible logo was created by Stefan Lawrence. The art for this series was created by Aaron Nestor. We are part of the SiriusXM podcast family, now headquartered six blocks north in the Pandora building in beautiful uptown Oakland, California.
You can find the show on all the usual
social media sites as well as our own Discord server
where we have fun discussions about
constitutional law, about architecture,
about movies, music, all kinds of good stuff.
You can find a link to the Discord server,
as well as every past episode of the Conlaw
Book Club and every past episode of 99PI,
at 99pi.org.
