No Priors: Artificial Intelligence | Technology | Startups - From Coder to Manager: Navigating the Shift to Agentic Engineering with Notion Co-Founder Simon Last
Episode Date: March 12, 2026

Notion isn't designing AI agents that just use tools. Their agents can autonomously build their own integrations, as well as write the code needed to finish a task. Sarah Guo sits down with Notion Co-Founder Simon Last to explore Notion's rapid evolution from a simple writing assistant to a sophisticated platform for custom AI agents. Simon discusses the technical hurdles of indexing disparate data from sources like Slack and Google Drive, as well as the internal shift toward using coding agents to build Notion itself. Plus, Simon elaborates on what he sees as a fundamental transition in productivity: moving from a tool where humans do the work, to one where humans manage a swarm of agents.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @simonlast | @NotionHQ

Chapters:
00:00 – Cold Open
00:05 – Simon Last Introduction
00:26 – Genesis of Notion AI
04:10 – Challenge of Semantic Indexing and Retrieval
07:16 – The Six-Month Rewrite Cycle
08:12 – Notion's Coding Agent Era
09:44 – Impact on Team Dynamics
12:49 – Launching Custom Agents
15:39 – Notion as the 'Switzerland' for Models
17:33 – Designing APIs for Agent Customers
20:09 – Simon's Personal Agentic Workflows
24:48 – Notion: Tool for Work is Now A Tool for Agents
27:28 – How Building Has Changed for Simon
29:00 – Conclusion
Transcript
Hi, listeners, welcome back to No Priors.
Today I'm here with Simon Last, co-founder at Notion.
We talk about their new vision for Notion in the AI age as a platform for humans and agents to collaborate,
how the engineering and product org at Notion is changing and these new tools for thought.
Welcome, Simon.
Hey, Simon, thanks for doing this.
Hey, of course. Yeah, it's really fun to be here.
Notion's at scale, amazing platform, lots of users.
You did start quite a while ago.
I think of Notion as one of the companies that has really, like, embraced AI quite aggressively.
I was told you first got your hands on GPT-4 at a company offsite in Mexico.
Is that true?
What is the origin story of, like, starting to work on this stuff?
Yeah, I think, yeah, that year, that was 2022.
I've been watching, you know, what's going on.
In general, I've just been, like, super curious about the technology and fascinated to try everything and think about, like, how we can apply it.
It wasn't until I played with GPT4 that it became really, really real.
So, you know, we got access to it.
It was sort of like a proto-ChatGPT-like interface.
And my co-founder, Ivan, and I both got access.
And it was just immediately clear.
Like, I would say two big things.
One is that it was just pretty smart.
It could follow reasonably complicated instructions.
It could write things for you.
You could edit things.
And the second big thing was that the scope of its knowledge
was extremely interesting. Super, super deep and broad world knowledge. When we played with it,
it became just instantly clear to both of us, like, okay, the time is now to start
and think about how to apply this. It's only going to get better.
We were talking about Mexico, GPT-4. You guys saw it was clearly the time.
Did you start with like a particular vision of like what you should obviously be able to do
with AI in Notion, or just start pulling people from different teams or recruiting people and
say like, let's experiment? How did you begin?
I think we immediately had a long-term and a short-term
vision. I would say, I'll start with the short-term one. The thing that was immediately
obvious was, oh, it could be like a writing assistant. So it could be in your document. You can, like,
select some text, have it rewrite it. You could have it write text for you. Maybe look something up and then,
you know, give you like, like sources or more information. So that was the thing that we immediately
like got to work on. And we sort of started a tiger team around it. And then we were able to launch it in
like two or three months after that. And then the long term vision that we immediately had was like,
oh, the thing that looks like it may be possible is more of like a general assistant.
So what if you could just give it all the tools inside notion that a human would have,
be able to create its own databases, query, manipulate them, create documents, edit them,
and sort of weave all of these things together to do like a longer range task.
And so we sort of immediately started on both.
The short-term one we were able to ship very quickly, and then the long-term one didn't really work yet.
And so that took much longer to get working.
The first launch of AI-specific Notion features and products was when, last year?
No, it was February 2023.
Oh, okay.
My timelines are wrong.
Are there like a few specific learnings or breakthrough moments you think since beginning to release that are interesting?
Yeah, I mean, there's been, it's been a slog over many years or multiple years at this point with many, many learnings.
I would say, yeah.
I mean, just to give you a timeline of the arc of what we shipped is, you know, so the first thing was our writing assistant.
We called it AI Writer.
That's the first thing we launched.
It was the easiest to get working because it's like a single-step task, rewriting and editing text.
There's no like retrieval aspect.
It was just like raw access to the model to write the text.
The next big thing that we immediately started working on was Q&A, doing a semantic index of the entire workspace, and then letting you
ask a question, and it can give you an answer that's grounded in the sources. That was also
immediately obvious to us that that'd be super useful. And so we started work on that. That one we
launched in, I think it was October 2023. So we started a beta before then. Our GA was in October.
That was a much bigger effort to get working, obviously. We weren't just, like, plugging in the LLM.
It was actually doing this, like, real-time updating index. Right. We had to get much more serious
about the evals and the quality there as well. The Q&A has been a multi-year journey.
Basically, what we did is, as soon as we got the Notion index working,
it was obvious that, oh, okay, we should index everything else as well.
And so we indexed, like, Slack and Google Drive.
We're launching new ones on a regular cadence.
And now we have a, I would say, fairly complete index.
One could argue that those are like very difficult problems that, you know,
those products natively have not solved perfectly yet.
So how did you think about taking that on?
I don't know if that's like an offensive thing to other product teams,
but like it's not working yet.
Yeah, it's kind of true.
Yeah, this has been something we talk about a lot
because it's like, you know, it's like almost like
what right do we even have to do this?
But it turns out that most of the companies
are pretty bad at making their indexes somehow.
It honestly kind of baffled us a little bit.
Right.
But I think my take after dealing with all of this
and, you know, working with the teams
trying to get it working is there's a little bit of just
AI-pilled savviness that's pretty important.
And then I think most of it is honestly just a bit of craft and attention to detail.
I think, like, in particular with this, like, indexing retrieval stuff, in order to really get it working,
you have to be quite empirical and iterative and actually be, like, trying queries.
Like, you know, each data source is a little bit special.
Like, you know, you can't just apply a one-size-fits-all to, like, querying Slack versus querying Google Drive, let's say.
They're completely different kinds of information.
And we found that there's just a little bit of like craft and love that has to go into it in terms of like actually trying a bunch of different queries, actually using it every day, and constantly iterating and rethinking and tuning how the retrieval works.
How did you think about the diversity of how people organize their workspaces? I mean, even Notion usage is not homogeneous, right?
Like, I'm probably part of 15 workspaces as an investor.
And so I look at them and I'm like, well, mine's a mess and these people are really organized, and the workflow is reflected in how their Notion works.
Yeah, totally. I would say, I mean, the interesting thing is that with embeddings,
it almost doesn't matter as much anymore. The AI doesn't really care what the tree structure is,
for example. All the AI cares about is that there's a snippet of text that has the context you need
and that it can retrieve it. And so actually, we kind of advise people now, like, don't worry as much
about organization. Just find a way to get it all piped in and, like, thrown in there.
You still make decisions that could change performance quite a bit, like chunking strategies.
Yeah, yeah. That's super important. But that's sort of transparent to the user and sort of independent of their particular method of organizing things.
It just seems like still a difficult technical challenge given how different the content bases are.
Yeah, yeah. Yeah, I think, yeah, that took a lot of iteration. Yeah, the chunk sizing, how retrieval works, the different like steps in the pipeline of retrieval. Yeah, there's a lot of iteration on that.
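The pipeline described here (chunking, embedding, empirical tuning of retrieval) can be sketched minimally. Everything below is illustrative: the character-based chunker, the chunk size and overlap, and the bag-of-words stand-in for a real embedding model are assumptions, not Notion's implementation.

```python
from collections import Counter
import math

def chunk(text, size=200, overlap=50):
    """Split text into overlapping chunks; size/overlap are illustrative knobs."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
        start += size - overlap  # step forward, keeping some shared context
    return chunks

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In practice you'd swap `embed` for a real embedding model and tune `size` and `overlap` per data source, which is the per-source craft being described: what works for short Slack messages won't be right for long Drive documents.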
Ivan said I should ask you how many times you've rebuilt Notion and rebuilt your harnesses.
Yeah, yeah, it's kind of a running joke almost.
I mean, we rewrite our AI harness probably every six months or so.
And the time to rewrite has kind of been decreasing just because, I mean, progress has been accelerating.
I think this is honestly a really key thing.
And something that a lot of companies get wrong is just, like, doing one thing and then sticking with it.
You really do have to be keenly aware of what the current state of the models and the technology is,
and then design the harness and the system and the product deeply around that.
And it basically means you have to rewrite it every six months.
And I find it pretty fun.
It's part of the process.
You know, you get to restart and rethink it.
You know, we're working on, we're about to release a new version of our harness like in the next week or two.
And then we're already thinking about the one after that as well.
I think that leads to a set of questions I had for you on just, like, how does Notion as an engineering and product and research organization work now that you have the power of coding agents as well? Because I imagine, like, your willingness to rewrite the harness goes up dramatically, like, agents are going to help me do it.
Yeah, that's extremely true.
Yeah, I mean, yeah, it's been it's been really fun to use the coding agents.
I think the ambition of what I even consider building has gone up a lot.
What do you think has most dramatically changed in how you think about how engineering and product should work at Notion over the last two, three years?
Yeah, I mean, it's definitely changed multiple times.
I mean, in terms of the coding agents, we kind of went through multiple eras.
There was kind of like the tab autocomplete era, and then we got into sort of inserting, rewriting some code.
But it wasn't really until the agents started working.
I would say, like, early last year, we started to adopt the agents.
Like, I started using Claude Code, I think, around April last year.
That was a huge unlock.
Like, I would say the big shift there is that, you know,
you can really push on getting these agents to end-to-end, you know,
implement and verify and maintain stuff.
But it requires pretty significant thought
in terms of how you architect things and what is the verification loop.
But the upshot is, I think if you do it well,
you can be much more ambitious about what you're building
and also make it much more robust than you could have done
with humans writing it.
And then the flip side is if you do it badly, it's all slop.
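The loop described here, where an agent implements a change end to end and the human designs the verification, can be sketched as an outer verification loop. `generate` and `verify` are placeholder callables standing in for an agent invocation and a test/lint/typecheck pass; nothing below is a real agent API.

```python
def run_with_verification(task, generate, verify, max_attempts=3):
    """Outer loop around an agent: request a change, verify it,
    and feed failures back in until it passes or we give up."""
    feedback = None
    for _ in range(max_attempts):
        change = generate(task, feedback)   # agent proposes a change
        ok, feedback = verify(change)       # automated verification loop
        if ok:
            return change
    raise RuntimeError(f"no verified change for {task!r} after {max_attempts} attempts")
```

The design point is the "do it well vs. slop" distinction: the human's leverage moves into architecting `verify` (tests, typechecks, deploy gates) rather than typing the change itself.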
Does that change your lens of what teams should look like at Notion?
Like size, seniority, anything like that?
Yeah, I mean, I would say, I mean, the fundamental effect is that, you know,
everyone's individual impact in terms of their output can be much higher.
And your output increasingly depends on your ability and willingness to use the tools.
I think that's the fundamental thing that's happening.
And then, like, how does that play out?
I don't think we've seen that much impact on the team side, really.
I think we like to work in, like, small-ish-tiger teams for the most part.
I think if you can make teams small, it's almost always better.
That was true before, and I think it's still true.
Maybe increasingly a little bit, but not that much.
I think, yeah, the main thing is to just, like, really harness the tools.
Do you think something different happens to the median engineer
in an organization versus the 10x engineer, or the engineer 10x more willing to use the tools?
Yeah, I think the gap is bigger. You can be like a 100 or 1,000x engineer if you use the tools right now.
I think the gap is much bigger. Like, the minimum bar has not changed, but the maximum bar has
increased dramatically. One impact it has had internally, I would say, is, like, broadly, things feel
like a little bit more messy and chaotic. I would say like, but I kind of love that. I mean,
it's like there's more, there's way more prototypes.
For example, our design team made an entire Git repo.
They call it the design playground.
And it's essentially like a simplified Notion with a bunch of, like, UI primitives in it.
And they've made it like really sophisticated.
You know, it has like an agent in there.
And, like, it's pretty cool because it allows all the designers to spin up, like, super high-fidelity prototypes.
Really quickly.
And so it's no longer, like, pointing at a mock and being
like, you know, how will this look?
They'll give you like a URL to a prototype that's been deployed.
And that sort of thing is true all the way up and down the stack, you know, for all of engineering.
Just like a little more chaotic, more stuff happening.
All the PRs are more ambitious.
Do you draw a line somewhere about like stuff that is more dangerous to touch or sensitive?
Like, there could be risk of data loss over here and not.
Or is it kind of you look at it all as it's fair game?
We still do reviews on all the PRs.
And I would say, you know, since all the PRs are now written by agents, they're often, like, larger and more complex.
That's, like, the worst part.
But the better part is that they're often, like, much better tested, and we can demand sort of much better testing for the things that merit it.
I never produce a PR that, like, hasn't been, like, fully end-to-end tested anymore.
And so it's like, you can get to a pretty high degree of confidence that it works.
But it requires, like, you're not just vibe coding by saying the thing you want.
You're sort of thinking carefully about, like, what is the change I'm trying to make?
And like, and how can it be verified and how can it be deployed safely?
And then enlisting the agent to help you with that process.
When you think about where you said the general assistant, like, doesn't quite exist yet,
what do you imagine Notion's agents being able to do, like, over the next year or two, that are still blocked?
They're still blocked by either capability or your harness work.
We struggled for a few years to build an agent.
And, you know, it always, like, sort of worked, but then, you know, wasn't that useful.
Largely just, it was too early.
So we, you know, we tried to build an agent, I would say, actually three or four times.
And then we finally launched it last fall, so like last August, September.
So Notion AI now, it's like the full agent that has access to everything in Notion,
pretty much.
So that totally works.
I would say, like, a lot of the original vision that we had totally works now,
and it's, like, fully shipped.
Last August or September,
we shipped our personal agent.
So it's pretty much every user in Notion
has an agent.
And it basically, it has access to all the things
that the user has access to.
So you know, it can create a database for you.
We can update things,
create documents.
It can search the web, do research.
And then the second big thing
that we just launched last week, actually,
was custom agents.
So you can basically,
you can create a new custom agent,
give it a name.
And unlike the personal agent,
by default it doesn't have access to anything.
So you have to grant it access.
But then once you do, it can actually run autonomously in the background.
So for example, you can give it access to its own database to file tasks, let's say,
and then you can attach it to a Slack channel, and then it will start responding to people
in Slack and filing tasks.
That's one use case.
Another one is maybe you could give it access to a database of weekly reports, and then let
it search the web or search your workspace.
So a custom agent sort of represents some work or job, some knowledge-work
tasks that you want to be done autonomously. One thing I'm really excited about going forward
is we want it to be extremely good at sort of bootstrapping its own capabilities, basically
from an initial kernel allowing it to basically bootstrap itself to do anything, right? So
even for example, maybe building an integration that we don't support yet, deploying that and
then using it. So you imagine that Notion agents are actually the broader definition of agent,
where, like, writing code is a tool it's got access to.
I think it's pretty key.
Yeah.
I think of coding agents as like the kernel of AGI.
AGI will be a coding agent.
And code is just a really, really useful, a primitive for representing like deterministic logic.
The thing that's really exciting about applying it to a knowledge-work agent is that it can bootstrap a capability.
So like I said, if an integration doesn't exist, it can build it.
If it needs to, you know, connect itself to a new data source, it can do that.
Given that Notion is at scale, but is operating in a landscape of productivity
and platform players that are at even more scale, right?
Many of these will end up with their own agents; lots of people, from the labs to the Microsoft
world, are trying to integrate other data sources.
There's this cross-attempt to integrate and index.
Like, how do you think that plays out?
Like, what do you imagine that notion agents are best at or what they have the right to go do?
If you look at the landscape, like, I would sort of say there's the labs,
and then there's maybe the software platforms, and then there's maybe like infrastructure.
In terms of the labs, you know, we see ourselves as kind of like the Switzerland for models.
We think, and our customers agree, that they don't want to be locked into a certain lab's model.
They're always releasing new versions.
Any given month, one is better than the other.
So we want to be a place where basically you can easily get access to all the best models at any time.
And you can easily switch around.
Do you think open source plays into that as well?
Yeah, yeah.
Absolutely.
I think the open source models are actually getting really good.
There's like four different Chinese models now that are quite good.
Yeah.
We actually just released one of them in our agent last week and we're going to do all four for sure.
They're actually quite good.
And they're way cheaper than the frontier models.
So I think there's a lot of use cases where you'd want that.
And we want to give that as an option.
In terms of like the other, you know, so, you know, we think of our role as sort of taking
all the best models that we can, creating really high quality state-of-the-art agent implementations
where people can easily and conveniently get access to them.
And then making sort of a collaborative workspace that is really good for humans and for
agents to coordinate on.
I think it's something that's very needed
in the world, and we're just trying to do it in a really tasteful, well-executed way.
You were describing, you need the index to make the agents good.
You give the agents access to the tools that we humans have in Notion.
How do you think about the structure of Notion and where it's useful or even not useful or relevant for agents?
Like blocks and databases and such?
It's all still pretty useful, extremely useful.
There's been a challenge to sort of, you know,
we want to make it really convenient for the agent.
I think that's a new thing that didn't exist.
In the past, it was convenient for humans,
and then we also made our API convenient for humans writing code.
So we essentially have a new customer, which is the agent.
At first, that was definitely a problem.
So, for example, like our API uses this crazy JSON format for blocks
that by default is, like, crazy verbose,
like horrible for the agent.
But we basically took on that challenge
and designed just really convenient APIs
for the agent.
We created sort of a markdown dialect
that looks like the default normal markdown,
but it's sort of enhanced with all the notion blocks.
And the models are really good at it.
It works really well.
So that's how it reads and writes the pages.
And then for databases,
we use SQLite.
So basically it speaks SQLite,
which also works really well.
So the default thing did not work really well,
but then we just took that on as an engineering challenge.
And I would say now we have, like, extremely convenient APIs
that the agents are really naturally good at.
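The idea of giving agents a more token-efficient surface, collapsing a verbose block representation into compact markdown the model already knows, can be sketched like this. The block schema and conversion rules below are invented for illustration; they are not Notion's actual API format or its markdown dialect.

```python
def blocks_to_markdown(blocks):
    """Render a verbose block list as compact markdown.
    The schema (type / rich_text / plain_text) is hypothetical."""
    out = []
    for block in blocks:
        text = "".join(span["plain_text"] for span in block.get("rich_text", []))
        kind = block["type"]
        if kind == "heading_1":
            out.append(f"# {text}")
        elif kind == "bulleted_list_item":
            out.append(f"- {text}")
        else:  # fall back to a plain paragraph
            out.append(text)
    return "\n".join(out)
```

The payoff is the one being described: far fewer tokens per page, and a format that sits squarely inside the model's training prior, so the agent reads and writes it naturally.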
How did you understand or figure out what would make the API better for agents?
That's a good question
Yeah I would say it's a combination of
Just trying things
It's very empirical
So we're just playing
around and, like, noticing, oh, it's not very good at that. Oh, that's way too many tokens. How can
we make this smaller? And then a little bit of just, like, first-principles thinking of, like,
you know, what is it the models are being trained on? And what's in their prior? What do they
know? And what do we think it would naturally be good at? And like, like, how does the agent loop
work? And like, what would be the convenient, efficient pattern for accessing these things?
And so, and then just, you know, a lot of playing around.
I hear user research where the user is actually an agent, and then, you know, ongoing evals.
Yeah. I mean, you just chat with it. The user's always there. It's ready to talk to you.
Yeah. Actually, that is wonderful where you have infinite access to it.
You have infinite access to it. Yeah. And you can script and scale the access as well.
I assume you have. Actually, I know you do because you walked in. You're like, hey, I need to get access to Wi-Fi. I need power. We can't block the agents while we're doing this.
What do you have running right now? Tell me about your setup.
I'm working on a new prototype, and so I have a couple agents. I'm working on that.
And then, yeah, my setup these days is just either Claude Code or Codex.
I like the CLI tools. They're super simple and, like, work pretty well.
I'm pretty comfortable in the CLI, so.
And then, yeah, my- You don't need my generated game.
It's a very cool idea.
I would say, yeah, my whole goal these days is essentially to just have as many running as possible.
and to run them all the time.
And, you know, so, for example, like, every night before I go to bed, I'm like, okay, I...
Let's go, guys.
Yeah, basically, what I have to do is make sure that I've given it enough stuff that by the time I wake up in the morning, it will still not be done.
And so I've maximum...
That's victory.
Yeah, that's victory.
Yeah.
So, yeah, like, I've done that, I would say, the last five nights pretty well.
My personal record is that I've had a coding agent running for, I think it was 13 days straight.
without stopping and just basically working through like tasks.
Well,
well prompted.
Yes,
I admit to having woken up in the middle of the night,
at least multiple times this week.
I'm just being like,
are you still going?
Yeah, I know.
Yeah,
it's kind of nerve-wracking.
I always like,
there's always like,
I'll check it one last time before bed
and just really make sure that it's still spinning.
What about on the notion agents side?
Like, do you have a workflow there that is core to daily work?
Yeah, I mean, I mean,
I use our personal agent all the time.
So it has all the context about our company and everything that's going on.
So, like, for example, last night I was asking it about how the custom agents launch was going and, like, what signals we're getting from it. Super useful for that.
And then I have many custom agents that are running.
My personal favorite is I have an email triage agent.
So it has access to all of my work and personal emails.
And it just wakes up every day and just archives all the stuff.
I don't need to see.
I train it over time to learn my preferences.
Do you actually label data for it?
It's pretty easy to do this, actually.
So all you have to do is you make the agent, and then you give it access to your email,
and then you can make a blank page.
It's like its memory, and you let it edit that page.
And then you just say, okay, now go look at my emails, and then interview me,
ask me which things, you know.
So it will, like, propose things that it thinks it should archive, and then
you can kind of correct it, and then it'll use that to essentially generate, like, a list of
rules about, like, what it thinks is correct or not. And so for the first couple of days,
I was sort of, like, correcting it on things. After a couple weeks or so, I dropped the
approval entirely, and it just automatically archives all the things I don't need to see now.
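The memory-page pattern described here (the agent proposes, the user corrects, corrections accumulate as rules, then it runs unattended) can be sketched in a few lines. The rule format and email fields below are hypothetical; a real agent would reason with an LLM and use the rule list as its editable memory.

```python
def propose_archives(emails, rules):
    """Split emails into (archive, keep) using learned rules.
    A rule here is just a substring matched against sender or subject."""
    archive, keep = [], []
    for email in emails:
        hit = any(r in email["from"] or r in email["subject"] for r in rules)
        (archive if hit else keep).append(email)
    return archive, keep

def record_correction(rules, rule):
    """A user correction appends a new rule to the memory page."""
    if rule not in rules:
        rules.append(rule)
    return rules
```

The approval phase corresponds to calling `propose_archives` and letting the user confirm each batch; once the rule list stabilizes, the same function runs unattended.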
Wow. It's a lot of trust. Yeah. It completely solved my email problems, because for me, like,
I don't use email that much for work stuff. Like, it's mostly in Slack.
95% of the personal emails and work emails that I get, I don't need to see at all. And so it's just a
waste of time. And so it completely solved that. So now in my inbox, it's, like, only stuff I
need to see. I've got lots of custom agents running. There's another one that I built that
triages all internal feedback and bugs. So we have a Slack channel where basically
people just post random, like, product feedback and bugs. In the past, it would sort of
sometimes get answered, but then sometimes, like, haphazardly get ignored. Just because, you know,
there are so many teams owning things.
So its entire job is just to route it to the right place.
And it uses a similar sort of like memory pattern
where it sort of learns on the fly
where it's supposed to file bugs.
And then over time it's built up, like, hundreds of rules
that it just sort of like learn over time.
So for example, like, there's a bug about the mobile app,
and it'll route it to the mobile team
and then file a task in their database.
Do you look at that like the generated and updated memory
because it's legible to you
to say, like, that makes sense to me?
I think I did at first,
but then sort of once you trust it's kind of working,
you kind of ignore it.
And then, if it ever breaks, I'll go fix it.
It'll break every now and then, and then...
But the benefit of not reading your email is here.
Yeah, just not read it.
So, yeah.
Yeah, I mean, generally I would say,
yeah, the general pattern I follow is sort of,
I build it as a prototype.
I have it in sort of like an approval mode
where I'm sort of, you know,
watching it closely.
But then after it runs a bunch of times,
you kind of trust that it's working.
Is there anything you do internally at Notion
to make sure non-technical teams
have the intuition for how to build agents
or how to express that productivity too?
Yeah, that's a great question.
I mean, we do sort of workshops and hackathons
pretty frequently.
So, like, for example, like a month ago,
I did a hackathon with the People team
and sort of got them.
The People team has been amazing.
They're actually one of the highest adopters
of custom agents.
Cool.
You know, they do all these kinds of workflows
in, like, Slack and Notion,
kind of, like, manual work like that.
And yeah, I would say, yeah,
like people are super excited to try it and sort of like,
like maybe just need like a little bit of a push
in terms of intuition and like getting them started.
But then honestly, I've been super impressed.
Like, I think the concept is like kind of intuitive.
Sort of, like, once you get past sort of a little bit of the technical barrier of, like,
what is a prompt, and what is the agent, and how does it get triggered and woken up, and how does that even work? But then once you sort of get past that, I think it's actually a very human-like interface.
Yeah, maybe the biggest barrier is actually just getting people to try it and assume it's going to work at all. Right. Yeah, yeah.
You and Ivan originally met on the internet, in the tools-for-thought community. It feels like, you know, the tools we have for thinking are very different now.
Has your core conception of Notion changed
over the last few years because of all the AI stuff?
What thinking does the tool do for you?
What should agents do for you?
What do you get to do?
What do you get to do?
Yeah, I mean, it's, I would say, changed quite a lot.
I mean, broadly speaking, before AI,
our goal was to create the best tool for humans
to directly perform their work.
And now the goal is to create the best tool
for humans to manage agents to do the work for them.
That's a big shift.
That's a pretty big shift.
It's pretty fundamental.
But it turns out that you need most of the same primitives.
You actually, all the primitives that we built are actually still extremely useful.
It's more that we just needed some new primitives, like representing what is an agent
and how does it interact with your pages and databases.
But you still need the same primitives.
You still need a document.
It's an unstructured way to write stuff.
Agents love to write markdown documents, so
it's still very relevant. And you still need a database;
you still need structured data. You know,
if you're working with your swarm of, like,
100 background coding agents, you don't want to have 100 chat threads.
You want a Kanban board. It's, you know, the same as before.
Makes sense. You still need the coordination structure.
Just because you're
ahead of the curve on this stuff, and then trying to figure out how to bring,
you know, Notion and its users along
with you: what is something that's really changed about how you personally, like, build, even in the last
six months? I mean, it's completely changed. I haven't written code since, like, last summer. I don't
type code anymore. Yeah, it's completely shifted. I mean, we went from humans type all the code,
to, like, we're still typing but with tab complete, to, sort of, like, we talk to the agent and it sort
of does little tasks for us, but we are still in the outer loop. And then now it's more like,
I design an end-to-end task that involves, like, making some change and end-to-end verifying it.
And then I'm just the outer, you know, the outer verifier sort of like double-checking at the very end that it's correct.
And if it's going off the rails, kind of like monitoring it.
So it's a complete shift.
You know, I'm now like the agent manager instead of the coder.
Amazing.
Well, thanks, Simon.
This has been a super great discussion about how we're all going to become agent managers.
and hopefully in Notion.
Cool. Yeah.
Find us on Twitter at NoPriorsPod.
Subscribe to our YouTube channel.
If you want to see our faces,
follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.
