No Priors: Artificial Intelligence | Technology | Startups - What Google Cloud Can Teach Enterprises Developing & Rolling Out AI Tools, With Kawal Gandhi
Episode Date: October 23, 2023. As the Lead for Generative AI in the Office of the CTO for Google Cloud, Kawal Gandhi has a unique vantage point on enterprise AI rollout. Sarah Guo and Elad Gil sit down with Gandhi this week to discuss his insights on how enterprises can effectively invest in AI development, the importance of TPUs, and Google's internal AI applications. Plus, when will email get more intelligent? Kawal Gandhi worked at Google for nearly a decade in search and ads roles before focusing on the development and marketing of AI tools.
Show Links: Kawal Gandhi | LinkedIn Google Cloud
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @geeztweets
Show Notes:
(00:00) - Generative AI in Google Cloud
(09:05) - AI Adoption in the Enterprise
(13:31) - Multi-Modal AI Models
(16:19) - AI Adoption, return-on-investment, anti-patterns
(24:43) - Google's TPU and NVIDIA GPU shortage
(31:00) - Data Marketplace and Model Training
Transcript
This week, Elad and I are joined by Kawal Gandhi.
He works in the office of the CTO at Google Cloud, where he's the lead for generative AI.
Gandhi comes from a long history of working on search and ads at Google before cloud.
Welcome, Gandhi.
Thank you.
How did you end up working on cloud and then AI in particular from other projects at Google?
Sure.
I worked deeply with a lot of our advertisers around search and ads for shopping and travel, especially commercial queries. And while working with them, I saw that they required a lot of storage, compute, and infrastructure, constantly, to make their ads perform better, which led us to cloud, and then to cloud solutions using some of that to create smart analytics, machine learning pipelines, document AI, conversational AI.
And here we are.
We've been doing it for a while, and now it's generative AI: how can you make that customer experience much better with the information that they have?
And just in terms of how Google began to broadly incorporate AI into GCP, what was the origin story of that? Was it TPUs, APIs, some customer need that you specifically saw?
As we were getting into Google Cloud, it goes back to how we could provide customers with low latency, fast responses, and a better experience with their data; that's how customers started leaning towards Google, in my mind. From the beginning, machine learning and AI were a differentiator for working with Google, and how customers could use their data better on our platform was a constant ask. So on the journey of Google Cloud, it was all about data, AI, storage, privacy, and security, and about taking that same deep technology we used inside Google and asking how we could leverage it and offer it in market. So lots of learnings, because for what we built internally, we had tools, frameworks, et cetera. It took us time to make our platform rich for our customers, from regulated to non-regulated environments, and to let them leverage some of their current investments on our platform.
What are some of the internal use cases that have really driven that behavior, in terms of the stuff that you ended up building for your customers? I know that a lot of what Google does is dogfood its own APIs or products, and then start launching them externally as a service that other people can use. What were some of those first applications of generative AI that occurred internally that then caused you to decide to do these things externally?
Yeah, the early ones, I think it's all public now, were around Workspace. Just using documents or email, which I'm sure you use. So it was: can you summarize this better? Can you personalize this better? Can you offer me a suggestion? And as we tested things internally and dogfooded them, we gradually launched them externally, because we saw a lot of the progress we made internally in terms of efficiency and productivity gains. Folks can use it in their spreadsheet creation, et cetera. It was gradually launched, and now it's Duet AI, part of Workspace. So these are constantly being dogfooded and tested; we call them experiments. And as the research team leans in and looks at some of these, we add them to the platform and bring them forward in our products.
Is there a single feature or product you launched within the internal Google version of Duet, the Workspace AI products, that has seen the most uptake, or that you're most proud of?
Yeah, I think we're seeing aha moments across the board. We're seeing it in documents, in terms of generation. We're seeing it now in Slides, with suggestions on images and new image creation, which used to take time: someone had to go ask a studio or an agency, and now you have a prompt you can give and say, here's something I'm thinking about. So, Sarah, we've seen uptake on all those features. Email generation has also been super helpful from a productivity perspective, not only for consumers but for enterprises as well. And security. We don't talk about it a lot, but it's super secure: how it's sent, how the links are used. Those are things we take really seriously.
How far do you think we can take it with
email generation? Because I maybe spend four hours of my workday just trying to keep up with my
inbox. So this is of great personal importance to me.
I'm not going to predict, because you predict something and something always comes back and surprises you, from a technology perspective, from a model perspective. So I'm looking forward to seeing what surprises us. And I'm sure we'll speak again in six months, and it'll be like, Gandhi, I saw this feature inside Gmail, it's really helpful. I like the translation feature for my mom, because she speaks Hindi; she's really fluent in it. And when I write my emails now, I can just say, translate this when she reads it. Now it does it automatically because she's using that app, which is fantastic. So I think it just bridges the gaps for a lot of our users, internally and externally.
I know that there are a lot of different services that Google Cloud provides, particularly around generative AI. There's a series of models. There are lots of domain-specific models that Google's really been forward-thinking on, things like Med-PaLM 2 or Sec-PaLM. Which of those are currently available? And how do you think about the roadmap of things to expose over time?
Sure. And I think it helps a lot to know that we have invested in the
AI infrastructure. So you've seen people training not only using our first party models, but also
training their models and bringing it on to our platform.
The next level up is Vertex AI, what we call the capabilities of models that you can use from our Model Garden. That's where we have domain-specific models. So there's Med-PaLM; these are early days for domain-specific models. You can chat with it. You can get validation of things, like the notes written out from a nurse or a doctor to a patient in healthcare, and it puts them in the right format so they can be, you know, saved. So it's early days. We're seeing everything from foundation model capabilities, to models that are built by our users and deployed on our platform, to open source models. I'm really excited to see Llama, Stable Diffusion, and other variations coming onto the platform. But what's key in all of this is that your data is secure. You want to be fine-tuning it on the platform. And then all the model operations should be easy, because we've spent a lot of time, super early, really specializing around model drift, operations, tooling, and safety. I think those are the elements that will differentiate the platform over the next few quarters.
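For a sense of what that looks like in practice, here is a minimal sketch of calling a first-party foundation model on Vertex AI, assuming the 2023-era Vertex AI Python SDK; the project ID, model name, and prompt are placeholders, not a confirmed recipe.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and region (pip install google-cloud-aiplatform).
vertexai.init(project="my-project", location="us-central1")

# text-bison was the 2023-era PaLM text model; pick whatever the Model Garden offers.
model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Rewrite this clinician's note in a structured, saveable format: ...",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```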
That's very interesting.
Yeah, there's this guy, Ankur Goyal, who runs a company called Braintrust, which is focused on evals and a couple of other things. And one of the points he makes, which I think is pretty interesting, is that there tends to be a very sequential adoption of LLMs by larger enterprises or people training models for the first time. So often they jump straight into fine-tuning, and then they suddenly realize, wait, let me just prove it out with GPT-4 or Bard or some other API, then let me iterate towards something that actually works, and then maybe I go and train my own model.
Do you see something similar
in terms of the pattern of behavior
that a lot of your partners adopt
or do they just jump to your APIs
or what's the most common sort of sequencing
of people really adopting this technology?
Yeah, I think it's always the case if you look at history: it's like, hey, I've got an API, I can build a prototype, and it creates a lot of excitement. It's when you really start thinking about deploying it, managing it, monitoring it, chaining different models together, and having responses that really deliver a capability that you start thinking deeply about it inside an organization. So I see this as early excitement. It's not hype; I want to separate the two. It's real excitement, because engineers, we love it. I've been one: you want to grab something, you don't want restrictions around it, and you want to show the art of the possible. And I think from those experiments, we're going to see, in the next year or so, capabilities available to users and inside the enterprise where they will see differentiated output and capabilities.
That's how I see it.
So I'm excited in all those phases, by the way, because it's the best creative time for engineers to solve problems that they've been thinking about solving for a while. And now they have the tools they can download, whether they grab an open source one or a closed one, and we can then see how they're using it and evolve the platform along with it.
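To make that prototype-then-customize sequence concrete: a team might prove the use case with a hosted model call like the sketch above, and only then move to supervised tuning on its own examples. A hedged sketch against the 2023-era Vertex AI Python SDK; the bucket path, step count, and region values are assumptions for illustration.

```python
from vertexai.language_models import TextGenerationModel

# Assumes vertexai.init(...) was already called, as in the earlier sketch.
model = TextGenerationModel.from_pretrained("text-bison@001")

# training_data points at a JSONL file of {"input_text", "output_text"} rows;
# the path, step count, and regions below are hypothetical.
model.tune_model(
    training_data="gs://my-bucket/examples.jsonl",
    train_steps=100,
    tuning_job_location="europe-west4",
    tuned_model_location="us-central1",
)
```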
Do you advise your customers, and major Google Cloud customers, how to think about when they need to train their own models versus when they should be using domain-specific models? What advice would you have for them?
Yeah, sure,
Sarah. So we think deeply about it, because you have to be responsible once you're building and fine-tuning a model: what are the guardrails, what are the use cases, what is the cost you want to put behind it? Because it's a continuous learning process, and it also depends on the maturity of the organization: do they have a team that understands how models are built, tuned, and then brought forward? This doesn't mean that you cannot invest in it; it's just that where you are in the cycle is very important. So we lean in and talk about that from a board-level perspective, from a strategy perspective, and then think through cultural transformations as well. So it's not limited to just one dimension in my mind. And it really helps them to think through how they see this transformation also happening inside their organization.
Do you track the common use cases that
your customers end up focusing on? Because it seems like there are almost three different things that people tend to do. There's, you know, let's go experiment and see if this is interesting. There's, I'm going to use it for specific internal tools: I want to make customer success better, or I want to make some ops workflow better. And then there are people who are actually doing it for external products: I'm actually going to launch a feature that includes generative AI. Do you have a sense of how that breaks out across your customer base, the proportion doing each at this point in the early cycle?
Yeah. For me, it falls into efficiency, productivity gains, and also creativity.
And I put those in order, because inside the enterprise, folks want to be creative, but they always think first about efficiency gains inside their workflows. How can I make my workflow better, so I can invest back into my group? Then how can I improve productivity, and then how can I make people creative? And I think we're seeing that flow inside organizations. So what you touched on, Elad, is right: how can I make my customers successful? Let's start with that use case, efficiency: how can I make my support better and not drop my CSAT? That's the KPI I'm going to look at. And if that matches, then I go into productivity gains, offering promotions, next recommendations, and the trust increases. So let's add another dimension there: trust. Now you have your efficiency, you're going into productivity, you trust more and more, and the world we look at is: how are these things going to work in a system of intelligence, or agents, in the future, so that the trust goes up and you really free up your human capital to be creative, right? So I think we're on that trust cycle of: how do we trust these models? They do what we think. They don't go off. They don't hallucinate. All those things are important. So that's how we think about it, and then we gradually make progress towards it and don't get super excited only to find that the KPIs are not working, for example.
Are there particular
verticals outside of technology you see the strongest adoption in so far or the strongest interest?
Yeah, I see a lot of interest. It started from sales and marketing. As soon as these models came out, from a horizontal perspective, I think it's an intersection between horizontal and vertical: for every department, whether regulated or not, there are bottlenecks in content creation and distribution, and you could see the efficiency gains and creative gains immediately, and then making that process and workflow shorter. So that was one. Customer care, which you both touched on, is another one that cuts across verticals. Even if you are a B2B company, you're touching a supplier, and a supplier wants a good experience with their current provider. So how can that experience get better? And now we're seeing them verticalize on internal experiences. How do I make my employees' experience better? So you're seeing that same technology applied internally: it's like, oh my God, I'm looking for my HR benefits, and I have to do XYZ. Why can't you make that easy, right? So we're seeing the internal processes now, whether regulated or unregulated. And I think these are huge areas where we can provide opportunity with AI. That's my belief.
I'm assuming that the vast majority of this starts with text, right? Text analysis, generation, et cetera. Where are you seeing, if at all, multimodality: voice input, text-to-speech output, or, even within sales and marketing, perhaps other modalities in people's use?
Yes. So I think the core right now is language and text, and conversation
fits really well into it. Like, you know, what we're doing here: can you convert it into a script, do translations around it, and have a distribution mechanism? It sounds easy now, but it was really hard three or four years ago. Multimodal, I think, is at an early stage now. So you're going to see the early models, which are audio-based. I think about it like text, then images, then images and text with audio, and then you're combining these media together. So as I said before, as the trust in these increases, you're going to see multimodal coming forward. There are things to think about, like identification, deepfakes that get created, voices that are used out of band. How can we provide that layer of safety around them, especially when they're used internally? And the data and the creation become an element of cost. How do we store that? How do we retrieve that? How do we scale that?
To give you an example, I think gaming is a really good industry that is scaling out multimodality, and we have a lot of customers who are using that. Now, think of that when you're shopping, and the latency and the progress bar: why can't I see the video coming, right? So that's why you see it concentrated in some organizations that are experimenting, but we've got to bring it out at scale, which will be fantastic to see, not only across the web but internally as well.
Are there any particular patterns that you see, either organizationally or in the types of projects that people start with, that make your customers more or less successful with their AI efforts?
Yeah, I think it goes back to how much investment, how much belief, how much they want to be in the visionary quadrant, first of a kind versus fast follower; it all plays into it. It's less about the technology. It's more about where they see themselves in the industry, and whether they're saying, hey, we want to really build something, then scale it out, and then keep investing in it. I've spoken with a lot of customers, even governments, and so on. The interest in AI, I have not seen anything like it. It's so high. We have just scratched the surface of this right now. So we can see a lot of these gains, which can then get invested back in the business. So I see a lot of positive signs around that.
What's the most expensive part of the investment cycle, do you think? I'm sure it varies dramatically from customer to customer and case to case. But when you say it depends on commitment level and investment, what's expensive?
The part that was expensive is now becoming cheap: the models, their availability, the use of the platform. Those were the things that were really expensive. If you look at the cost curve right now, and the investments you all are making into the ecosystem, I think we're seeing it come down phenomenally, which will allow people to adopt more. So I think we're really entering, Sarah and Elad, a growth phase of people adopting this more and more. We are seeing signs of just how fast people can move, how fast we can learn. We've had training classes, and we have people certified. I've never seen anything like it: we have more than 10,000 people certified now, just on GCP generative AI, globally, right? Ready to write code, take advantage of coding capabilities; engineers' productivity going up on things that used to take time. Migration of systems: can we make that easier? There are a ton of use cases this is helpful with. We're just bringing this forward on the platform.
Are there any anti-patterns or big mistakes you guys have made internally or that you see customers make when they're trying to get these efforts into production or even choosing use cases?
Not mistakes exactly, but we think deeply about the data that customers bring to our platform. We worry about any of it being used in the wrong way, or something happening, so we take that very seriously. We run a lot of drills; we make sure that data is kept secure and that, for the models that are trained, even in these early days, no leaks happen. The adapter model, the certification, everything stays in their tenant. So from the beginning, we just made sure that all of their data, all of their models, all of their weights, that's their IP, and we want to safeguard it. So no big mistakes; we just have to think deeply about how fast we move and what checkpoints we need in between, to make sure we don't move so fast that we get into mistakes. Those rollbacks are expensive, by the way. Then you have to educate the industry again, make sure they understand. We'd rather not commit those mistakes is where I'm going.
Yeah, Google's
always been very good at sort of data security and ensuring that, you know, customer data is well
secured. I guess related to data, there's a number of verticals that are very data intensive
or alternatively where the data can be quite sensitive. You know, healthcare is almost a canonical
example of that where, you know, bespoke data can really help build a more interesting
AI service, sort of like what you've done with Med-PaLM, but at the same time, you want to
secure it properly. Are there specific verticals or use cases that you see adopting AI soonest?
For example, do you see healthcare moving ahead of education, or do you see fintech or financial
services as sort of the early adopter wave?
I think they're all early and adopting, in all those verticals I was mentioning, and in horizontals like sales and marketing and creative approaches. I'm seeing engineering adopt much faster internally now, with the coding models and the open source ones. That's a surprise, by the way, if I can share my personal view: I thought engineering was already so productive. But they're writing these design elements of UX, and the discussions you have around that, I've been there, it takes a long time.
But then having a bot in the room saying, here's how you can approach it, here's how the code could be written, it's fantastic to see that. There was a lack of resources that was causing engineering projects to be bottlenecked. Now the transformation is going to be much faster. So bringing that across verticals is fantastic. That's the IP that's getting created now inside those organizations.
Yeah, I'm seeing something very similar in the startup world, where a lot of the early startups are basically technologists building stuff for themselves, or mid-market tech companies are sort of the earliest adopters outside of a Google or Microsoft or the really cutting-edge large tech companies. And so it seems like, at least on the startup scene, there's a very similar pattern being mirrored relative to what's happening on your cloud, in terms of developers building for themselves first. Mid-market companies are kind of next, simply because they're often driven by a developer or CEO. And then it's starting to creep into the other parts of the world in terms of adoption patterns. So it's interesting to see that parallel.
It's very interesting. And think about the investments you all make. Now I have a platform in GCP which gives me models. I have a coding model that helps me code. And now I just need capital allocation to go experiment. I think that's great. Then I need to make it secure and scale it up; that's where we come in with our infra and make it cost-effective. I think we're at the early stage of a new era of startups and big companies coming out of this, and different solutions getting built out.
What you said earlier was really interesting
about having a bot in the room, so to speak,
helping us with UX discussion or speccing a product.
Where does that live for a Google product team?
Does it live in a chat interface?
Does it live in documents?
Does it live in the IDE or in version control?
I think a lot of people are trying to figure out
how this should integrate into existing developer workflows.
Yeah, I think today, in my mind, it lives in docs. You get a lot of benefit in the documentation, and that's shared across the team. You've seen it: any project is like, let's start a doc. Any hypothesis or experiment, let's start a doc. I really think there will be a revolution as the platforms mature, where the docs can make suggestions. And those suggestions could be: based on what you've specced out, here is output. Now, you can choose to ignore it. It could be a notebook as well.
You said IDEs and UX; it could be just code written with a "put the API key here." It's great to do that, but then when you're scaling up, that's where I think about scale: how is it going to scale? You don't want these things to become something that doesn't have guardrails around it, around who inside the organization is entitled to do this. So we have to make sure logging is on. Simple things like logging we often don't think about, but it has to capture who accessed it, when they accessed it, et cetera. But I think once we get over that,
if you're writing a monitoring tool and it knows the schema, you don't need to go back and look at it and have an engineer work on it for, like, four weeks, right? Why can't it be generated from that? I just need a monitoring tool for model drift. Those benefits are going to just accelerate adoption, in my mind.
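To ground "a monitoring tool for model drift": one common drift signal such a tool might compute is the population stability index, which compares a feature's (or model score's) training-time distribution to its live distribution. A minimal sketch; the bin count and alert thresholds are illustrative assumptions, not a Google Cloud recipe.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """PSI between a baseline sample and a live sample of the same feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Floor the proportions to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    o_pct = np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

# Commonly cited (illustrative) thresholds:
#   < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate.
```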
One of the things that Google really pioneered, on top of all the cloud services and APIs and everything else, is the silicon layer. I think it was seven or eight years ago when TPUs were first rolled out as something internal to Google. They said, let's invent silicon that's really good at dealing with AI and ML systems. That was my understanding of the origin of the TPU, and it obviously was dramatically more performant than GPUs for a long time. Could you talk a little bit about the TPU versus GPU tradeoffs and how Google Cloud approaches that?
Yeah, the way we think about it is: can the platform be capable enough, and have features, as I call them, that take the developer, engineer, or team thinking about a project away from the infra piece? In the early days we used to write Windows apps, I've written Java applications, and it's always about abstracting the infrastructure away so you can manage it, scale it, and roll it out faster. So I think we'll gradually see a world where, beyond training and where a model is trained and deployed, that layer is abstracted out in a way that gives you the fungibility, the adoption, and the scale-out; that will speed things up in my mind. And then, to your point, innovation at that layer is also going to continue.
Are there specific tradeoffs you see in terms of when you should use TPUs versus
GPUs, or do you think they're reasonably fungible?
They're reasonably fungible, in my mind. It's more about how many folks know how to use them directly; that's top of mind for us, because GPUs were the ones externally available and a lot of people are trained up on them. As more people get trained up, we're seeing the costs and benefits across the board, right? So having optionality is what we want to bring forward constantly.
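That fungibility shows up at the framework layer: XLA-backed frameworks such as JAX compile the same program for whichever accelerator is attached. A tiny illustrative sketch (the function and shapes are arbitrary):

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for the attached backend: TPU, GPU, or CPU
def predict(w, b, x):
    return jnp.tanh(x @ w + b)

# Shows which accelerator this process sees, e.g. TPU cores on a TPU VM
# or CUDA devices on a GPU host; the model code above is unchanged either way.
print(jax.devices())
```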
The people that I know who know how to use TPUs very well are very comfortable in that environment, and to your point, that's often people who are at Google and are therefore trained to know how to actually use that underlying silicon, versus GPUs, in terms of optimizations or little tricks that make things more efficient.
How we can make that more prevalent and available is something we should invest in going forward, and I expect we'll see that.
More than that, the team is thinking about inference, because you train the model, you get the application ready, and then scale is where, you know, Character.AI is really thinking about scaling out. We have other examples: Quora and Poe, with Adam thinking about scaling out the application. So now you're going to see the shift; we have TPU v5e, et cetera. What we are investing heavily in, and others are too, is how you scale, and how you keep the cost curve going in the other direction. We're going to invest and put our effort around that.
Yeah. And I guess since Google trains its
own models, it has an advantage in terms of thinking through how that scalability could work for others. And that may be an interesting differentiator in terms of that understanding.
Yeah. I think inference will be something really needed. We can see the numbers,
right, like an application launches and it goes from like 5, 10, 1,500, right?
And you're like, how do I scale this further out?
And Sarah touched on it: what happens when we get into a multimodal world, right? And if I'm an educator thinking about an education site, I don't have that kind of luxury. So that's really something we think about. One vertical we didn't talk about at all is education, and people there want to be learning, deploying, et cetera. How do we help those organizations come on board and take advantage of this as well?
Are you seeing the current NVIDIA GPU shortage change your customers' perspectives on training or inference processor choices today? Or does that change your conversations at all?
It definitely is mentioned as we think about it in the conversations.
It's always the discussion that comes up.
But as you look at the applications and the usage and what they're really trying to solve,
you then get to the next step.
I think the conversation shifts around capabilities and platform.
As we were discussing over here, how do you keep the data secure?
How do you make sure the right teams have the access to it?
How do you make sure the data is regionalized, for example?
And those questions are not easy to answer for some of the other providers, for example.
And then it comes to: if models are going to be available, do we really see that shortage playing out? I think we're seeing the concentration on the training side, but then the world focuses on a different problem, which is: how do we deploy this inside your organization and make it successful? That's, I think, where the narrative changes, and then it doesn't even come back to talking about the chip shortage and availability, et cetera.
But having said that, we are absolutely focused on making sure that the platform is available for customers, and we see the demand coming towards us as well.
Yeah. I mean, there are different ways to look at the demand for training. One point of view is that most of the demand will end up quite concentrated in the people who provide model services, at least in terms of the quantity of compute used, versus fine-tuning or training outside of a few large labs. How do you think about this, and how should customers think about Google's model quality
versus others?
Absolutely. So, going back: we think customers should have availability and optionality. If there's a model coming out that's been trained up, can we make it available, like Llama inside our Model Garden? Then they can use it inside their application and not worry about the platform capabilities at all, and it fits in with and leverages their current investment. We think about that a lot: starting from the customer first and then making the technology available, rather than thinking, here's another set of models that you need to go take and do the scaffolding and the build-out around.
So absolutely. As these come out, Sarah, and there will be new ones launched with new capabilities, different ways of training, different approaches and optimizations, if we see our customers in a vertical leaning towards one, we want to make sure it's available on our platform. And I think models, in my mind, are like 50 to 60 percent of the work, and leveraging your current investment is the other 30 to 40 percent that goes in from the groups, from our customers as well. And then the upkeep, maintenance, and all the operational elements are also important.
One or two last questions for you. What is the thing you're working
on right now within cloud AI you're most excited about that we should be looking forward to
or more customers should know about?
I'm working across the board with multiple customers on making the user experience next-generation and multimodal: how should we think about that, and what are the real first use cases where we should deeply work and partner together with them? That keeps me really excited, along with how we can bring these to our platform and make it capable. Second is the sheer amount of data some of our customers want for their model training, or want available on our platform. That's the next thing we think about: are there datasets that we can offer out? And we're also looking at synthetic data as well. In regulated industries, can we use some of the synthetic stuff and recreate those simulation modes that can give them good insight into data that's missing?
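As a toy illustration of that synthetic-data idea (not a description of any Google Cloud product): the simplest possible synthesizer samples each column's marginal distribution from real records to produce shareable stand-in data. Real tools model joint structure and privacy guarantees; everything below is an assumption for illustration.

```python
import numpy as np
import pandas as pd

def synthesize(real: pd.DataFrame, n: int, seed: int = 0) -> pd.DataFrame:
    """Naive column-wise synthesizer: preserves marginals, not correlations."""
    rng = np.random.default_rng(seed)
    out = {}
    for col in real.columns:
        s = real[col]
        if pd.api.types.is_numeric_dtype(s):
            # Sample numeric columns from a fitted normal distribution.
            out[col] = rng.normal(s.mean(), s.std(), size=n)
        else:
            # Sample categorical columns according to observed frequencies.
            vals, counts = np.unique(s.astype(str), return_counts=True)
            out[col] = rng.choice(vals, size=n, p=counts / counts.sum())
    return pd.DataFrame(out)
```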
Does Google Cloud have a data marketplace, a labeling offering, labeling tools, anything like that today?
So we have the partner ecosystem; through that, we provide a lot of partners. So exactly, the marketplace. And if customers want to take advantage of some of those providers, they can absolutely do that. They're integrated into our platform, so you can say, I want this service for RLHF, reach out to them, and then use it for your model training, et cetera. The most important thing, Sarah, is to make sure that they're working in conjunction and that whatever they're getting from the customer is secure, so it's not something used with another customer. So we make sure those pipelines and entitlements stay in the project.
Gandhi, is there anything that you wanted to cover today that we didn't? It's been a great conversation; I feel like we covered a lot of ground.
Yeah, thank you. I think we covered all of it.
I would love to come back in six months,
take a look back, and see, you know, how we've moved forward.
I know making predictions in AI is very hard,
but I heard from you that my Gmail suggestions
are going to get a lot better quickly.
So I'll take that.
Yes.
Hold us to that.
Yeah.
Great.
Thanks so much for the time today.
I really appreciate the conversation.
Thank you.
Find us on Twitter at @NoPriorsPod.
Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.