The Data Stack Show - 239: How AI is Transforming Product Development with Thomas Kuckoff, Senior Product Manager at Omron Automation
Episode Date: April 30, 2025

Highlights from this week’s conversation include: Introduction of Panelists (2:15), Understanding Product Development (4:42), Three Stages of Product Development (7:53), Collaboration Across Teams (11:20), Understanding Customer Pain Points (12:31), Designers, Explainers, and Sustainers Framework (15:17), AI in Product Development (18:09), Using AI Responsibly (22:53), AI in Sustaining Product Development (24:57), Brand Storytelling with AI (27:40), Tooling and AI Implementation (29:29), Pressure for AI Integration (34:05), The Importance of AI in Product Development (38:28), Contextual Advantage of AI (42:52), Evolution of Prototyping (46:11), The Balance of Speed and Critical Thinking (48:16), Understanding AI in Product Development (51:08), Navigating the Messy Process of Product Development (53:14), Final Thoughts and Advice (56:32)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we’ll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Transcript
Discussion (0)
Hi, I'm Eric Dodds.
And I'm John Wessel.
Welcome to the Data Stack Show.
The Data Stack Show is a podcast where we talk about the technical, business, and human
challenges involved in data work.
Join our casual conversations with innovators and data professionals to learn about new
data technologies and how data teams are run at top companies.
Before we dig into today's episode,
we want to give a huge thanks
to our presenting sponsor, RudderStack.
They give us the equipment and time
to do this show week in, week out,
and provide you with valuable content.
RudderStack provides customer data infrastructure
and is used by the world's most innovative companies
to collect, transform, and deliver their event data
wherever it's needed, all in real time.
You can learn more at rudderstack.com.
We are on, cool.
Well, first, I wanna kick things off
and just say thank you so much to Drayton, John,
everybody here at the center that has put this together.
We are super excited to be here today.
As Drayton said, I'm Brooks Patterson and I have two of my good friends
and colleagues here, Eric Dodds and Chandler Vandewater.
We all work at a company called RudderStack. We're a software company
based out of San Francisco, but it's what, 2025 now? We work remotely.
We have a little office in Greenville,
but work with folks all over the world.
And super excited to have Thomas here as well
to give his perspective on a really interesting topic.
So we'll talk about how AI is changing product development.
And we'll dig into that if you're like,
what is product development?
Don't worry, we've got you covered.
But we run a podcast called The Data Stack Show
and talk to folks that are building data tooling
or that work as practitioners
in kind of customer data. And we are super excited.
This is actually the first time we have ever done
a live audience recording.
So very excited about that.
And we'll post this as a podcast in a couple of weeks, just as a regular episode.
So if you hear anything today and you're like, man, I want to go back and check that
out, feel free to follow us on whatever podcast platform you listen to.
And we're called The Data Stack Show.
So without further ado, um, we'll jump in and have everybody introduce themselves
here, I guess I can actually kick us off.
So I'm Brooks Patterson.
I'm in product marketing at RudderStack. So I sit kind of in between
the product function, building our software, and our growth team, so sales and marketing, as kind of the translation layer to make sure that everything we do from a go-to-market perspective is aligned with
our product strategy and includes very robust, I think, details
about our product.
So that said, Chandler, we'll let you quickly introduce yourself.
What's your role at RudderStack in the product org?
Yeah, so my name is Chandler Vandewater.
I'm our principal designer and front-end engineer.
And right now, we're in the role of pushing forward design with product, around what we're building, and then the execution of that.
RudderStack, just to give a little bit of context:
we are data infrastructure for customer data.
So companies like Crate and Barrel, Cars.com, PrizePicks, et cetera,
install us in their websites and apps.
We collect behavioral data and stream that data in real time
to all of the systems that they use to run their business.
And that includes data warehouses,
other infrastructural streaming systems,
marketing tools, CRMs, et cetera.
So we're an infrastructure layer for behavioral data.
We're about five years old and we've raised capital from investors like
Kleiner Perkins and Insight Ventures.
I'm Thomas Kuckoff.
I am the product manager for Omron Automation here in the Americas.
And even though we are in the Launch Pad, Omron is not a newer company.
It's about $8 billion in revenue at the global level.
And the lion's share of that revenue comes from industrial automation.
So that is high speed robotics.
That is a lot of that communication on the factory floor to get that data
from where the work is getting done
to where new insights can be gleaned.
So my responsibility is the profit and loss statement.
So very similar to yours.
My job is to not only grow the business,
but keep it healthy along the way.
Awesome.
So very excited today.
The world I live in every day is we're building software.
Very excited to have Thomas on the panel
to give us a perspective of their building
software and hardware and have a,
I think what we'll find is a very similar, but also dissimilar,
process to kind of software-only product development.
So that said, Eric, we'll let you take this one first.
I think product development is a very wide field;
it probably looks very different at every company,
even within the software industry.
Can you help us just understand a little more,
like what is product development?
How would you kind of simply explain it?
Yeah, it's tough actually.
We were just in San Francisco at a conference
talking with a bunch of data people and
we met several people who are in product development at companies, even AI companies,
like OpenAI and a number of people there. And it was great. They even said, you know,
I'm in product, but it's kind of hard to describe my job, which I think is par for the course in software. I think in its best form,
having a role in product is really about having
a very deep understanding of a customer pain point. The main goal of the job is to understand
what the problem is, and then how you actually solve it.
That's how I would distill it down.
And that could be in really any discipline, right?
You know, we do it with software,
but I think that's the essence of product,
understanding problems and then building solutions for them.
Thomas, I think that very much aligns with kind of
as we were talking, doing prep
with how you think about product,
but anything to add, and maybe anything from your world that you think is maybe additive, like working on hardware and problems
for manufacturing, that is maybe different?
Yeah.
Yeah. So a lot of what you said is dead on, at least from this perspective. And to shed a little light on that: undergrad in engineering, graduate in business.
So that just means I'm as partial to a good problem as an elegant solution.
So that being the case, the product development process is very much a commercial process
of bringing a solution to those who need it the most and getting that solution in that
closest proximity to those who need it.
So a good way to measure that success is that proliferation of that solution and other recursive
technologies or derivative solutions that come from that initial one.
So a good way to kind of break it up, at least from the corporate side, is there's three
stages.
These stages are not mutually exclusive and AI is certainly polishing them a little bit more to create a little more overlap.
That first stage is very much the problem,
defining what is that problem that you're trying to solve
and de-risking it enough to decide,
yes, we as an organization want to solve the problem
because it's very much a commercial process.
The next is actually solving it.
And some people kind of put the solution and the delivery of the solution into two different
groups. Some keep them together as one.
At least from this side, it tends to be two separate ones.
And a good rule of thumb is that some people like to say it's creativity before capital. So you want to isolate that solution and design it the best you can.
And then you get to that third bucket, which is getting that solution to where it can really
do a lot of good, which is in the hands of the users.
And that timing is very important, because a good product launch is directly correlated
to the root of comedy, which is timing.
That's great.
Can you give us your perspective on kind of what Thomas just covered?
So three separate areas, and, I mean, you're really involved in every single one of those. As Thomas was talking, I think what I heard is there's just a lot of design thinking that goes into the whole product development process.
Right? So if you're trying to solve a problem that nobody cares about, that's the number one issue that a lot of people find themselves in,
is creating a business around a problem that really doesn't exist.
So starting with the customer and understanding their pain points.
And I think as a startup too, a lot of people, they start their business out of being in a pain point.
And so my role is very much so to understand the customer,
understand what their problem is,
and then translate that into a solution
by which our services can serve them.
So there's plenty of backend engineering
and data management stuff that's well outside
of my realm of influence
and my understanding, but I'm able to interface
with those teammates and basically set forth a paradigm
of some sort of interaction that the customer can use
to utilize our software.
So like the pushing of data around the internet,
behavioral data, that can get really complicated,
and you can look at it a thousand different ways.
So really trying to go back and forth with Dodds, who's on the product side,
and then our engineering team, and then me in design,
so that we can come up with a solution that really serves our customers well.
And again, you can't just push something out the door without actually getting feedback from the customer to say, hey, this works or we really don't like this or you're missing the point
completely. Again, it's all about solving the problem for them. So as a designer, I
learned a long time ago that you can get really personally attached to your designs. And the best thing is to just let that go. And that goes for anybody in any role: you know, I poured my heart and soul into this thing, but it's not what you want? Well, too bad. Gotta serve the customer first.
That's great. Well, I promise we will get to talking about AI. That's what we're all here for. But before we do, something you brought up, Chandler, is that you work with a lot of different teams. You have to understand, all across the business: how are things working? How does the product work? How does the market use the product?
Eric, can you tell us a little more? I mean, the product org sits at the nexus of so much within a company.
Can you just tell us a little more about the different teams and kind of roles that you work with?
Sure.
I think this is one of the things that makes working in product really
difficult. On the one hand, it's actually very simple, right?
If you build things that solve a really deep pain point, customers will love you
and they will exchange their money for solving that pain point, right?
On the other hand, it is actually phenomenally difficult to line up all of the different pieces
in order to make that exchange of value happen.
I would say one of the very first things that you run into in a product role
is that even if you spend a lot of time with customers,
which is shocking how many companies
who are building software or products
actually don't really talk with their customers that much,
even if you do though, one thing that's really tricky
is that customers tend to describe their pain
as specific symptoms.
And so there's a trap that you can get into
where you are trying to build some sort of solution
for that specific symptom,
as opposed to understanding what the root of the problem is.
And I think that makes the difference
between sort of products that are okay
and products that are truly phenomenal.
Because often what customers ask you for
is not the best solution to the underlying problem, it's just their way of
interpreting it through a very specific symptom.
And so I bring that up because it is so important to talk to customers, but also in our organization,
we spend a lot of time with our customer success team, the people who are working with our
customers every day, we're trying to understand how they're using our product and what they're trying to do with our product.
And then also our sales organization, right?
They are trying to figure out what problems
that our prospects are trying to solve as well.
And so distilling all of that information down
into its most pure form as the definition of a problem
is something we spend a lot of time with.
And so we spend a lot of time with sales and customer success on that front.
And then to Thomas's point, I think the other piece of it is understanding.
So let's say you get a really good understanding of a problem and you have a really good foundation
for a solution.
That's great in a vacuum,
but you have to understand the fundamentals of the business
to know whether or not
you're going to get return on investment, right?
Because when you start deploying design
and engineering resources to build something,
it's the most expensive thing the company can possibly do.
And so you can build something really great,
but there are lots of product teams who build
stuff that's really awesome that is actually not revenue positive for the business.
And so I think really great product organizations understand the fundamentals of the business,
even down to the cost of goods, which for us is software, right?
What are our gross margins, and what will it cost us to actually run this feature at enterprise scale?
What can we charge for it?
Can we actually sell it?
Is it going to create net new revenue or is it going to drive net dollar retention and
account expansion?
And so really getting fundamentals on those things.
And so it's actually not uncommon for us to get almost all of the pieces lined
up and then say, you know what, it's going to take us six months to build this and the ROI is just not there. And so we're not going to do it.
Even though it would actually be helpful for customers, it doesn't actually make fundamental
business sense. So for that side, we work a lot with the revenue operations team, the sales leader,
and then the engineering leaders to understand the actual fundamentals
of the business from a financial standpoint, both in terms of revenue and cost.
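(For a rough sense of what that cost-of-goods and margin check can look like for an LLM-backed feature, here's a back-of-envelope sketch. Every number in it is invented for illustration; it is not RudderStack's actual usage, cost, or pricing.)

```python
# Hypothetical back-of-envelope for an LLM-backed feature at enterprise scale.
# All figures are made-up placeholders.
tokens_per_event = 500                      # prompt + completion per processed event
events_per_customer_per_month = 2_000_000   # assumed enterprise-scale event volume
cost_per_1k_tokens = 0.002                  # assumed blended model price (USD)

# COGS: what it costs to run the feature for one customer each month.
cogs = tokens_per_event / 1_000 * cost_per_1k_tokens * events_per_customer_per_month

price = 3_000                               # assumed monthly price for the feature
margin = (price - cogs) / price

print(f"monthly COGS ~ ${cogs:,.0f}, gross margin ~ {margin:.0%}")
# -> monthly COGS ~ $2,000, gross margin ~ 33%: a feature can be genuinely
#    useful to customers and still fail the ROI test once run at scale.
```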
So basically you have to work with the entire company to do your job well.
That's a much more concise way to put it.
Thomas, as we were kind of prepping for the show, you broke this down into three
categories: designers, explainers, and sustainers.
It's kind of a framework you use to think about all
the different pieces of the organization
that you work with to successfully launch
and kind of sell a product.
Tell us a little bit about each one of those.
Yeah, absolutely.
So at least on the larger corporate side,
you do end up working with everyone.
And that's because the product development process
is very nebulous.
There's a lot of trying to decipher the signal from the noise, trying to find the root cause
versus the symptom.
So when we look at those three kind of buckets, you've got the problem, you've got the solution,
and then you've got your channel to market.
There are different people who you're going to work with in each of those buckets and the designers, the explainers, and sustainers can be found in each.
So what we mean by designers are these are the individuals who are going to be identifying
ideas, fostering those ideas, and starting to develop novel ways to unearth new value for a company.
But those ideas, at that point, they live in a vacuum.
And when we start talking more about AI, those designers might not be people.
So the explainers are unbelievably important in all three stages, because whether it's a good idea or not, no one's going to believe
it unless it's trustworthy and credible.
So Anthony, you said this very well in one of the early panels that an explainer's role
is to build that narrative, explain why, show how this idea connects into the rest of the
business and why this problem really should be solved.
Because when we look at these designers, they could be taking a new idea
and applying it to an older problem,
or it could be taking an older playbook and applying it to a brand new problem.
So you've got the ideas from the designers,
you've got the explainers putting some meat on those muscles,
and then the sustainers are ones who are going to take your product beyond that initial launch
So as much as I'd love to say the product development process
stops when you launch, it absolutely doesn't. And in industry you'll see product life cycles go for five, ten,
twenty years. And of course AI is starting to shorten those, but the sustainer's job is to
pull out maximum value as that product goes from launch to maturity and then sunsetting,
in order to make sure that the company maximizes that ROI. And you're going to see that all
through the stages, especially when that product goes from solution to the channel,
because nothing happens in a vacuum.
That's great.
Well, so let's dive into AI now.
One of the things you mentioned
is kind of shortening those cycles.
Eric, I know at RudderStack,
we're doing a lot of really interesting things with AI.
Tell us how AI kind of fits into,
and we can use this framework,
right? The designers, explainers, and sustainers. How do you see AI being kind of applied in these
different areas?
Yeah, I think one of the distinctions that I would make at a top level would be that when you think about how a business
leverages AI, there are two fundamental ways.
So one is internal processes. So earlier the example was given, you know,
three weeks and $3,000 down to one hour and $20.
And so that's a process efficiency. You're using these tools to take an existing process,
an existing workflow within a business
and dramatically change the way that's done
to create leverage.
And that often shows up in terms of the cost and the time.
And then the other piece is how you leverage AI
as a tool to help solve a problem for your customer.
And so for us in software, that is,
can we use a large language model
as a part of our product to build something really useful for our customer,
that we couldn't do without a large language model.
And so I think that distinction is really important because you say, how are you using
AI?
Well, there are a number of different ways to use AI within a
business to create leverage.
So I'll think about all of those different things
that you said from that lens,
and I'll just explain how we generally think
about using AI internally, how we do it.
So when you think about an explainer
and you're building a narrative,
AI is a phenomenal tool to significantly
speed that process up. So for us, that would look
like research. Building very deep research in a very short amount of time, research that would take an
individual weeks and weeks, can now be done very rapidly. You can build confidence in hypotheses much, much more quickly,
so that you can get to that narrative much, much more quickly.
Sorry, it was explainers, sustainers, yes. And then what was the last one? Designers, yes. When you
think about the sustainer, so once you have a narrative established, I'm trying to think about how
in the sustainer role we use AI. I don't know if anything comes to mind for you. Yeah, absolutely.
I feel like, I know this happened so quickly, but that's kind of become table stakes. And
so when you actually get into implementation of whatever you're building, AI is just table
stakes, right? Whether you're writing code,
whether it's all of those sorts of things,
it's just sort of implemented in the workflow.
And then designer, I would say,
and this is where I think that distinction
that I mentioned earlier is really important.
So not only do we use a lot of tools
in the actual design process itself,
but we view AI as another significant tool in our toolset
in the way that we would even build something, right?
So when you're building software,
you have a lot of technology at your disposal
that you can incorporate into the solution
that you're building.
And now AI is a significant tool where you can say,
okay, can this actually create leverage
if we incorporate,
you know, a large language model into whatever solution that we're building?
Chandler, tell us a bit about one of the things you mentioned this morning as we were
talking. We're on campus at a university, and when ChatGPT first came out, I think there was
a lot of fear around, oh my goodness, kids are turning in papers that are just written
by AI.
And I mean, it happened, right?
And this happens also in business as well.
People get lazy and they have AI do their work for them.
I think one interesting thing is that AI is getting better, so you can cut corners and
kind of get away with it.
But Chandler, you talked about kind of AI as a mirror.
And so a lot of what my job can look like is, you know, outputting individual designs or writing some code. And so the ability to utilize AI is very much on my shoulders, from the standpoint of what is my output.
And so I've learned, you know, because I kind of tried to just go the whole vibe coding route and just try it out and see.
And I've learned that AI really kind of reflects
your intentions. It reflects your laziness. But it can also reflect the areas in which you really
want to get better. So one of the guys mentioned before that somebody on his team was using AI to
learn, or maybe it was one of the guys in the crowd. But that, I really feel like that is
indicative of maybe the way my mom used to talk about, you know, learning how to Google something.
You know, it was like, oh, you know, you're not learning it because you're just going to go Google it.
And then, you know, it turns out I turned that into a career, because I was just Googling Stack Overflow posts on how to write code over and over.
And so, you know, I think that the same thing is true with AI. You will be accountable for the things that you output. So if that's
code, use AI to help you, but don't use it to cut corners in ways
where you're not sure exactly what is getting committed to your code
base. And then for me, from the design standpoint, there's tons of awesome
tools out there that can whip up a webpage really quickly
based off of some idea.
That stuff can be really helpful as like a,
get me 70% of the way there.
But if you're gonna talk about making a product
that you can sell to people, that they don't find
frustrating to use, then that last little bit
is where, it's like the 80-20 rule or whatever,
or the 90-10 rule: the last 10% takes 90% of the effort.
And that's really, I think, gonna be 10 times more true
with the advent of AI.
That's good.
Well, let's get into some specifics here.
We'll pass it over to Thomas.
What are some of the ways that y'all are using AI at Omron
to do all these things we're talking about?
And I think, I mean, it seems like,
and correct me if I'm wrong,
but one of the overarching goals I think is like,
how do we shorten these product development cycles
to bring better products to market faster
and then continue sustaining those over the long-term?
Yeah, so in terms of product life cycles,
everything is getting shorter
because we can iterate
so much faster.
So when we look at product development, we're not only trying to offload some of those
lower-value-add stages, but also trying to amplify where there is value
add.
So going back to our designers, explainers, and sustainers. What I personally love about generative AI is that a novel solution
isn't necessarily a bug, but is indeed novel. So we'll dive into a hardware example for a robotic
arm where so much of the design of a robot is still based in the physical sciences. It's still
based in Newtonian law, but the ability to use those laws in a novel way is
getting very exciting.
So an example of this is topology optimization.
So if you imagine a robotic arm, and you know the materials, you know the strengths and the
fatigue of those materials, then if you apply any known forces, you can determine
the stress and correspond that stress to how long that product will live. So to do that
within a digital twin environment, you'll see a lot of draping almost a fishnet
over that arm, setting the material properties to very well-known properties, and
then telling the AI: iterate, have some fun in some unsupervised
environments, and tell us what the geometry of that arm needs to look like in order to
hit X amount of cycles, Y amount of years.
And the geometry might not look anything like what's been designed before, because everything
designed before has been maybe through subtractive manufacturing versus additive.
So that's where you'll see a lot more biomimicry coming into the robotic side, which
should be no surprise when we see humanoid robots looking a lot like humans.
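(To make the loop Thomas is describing concrete, here's a toy sketch in Python. It is not real FEA or a digital twin; the stress and fatigue functions are crude stand-ins, and the point is only the pattern: propose a geometry change, keep it only if the simulated lifetime constraint still holds.)

```python
# Toy topology-optimization loop: start from a solid part, let the optimizer
# iterate on the geometry, and accept a change only if a (fake) stress/fatigue
# model says the part still survives the target number of cycles.
import numpy as np

def fake_stress(density: np.ndarray) -> np.ndarray:
    """Stand-in for an FEA solve: pretend stress concentrates where
    material is sparse near the loaded edge (row 0)."""
    load_falloff = np.linspace(1.0, 0.1, density.shape[0])[:, None]
    return load_falloff / np.maximum(density, 1e-3)

def cycles_to_failure(max_stress: float) -> float:
    """Stand-in S-N (fatigue) curve: life drops rapidly with peak stress."""
    return 1e9 / max_stress**3

density = np.ones((40, 20))   # "fishnet" over the arm: start as a solid block
target_cycles = 1e6           # hit X amount of cycles

for step in range(200):
    stress = fake_stress(density)
    candidate = density.copy()
    # propose removing a little material where stress is lowest
    low = stress < np.quantile(stress[density > 0], 0.2)
    candidate[low & (density > 0)] -= 0.1
    candidate = np.clip(candidate, 0.0, 1.0)
    # guardrail: accept only if the fatigue constraint still holds
    if cycles_to_failure(fake_stress(candidate).max()) >= target_cycles:
        density = candidate

print(f"material remaining: {density.mean():.0%}")
```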
On the explainer side, what's so powerful about being a good explainer is staying true to your brand. And what is a brand? Finding a simple story and telling it again and again.
So that's where LLMs become this huge accelerator: not only being able to train them on your own data, but being able to also integrate how that story is being told and who you're telling that story to.
So that allows you to greatly speed up while also staying true to that core value, or once
again the problem that's being solved.
So you might be able to market to someone with having a completely different pain point
even though you know that root cause.
So you can do that very quickly.
Oh, go ahead.
I want to just click in there a little bit from a product marketing perspective.
What you just said is extremely interesting to me.
Can you tell us just like, what does that look like at Omron? Especially
as you are the product manager, clearly you don't own this whole
kind of creating-the-narrative piece by yourself, but work with other teams on it.
Can you just give us a little bit of detail around what that actually looks like day to day?
Yeah, absolutely. It was said
very well before in I think the last panel that as a product manager you don't need to be the
smartest, you don't need to be the expert, but you need to be smart enough to know to surround
yourself with experts. And what's nice about the product development process now
is a lot more can be documented very easily.
So be it transcripts, be it some early presentations,
be it a patent, be it a copyright,
and you can feed all of this into LLMs and say,
okay, I want to build messaging around this specifically.
So you're also able to protect that data, and also not train as much on societal data, where
you might get bias, versus all the work that you've done to distill the signal from
the noise.
So we do that very commonly in Omron because it's a global organization and how we market
to teams in Europe is very different than how we market to teams in Central and South America.
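(A minimal sketch of that pattern, assuming an OpenAI-style chat API: ground the model on your own vetted documents, then ask for messaging tailored per region. The model name, file names, and prompt are illustrative placeholders, not Omron's actual tooling, which as Thomas notes is proprietary.)

```python
# Ground an LLM on internal, well-vetted documents and generate
# region-specific messaging from them. Assumes an OpenAI-style chat API.
from openai import OpenAI

client = OpenAI()

# Hypothetical internal sources: transcripts, early decks, patent text, etc.
internal_docs = [
    open("customer_interview_transcript.txt").read(),
    open("launch_deck_notes.txt").read(),
]

def build_messaging(region: str) -> str:
    context = "\n\n---\n\n".join(internal_docs)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Write product messaging grounded ONLY in the "
                        "provided internal documents. Do not invent claims."},
            {"role": "user",
             "content": f"Documents:\n{context}\n\n"
                        f"Draft messaging for the {region} market."},
        ],
    )
    return response.choices[0].message.content

print(build_messaging("Central and South America"))
```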
So we're just gonna dig into that as well.
Clearly you guys aren't just using the kind of general tools out there. Tell us, how does that look at Omron?
What have y'all done from a tooling standpoint? Clearly you don't just have a corporate account with ChatGPT.
What do y'all do?
So it definitely started like that.
We didn't quite know what was the best way to start,
so there were some corporate agreements with OpenAI, and the idea here being:
how well could it work?
So what we were trying to do, going back to that problem, was trying to flesh out:
is this problem worth investing our capital into,
which in this case was our own LLMs
and our own repositories of data.
So even in that process,
we went through a little product development cycle
to say, yes, this is worth solving.
So we kind of got in the game to play it,
and we found that we should stop pumping our data
into the atmosphere.
So it was a little trial, but now we've
got a set of proprietary tools that
allow our digital teams to take that message that's
been well vetted, well proven, and say it time and time again without excluding
the audience that it will be pontificated to.
One of the things in our kind of startup bubble, where everyone
that we work with is just more apt to be an early adopter and try a lot of new tools,
is that a lot of things work and a lot of things don't.
And one thing that we hear all the time
is there's a lot of pressure on companies
from the highest level to figure out how to implement AI
and drive revenue for our company with it.
It sounds like y'all have done that at Omron,
but a lot of companies really struggle to do that.
What was the adoption process internally like for,
obviously you all made a very big investment,
which I think is really cool, you saying,
hey, we actually went through a product development process,
the same framework we're talking about:
first, what's the problem?
And then how do we build the solution?
So clearly you were very thoughtful about this.
You didn't just say, hey, we need to slap AI in here,
which is what I think we see a lot, even in the way people are building
products today: are we thoughtfully adding AI here, or are we just putting
it in some way, shape, or form because we feel like we have to? Clearly y'all have been thoughtful
about it. But what was the kind of internal adoption process like?
So there was an article a couple of years back about how, when you're developing an AI algorithm,
for every dollar you spend, it's $5 to deploy it.
And that was no exception at Omron.
And a lot of that was built into, well, how is this going to help us, one, not only decipher
the problem and distill it out more clearly,
two, develop novel solutions, but then three, sustain it?
So at least to say how that process looked, we were fortunate enough that
the revenue from Omron is diversified over a number of industries,
two of which are semiconductor and automotive.
We know over the past couple years, the automotive industry has gone through
a little bit of a renaissance.
So maybe five years ago when talking about protecting data,
getting data off the factory floor,
it really wasn't resonating a ton in boardrooms
and on the factory floor,
but now it's the other way around
where everyone reads so much about AI, and the shareholders push on leadership, and leadership turns to their management and says, don't
just stand there, do something.
So that doing something has come out through the voice-of-the-customer process for us.
So that was really the catalyst to say, our customers are asking this question, so they
are going to be moving quicker.
How do we keep up or how do we stay ahead?
Because within the product management perspective,
you can't just look six months ahead.
You have to look 10, 12, 24 and further.
So you kind of have to adopt this.
And I do want to thank Omron customers for advocating
for their need to accelerate because that was definitely the catalyst for Omron as well.
Dodds, can you give your perspective on that from your role?
I think there's internal desire,
hey, let's figure out this new technology,
but also as Thomas said, being customer driven
and kind of problem focused,
how have you thought about implementing AI at RudderStack?
Yeah, I mean, there's real pressure in boardrooms
and Silicon Valley for sure, especially,
it's very clear from the amount of money
that is being spent that this is going to
and already has in many ways fundamentally changed
the way that certain things are going to work.
And so there is a lot of pressure.
I think a couple of things.
One is that the basic form factor
through which most of us experience AI,
I believe is very misleading.
The chat interface, I think,
makes it so easy to use that it
makes it easy to have wrong thinking in a couple of different areas. So one
example is the idea that you can just deploy this and it will be helpful in any situation,
right? You know, it's just sort of, you can leverage AI, and
you put this chat bot here and it's great, right? And I think what's really misleading
about that is it is going back to the original discussion about product management. It's
sort of a, you know, when you're a hammer, you know, everything looks like a nail approach.
And leveraging AI to truly solve a customer problem
is pretty difficult in reality, right?
I also think that, and this may sound contrarian
because of the amount of creative output
that comes from LLMs,
but I think the other thing about
the predominant form factor
is that it's pretty boring, actually.
I think the most interesting things
are going to happen at the API layer.
Brooks and I were just in San Francisco,
and one of the engineers from OpenAI
was talking about their new API
and it's going to be phenomenal.
It gives you the ability to interact
with both tools that you've installed on your computer
and external tools within a single process,
which is going to be paradigm shifting, right?
And so, they call this agentic, the agentic world.
And there's lots of buzzwords.
But I think once we,
I'm excited about breaking past this form factor
of a chat bot because there are so many more amazing things
that we can do and build outside
of that form factor.
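(For readers who haven't seen the agentic pattern Eric is describing, here's a rough sketch using the widely available function-calling shape of chat APIs: the model requests a tool call and the surrounding process executes it. The specific new OpenAI API he mentions may look different; this is just the general idea, with placeholder names.)

```python
# Sketch of an "agentic" step: the model asks the host process to run a local
# tool; the host executes it and could feed the result back for a final answer.
from openai import OpenAI
import json
import subprocess

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "run_local_command",
        "description": "Run a shell command on the user's machine and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "List the files in the current directory."}],
    tools=tools,
)

# If the model chose to use the tool, execute it locally.
for call in resp.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    result = subprocess.run(args["command"], shell=True,
                            capture_output=True, text=True)
    print(result.stdout)  # a full agent would send this back to the model
```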
So that wasn't really answering your question.
But the other thing I'll say, just as context before I give a couple of specific examples,
is that, as Thomas mentioned, it is expensive.
It is actually not easy or cheap to implement AI at enterprise scale in a business.
It is really expensive.
And that again goes back to, okay, if we find a solution that works really well,
what are the cogs on that and does it actually make sense from an ROI standpoint for the business?
And it's really hard.
I mean, we are seeing AI everywhere, but when you sit down and actually talk with the people
who are trying to do this at a very large scale inside of companies, it is still a pretty
difficult problem.
I think that will increasingly get solved, especially as the cost of infrastructure and
the complexity of the infrastructure as market forces drive those down and commoditize a
lot of that.
I think it'll get easier, but it's not easy to just go into Omron and say,
let's just roll out some AI.
It's complex and it's really expensive.
Let's talk about a specific example.
I think this is a fun one.
So this is probably, I mean, this is early,
I don't know, I mean, a year and a half or two years ago,
which isn't that long relatively,
but one of the things that LLMs are really good at
is summarizing large amounts of text.
And so an area that is particularly helpful can be in documentation.
And so we wanted to see if we could essentially replace the search in our
technical documentation with an LLM that could summarize things to give our
customers answers more quickly.
Right.
Because a lot of times you'll have to read multiple different pages within documentation
to get full context, right?
It's like, well, that's a home run for an LLM.
And we found this company that sort of specialized
in essentially wrapping a model in a tool
that, when scanning your documentation,
you could feed it all this information, right?
We even fed it information from our data warehouse, all sorts of different stuff.
And it was a tool that actually delivered that search experience as a service for us.
So we pay them money, they're a software company.
And what was really wild was that in less than 12 months, the new models from Anthropic and OpenAI were
better than that tool, right?
They had just advanced so significantly that now it's actually better to just
use, you know, whatever it is, GPT-4o, for those sorts of things.
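(A hedged sketch of what that documentation-search pattern can look like with the raw model APIs today: retrieve the most relevant pages by embedding similarity, then have the model summarize them into one answer. The page contents, model names, and retrieval approach are illustrative guesses, not the vendor's or RudderStack's actual implementation.)

```python
# Retrieve-then-summarize over docs: embed pages, pick the closest ones to the
# question, and let the LLM answer from those pages only.
from openai import OpenAI
import numpy as np

client = OpenAI()

# Hypothetical doc pages; in practice these come from the docs site and,
# as Eric mentions, even the data warehouse.
pages = {
    "JavaScript SDK": "Install the SDK, then call load() with your write key...",
    "Warehouse destinations": "Events are synced to the warehouse on a schedule...",
}

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

titles = list(pages)
page_vecs = embed([pages[t] for t in titles])

def answer(question: str, k: int = 2) -> str:
    q = embed([question])[0]
    # cosine similarity between the question and every page
    scores = page_vecs @ q / (np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(f"# {titles[i]}\n{pages[titles[i]]}"
                          for i in np.argsort(scores)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system",
                   "content": "Answer using only the documentation excerpts."},
                  {"role": "user",
                   "content": f"{context}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content
```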
I think that makes it pretty difficult because the pace of change is actually very breakneck.
And so even if you implement things,
changes at the API layer and with the fundamental models
can actually make a lot of work that you do
and that is initially valuable kind of irrelevant.
Okay, sorry, that was a bunch of context.
Really quickly, ways that we use this every day in our process.
I want to actually pull out a specific one, because of what you were just talking about, the pace of change.
I'm actually really curious to get your and Chandler's thoughts on this.
What we're doing at RudderStack with v0, Vercel's new product.
You came in my office a month, maybe a month and a half ago and had just built this prototype very quickly.
And you're just like, dude, check this out.
It took me maybe an hour and a half to do this.
So first I wanna ask a pace of change question
and then I wanna dig in to get your perspective
on kind of how it's changed prototyping
and then yours, Chandler, as well.
But from a pace of change standpoint,
you seem really excited and surprised at how well it worked.
Obviously, we've been kind of exposed to the power of AI
for what, three-ish years now,
and wanna just understand from your perspective,
was it surprising? If someone had told you eight months ago that you could do what you did with v0, would you believe them?
It's a great question.
Let me explain v0 really quickly. Vercel is a platform that allows you to deploy software on the internet.
You have a web application where users can sign up, et cetera.
That needs to be deployed on servers.
And there's a whole process around actually delivering that
to end users.
And so Vercel is a platform that does a number of things
in that space.
It's a truly incredible platform.
And they launched this tool called v0.
It's a tool with a chat interface, which,
oddly enough, actually probably makes sense in this case, but it's a tool
that allows you to build a website or web application.
And I think if you had told me just at a basic level that it would be that good,
I think maybe I would have struggled to believe it. But the thing that
I really was missing is the LLM aspect of it, right?
Asking an LLM to write the code.
I mean, it literally writes the code.
You can see it, you can edit the code.
It's really neat.
That is actually already a commodity.
You can do the same thing using GPT, using, you know, Claude Code from Anthropic.
That's a commodity.
What really is amazing about the decision that Vercel made was that they have the infrastructure
and system that makes it a complete experience.
Think about a platform that deploys websites and web applications.
They have millions and millions of those examples that they're actively deploying on their platform.
One of the biggest factors for large language models, and AI generally, is context and frameworks, right?
So if you can give an LLM very high level of context
and guardrails within frameworks,
then it can produce some really unbelievable things.
And so I think what makes v0 special is not the LLM.
I mean, they're doing some really interesting things
under the hood with prompts and whatever,
but what makes it amazing is that their level of accuracy is good because of the system
within which the LLM is operating.
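(A toy illustration of that "system around the model" idea: the same LLM call, but wrapped in pinned instructions and a validate-and-retry loop, so accuracy comes from the surrounding system rather than the raw model alone. Assumes an OpenAI-style API; the guardrail here, checking that the output parses as Python, is deliberately simple.)

```python
# "Context plus guardrails": constrain the model with a fixed system prompt,
# then validate its output and retry on failure before anything reaches a user.
from openai import OpenAI
import ast

client = OpenAI()

GUARDRAILS = (
    "You generate exactly one Python function using only the standard "
    "library. Return code only, with no prose and no markdown fences."
)

def generate_function(task: str, retries: int = 3) -> str:
    for _ in range(retries):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": GUARDRAILS},
                {"role": "user", "content": task},
            ],
        )
        code = resp.choices[0].message.content
        try:
            ast.parse(code)  # guardrail: reject anything that isn't valid Python
            return code
        except SyntaxError:
            continue  # give the model another attempt
    raise RuntimeError("no valid code produced within the retry budget")
```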
And so I would say, I think, two things.
One, I think like a lot of other people, I underestimated the actual power of LLMs, right?
So that's probably a reason I wouldn't have believed you.
But I also think that I'm becoming more enamored with companies that figure out
how to deploy an LLM within a system that gives them dramatic advantage over
everything else, including the raw models themselves, because that creates this
experience and the ability to execute things
in a way that truly feels like magic.
Totally makes sense.
Well, tell us a little more about
how you're using v0 at RudderStack for prototyping.
We'd love your kind of high level,
and then maybe Chandler,
just to dig in a little more on why you love
and kind of appreciate working with these tools.
I'll give you a very specific example.
So when we talk about, I mean, Thomas mentioned this, right?
How do you really get down to the root of whether
a solution is actually going to solve a problem
for a customer?
That's a very difficult question to answer.
And our perspective is that you want to front load
as much of that risk as
possible so that when the business starts spending money on it, the likelihood of
high ROI is already baked in.
And in software, one of the things that is pretty expensive is building high
fidelity examples that you can use to get feedback from customers.
I mean, in software, actually. I say that because
in manufacturing, the cycle's way longer,
because there's a literal physical process
and physical materials.
And so generally in software it's faster,
but now, without having any design or development skill,
a product manager at RudderStack can build a fully functioning end-to-end feature
that looks and feels and operates
exactly as if it were live in our product.
And they can do it at phenomenal speed,
so within an hour.
And so when we do this research
and we conceive of a solution to a problem,
what we're doing now is we're actually building the solution.
A non-technical person is literally building the solution
and then literally having customers use it.
And we can now run that process in a matter of days
in order to understand whether the
solution is going to truly solve the pain point, whether we really understood the problem
at a deep level, and whether we're even thinking about the problem correctly, right?
And so it is truly phenomenal how much this has sped up the level of accuracy in
terms of solving the problem before we actually go deploy resources to build it.
So that's been truly incredible.
Chandler, tell us, what did it look like before?
So if five years ago you were trying to quickly build
the V1 prototype so that the product team could take this,
show it to customers, get their feedback,
see is this something you're excited about
that's really solving a problem or are you know, are we off track here?
What would it have looked like five years ago? And then tell us like what your kind of workflow is today.
Yeah, well, five years ago, I wasn't terrified that I would lose my job, you know.
But in all honesty, that was a big part of it.
You would spend time, and you'd try to output a prototype really quickly,
and there were lots of great tools for doing that more and more quickly over time.
Now, someone on the product team will be able to output a prototype themselves.
And they've done a lot of the product thinking as to what the problem is that they're trying to solve,
and they're typing into it, and they may have something in their mind.
But then it's on me to take that and filter out the things that don't quite make sense,
to maybe re-orchestrate it in a way where it's like, you know, this
is a decent idea, but the UI paradigm that we should be using here should be something
a little bit different, or let's take a step back. So it really helps speed
up the cycle of the engineering, product, and design conversation. So instead
of it taking three or four weeks to build out a prototype, and then
we start getting feedback, and somebody finally uses it and clicks
around and says, this feels weird, you're doing that in a matter of hours. So
yeah, it's really wonderful. But I will say this: in the grand scheme, you
know, the advent of AI, and the fact that it can take the first
phases of an idea
and get you there really quickly,
I mean, this may feel counterintuitive,
but actually pumping the brakes
is probably the best thing to do going into these things
because, I mean, we feel the pressure of, like, faster,
more, more, more, but the reality is that
it's moving at breakneck speeds.
And, you know, if you go about building a whole business
on something that ChatGPT or Elon Musk decides
to build into their thing, and then you're just done,
you wanna have taken the time to pump the brakes
and think about it.
The other thing is that you're spending less time
on those pieces of the puzzle
that are really there for you to try and solve the problem.
The only reason why you would build a prototype would be to get feedback from a customer
using the thing, and to try and get your own feedback.
That is only to serve a purpose, and the purpose is to try and get to the articulation of what the problem is and what you believe the solution should be. So if you take the time to slow down, you can move really quickly at the beginning to try out something,
something but then you need to take the time to stop and think and the critical thinking part is
the key piece right as to like kind of what Thomas was saying before of like is this actually worth
building for? Or maybe it was you, you know, is this worth it? You both said it, yeah. We're all saying the same thing up here, but I really do feel like
slowing down is probably the best way to utilize these tools, because the tendency is to just move really quick, and
unfortunately, you'll wake up with, you know, a code base full of stuff where you don't even know what it is, because some
Cursor bot wrote it for you, or you end up not taking the time to think through
what the solution should be after getting the feedback.
So that's what I would say: it really lets
the beginning part of the process speed up significantly,
but only to serve the purpose, if you do it the right way,
of spending time thinking and planning
before you pull the trigger.
I think, just to quickly interject, you know, you kind of joked about, is AI gonna take my job away?
Yeah, they call me C0 in the office instead of v0, because my name is Chandler. Basically a slap in the face.
So he's just a replaceable machine?
No, not at all. And I think one of the points you made is knowing when to slow
down. Like you have years of experience and have built this intuition. You have like full
context around the product. And you know, Eric, you mentioned like how important it
is to give AI context, but that's something that you just intuitively have. And I think
moving forward, we'll see more and more how important that is: humans that do have
intuition and expertise are able to use AI to
move faster and achieve more, but that, at least as of yet, is not replaceable.
And I think that's just great. I like the way you said that, actually:
we need to know when to slow down, and it takes intuition and conviction to do that, and that today is not going to come from artificial intelligence.
Yeah, because AI can't give you, it can't give you conviction, it can't give you taste,
it can't really give you the amalgamation of all your experience to see and know that something feels right.
It can give you lots of different avenues by which you can try to feel that out
or it can get you like most of the way towards something that you are convicted about.
But it's kind of what I was saying before, it's like a mirror.
When you try and utilize it to cut corners and stuff, that's when it's really going to
bite you pretty hard.
Yeah, that's good.
We are coming up on time here.
Want to close out by just giving folks some practical advice, maybe gearing it towards
students, but I think for all of us, right, AI is moving so quickly
it's hard for anyone to keep up, myself included. Maybe starting with Thomas: we'd just love to hear kind of
advice,
specifically for maybe someone that's interested in getting into product, but even more generally, as
we all move forward thinking about how we can and should use AI day to
day. Just kind of give us your thoughts.
Yeah.
So for those looking to bring AI to product development,
starting to play with it more, I'd say there are two good lessons.
The first is when you are starting to integrate it
into that marketing side, focus on expensive signals
instead of cheap signals.
So what I mean by that is if you want to say,
oh, we're using more AI, don't say, hey, we're an AI company.
That's cheap.
Anybody can say that.
But instead saying, we have number one market share because we've
solved this problem, and, oh, we just happen to use AI so we can make it quicker for you.
That sounds different. So you're leaning more on that expensive signal of your reputation.
It's kind of similar to someone driving by in a yellow sports car. Anyone
can buy a yellow sports car,
but to be a leader in this Fortune 500 company,
that's a more expensive signal.
The second is, when you are using AI in product development,
it's similar to maybe poker.
You're not playing the cards,
you're playing the person across the table.
So understand how that AI can be used to help that person see
what you want them to see, and cater that message. So if you're using a little more
societal data to do that, that's perfectly fine. And I guess I'll close with those
who are looking to get into product development. Know that product
development is hairy, it's messy, and a lot of organizations surround that
process with a lot of different
roles.
So it might not be the role you want, but just get into that process.
The goal there is that you want to be within pointing distance, within calling distance,
when something hits the fan.
So I got into product development not out of school, but instead I went into an organization
that had a product development team,
and I wanted any role in that office.
So I was an application engineer at the time.
I really wanted to design new products,
solve some great problems,
and six months into the job, there was a hiring freeze.
So there was a product development role opening up.
I just happened to be onboarded by HR.
I happened to have a pulse and they said,
you're in. And I've been in product development since.
So get in the game as early as you can
and just be so good they can't ignore you.
Love it.
Eric, practical advice.
Practical advice.
I want to echo actually something that Thomas just said and then something that Chandler
mentioned.
What I've seen is that when you actually start using AI within a team context, it really
shows you who the true craftspeople are, right? Because it is a tool that helps them become even more effective at a craft that they have
already invested in becoming really good at.
And so generally, I would say: fall in love with a problem and get really good at building.
Actually become a good craftsman. I think that's going to become even more
important. In one of the earlier sessions, one of the panelists talked about
producers and consumers. I think it's going to become increasingly difficult
to make an impact if you're not a producer. And for producers, it is increasingly going to be a requirement to become a craftsperson.
And that can be in any discipline, right?
It could be as a designer or engineer.
It could be as someone in product, someone in finance.
That doesn't matter.
But becoming really good at a craft is a way to ensure that, for you, AI will be an accelerant. I also think
that there is a skill set. I mean, use the tools, you know, be very curious about it. It is
moving really quickly but that has never been a problem historically. The pace of technology
has never been a problem for truly curious people,
especially if they're really good at a craft.
You can get wrapped around the axle trying to try every single new thing.
That's not the point. The point is being really curious about how this is shaping the world.
If that's your lens, then you'll view these as tools that you can use to make an impact.
Yeah, man.
Wrap us up.
Be skeptical, but not fearful.
So try it out, but don't believe it 100%.
And don't be afraid to give something a shot and fail, because every failure is an opportunity
to learn.
Very good.
Cool.
Well, I don't know that we saved time for questions.
We ran a little long, but thank you all so much for having us.
The Data Stack Show is brought to you by RudderStack, the warehouse-native customer data platform.
RudderStack is purpose-built to help data teams turn customer
data into competitive advantage.
Learn more at rudderstack.com.