The Data Stack Show - 228: The Machine Learning Reality Check: When AI Makes Sense for Marketing Attribution with Lew Dawson of Momentum Consulting
Episode Date: February 12, 2025
Highlights from this week's conversation include: Previewing Today's Topic (0:14), Machine Learning Discussion (1:44), Conditions for Machine Learning Worthiness (6:59), Simplicity in Modeling (8:43), Predictive Analytics vs. Machine Learning (10:42), Optimization and Investment Problems (13:03), Practical vs. Theoretical Saturation (15:16), Building Relationships with Marketing Teams (19:52), Reporting Metrics and Attribution Models (22:49), Tracking Common Metrics (25:40), Understanding Profit Generation (27:46), ROI Calculation Challenges (29:49), AI in Customer Data Analysis (31:23), Execution Challenges in Marketing Automation (35:23), API Limitations in Marketing Tools (36:50), Optimizing Ads with AI (37:50), Challenges of Real-Time Data Integration (40:01), and Final Thoughts and Takeaways (41:13).
The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Transcript
Discussion (0)
Hi, I'm Eric Dodds.
And I'm John Wessel.
Welcome to the Data Stack Show.
The Data Stack Show is a podcast where we talk about the technical, business, and human
challenges involved in data work.
Join our casual conversations with innovators and data professionals to learn about new
data technologies and how data teams are run at top companies. All right, you just made it to the third installment
of our deep dive on attribution
with Lou Dawson of Momentum Consulting.
Lou is a longtime friend, a former RudderStack customer,
and has built data and tech stacks
at all sorts of companies.
He also knows the marketing side and tactics around marketing really well,
so he's the perfect person to talk through attribution with us.
If you're just joining, in the first two episodes,
we talked about the problem we're trying to solve with attribution,
we discussed all of the challenges involved with it,
and we went really deep on UTM parameters and how they're the basis for most modern
attribution.
We also dug into identity resolution
and how you can overcome some of the limitations of UTM
standards by establishing a unique merge key up front
and attaching it to every single campaign.
If you're working on attribution,
it's really helpful practical stuff. Lastly, we got into basic attribution models, which is really helpful for understanding how to even approach reporting.
This time, we're going to dive into advanced attribution techniques and discuss when to apply them.
We'll dig into specific reporting and measurement, and of course, we'll dig into how AI is influencing attribution
and a lot of marketing data in general.
Okay, let's talk about machine learning, you know,
to probably lead into AI.
And I want to frame this conversation around when you should start to apply some of these more advanced techniques.
How does machine learning enter into the picture here?
Yeah, and I definitely have a little bit less experience in this one, but you can use machine
learning algorithms to dynamically weight or change weight or choose an optimal weight
depending on what you train it on.
That's how I've seen it done. I haven't done that a lot, though.
So I'll fully admit, if you have additional insights, fire away.
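For readers who want to see the shape of the dynamic weighting just described, here is a minimal sketch: learn per-channel attribution weights from conversion outcomes with a tiny logistic regression. The journeys are toy, illustrative data, and a real model would use far richer features than channel presence:

```python
import math

# Toy journeys: set of channels touched, and whether the user converted.
journeys = [
    ({"search", "social"}, 1),
    ({"search"}, 1),
    ({"social"}, 0),
    ({"email", "search"}, 1),
    ({"email"}, 0),
    ({"social", "email"}, 0),
] * 50

channels = ["search", "social", "email"]

# Fit a tiny logistic regression by SGD: P(convert) = sigmoid(b + sum of touched w_c).
w = {c: 0.0 for c in channels}
b = 0.0
lr = 0.1
for _ in range(500):
    for touched, y in journeys:
        z = b + sum(w[c] for c in touched)
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - y
        b -= lr * err
        for c in touched:
            w[c] -= lr * err

# Normalize the positive coefficients into attribution weights.
pos = {c: max(w[c], 0.0) for c in channels}
total = sum(pos.values()) or 1.0
weights = {c: pos[c] / total for c in channels}
print(weights)
```

On this toy data the model concentrates nearly all the weight on "search", since it appears in every converting journey; the point is that the weights are trained rather than hand-picked.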
I haven't done it a lot either. But one thought, actually, back to the customer lifetime value thing.
I have a friend that's really deep on that particular topic of, hey, let's take this data and model customer lifetime value.
And it's fascinating, really. Like, okay, look, this pattern results in higher customer lifetime value. Therefore, we want to model based on that
pattern. I think that's a phenomenal application of it. But I haven't seen it in the wild
much.
Yeah, I've never seen that one. Like I've, you know, I know the theoreticals on it. Like
I tried to play around with it once, but it seemed so complicated, so advanced.
Yeah.
And back to the point, it's like, it's
a cool theoretical from my perspective,
but I've never like seen it happen before.
I'll tell you my experience on this.
I mean, we've done internal testing around running
Markov chain analysis and other things like that,
just because I'm the type of person who generates synthetic events to experiment with things like that. But the companies where I've actually seen these techniques used are a specific type.
They're generally spending a lot of money to the point where if they can squeeze more
optimization out of it using a very advanced machine learning technique, it's worth it because
of the amount of money that they're spending and because they have exhausted most of the
easy optimization opportunities.
But the other flip side of that, that is actually just, I look back on the entire conversation,
like several hours of conversation we've had up to this point.
It is a significant investment to get to the point where you have
enough of the right data and a good enough understanding
of your business.
And the other thing I would add, this is another big one,
some level of stability in the business model
that you want to optimize
for that to be worth it.
That generally is happening at a scale where there's a lot of money,
it's a fairly large business, and a lot of times what's happening in those situations is
an agency will have, or software will have a proprietary model, or they'll do media mix modeling or whatever.
They are essentially purchasing that capability from a vendor.
A lot of times agencies are managing a lot of the different campaigns and other things at that point.
That was a very long way of saying, I agree, I haven't actually seen a ton of it hand-rolled in the wild.
It certainly exists.
You also need the people to stay long enough, or some kind of agency group that can do this. And then you're providing the data
for it to be done, essentially.
Yeah.
And then, I mean, on top of that,
you'd also have to prove, number one,
prove that it's that much more valuable in my opinion.
It's like go and do it.
And then number two,
like how would you realistically prove that?
And then number three,
is it like picking up pennies in front of a steamroller?
And then number four, lastly, like, would you be biasing, you know, so would you basically
be, would you be overfitting your model at that point?
Right?
Like, is it even worth it?
Right.
So like, I think it'd be a big challenge to like write a generic enough model that's accurate
for an agency across a wide array of datasets.
And then also accurately predict and attribute for like new datasets.
Yes.
Without becoming either not accurate enough or too overfit.
Yeah.
So to sum this one up, in short, I'm not trying to knock anyone who's done it.
I'm sure people actually have done it successfully. I can only think of one
company I've ever associated with where it might've been beneficial. And even then I
probably would have been like, I don't think it's probably worth it. And number two, like
I just don't see it as being worth the amount you'd spend. Kind of like John was alluding to, you'd spend a lot of time and resources writing something
that I would argue is probably not beneficial in most cases, but I've also
not seen everything.
So,
yeah, nor have I.
And I think we kind of answered, within that, when you use advanced
techniques, right?
It's comparing the amount of effort, and scoping it well,
going back to your point, we have the question of people
and what you're actually trying to measure.
I advocate for use the simplest technique possible
to answer the question,
because you can always add complexity,
but it's really about answering a question to help, you know,
to help the business.
And the faster you can do that, the better.
Well, you can always add complexity more easily
than you can reduce complexity.
Yes.
Yeah, it's like distilling complex problems down.
Yeah, it's almost always more complex.
And it's the other thing that came to mind just now,
and then we can move on super quick.
But it's like, you think the infighting over, like,
linear attribution is bad?
What if it's black box?
How do you think the infighting would be if it was a black box?
We don't really know.
The algorithm just told us, like, you get more credit.
Great point.
Yeah.
Well, in fact, that's a very astute observation. And in many cases,
I think that is a political motivation for having a third party handle it, right? Yeah.
Sure, because you don't want to be holding the technical bag when that fight breaks out. Yeah.
And I mean, this stuff is hard.
It's very likely that at some point there will be some error in the model.
We did talk about things like television or podcasts are very hard to track.
Who goes to the URL?
Some people do, certainly. But again, because of the type of people we are, we want
to get a directional sense of what we believe is working. And that's not necessarily ML, it might just be statistics.
If I want to predict an outcome,
I haven't heard predictive analytics in a while,
I think it's out of vogue,
but that's essentially what you're trying to do.
I want to predict a conversion or predict a dollar amount.
So what are the inputs into that?
And then how are they weighted to predict that?
To know that at a high level or even some greater levels
is interesting, but that doesn't mean you have to deploy a super robust model that scores every single user.
You know what I mean? You can learn that generally and then come up with a multi-touch
attribution model from it, or just not use multi-touch attribution and then just have an idea
of, oh, look, this channel seems to matter more than this other one.
Oh, you know what, Lou? I just remembered something.
I'm so glad that you mentioned that because it reminded me of something I wanted to ask you about that we have not talked about before.
But one really good use for a model like that that I've seen employed in the past is understanding when you reach the point of diminishing return
or something like that. And so of course, one of the entire reasons that a business does this
is so that they can spend more money on the things that are working
and stop spending money on the things that aren't working.
But one of the interesting things is when you uncover an opportunity to spend more money on something that's working.
One of the challenges you have is, okay, I can increase my spend in this channel to produce more conversions,
but that's not linear.
It's not infinitely linear, right?
At some point, you reach the physical limitations of inventory. How do you think about modeling that in a particular channel? Because that's really important as well, right? It's not just, we'll
keep doubling the budget and keep getting more conversions, because it doesn't work that way, right? I mean, that's a fantastic question and observation.
I will keep this broad
for now, because this can actually be a whole topic in itself.
But if you think about it at a high level, this is effectively, it's an optimization
problem.
And then if you go a slight click down, this is effectively like a gambling or an investment
problem, right?
So not advocating gambling or investing here.
But if you think about it, like a mathematical point of view, this is a very common problem in gambling and investing.
So I have a basket of goods that I'm interested in,
and I have a finite amount of resources, money.
What's the optimal allocation of my money to those resources
to get the highest return in most cases
on that allocation, right?
So like a Kelly criterion, for example, is a fairly common one.
And, a slight sidebar,
I haven't seen many customers get to this point,
where they're even trying to optimize to this degree.
But at that point, it would start to get into some complex mathematical
models like Kelly Criterion, where you figure out, okay, here's my window of opportunity,
here are all the things I could invest in, how much money I have, like, let's look at
my returns over time, and, you know, start doing predictive modeling, like John was saying,
and figure out what's my optimal allocation for a given scenario at a given point in time.
That's at a high level, that's how I would approach that.
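As a rough illustration of the Kelly-style allocation described above, here is a sketch using the continuous-outcome approximation (optimal fraction roughly mean return divided by variance) per channel, normalized to a fixed budget. The return histories are made up, and normalizing raw Kelly fractions to a budget is a simplification of the real math:

```python
# Observed net return per dollar spent, per channel (toy numbers).
history = {
    "search": [0.8, 1.2, 0.9, 1.1, 0.7],
    "social": [0.3, -0.2, 0.5, 0.1, 0.0],
    "display": [-0.1, 0.2, -0.3, 0.1, -0.2],
}

def kelly_fraction(returns):
    # Continuous-outcome Kelly approximation: f* ~ mean / variance.
    n = len(returns)
    mu = sum(returns) / n
    var = sum((r - mu) ** 2 for r in returns) / n
    if mu <= 0 or var == 0:
        return 0.0  # no edge: allocate nothing to this channel
    return mu / var

budget = 100_000
raw = {c: kelly_fraction(r) for c, r in history.items()}
total = sum(raw.values())
allocation = {c: budget * raw[c] / total for c in raw}
print(allocation)
```

Channels with negative average return get zero; high, stable returns (high mean, low variance) pull in most of the budget. Real allocators also cap single-channel exposure and refit as new data arrives.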
Yep.
Yep.
Man, I feel like we could just keep recording.
It's such an interesting subject to do this stuff.
Maybe we should keep a list of future ones.
Yes. Yes. Yes, yes, yes.
Brooks is furiously taking notes.
I think just one other quick thing on this topic is you're talking about saturation, you know, channel saturation.
Then there's the other question of like how much do I invest?
The other question is, if I changed something, did I alter my saturation ceiling?
So say I launch a new product. Is that channel I thought was saturated still saturated?
The other thing, which actually we can just jump right into after I make this last point,
because I want to talk about measurement and reporting and where we start there.
But the other, one thing we haven't talked about is the creative aspect of all of this.
To some extent you have to get the right combination of targeting,
of creative, of matching the product to the audience
in order to have the opportunity to start to saturate a market.
Which in and of itself is very difficult,
can often take a lot of experimentation, which requires a lot of measurement.
Right. Well, I was actually viewing it like there's a practical saturation and a theoretical saturation.
So let's say you have bad creative, you can have practical saturation really fast.
Right.
Where if you fix the creative, then your theoretical is way harder.
Yes.
Yes.
Yeah.
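The saturation idea being discussed can be made concrete with a toy diminishing-returns curve: as spend grows, each additional conversion costs more. The curve shape and parameters here are illustrative, not fitted to real data:

```python
import math

# Toy spend-response curve: conversions(s) = cap * (1 - exp(-s / k)).
# cap is the theoretical ceiling on conversions; k controls how fast you approach it.
cap, k = 1000.0, 50_000.0

def conversions(spend):
    return cap * (1 - math.exp(-spend / k))

def marginal_cpa(spend, step=1000.0):
    # Cost of the *next* conversions at this spend level.
    gained = conversions(spend + step) - conversions(spend)
    return step / gained

for s in [10_000, 50_000, 150_000]:
    print(f"spend ${s:>7,}: {conversions(s):6.0f} conversions, marginal CPA ${marginal_cpa(s):,.0f}")
```

Bad creative effectively shrinks `cap` (practical saturation hits early); fixing the creative raises the ceiling toward the theoretical one, which is exactly the distinction drawn above.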
Just as an example of something I personally experienced.
Practical and theoretical saturation.
You look...
Any comments on that before we talk about measurement and reporting.
Yeah, definitely.
That's why it's so important,
past a certain point, to do that ad-level reporting, because
that's ultimately why you have ad sets and ads, right?
You have a theory.
So your theory is if I run this campaign, I will convert these users.
Well, I'll convert, let's use the more official terminology.
I'm going to convert this audience.
Now within your ad set, you have subsets of your audiences.
Segments. And you're going to then, within your ad set,
run different ads to different segments.
So picking an arbitrary example, like let's just use age.
That's an easy one to visualize.
Like your audience is filled with people of different ages.
You're going to segment them by different ages.
And then you're going to, in your ads,
you're going to run different content
for those different segments, those different age ranges,
in hopes that you're going to connect with
and relate to those different segments,
and they'll be more likely to buy.
So 100% you're spot on.
We didn't even talk about that,
but that is a whole nother angle,
both of your strategy on how you drive campaigns, but also a
whole nother angle of how you need to measure your
performance, which you kind of are alluding to, too. Are
you tracking the content that you ran with each ad, such that
you can then correlate like that content, which includes both
visual and copy back at some point,
like if you're complex enough, you can track all that back
and include that, like John was saying earlier,
in your predictive model or at least in your performance
so you can figure out what content is resonating best
with which audiences.
So yes, that's yet another area.
Well, that's one of the big reasons
we wanted to talk to you about this,
because you're one of those rare people who just gave an unbelievably succinct explanation of the job of a performance marketer.
As their practical day-to-day, I am targeting these different audiences,
I'm representing those in some different segments or ad sets,
depending on the ad platform.
Which I think is so helpful for our listeners to understand,
especially who are on the other side of trying to report on that.
Actually, all three of us have been on both sides of this.
with your marketing team and understanding, I mean, even just establishing that shared language. Their job
is figuring out this optimization problem as quickly as they can. And that creates a lot of data, you know, detritus.
I'm going to put emojis in my campaign titles.
Every day the person's more.
Yes, yes.
Oh, my gosh.
Yeah.
But a great example is some people use spaces, some dashes, some
underscores. That's common, right? The taxonomies are all over the place. Taking
you one step further, I think that's phenomenal insight, Eric. It's also being able to track
what was served with a particular campaign and as I alluded to, like that is, as you
get more advanced, that actually becomes very important
and the earlier as you pointed out you can establish that taxonomy even if you're not
calculating it early on like if you don't care that's okay but if you track it early on when
you get more advanced later you can actually go back and look at it over time and you can actually
yeah derive you can derive more value from that if you have like one two three four years back
Even if you don't look at it right away. Yep. So totally, I think it's a fantastic insight, establishing that relationship and really working together
to make your data cleaner and easier to dissect and use, and benefit you more. Yeah.
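One lightweight way to enforce the kind of campaign-name taxonomy discussed here is to parse names against an agreed pattern and flag anything that doesn't conform. The field layout below is hypothetical; the point is picking one convention early so later analysis can rely on it:

```python
import re

# Hypothetical naming taxonomy: channel_objective_audience_creative-id_launch-date.
# The exact fields are an assumption; agree on your own and keep it stable.
TAXONOMY = re.compile(
    r"^(?P<channel>[a-z]+)_(?P<objective>[a-z]+)_(?P<audience>[a-z0-9-]+)"
    r"_(?P<creative>[a-z0-9-]+)_(?P<launched>\d{4}-\d{2}-\d{2})$"
)

def parse_campaign(name):
    m = TAXONOMY.match(name.strip().lower())
    return m.groupdict() if m else None  # None flags names that break the convention

print(parse_campaign("fb_prospecting_age25-34_video-a_2025-01-15"))
print(parse_campaign("Spring Sale!! 🎉"))  # emoji-titled campaigns fail the parse
```

Running the non-conforming names through a check like this in CI or a dbt test catches taxonomy drift before it pollutes years of attribution history.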
Yeah, totally and even the even asking what are we trying to understand here because
If my goal is to test a bunch of different creative because I know this is a good audience
I'm just trying to find the right creative is a
Phenomenally different reporting problem than we're testing a new channel and we just want to see if we can get conversions or not
Right, right.
And boy, can you save yourself a lot of pain knowing that ahead of time.
And I think it makes a difference too, like what you're saying with working with marketing.
If you're at a small or medium-sized company, you end up with sparse data problems.
So let's say you're like, I don't need that now. And then you're like,
ultimately materializes almost always in the form of some report, right?
Or some metrics that are produced.
We've talked about different attribution models, but Lou and John, because you've both built this reporting,
and John, we'll start with you this time. How do you think about it, if you were going to start to build some reporting,
and of course this is based on understanding the metrics that the business wants,
but where are you starting out from a reporting standpoint?
And maybe even just go to the level of when you did this at your last business,
what was the first dashboard of charts that you produced? Yeah, so I think I had a really unique experience
where I started out just being on the technical data side
and then ended up having both teams reporting to me.
So I'm trying to think back before marketing,
when marketing was under a different leader. So I think one of the first things we did
were probably Google Analytics years ago.
Return on ad spend is the metric that comes to mind that everybody was looking at.
And you're breaking that down by channel.
So like a chart that has return on ad spend for Google and Facebook and Bing.
Bing was a pretty good converter for us.
By channel, Google Shopping and Bing
Shopping were two of our top performers.
Sometimes it would be subset by that inside of Google, like what's our ROAS for Google Shopping or Bing Shopping specifically. Yep, versus others. Yeah, and as far as the optimization, we,
for the longest time, and I guess some of it was probably just fortunate circumstances to our
industry and it wasn't like a super sophisticated industry in the mid-market, but we had competitors
like Home Depot, you know, above us.
Sometimes we overlap and compete with Home Depot, and sometimes we overlap and compete with more of a mom-and-pop type place.
A lot of it was just like, okay, how much money can we spend
and keep this return ad spend at whatever our goal was,
at a five, six.
It was very high level.
And a lot of the times we didn't go super deep
because you put money in and it would work,
and you're like, okay, it worked.
Put more money in.
And then when it didn't work,
that's when you started to dig down to optimization.
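The channel-level ROAS chart described above boils down to a simple aggregation. A toy sketch with made-up spend and attributed-revenue numbers:

```python
from collections import defaultdict

# Toy ad-level rows: (channel, spend, attributed_revenue).
rows = [
    ("google_shopping", 12_000, 66_000),
    ("bing_shopping",    3_000, 19_500),
    ("facebook",         8_000, 28_000),
    ("google_search",   10_000, 42_000),
]

spend = defaultdict(float)
revenue = defaultdict(float)
for channel, s, r in rows:
    spend[channel] += s
    revenue[channel] += r

# ROAS = attributed revenue / ad spend, per channel.
roas = {c: revenue[c] / spend[c] for c in spend}
for c, v in sorted(roas.items(), key=lambda kv: -kv[1]):
    print(f"{c:16s} ROAS {v:.1f}")
```

In practice the rows come from each ad platform's reporting export joined to your conversion data; the "keep spending while ROAS stays above five or six" rule from the conversation is then just a filter on this table.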
One thing I love about that example is, which again,
so many topics we could talk about, but in areas where
you are competing with Home Depot,
that's way more expensive because they have very deep
pockets.
And end of quarter Home Depot, like no.
Right.
That's not it.
Literally, you can lose money.
When they're trying to hit a number.
Yeah.
If someone clicks an ad, you can lose money
depending on your margins.
But all right, Lou, same question over a number. Yeah. If someone clicks an ad, you know, you can lose money depending on your margins. But all right, Lou, same, same question over to you.
Yeah. Oh man.
So I think the first thing that came to mind is tracking the common metrics,
which I'll unpack in just a second, at all levels, or nearly all levels.
So, and obviously this depends again on the complexity
of your business, but you certainly want to know how you're performing
at a channel level. But you also want to know how your campaigns are performing over
time, and how your ad sets, but also your ads, are performing over time, right? So
you want to compare the
conversion of your ads against each other at times, to
know, are those resonating, right? Because if you just look at a campaign
level or channel level, it might be masked that
one of your campaigns is
performing really well, or one of your ads is performing really well,
and the other ones are just tanking.
Right.
So you definitely want to, when possible, and I'd rather start broad first, then go
more narrow, but you want to ideally track them across various altitudes.
And last thing on this topic, you do also want to provide a high-level view,
like a summary for executives, because they need to understand as well.
Yeah.
Like what's the performance week over week, month over month. Maybe not week over week
for executive levels.
Okay.
But they want to understand like how am I performing to plan?
How am I performing over time?
Does it make sense to continue to spend the amount
of money we're spending on acquisition, et cetera? Right. So, summary all the way down to ad level.
Now, specifics on that. One of the
drivers of a business is how much revenue, or more specifically, how much profit
you're generating, right? So you obviously want to look at how much profit
am I generating, specifically. Right. And return on ad spend is one way to look at that,
for sure. It's not the only way, but you absolutely want to look at something related
to how much profit am I actually generating, right? So there's the somewhat naive: okay, I'm spending X on ads and then I'm getting back Y
in profit or actually revenue is more common.
That doesn't always necessarily represent the full picture
though, if you think about it.
Because revenue doesn't actually equate to how much money
you made, right?
Because you have COGS, right?
Like cost of goods sold.
Yep. So
For more advanced businesses, ideally, you actually want to factor that in: at the end of the day, after your
cost of goods, how much am I actually making on that, right? Yep. Another angle to look at for this in a lot of cases is,
what's the quality of traffic that I'm acquiring?
And you look at that from the angle of the customer: what is the
customer lifetime value I'm acquiring,
by channel, campaign, et cetera?
Right.
So is a particular ad set, campaign, channel acquiring quality customers or terrible customers?
And you can assign what is a quality customer,
what is a terrible customer by lifetime value.
So you can look at customers are spending a lot of money.
That's probably a good quality customer.
And then you can associate that with your acquisition
campaigns.
And you can factor that into your
measurement of, okay, this channel is making money, but
we're acquiring terrible customers who are actually not
providing much value, or negative value, to this company over time.
Like if you're selling loss leaders, or
you're having to give too many discounts. So those are a couple ways I would think about measuring this at a high level.
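The traffic-quality point above can be sketched as average lifetime value grouped by acquisition channel, with negative values flagging discount-heavy or loss-leader customers. The numbers are illustrative:

```python
from collections import defaultdict

# Toy customers: acquisition channel and lifetime value to date.
customers = [
    ("search", 540.0), ("search", 120.0), ("search", 410.0),
    ("social", 60.0),  ("social", 45.0),  ("social", 300.0),
    ("email",  15.0),  ("email",  -20.0),  # negative LTV: discounts / loss leaders
]

ltv = defaultdict(list)
for channel, value in customers:
    ltv[channel].append(value)

avg_ltv = {c: sum(v) / len(v) for c, v in ltv.items()}
for c, v in sorted(avg_ltv.items(), key=lambda kv: -kv[1]):
    print(f"{c:7s} avg LTV ${v:8.2f} over {len(ltv[c])} customers")
```

Joining this back to per-channel acquisition cost is what turns "this channel converts" into "this channel acquires customers worth keeping."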
Yep.
So that gives you an answer. Did you want me to go more specific?
No, that was...
You reminded me that we did. So we started with ROAS, but we quickly, and this is funny,
we had some very basic,
true ROI that included cost of goods sold calculations
that when I first was working with a group,
we dug into it and we realized that somebody at some point
had hard-coded a single percentage to calculate all this.
So they're just like, oh, in general,
our gross margin is 30% or 25%.
So it was just hard-coded.
And this was for, I don't know, 30,000 items.
That's amazing. So that, yeah, so that was why. So we updated that to be, you know,
as accurate as we could per item and that made a big difference.
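The hard-coded-margin bug in this story is easy to illustrate: a flat 30% assumption can make a thin-margin item look healthy when its real per-item cost of goods sold says otherwise. The orders below are toy data:

```python
# Contrast a flat hard-coded margin against true per-item profit.
orders = [
    {"sku": "A", "revenue": 120.0, "cogs": 70.0, "ad_spend": 20.0},
    {"sku": "B", "revenue": 80.0,  "cogs": 68.0, "ad_spend": 10.0},
]

HARDCODED_MARGIN = 0.30  # the single percentage someone had hard-coded

results = {}
for o in orders:
    results[o["sku"]] = {
        "roas": o["revenue"] / o["ad_spend"],
        "assumed_profit": o["revenue"] * HARDCODED_MARGIN,  # flat-margin estimate
        "true_profit": o["revenue"] - o["cogs"],            # per-item COGS
    }

for sku, r in results.items():
    print(f"SKU {sku}: ROAS {r['roas']:.1f}, "
          f"assumed profit ${r['assumed_profit']:.0f}, true profit ${r['true_profit']:.0f}")
```

SKU B looks fine under the flat margin, but after real COGS the $10 of ad spend nearly eats its $12 of actual profit, which is exactly why the per-item fix made a big difference.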
I would hope so. Yeah, yeah, across 30,000. Yeah, right. That's insane. Okay, well we would be remiss not to discuss AI.
Oh yeah.
So, Lou, tell us your view on how is AI impacting this whole situation?
I mean, one of the things that comes to mind immediately is that, especially as it relates to
someone thinking about creating reports around this data,
AI is making it way, way faster to turn creative assets around
and run experiments extremely rapidly, because it's essentially removing the need for
turnaround time across teams, et cetera. I mean, there are entire platforms that literally do this.
Right.
I mean, you can essentially describe creative and it will just, it will
generate all these variants for you.
And some of them, I think even like run the campaign and give you like early
touch results and stuff, which is crazy.
So that's one way, but what are other ways?
Oh man, you touched on the one
that had come to mind.
So yeah, definitely.
I think there's two aspects of what you said.
There's copy generation.
There's also visual generation.
But then that's taking that one step further.
It's using, and in some cases, using a RAG or LLMs
to actually input a lot of your customer data.
So a customer feature table and tell me,
how can I best slice up my audiences to then send them to like end-to-end campaigns or specific platforms like a retention
platform, acquisition platform, and then how based upon what we've seen in sales can I optimally
choose copy and or visuals for either that audience and or segments within that audience to best with the
highest probability to convert them right so you can use you definitely can use AI for that kind of
stuff that's another thing that comes to mind. Here's one I used this weekend and I hadn't
thought about this I used was working on a project at home and I used chat GPT to look for like
a certain like style door that I wanted and it like produced results you know like like
Google would have inside of it and clicked on one of them and then I'm thinking and then
now I'm thinking like how was that attributed I wonder like how like like I wonder how those
you know yeah how those a distributed or don't yeah
I wondered that too. And actually, I played around with that a few times, and usually when you
click on a link in ChatGPT, if you look at it for a source,
there's a UTM. I think it's utm_source, and it's chatgpt.com. Oh, okay. Yeah.
Go look at one. Yeah, I'll have to do that.
Interesting, a UTM. Okay. I mean, I'm sure Perplexity and others have built that in as well.
Yeah, I mean, I say self-serving, but it's wise of them to do that, right? Then they can show people naturally, as they pick up their analytics, like, oh, I'm actually getting a lot of
results from Perplexity or GPT or whatever.
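Picking those AI referrals out of your traffic comes down to reading `utm_source` off the landing URL. ChatGPT appends `utm_source=chatgpt.com` to outbound links; the exact set of AI source values to bucket together is an assumption worth verifying in your own analytics:

```python
from urllib.parse import urlparse, parse_qs

# Bucket landing-page URLs by utm_source. The AI-source set below is an
# assumption to check against what actually shows up in your data.
AI_SOURCES = {"chatgpt.com", "perplexity"}

def traffic_bucket(url):
    params = parse_qs(urlparse(url).query)
    source = params.get("utm_source", ["(none)"])[0].lower()
    return "ai-referral" if source in AI_SOURCES else source

print(traffic_bucket("https://example.com/doors?utm_source=chatgpt.com"))
print(traffic_bucket("https://example.com/doors?utm_source=google&utm_medium=cpc"))
```

Clicks without any UTM (or with the referrer stripped) land in "(none)", which is where untagged AI traffic tends to hide today.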
And then the other thing on the AI side of this,
I'm really interested on the customer data side of how can we score.
There's customer lifetime value stuff or lead scoring or things like that.
It seems like as we get more robust data and better AI
models, that'll be more easily possible.
Yeah, I think the, I mean, actually, Lou, we haven't talked a ton about this, but
I'm interested in, because we've talked around this, but to that point, I agree.
In theory, right, LLMs excel at sort of next best action, right, or completion, right?
And so theoretically, if you give it a bunch of inputs and a bunch of context,
and even example sequences, right, it can process that and then make a good recommendation.
The big challenge with that is actually just the practical nature of executing the movement of a user into different campaigns and or the sending of different messages through different channels based on what that next best action is.
At least from what I've seen,
you have marketing platforms that are providing this functionality within their ecosystem,
which can be really helpful, but creates local optimization
because the customer journey spans a lot of different channels.
And we know that from a data and reporting standpoint,
because everything we just talked about is getting all of this different data from all these different channels,
and then trying to understand where conversion is happening and what's contributing to that.
But if you think about actual execution of that, say we run an analysis on our attribution data and we realize, you know what, a multi-touch
pattern that tends to work really well is, we'll just use the example you gave, Lou,
where someone finds us on search and then they convert on paid social on Facebook or
whatever that is.
And let's say you're way more complex than that. But there's this promise that
it could actually pick up the signal and then at a very large scale
automate the categorization of users
into, let's stop showing you search ads
and let's start showing you paid social ads,
which is manual, cross-platform, whatever.
But there's not an orchestration data layer
connected to all these APIs that actually allows you to do that, because the vendors themselves
want to optimize it in their own systems.
They're not going to open their APIs, because at that point they're just a last mile delivery service
and you're sort of an API endpoint.
I know that's funny, and that's why I wanted to ask you, Lou.
Anyway, I'm being very long-winded on this,
but I actually spent a lot of time last week thinking about this after a conversation with a customer, just learning about their issues.
But what's your take on that?
Because you've conceived as some pretty wild systems along those lines as well.
Yeah, it's a great observation. You can optimize that to a degree using AI and LLMs.
So Google Ads, for example,
does expose things like turning ads on,
turning ads off, changing the budget,
et cetera.
So from that perspective, again,
I have some background in this,
if you think about it a little bit like an investment problem, where you're getting signal
from an AI algorithm, whether it's ML, an LLM, et cetera, and you're analyzing the data as
it comes in, clickstream or whatever, theoretically, you could stitch those two together.
So you could stitch ML modeling, either in real time or in frequent batches, together
with, okay, I'm effectively going to make
reallocations to my ad portfolio. I'm going to change
the budget for different ads depending on how they're
performing right now and what the LLM or the ML model
is saying in terms of where I should be allocating my
spend. So you definitely can do that. That definitely is another use for
ML and AI. But I think that's extremely complex. And personally, I've never seen this
done super successfully, because I would equate it a little bit
to day trading. And that is tough. That is really tough. That's
a really tough problem to solve and do well, right? There are very few
companies that do that well. And if you think about it, the market is so
optimized and somewhat unpredictable that it's very challenging to
gather all the pieces of that together, all the data points. I wouldn't recommend it, but that is another application, as you kind of highlighted.
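The "investment problem" framing Lou describes can be sketched in a few lines: given a fixed daily budget and a per-ad score coming out of some ML model, reallocate spend proportionally to the scores, with a small floor so low-scoring ads still get exploration traffic and keep generating fresh signal. This is a toy illustration only, not how any ad platform's API actually works; the function name, the scores, and the `floor_share` parameter are all hypothetical.

```python
def reallocate_budget(total_budget, scores, floor_share=0.05):
    """Split a fixed daily budget across ads in proportion to model scores.

    total_budget: total daily spend to distribute (e.g. dollars)
    scores: dict of ad_id -> predicted-return signal from the ML model
    floor_share: minimum fraction of the budget each ad keeps, so poorly
        scoring ads still receive exploration traffic
    """
    if not scores:
        return {}
    n = len(scores)
    floor = total_budget * floor_share
    remaining = total_budget - floor * n
    if remaining < 0:
        raise ValueError("floor_share too large for this many ads")
    total_score = sum(scores.values())
    return {
        ad_id: floor + (remaining * s / total_score if total_score else remaining / n)
        for ad_id, s in scores.items()
    }

# Hypothetical example: search outscores paid social 3-to-1 on a $100/day budget.
alloc = reallocate_budget(100.0, {"search_brand": 3.0, "paid_social": 1.0})
# -> {'search_brand': 72.5, 'paid_social': 27.5}
```

The hard parts Lou calls out — low-latency signal, noisy and shifting performance, and actually pushing the new budgets out through each platform's API — are exactly what this sketch leaves out.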
And then I think the other challenge, in closing, that you pointed out is there are certain portions
of some of these ad platforms that they don't open up, or there are multiple platform integrations
that you have to fuse together in order to really be able to do this successfully
in real time or in an automated fashion.
So like, if your content comes out of a CMS,
how do you associate your CMS content
with a particular ad?
What if you need to change it in real time?
How do you change it in real time,
then re-upload it? Yeah, right?
Like, as you highlighted, the challenges quickly become aplenty, shall we say.
Yeah.
Well, the good news is that it's going to probably take most people enough time getting
to basic linear attribution, multi-touch attribution reporting, that by the time you nail that down,
someone will have built the API data layer
driven by an LLM,
that you can just plug all of your wonderful
attribution data in and then you'll be off to the races.
Perfect.
Easy. All right.
You said it, right?
Like, let's just go do it.
Yeah.
So it'd be pretty quick, right?
Yeah, easy. Couple of sprints.
Yeah, exactly. Just a couple of sprints. Well, Lou,
this has been absolutely amazing. One of my favorite things is that we've uncovered a number of other topics, you know, identity resolution
and, you know, the feature table, to dig into, among other things.
And so we'd love to have you back on the show for another marathon.
This has been great.
I've learned a ton.
I think our listeners have learned a ton.
John, your experience and insight have been really helpful.
So this is awesome.
Let's pick another multi-hour topic and dig in again.
Yep.
Sounds great.
Yeah, John, Eric, thank you both for letting me come on.
Really appreciate it.
It's fun.
It's fun to chat with you both.
Yeah, thanks, Lou.
The Data Stack Show is brought to you by RudderStack, the warehouse-native customer data
platform.
RudderStack is purpose-built to help data teams turn customer data into competitive
advantage.
Learn more at rudderstack.com.