The Data Stack Show - The PRQL: Breaking Down BI: How AI is Rewriting the Rules with Paul Blankley of Zenlytic
Episode Date: July 21, 2025. In this bonus episode, John previews the upcoming conversation with Paul Blankley of Zenlytic. The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Transcript
Welcome to the Data Stack Show prequel.
This is a short bonus episode where we preview the upcoming show.
You'll get to meet our guests and hear about the topics we're going to cover.
If they're interesting to you, you can catch the full-length show when it drops on Wednesday.
Oh, welcome back to the Data Stack Show.
Yeah, excited to be here.
We're here live from Denver.
Yeah, from my house, actually.
Yeah.
So yeah, I got to catch up with Ben Rogojan last week
in person here in Denver, and now we get to do this.
Yeah, the Denver data crew.
It's not a week, you know?
Yeah, exactly.
Awesome.
Will you catch us up from when we last talked?
Yeah, there's a lot of exciting things going on.
On the Zenlytic side, we've been getting a bunch of great new logos
like J.Crew, Stanley Black & Decker,
some of these just fantastic companies to work with.
And we've been seeing AI just go gangbusters
in terms of the overall capabilities of the models.
And it changes a lot about how this stuff generally
needs to work in the future.
And it just changes so fast.
So it's like nothing I've ever seen in terms of rate of change
of the industry overall.
Yeah.
So I'm always curious about these two questions.
One, is it going faster than you would have thought?
And then the follow-up to that is,
what is something, let's pick a six-month or one-year time frame,
where you're like, wow, I did not expect this?
So I think definitely faster than I expected.
And I see a lot of really bad takes where people are like, oh, well, the
models are only getting sort of incrementally better, and it's like, you just
don't use these things enough to realize the rate at which they're improving.
So it's definitely faster than I anticipated.
And I think the domains in which they are getting dramatically better is what's maybe most
interesting to me. The things where they continue to be sort of approximately human, or even
subhuman, are a lot of the softer, more human things: understanding, communicating,
these kinds of things that humans do a lot.
And then if you look at things that they are already superhuman at, and increasingly getting
more dramatically superhuman at, it's things like coding.
You can throw an AI agent at a programming competition and it will win, or come in, you know, within the top
five of the best programmers in the entire world who have been training to do this. You
know, same with mathematics, same
with any symbolic sort of task.
And the reason for that is that you can generate a massive amount of training data on these
symbolic tasks that you can verify as correct.
So it's like, you could say, hey, this test case needs to pass if this code is written
correctly, and then you can just reinforce and learn that process
at a truly massive scale, way more so than you can with softer problems.
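(A minimal sketch of the verifiable-reward idea Paul is describing, not from the episode itself; the function names and the solve() convention are illustrative assumptions.)

```python
# Toy illustration: a generated solution to a symbolic task (here, code) can be
# scored automatically by running it against test cases, so a training signal
# can be produced at massive scale. Names below are hypothetical, not a real API.

def verifiable_reward(solution_code: str, test_cases: list[tuple[int, int]]) -> float:
    """Return 1.0 if the candidate solution passes every test case, else 0.0."""
    namespace: dict = {}
    try:
        exec(solution_code, namespace)        # load the candidate's function
        solve = namespace["solve"]            # convention: the task defines solve()
        for arg, expected in test_cases:
            if solve(arg) != expected:        # check each machine-verifiable case
                return 0.0
        return 1.0
    except Exception:
        return 0.0                            # crashes count as failures

# Example: a toy task ("return the square of n") with checkable test cases.
candidate = "def solve(n):\n    return n * n\n"
tests = [(2, 4), (5, 25)]
print(verifiable_reward(candidate, tests))    # 1.0 -> positive reinforcement signal
```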
Right. So while the models have certainly improved in how they handle things in the sort of
softer domains, how they've improved in terms of code generation or math or physics or other
harder, more symbolic domains, that has been maybe one of the most interesting things to me.
It's also shaped a lot about how I think about the future.
Because I think within six to 12 months, we're going to be superhuman in
nearly every respect in symbolic domains.
So it's like, there will be basically no human coder in 12 months
who is a better programmer than a language model.
That doesn't mean that the human won't be writing better code than the language model.
The question is how do we give the AI the right context to work with, to be able to use the stuff it is superhuman at
to really help the human make their decisions.
All right, that's a wrap for the prequel.
The full-length episode will drop Wednesday morning.
Subscribe now so you don't miss it.