The Agenda with Steve Paikin (Audio) - What's the Secret Sauce of Political Polling?
Episode Date: April 8, 2025. How does one know which pollster to trust? What happens when they get it wrong? And just how do they get those results in the first place? We're joined by David Coletto, David Valentin, Erin Kelly, and Clifton van der Linden. See omnystudio.com/listener for privacy information.
Transcript
Renew your 2.0 TVO with more thought-provoking documentaries, insightful current affairs coverage, and fun programs and learning experiences for kids.
Regular contributions from people like you help us make a difference in the lives of Ontarians of all ages.
Visit tvo.me/2025donate to renew your support or make a first-time donation, and continue to discover your 2.0 TVO.
During any election campaign, we are bombarded with polling data on the horse race numbers
and what issues are resonating with the public, but are all the numbers trustworthy?
What happens when the pollsters get it wrong?
And just how do they
get those numbers in the first place? Joining us now to help demystify the art of political
polling, we welcome in the nation's capital, David Coletto, founder and CEO of Abacus Data.
He's also an adjunct professor of political management at Carleton University. And with
us here in our studio, Erin Kelly, CEO and co-founder of Advanced Symbolics, Inc.
David Valentin, principal at Liaison Strategies
and Clifton van der Linden,
associate professor of political science
at McMaster University and founder and CEO of Vox Pop Labs.
And it's great to have you three here
in our studio in Toronto.
David Coletto, great to have you on the line
from the nation's capital as well.
Cliff, I wanna start with you first because your lab has developed
something called the Signal, which you are in partnership with the Toronto Star
on during this election campaign. What does the Signal do?
So the Signal is a poll of polls. So in the same way that pollsters will go out
and sample public opinion, we go out and sample the polls and we have a series of
algorithms that look to find patterns
that different pollsters have exhibited across time so that we can do algorithmic adjustments
to those patterns to try to use that polling data to accurately forecast election outcomes.
Let me see if I can get you sued here.
Which pollsters are more reliable than others?
Well, our weights, what we call house bias, are algorithmically determined.
So we are not manually assigning weights or classifications or grades, if you will, to the different pollsters.
This is done through a Bayesian hierarchical model.
So what?
It's a model that takes all the polls going back to 2006 and looks at how well different polling firms have forecast different elections.
It looks at their sampling methodologies and how they've changed over time.
And it uses all of that data to try and create an aggregate sense of what the election outcome might be
based on the historical patterns of different polling firms.
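The Signal itself is a Bayesian hierarchical model fit to polls going back to 2006, as Clifton describes. For readers who want the flavour of the core idea, here is a deliberately simplified Python sketch of a poll-of-polls: it subtracts a per-firm house effect and weights each poll by recency and sample size. The firm names, house-effect values, and weighting scheme are all invented for illustration; they are not the Signal's actual parameters.

```python
import math

# Hypothetical polls: (firm, days_ago, sample_size, liberal_share_pct)
polls = [
    ("Firm A", 1, 1500, 42.0),
    ("Firm B", 2, 1000, 39.0),
    ("Firm C", 4, 2000, 41.0),
]

# Estimated "house effects": how far each firm has historically run
# above (+) or below (-) actual results, in points. In the real Signal
# these are learned by a Bayesian hierarchical model; here they are
# made-up constants.
house_effect = {"Firm A": +1.0, "Firm B": -0.5, "Firm C": +0.2}

HALF_LIFE_DAYS = 7  # assumption: a poll's weight halves every 7 days

def aggregate(polls):
    num, den = 0.0, 0.0
    for firm, days_ago, n, share in polls:
        adjusted = share - house_effect.get(firm, 0.0)  # remove house bias
        recency = 0.5 ** (days_ago / HALF_LIFE_DAYS)    # decay older polls
        weight = recency * math.sqrt(n)                 # bigger n, more weight
        num += weight * adjusted
        den += weight
    return num / den

print(f"Aggregated Liberal share: {aggregate(polls):.1f}%")
```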
Gotcha. Okay. Let us look at, Sheldon, you want to bring this graphic up.
Here are four leading pollsters and some of the numbers
that they have.
This is the latest federal polling,
I think this is as of this morning,
for the election campaign.
Here's David Coletto's group off the top,
Abacus, which has essentially got the race tied, 39-39
between liberals and conservatives, NDP at 11,
of course.
We won't go into the smaller parties for now.
Liaison, that's Valentin's group over here.
He's got 45, 39 liberals over conservatives.
Polly, that's Erin Kelly's artificial intelligence
algorithm, 39, 36, so liberals up three.
And then the signal has got 42, 40.
Now here's my question.
OK, Valentin, can you all be right?
Well, yeah, we all can be right.
I think one thing to keep in mind about most surveys
is that they come with a margin of error.
And so we're not saying it's going to be 38.
We're saying it's going to be 38 within two or three
or sometimes four percentage points.
So if the poll says 38, but the actual number is 39,
that's actually considered quite good.
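To make the margin-of-error point concrete: for a simple random sample, the margin of error on a proportion is roughly z·sqrt(p(1−p)/n), where z depends on the confidence level (1.96 for the conventional "19 times out of 20"). A minimal Python sketch, assuming a simple random sample; real panel and IVR polls need design-effect adjustments on top of this.

```python
import math

# z-scores for common confidence levels (standard normal quantiles)
Z = {0.95: 1.96, 0.90: 1.645, 0.68: 1.0}

def margin_of_error(p, n, confidence=0.95):
    """Margin of error for a proportion p from a simple random sample
    of size n. Real polls (panels, IVR) need design-effect adjustments,
    so treat this as a lower bound."""
    return Z[confidence] * math.sqrt(p * (1 - p) / n)

# A party polling at 38% in a survey of 1,000 respondents:
moe = margin_of_error(0.38, 1000)  # ~0.030, i.e. about 3 points
print(f"38% ± {100 * moe:.1f} points, 19 times out of 20")
```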
Aaron Kelly, you look at these numbers
and nobody has the same numbers.
So people will reasonably ask, how can you all be right?
What's the answer?
It's just like David said.
I mean, I can give you an example of how we create the data, to illustrate why sometimes you're going to have margins of error.
So in our case, what Polly is doing, she's doing a randomized, controlled, representative sample of the population from social media. We've been doing this for 15 years.
So Polly does a random crawl through the graph. It's a probabilistic sample, which is the gold standard for getting the sample, and we've got millions of people in the sample.
So we're very, very convinced, and we've tested and measured this, that we've got a great representative sample. We know that if we say they're in Ottawa, they're in Ottawa.
Here's the trick, though: the ridings are different from the cities, right?
So it's very easy for us, or for Polly, to determine that you live in Toronto and that David Coletto lives in Ottawa. We can even get down to neighborhoods.
Ridings are much more challenging, right?
So if you're in Ottawa and we know you're in the Glebe, do we know which riding in particular you're in?
People don't usually identify which riding they're in.
A campaign is 36 days long.
So people might say, well, this candidate came to visit me.
And we say, okay, that's that riding.
So we've got those breadcrumbs.
But we have to do those by hand.
So we have to feed those to Polly by hand, because they're so unusual, right?
And they change.
Okay, let me follow up with David Coletto on this.
Because David, you've got the race tied 39-39 today.
And the other David, David Valentin here,
has got a six point lead for the liberals.
So again, people will reasonably ask,
how can you both be right?
What's the answer?
Well, I think there's always other factors
in determining how we get these estimates, right?
So my survey is done using online panels, and David uses, I believe, IVR, which is an automated telephone survey.
And so what we call mode effects can come into the results, right?
So in the case of a telephone survey, for example, I believe right now Canadians who are going to vote liberal are more engaged.
They're more likely to be picking up the phone, more likely to be answering the survey.
So I think what we call response bias may be in part
why you're seeing some of the telephone-based research
being more favorable to the liberals.
On the other hand, other pollsters, like myself,
make some adjustments in our weighting
that might push down that liberal vote
because our NDP number is slightly higher.
So there's choices that every pollster makes
that might get us to different results.
But at the end of the day, I think David's right,
that we're still fundamentally seeing
the liberals having an advantage,
even if those top line numbers are slightly different.
David, do you have a view as to whether or not
you get superior numbers, more accurate numbers,
if you do it by telephone, off social media,
off the internet, what?
Well, I think on average, you know, my sense of it is if I'm doing a political project,
I prefer telephone.
But everyone has their own methodology, and there are perfectly valid reasons why online panels sometimes take a little longer to catch up.
We've seen this in other elections where there's a little bit of a lag effect.
But usually in the final week of the campaign we always more or less end up
at the same place. So even though we have a little bit of a difference now, I
think in the end we'll probably be just a few points away from each other.
Cliff, what do you say on that issue of whether or not you get superior numbers?
Phone, internet, social media, opening up the window and yelling at your neighbor.
What's the best way? I mean the first thing to say is that it's not always consistent.
So sometimes you get IVR polls that produce better forecasts,
sometimes they come from online samples, sometimes they come from machine learning techniques.
There's an element of randomness, so we all acknowledge that in all of our models there's statistical error.
That statistical error is a result of some of the randomness in the way that we sample.
So you get variability over time.
Probabilistic samples are the gold standard: the idea of a random sample where there's a non-zero and known probability of inclusion for all members of a population of interest.
I think there are arguments to be made that all of us lack or omit certain people, certain kinds of people, in the way that we produce our estimates.
And that leads to some variability in the results.
The question is how well do pollsters know their panels,
and how well are they able to control for the inevitable bias,
and I mean statistical bias, sampling bias, that's present in their samples.
Okay, Erin, let me follow up with you on that, because when you first started Advanced Symbolics, you did get criticism from people saying you're overly reliant on social media, which is not an accurate reflection of anything except the toxicity and hatred of society. So how do you manage all that and deal with it to make sure that the haters are not overrepresented in a sample survey?
Well, exactly.
And that's why you sample and don't just take everything,
right?
And it's why you rely on people and not keywords.
So every person in our sample gets one vote.
So we run lots and lots of experiments on our sample.
So first of all, they're not more negative than the general population, and that's not because we choose them for their negativity. It's a randomized controlled sample,
but we regularly test them because we hear people say,
everybody on social media is negative.
Not true.
What's true is that the negative people post more, okay?
And you remember them because they're so toxic.
So you have to filter that out, I guess.
Well, we don't filter it out.
We do have toxic people in our sample, but they get one vote. So the toxic people will be posting 50 times a day, but we count them once; we don't count them 50 times. The positive people will post once and be done with it; they also get one vote. But if you're doing keywords, you're over-representing those negative posts. So it's just a matter of making sure that you're sampling properly, so you're not over-representing one group or another.
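A minimal sketch of the one-person-one-vote idea Erin describes: collapse posts to a single signal per author before counting, so a user who posts 50 times counts once. The author IDs, labels, and majority-vote rule here are invented for illustration; Polly's actual classification pipeline is more involved.

```python
from collections import Counter, defaultdict

# Hypothetical posts: (author_id, leaning), where leaning is whatever
# label the classifier assigned to that post.
posts = [
    ("u1", "party_x"), ("u1", "party_x"), ("u1", "party_x"),  # prolific poster
    ("u2", "party_y"),                                        # posts once
    ("u3", "party_y"),
]

# Naive keyword-style count: prolific posters dominate the tally.
print(Counter(leaning for _, leaning in posts))   # party_x: 3, party_y: 2

# One person, one vote: take each author's majority leaning, count once.
by_author = defaultdict(Counter)
for author, leaning in posts:
    by_author[author][leaning] += 1
votes = Counter(c.most_common(1)[0][0] for c in by_author.values())
print(votes)                                      # party_x: 1, party_y: 2
```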
David Coletto, we all know the expression here, garbage in, garbage out.
And basically that's a lead up to ensuring that if you don't ask a good question,
you're not going to get good data coming out the other side.
So how do you decide the wording of your questions to make sure that you get good data coming out
the other end? So, yeah, it's a great point. You know, we know that not only the wording of the questions but the order you ask them in can have an influence over the responses people give.
You have to know that when you're asking about vote intention, for example,
we put it right at the front of the survey
because we don't want any of the other questions we might be asking about
related to policy or the leaders or the context of the election
to have an influence over that final answer, because we might dig deep into one issue, and that might push people in the direction of a particular party they wouldn't otherwise go, right?
So I think it's important to understand that, you know, as consumers of these polls, your
viewers need to understand that as the pollster, as the designer of the research, I have a
huge influence over the potential outcome that we get.
And so the trust that we all have around this table is really important, right? And we also
have to recognize that when we get criticized, and I get criticized all the time, one day I'm a liberal, one day I'm a conservative, one day I'm a New Democrat, it all depends on the results, that we're also running businesses. And so it is in our interest to get this right.
If we don't get this right, then why would you hire me or David or Erin to do the work that we say we can do?
And so elections are that one moment
where you can actually judge the accuracy of pollsters.
And I'm really proud that in Canada, we get far, far more elections right than we get wrong.
We just seemingly remember those ones we get wrong.
Let me follow up with David Valentin.
Is this to say that if you wrote a question
that started like this,
who would you prefer to vote for in this election?
Carney, Poilievre, Singh, May, et cetera,
you might get a different result than if you asked,
who would you want to vote for in this election?
Liberal, Conservative, New Democrat, Green.
Is that right?
You could, yeah.
And part of that is because people can get confused.
We just had an Ontario election.
If you're asking people, did you vote for,
would you vote for the Liberal Party?
They might think you mean the Ontario Liberal Party if they're in Ontario.
If they're in Nova Scotia, they might think,
well, I voted PC, I still support the PCs,
even though they might be voting
for the Federal Liberal Party.
So when you ask the question then,
are you asking with party leader names
or with just parties themselves?
With both, in our case.
With both?
Yeah, so if you take one of our surveys,
you're getting the Liberal Party led by Mark Carney,
the Conservative Party led by Pierre Poilievre, because we want to avoid that confusion. When we're scripting, we're always asking ourselves: what is the respondent going through? Do they understand what we're asking?
Are there any questions they could have about the question? Because if you're confused, you're just gonna select, you know,
I'm undecided, I don't know. And that's not really the point of the exercise.
You want to know what people are thinking.
Right.
Cliff, obviously you want to know who people are going to vote for,
but what else do you want to know from pollsters?
Well, going back to David's point, I think you want to have a ground truth that you can measure the validity and reliability of polls against.
You want to be able to know, I think, as a nation, that when pollsters tell us not only who's going to win an election, but also how people feel about our health care system, or how they feel about military spending, or how they feel about any range of public policy issues, those inputs from the population are reliable and valid. And then you want to have a sense that those who govern are going to take them seriously.
Erin, I want to pick up on this who's-going-to-win question, because, you know, we're what, two weeks into this election campaign, with 20 days still to go, and there are people already saying, oh well, I know who's going to win. Is there any poll that can tell you, three weeks ahead of time, how it's going to turn out?
It depends on the election, but I don't think this is that election.
What does that mean?
I mean, you saw the numbers; they're pretty close.
I mean, there are times, like in the Ontario election, we had it pretty early that Doug Ford was going to win that election.
There was no movement in the polls throughout that period.
That's right. Nothing significant that would tilt the election away from a Doug Ford majority.
So you could predict three weeks to go Doug Ford's going to win?
Yes.
We are not there yet with this one.
I think we definitely need to see the debate and any effect that that might have because
we know that Mark Carney doesn't speak French as well as, you know, the other candidates.
He doesn't have the political experience doing a debate that I think Pierre Poilievre has, so that'll be interesting to see. But when it comes to the seat count, the liberals are quite ahead. Even when it's a tie, the liberals are ahead, because their seat efficiency is much better. They've got a more efficient vote. So they're definitely ahead, even though they're only up three points in the popular vote, which is what we were looking at there. When you look at the seat count, the liberals are significantly ahead.
So I think that's why people are saying that,
but they're not in majority territory at the moment, at least on our count.
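Erin's point about seat efficiency can be illustrated with a toy example: two parties tie in the popular vote, but one spreads its vote across many narrow wins while the other piles votes into a single landslide. The riding numbers below are invented.

```python
# Toy example: 5 ridings, two parties, identical total vote (500 each).
# Party A wins narrowly and often; Party B wins big but rarely.
ridings = [
    # (party_a_votes, party_b_votes)
    (55, 45), (55, 45), (55, 45), (55, 45),  # A wins 4 close races
    (280, 320),                              # B runs up one landslide
]

seats_a = sum(a > b for a, b in ridings)
seats_b = sum(b > a for a, b in ridings)
votes_a = sum(a for a, _ in ridings)
votes_b = sum(b for _, b in ridings)
print(f"Votes: A={votes_a}, B={votes_b}")  # tied 500-500 in popular vote
print(f"Seats: A={seats_a}, B={seats_b}")  # A wins 4 seats to 1
```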
David Coletto, I say this all the time,
and I hope I'm accurate when I say it,
so you tell me if I'm not.
I always say when polls come out,
they're an excellent indication
of what people thought yesterday.
They cannot tell you what people are going to do a month from now.
Now, that is true, isn't it?
Well, it is.
It's absolutely true.
But you can look at other numbers beyond the horse race and try to anticipate
what has to change in order for the conservatives to win this election or
for the liberals to consolidate it.
Right.
And so I always say, like, as much as when we put out a horse race poll, more people focus on that. I'm more interested in what's below that horse race, right?
That we know that more Canadians are interested
in this election than they were in the last two.
We know that Mark Carney has a net favorable rating
that any political leader in any part of the world
would like right now.
And then the question is, so what does that mean?
And so, Steve, it's not that the poll I take last week can predict two weeks from now, but it does tell us what strategies these parties are pursuing and how Canadians are feeling about these choices. And therefore, if those don't change, and this is why I would say today Mark Carney is the favorite to win, unless the public's views of him, of his opponents, and of the issues change. And that's, I think, the real value of the polls: we understand the relationship between believing one thing and then saying I intend to vote a certain way.
Sure, but this is where it can get deliciously confusing. And David Valentin, let me go to you on this.
Let's take the recent Ontario election.
Polls always showed the public was very unhappy with the current government's record on health
care and education.
And yet Doug Ford won a third consecutive majority government in which health care and
education were barely an issue at all during the campaign.
So how are voters supposed to make sense of all that?
Well, I mean, the polls also showed that Doug Ford was going to win. And so I think that's
what's important to keep in mind is that we can't predict three weeks out, but we can predict a week
out, a couple of days out. So when we get closer to election day, we'll see more surveys and we'll
be able to judge those surveys by the election results. But I think people are contradictory
and complicated. You know, sometimes the NDP number will go up a point and the liberal number will go down a point.
The idea is that those NDP voters went to the liberals or the liberal voters went to
the NDP, but it's much more complicated.
There are undecided voters who became decided.
There are people who were decided who became undecided.
There are conservatives who went to the NDP.
There are NDP who went to the Bloc.
There are Bloc people who went to the liberals.
Movement is in all directions. It's not as simple as a minus one over here becoming a plus one over there.
It all flows in different directions.
And I think that's one thing to keep in mind when you see
the movement in the surveys.
Well, again, Erin, if I go back to the Ontario election,
not all numbers are created equal, I guess.
I mean, it may very well have been the case
that the public was unhappy with Doug Ford's
record on health and education, but they still thought it was more important that he be the
guy in the chair to take on Donald Trump.
And those were the numbers that ultimately were decisive at the end of the day.
Fair to say?
That's fair to say.
And you also can only measure what you ask, right?
So I mean, as much as you ask, you know, what do you care about, there might be something they care about that you didn't ask about, and it doesn't get recorded.
Now, we've got a bit of an advantage in that
because we're doing it online.
So we can do entity detection.
We can see what's important.
And yes, healthcare is important,
but it's much less important than the economy, me having a job, and Canadian sovereignty at this moment, right?
So it's not that it's not important, it's not the big ballot issue.
Okay, let's talk about the media's role in all of this because Cliff, we often
hear about how media occasionally, uncritically, just vomit out all of the
polling data that's out there without giving it proper context, proper
reportage, whatever. What are the media's
responsibilities during these writ periods? at proper context, proper reportage, whatever. What are the media's responsibilities
during these writ periods?
Well, I think when it comes to polling, in particular,
I think it's important that audiences are given
proper context around polls, that movements
within the margin of error are not represented
as substantive shifts in public opinion. I also think that we see real
effects from how media represent polls and how they speak about polls. This has
an effect on the ground in how people think. Take the example of the 2011 Canadian federal election, where the big story out of that election, among other stories, was the Orange Crush in Quebec, the surge of NDP voters.
Well, many people attribute that surge to Jack Layton's appearance on Tout le monde en parle.
But if you look at the time series data, you see a small increase in Quebecer support for
Jack Layton after his appearance on that television show.
But where you see the big surge, what we call a discontinuity in the regression analysis, a big break in that curve, is when three polls were released several days later: CROP, Leger, and Nanos all released a poll on the same day saying there was a shift in support for the NDP. And then you saw after that a huge jump in Quebecers' support.
Now, there are different causal mechanisms that we can attribute to this effect.
It could be that Quebecers took a little bit of time
to think about or contemplate their potential vote for the NDP.
It could also be that when they felt that other Quebecers were
warming up to the NDP, it had this affirmation effect
or this validation effect.
There are different explanations,
but the descriptive statistics are there.
The media portrayed these three polls as a signal
that the NDP was moving ahead,
and it actually amplified the NDP vote
in Quebec in that election.
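The kind of break Cliff is describing can be sketched crudely: compare average support just before and just after a known event date and look at the jump at the cutoff. The series below is synthetic; a real analysis would fit local regressions to the 2011 polling time series rather than just comparing window means.

```python
# Synthetic daily NDP support (pct) around an event on day 10,
# e.g. the day CROP, Leger, and Nanos all published. Invented numbers.
support = [14, 15, 15, 16, 16, 17, 17, 18, 18, 19,   # days 0-9: slow drift
           26, 27, 28, 29, 30, 31, 31, 32, 33, 34]   # days 10-19: surge
EVENT_DAY = 10
WINDOW = 5  # days on each side of the cutoff

before = support[EVENT_DAY - WINDOW:EVENT_DAY]
after = support[EVENT_DAY:EVENT_DAY + WINDOW]
jump = sum(after) / len(after) - sum(before) / len(before)
print(f"Mean support jumps by {jump:.1f} points at the cutoff")
```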
Conventional wisdom, David Coletto,
suggests that that is a very Quebec-based phenomenon.
That when Quebecers see the train leaving the station,
they want to get on it.
Are you able to tell us the different characteristics
that perhaps voters in different provinces
tend to have in elections over the years?
Well, I think, I don't know if there's a tendency,
a regional one that way.
I think what we know is that in this case, let's talk about this election.
I don't see any real regional dynamics at play.
I think this has become a national election campaign in which one or two issues are dominant.
And whether you live in British Columbia or Ontario or Atlantic Canada, you're reacting
to very much the same things. Now, I think there are some subsets of that, more by generation, actually, than by region, which I think is the real story, and that's driven by how we get information, Steve. It's not so much where we live. If I'm tuning into The Agenda on TVO, then I'm probably also watching news on other broadcast news outlets. But if I'm under 30, odds are I'm not watching your show, Steve.
And if I am, it's because I'm watching clips on social media or I'm getting most of my news on TikTok.
So I wrote a piece just today that basically said like this is two or three elections at once, not because of where you live,
but actually about where you get your
information. And if you're under 30 right now, this isn't really about Trump or
tariffs, this is about affordability and housing. And if you're over 60, it's the
only thing you're talking about, and that's Trump and tariffs. So regionally,
I think there's always some dynamics and sometimes a national election becomes
much more regionalized. But in this case, you've got a national election that's much more driven now by differences
in how we get our information and the role that fragmentation in that information ecosystem
plays.
David Valentin, let me get back to the issue of media.
What are you looking for when you see how media report on the huge variety of polls
that are out there?
Well, one interesting thing is that our firm works with the National Ethnic Press and Media Council of Canada.
And they represent most of Canada's
non-English, non-French media.
And so it's great to talk to editors and reporters who
maybe sometimes don't have as much experience in covering
surveys and not only sort of give them the numbers,
but also walk them through, here's how the survey was done,
here's what this means, here's why a trend line is more important than any specific
snapshot. So we're looking to see the media replicate that also in the English and French press, which for the most part they do. I think what we also see
now though is that there is a cottage industry of people who are writing blogs
who have very long threads on X, who are on Threads, who are creating TikToks
covering the election, covering the surveys.
And sometimes they don't cover it in the same way
that traditional media does.
What's your view, Erin, on whether or not
polls reflect public opinion or move public opinion?
I actually was interested in talking a little bit
about how the media reports.
OK, follow up on that.
If I could.
I think when we report how a poll was conducted, 19 times out of 20, 95 percent, I don't think anybody understands what that means.
And I think there's a big misconception
about what that means.
And I think that's why a lot of people think, oh,
the polls are wrong.
95% confidence doesn't mean there's a 95% probability
that this is what the result will be.
That's not what it means.
It's actually got to do with peer review and repeatability
of an experiment. So I think we do a disservice in how we report the statistics. It's really
the margin of error that matters when it comes to a political poll.
But when we say 19 times out of 20, that means there's one rogue poll that's going to come
every now and then, doesn't it?
No, that's not what it means.
What does it mean then?
It's a big, complicated thing, but I'll use an example from medical science.
If I do a vaccine trial, or any kind of trial: I do the trial in Oakville, and I get 95% of people are protected against the disease.
But then I do it in Hamilton, and I get 50%.
It's not within my confidence interval.
It's an indication that the methodology used
is not repeatable.
So I'll do it again and again.
Is it repeatable?
It's really there for scientists to figure out
whether or not the experiment you did
was valid in the first place.
It's a peer review kind of, or a repeatability experiment.
We don't repeat our political polls.
So it's not really that relevant.
What's relevant is the margin of error. But we have to put it at 95 percent; I've actually had journalists tell me, oh well, we want it at 95 percent. But if you bring down the confidence level, you can actually say there's a 68 percent chance that this is the result it's going to be, and you don't overlap the margin of error. So it's a bit of a mathematical thing, but you would actually get more interesting polls if you did that. We're sort of tied to this, though. It's kind of like asking why the tires on a car are the width that they are: it's because of chariots in Roman times. That's why we're doing it this way, because that's the way we do it in medicine and so on, but it's not as relevant in a political poll. So anyway, that's a whole different conversation. Your question...
About moving public opinion versus reflecting public opinion.
Yeah, I think it reflects public opinion.
There's actually, what Cliff mentioned, I'll set to the side; I haven't looked at that one. But generally speaking, there have been a lot of studies, peer-reviewed studies, that have looked at whether or not polls change public opinion, and we look at that too.
Here's the thing: people who watch political polls judiciously are probably pretty interested in politics, and they know who they're going to vote for, right? It's not the polls. Their family might influence them, but it's not the polls that are going to influence them, I don't think. And people who don't pay that much attention aren't using the polls to make their decisions. So there hasn't been evidence there, and I think the polls are a reflection, for the most part.
OK, well, I'll ask Cliff this then.
If you're publishing the sort of grand analysis of what all the pollsters are basically suggesting,
how do you know that by doing that, you are not encouraging people to kind of stampede
over to the winner if it looks like party X is going to win the election, or conversely
stampede away from the winner because they're terrified about that particular party winning. How do you know?
There is a risk. There is a risk.
I mean, if you look at the 2015 Canadian federal election, you saw the lead position change hands among each of the three parties. The NDP had the lead going into the election campaign, according to the polls; it transferred to the Conservatives; and then, after the last leaders' debate, the Liberals took it.
This was Mulcair, Harper, Trudeau.
That's correct.
And Tom Mulcair himself after that election attributed some of that decline in NDP vote
share to polls and media coverage. Now, the extent to which there's empirical evidence in that particular case is a subject for further discussion. But nevertheless, the idea that polls can influence people's vote, I think there's plenty of evidence to suggest that it is possible and it does happen in certain cases. But it also maybe should happen in certain cases.
In Canada, our electoral system, as you know, is a single-member plurality, or first-past-the-post, system, and so sometimes people vote strategically. They vote not necessarily for their first-order preference, but in order to ensure that they keep another party out of the win in their riding.
To do that, they need credible on the ground data of how the parties are performing at
the riding level.
And I think as all of my colleagues have indicated, that is the hardest part of polling in Canada,
is taking that swing model that happens at a national or regional level and applying
it at the seat level.
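One common, admittedly crude, way to project national polling down to the riding level is the uniform swing model Cliff alludes to: add the national change in each party's support to each riding's last-election result. A minimal sketch with invented numbers; real seat models layer regional and demographic adjustments on top.

```python
# Last election's riding results (vote shares, pct). Invented numbers.
last_result = {
    "Riding 1": {"LPC": 38.0, "CPC": 40.0, "NDP": 18.0},
    "Riding 2": {"LPC": 45.0, "CPC": 35.0, "NDP": 16.0},
}
# National swing since last election, from current polling (invented):
swing = {"LPC": +4.0, "CPC": -1.0, "NDP": -3.0}

def project(riding_shares, swing):
    """Apply the same national swing to every riding (uniform swing)."""
    return {p: s + swing.get(p, 0.0) for p, s in riding_shares.items()}

for riding, shares in last_result.items():
    projected = project(shares, swing)
    winner = max(projected, key=projected.get)
    print(riding, projected, "->", winner)
# Riding 1 flips from CPC to LPC (42 vs 39); Riding 2 stays LPC.
```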
Let me get David Coletto on that as well.
How much strategic voting, based on what you folks put out there, do you think happens
during elections in Canada?
I think it happens quite a bit.
And I think the polls do help provide information to voters who are looking for an outcome,
right?
Let me use an analogy.
It's like going to a restaurant, right?
You've got a set menu and you ask the waiter,
what's the most popular thing on this menu?
And that information might cause you to pick that item
that that waiter mentioned, or you might say,
I don't really like that.
I'm gonna go with what I was gonna go with anyways.
But I view polling as more as just other information
that the media can share with voters
who are trying to make a decision.
So if your goal in the 2011 federal election was to stop Stephen Harper from winning that election
and you found out that the NDP now were gaining in Quebec and maybe elsewhere,
that might have caused you to go, okay, well, that's the alternative who can beat Stephen Harper.
And I do think that's what happened when that orange crush in Quebec spread
to a lot of the other parts of the country. So I obviously as a pollster,
I'm not gonna say what we do is detrimental.
I think though it does impact.
It's another source of information that voters consider.
And Steve, I'll say one more thing.
It's not so much that most Canadians are tuning into what Abacus is saying or what Liaison is saying about the state of the race. They mostly don't even know who our companies are.
But it's the influence that the polls have on folks like you, Steve, who are reporting
the story, who are making the choice that says, well, the NDP is at 8%.
So I'm going to write an endless number of stories about how much trouble they're in, as opposed to reporting on what they are actually promising Canadians.
And so I think in that way there's an indirect effect on elections, through the way that media cover the election, as opposed to the voters themselves knowing exactly what the polls say.
Well, that is a great point, which makes the numbers that we put out there at the top of the program
all the more problematic as we try to understand what narrative we
in media ought to be telling.
We're going to write very different stories or presumably broadcast very different programs
if it's a tied race as opposed to the liberals with a six-point lead.
So David Valentin, what are we supposed to do with that?
Well, let me just pick up on what David was saying.
You know, polls are just one piece of information.
You might walk around your neighborhood,
and if you see NDP signs everywhere,
come to the conclusion, hey, the NDP
are doing really well in my neck of the woods.
Or you might talk to your mom, and she
might tell you, you know, I just met the NDP candidate,
and they're fantastic.
At the same time, we're doing the polling,
and viewers are seeing polling results.
They're also getting campaign ads.
They're also getting mailers from the political parties.
They're also seeing all this different activity.
But do you have an outsized influence?
I mean, yes, campaign drop literature is influential.
Signs are influential.
Polls are really influential.
Polls are influential to, I think, the same extent
that we shape media coverage because we give more information
about why things are taking place.
Why is a leader in Alberta? Why is a leader in Quebec? Why are they in Trois-Rivières? Right? Because we know from the polling. Oh, okay, we see that there's a close race in Quebec. We see there are close numbers in Montreal. We see there's opportunity for a different party in Winnipeg or Victoria, right?
And so the tours give you information based on where people are going.
But where they go is informed by polling that the parties themselves are conducting.
So it's this endless feedback loop.
But at the end of the day, we're gathering data independently.
We're putting it through scientific processes, and we're doing the very best that we can each do.
None of us go into this with the intention of shaping a result.
We want to know what people are thinking and accurately represent that.
Erin, last minute to you, and that is, what do you worry about Polly missing, not picking
up?
Well, here's the thing with Polly: she's got open ends, right?
So we don't have an IVR where it says press one for Carney, press two for Poilievre.
So she's reading all of these conversations, and our science is classifying all of these conversations.
So, you know, one area of research that we have is: what's the difference between political sarcasm and humor, right?
Like if you post something from The Onion, you're not necessarily making a statement,
but if you're posting something that is sarcastic, it could indicate that you favor one candidate
over another.
So figuring out those nuances, that's the hard part of what we do because we can't
just bucket people easily.
Well, we do.
We have to bucket open-ended statements as leaning this way or that way.
Artificial intelligence still not great
at picking up sarcasm or nuance?
Oh, she can get the sarcasm.
She can.
It's humor versus sarcasm.
Sarcasm, yes.
Sometimes it's just humor though.
It's not political.
Gotcha, okay.
Well, listen, I'm one of those people
who can't wait to see what the polls are gonna do
over the next, how many days have we got left in the writ period?
20 days or so still to go.
So bring it on, folks.
Let's see your numbers.
We want to keep going, keep going.
With thanks to David Coletto in the nation's capital for joining us on the program tonight.
And here in our studio, Erin Kelly, David Valentin, and Cliff van der Linden.
Thanks so much, everybody.
Thank you.
Thank you.
Thank you.