Modern Wisdom - #534 - Dr Stuart Ritchie - Why Is Behavioural Genetics Such A Hated Science?
Episode Date: October 3, 2022. Dr Stuart Ritchie is a psychologist and science communicator known for his research in human intelligence and an author. The influence of our genes on the outcomes we get in life has been long established and replicated in science. However the public response to this has been very unhappy, making Behavioural Genetics one of the most heated areas of research there is. Expect to learn why some people dislike behavioural genetics so much, what happened with the recent SSRI rug pull, whether Emotional Intelligence is an actual thing, how to be skeptical without becoming nihilistic, which psychological phenomena were debunked during the replication crisis and much more... Sponsors: Get 83% discount & 3 months free from Surfshark VPN at https://surfshark.deals/MODERNWISDOM (use code MODERNWISDOM) Get 15% discount on Craftd London's jewellery at https://bit.ly/cdwisdom (use code MW15) Get 15% discount on all VERSO's products at https://ver.so/modernwisdom (use code: MW15) Extra Stuff: Buy Science Fictions - https://amzn.to/3y666Wl Follow Stuart on Twitter - https://twitter.com/stuartjritchie Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact/ Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hello everybody, welcome back to the show. My guest today is Dr. Stuart Ritchie. He's a psychologist
and science communicator known for his research in human intelligence and an author.
The influence of our genes on the outcomes we get in life has been long-established and replicated
in science. However, the public response to this has been very unhappy, making behavioral genetics
one of the most heated areas of research there is.
Expect to learn why some people dislike behavioral genetics so much.
What happened with the recent SSRI rug pull, whether emotional intelligence is an actual thing,
how to be skeptical without becoming nihilistic, which psychological phenomena were debunked
during the replication crisis, and much more.
But now, ladies and gentlemen, please welcome Stuart Ritchie to the show.
Thanks for having me on.
Given your current academic background, can you explain to people why you
think it is that behavioral genetics attracts so much distaste, distrust,
dislike generally?
There's a lot of different reasons. I think the main reason is
some kind of a misconception about what it actually means to say that
behavioral traits are related to genetics. So I think when people hear
that, especially when the trait itself is controversial, like you can
get into a whole
debate about intelligence or personality without even mentioning genetics. People get upset by
just mentioning those traits. But when you say they're linked to genetics, people make lots of
assumptions. People think they know your politics, they think they know what you're trying to say,
what you're trying to slip under the radar, under people's notice. Those assumptions are things like, well, if it's related to
genetics, it must be completely unchangeable. If it's unchangeable, then we don't need to do
anything about our political situation, and we don't need to help people out, and people
are just stuck, like a tram on a tram line. They can't turn off or change direction
or anything like that. And if you're interested in behavioural genetics, you must be using it to justify
our current political situation. So I think that's one of the major assumptions: immutability,
the fear of immutability.
And that's of course not what behavioural genetics says at all. Behavioural genetics is about trying to understand
how things are right now,
not necessarily how things might be
if we change things in the future,
or indeed if they may have been different in the past,
we have lots of interesting studies
of how the genetic contribution to things differs
across different
times and different places and different political regimes even. There's some interesting
research on communist regimes and how that might have affected the heritability of traits.
How so, yeah, tell us about that.
Well, there's some research in Estonia, which obviously used to be a communist country.
So yeah, this is a research using a polygenic score, which your viewers may be familiar with
from hearing your interview with Robert Plomin, one of my colleagues here at King's College
London.
And so the idea is that you look at the genetic contribution to various traits — educational
attainment being the main one —
before and after communism in Estonia. So that was just this incredibly cool paper.
It would be nice if it were replicated and so on, but it's an interesting —
You need more communist regimes to come in order for that to happen. It's gonna be difficult.
Well, you need to do that same study in lots of other, like, Warsaw Pact countries, which are now no longer communist.
So the idea is that in people who were born after communism was gone, genetics explains more of
the variation in people's educational attainment, for instance. And a broad
interpretation — or one interpretation you can draw from that — is that a freer political system, one that doesn't, you know,
oppress people in the same way that communism does, allows them to kind of reach their genetic potential,
as one way you might want to put it, more effectively than one where the environment
really was kind of suppressing people's ability to be who they really, really were.
So that's one interesting thing.
But in just saying that, you can see how the environment makes a difference to how genetics
operate.
And that's research that's done by behavior geneticists who are not criticizing polygenic
scores.
They're saying, look, we can use polygenic scores to illustrate
interesting things about our society
and how society changes and so on.
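A minimal sketch of the comparison that study design makes — everything below is hypothetical (made-up coefficients, simulated people), just to show how "heritability differs across regimes" cashes out as variance in attainment explained by a polygenic score in two birth cohorts:

```python
# Hypothetical illustration: variance in educational attainment explained
# by a polygenic score (PGS) in cohorts born before vs. after communism.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
pgs = rng.normal(size=n)  # standardised polygenic score

# Toy assumption: the earlier regime suppresses genetic differences,
# so the PGS carries less weight in the pre-1991 cohort.
attain_pre = 0.15 * pgs + rng.normal(size=n)
attain_post = 0.35 * pgs + rng.normal(size=n)

def variance_explained(score, outcome):
    r = np.corrcoef(score, outcome)[0, 1]
    return r ** 2

print(f"born under communism: {variance_explained(pgs, attain_pre):.1%}")
print(f"born after:           {variance_explained(pgs, attain_post):.1%}")
```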
So I think it's a real misconception to say that,
you know, genetics means that we're immutable
in some way.
I mean, the classic thing people talk about,
the kind of almost boring, clichéd thing
that people talk about, is glasses, right?
Eyesight is really heritable.
So myopia, short-sightedness, is really, really heritable,
runs in families, all that sort of stuff.
But we can cure it instantly,
or effectively cure it instantly,
by just, like, putting on a pair of glasses
at the right prescription.
So things in the environment can completely alter the way that genetics has its effect.
Now, it's a slight cop out to say that because we don't have the environmental equivalent of
glasses for other traits like personality. We don't know the things that will massively,
instantly change your personality, and we don't know the things that will massively, instantly change your intelligence or your educational attainment
and all that. Yet maybe we will at some point in the future, but we don't have that right
now. Maybe it's a good thing that we don't know ways to massively alter people's personalities
using the environment because of course that might be used by somewhat nefarious political
regimes or cults or whatever else. I mean, cults are already pretty good at like controlling people's behavior
and so on. And if they had ways of changing people's personalities more effectively, then that might be rather scary. So, um,
Yeah, but that's, you know, one example that's very often used. Another one is height. Height is really
heritable. We know that tall parents tend to have tall kids, short parents tend to have short kids, on average. And
yet, if you don't feed a kid while
they're growing up, they'll be
stunted. People in North Korea are
like four or six inches, whatever it is,
shorter than people in South Korea
because of all the malnutrition.
No one's criticizing the fact that
behavioral genetics suggests that
that's the mechanism by which tall parents have tall kids — or not behavioral genetics, I guess, but just heritability
generally, right, that height is heritable. What is it about other traits, compared to things like
shoe size or eye color, that causes them to be in such a different ballpark?
Well you've put your finger on the thing here, which is the double standard, which is people are very, very happy to talk about genetic influences on stuff like height or eye color
and things like that that don't have a political valence. But as soon as you use those exact
same methods — and that's the important thing, you use the same methods: studying twins,
studying families, studying molecular DNA differences between people —
to study stuff like intelligence, personality, whatever, people flip out.
And that's, I think, a weird double standard.
I mean, what would one of them say in response to that? They would say,
well, those traits are a lot more complicated. They're socially influenced. Things like educational attainment
is influenced by all sorts of stuff.
The quality of the school you went to,
the socioeconomic situation that you find yourself growing up in.
Just to jump in there, didn't Plomin find
that when you equate for everything else,
schools on their own have less than 5% of an impact on someone's
educational outcomes.
Yeah, he used the Ofsted ratings of schools to show that.
Yeah, I think probably in the UK that's probably true.
I think in other countries that might differ slightly, like I think in countries where
the schools really, really dramatically differ.
We've done a pretty good job of equalizing because of things like Ofsted inspections.
For any American people that are listening, Ofsted
is an external accreditation board
that comes in and makes sure that the school's
doing everything right
and that the students are getting the appropriate education.
And we also have a much more homogenous teaching curriculum,
right, everybody across the entire, maybe UK,
but certainly England has one set of exams at this age,
at this age, at this age,
unless you're in some weird, like,
international baccalaureate, school-y type thing.
Pretty much everybody has the same stuff.
So you have this.
It is a bit different in Scotland, where I'm from.
In Scotland they have the Curriculum
for Excellence, which I believe is not all that excellent.
But anyway, yeah, you're quite right. So,
yeah, what Robert found in that study was that once you control for the selection
into schools — private schools, for instance — and sorry, yeah, there's two different studies:
there's the private school versus state school one, and there's the Ofsted ratings one,
and in both cases, the quality of the school
doesn't make that much of a difference.
But I think what someone might say
in response to my argument is that there's a lot of,
I'm just using school quality as one potential thing,
but there's a lot of other stuff that goes into that.
And something like intelligence
compared to something that goes into height.
I'm not entirely sure about that
because there's loads of interesting
biological stuff going on with nutritional intakes and all the sort of stuff that
happens when you're trying to understand height and almost inevitably these traits that we think
are really straightforward. When we do more in-depth genetic analysis of them, even something like
eye color, it becomes a much more complex story as to exactly what's going on. There were some
papers recently on eye color that were basically saying, like, the
story is much more complicated than we thought.
But that's also the case for things like intelligence, education, personality, all that sort
of stuff.
But I think there's this kind of allergy to even talking about stuff like education,
as if it could be studied in a kind of genetic sense or even among some people,
like they think that it can't even be studied in a psychological sense as well, like it's a purely
social thing, like it's ridiculous to try and quantify it and there's a lot of opposition to
quantification and using standardized tests and all that kind of stuff in these kind of contexts.
So yeah, I think people have this strange double standard
when it comes to that.
And also there's a double standard in the opposite direction,
which is they're very happy to, when someone says,
you know, this particular, you know, intervention
in the environment, whether it's teachers doing
something different or parents doing something different or the government doing something different, you know, this had a big effect on
kids' educational attainment, for instance. Using those same kind of methods, you could look at, you know, genetic influences too, like polygenic scores, using the same kind of statistics and the same kind of stuff.
And yet people are not happy to take the same,
draw the same conclusions there.
So there's double standards everywhere.
People are just kind of really,
they feel this real aversion to genetic explanations.
Well, so tell me about where this is coming from,
because it seems like — here's the bro science explanation, right? —
that coming into talking about behavioral genetics, group
differences and individual differences got
lumped in together, group differences got
used by people that have some pretty nasty ideas about
better and worse types of individuals,
and that has caused behavioral genetics, downstream,
to kind of be lumped in with quasi-racist ideas.
That's one reason.
Another reason is we live in a meritocracy.
And in a meritocracy, you own your successes and you own your failures. Being told
that someone has a predisposition toward being more successful ahead of arriving into the
world — given that success and your achievements in life are one of the fundamental things that you take your status and your wellbeing and your sense of everything
from — also feels very unfair. You know, in a world that's trying to give people
equality of opportunity, realizing that people start, like literally start the race at very,
very varying levels is kind of a hard thing to bring in here. What do you want us to do? Do you want us to try and normalize for genetic predisposition
before people get to school?
That doesn't seem very fair.
Okay, what happens if we completely flatten the environment
so that everyone gets the same?
Okay, so what you're saying then
is that the only differences in outcome
are going to be exclusively genetic.
Well, God, that doesn't feel very fair either.
Yeah, yeah, absolutely. And the funny thing is that I think people, you know, this is something
which the average person understands very, very well. There's that paper that showed that,
you know, the people who are the most accurate at predicting the heritability of
different traits. So how heritable things are, obesity, height,
intelligence, whatever.
The people who are most accurate at saying,
yeah, genetics are probably involved in that,
are mothers with more than one child, right?
So there's this idea that if you've got one child,
then that's interesting, but if you've got another one,
you can see, all right, I'm not parenting this kid,
particularly differently, I'm not doing anything
massively different,
and yet this child is totally different. And to be honest, if you know
brothers or sisters or, you know, whatever, siblings of any kind, like you know that they differ
dramatically in their personality and it's probably not that they were like expressly
parented in a particularly different way, there are genetics that influence that. So we all realize that.
Some kids are starting school with slightly higher
aptitude for sitting down and concentrating,
and some are not.
But the thing is, the mistake is to assume
that there's only one political outcome from that.
There's only one political interpretation of that,
because the more liberal interpretation of that is,
well, if people start off in different places, then you have to think in terms of
John Rawls, like the veil of ignorance — and this is what Paige Harden talked about in
her book, the veil of ignorance — which is, you know, if you didn't know what traits
you were going to have, you could be entered into the genetic lottery, as Paige puts it,
and start your life as any one of many
people. How should we set up the world such that the world is as fair as it possibly
could be to whatever you might be like? And so that implies a lot of leveling off. That
implies a lot of extra resources for kids who are struggling, making sure that things are
as equal as possible, making sure that opportunities are given to kids
who maybe wouldn't otherwise get them.
So you could draw a very progressive liberal conclusion
from that, just as much as you could draw the conclusion of,
oh well, we shouldn't bother helping people
because genetics is having an effect.
Yeah, it does seem like an incredibly
illiberal way to view the situation: that evidence over there that could assist us in helping people to get themselves to the kind of life and the kind of world that they want to have —
we shouldn't know about that? Well, hang on: whether you decide to believe in it or not doesn't mitigate the impact of it on those people's lives.
Yeah, and I think I had a great conversation with Paige,
and I appreciated the fact that she engaged with,
given that she's very much from the left,
an incredibly difficult circle to square, right?
It is very hard to work out how you combine
left-leaning politics with a deep understanding
of behavioral genetics and the impact that
they seem to have on people's outcomes in life.
Yeah, totally.
We must admit, as behavioural geneticists, that one reason
that people have drawn the kind of more right-leaning conclusion — and fear the right-leaning
conclusion — is that there are lots of figures in history who have drawn that conclusion,
right?
But if you go right to the very start of when people were inventing intelligence tests and so on,
this was very much the idea.
So Godfrey Thomson, who was this very well-known intelligence researcher at the start of the 20th century,
at University of Edinburgh, there's a building named after him in the University of Edinburgh Education School, where by the way, they probably don't teach that much about IQ and so on anymore,
but it's somewhat ironic that it's in a building named after him, but fair enough.
He was of the opinion — and he didn't do so much genetic stuff — that the
intelligence differences that kids have imply that we should spend exactly the same on every single child.
So, like,
no matter whether they're struggling or doing really well,
every single child should get the same amount of resources from the state.
He wanted things to be as equal as possible, and yet he was one of the pioneering
intelligence researchers and made major contributions to our understanding
of exactly what intelligence is and what the G factor is, the general factor of intelligence,
all this kind of stuff. And the general influence that he and other people around
about the time had on the education system was that there are all these people out there
who would not otherwise be noticed by the good schools, because of course at that point getting into a good school meant knowing people, your parents being rich and all that sort of stuff.
And so we should try and have an objective way of doing that. And that's why they influenced the Butler Education Act in the 1940s. And that's how we got the grammar school system that we had in the UK for a long time. Now, the grammar school system turned out to be not particularly effective
because the people who went to grammar school had a great time but the people who didn't pass
the 11-plus, which was the IQ test that you did, ended up in secondary
moderns, which were very, very poorly resourced, and there are all these, like, hellish stories from, you know, the latter half of the 20th century,
of people who went to secondary moderns and had a horrible time
in these crumbling buildings and all this kind of stuff.
But that was not the intention.
Like the intention of the psychologists who set this up was a kind of noble one,
which was that, you know, we're going to try and find
smart kids from poor backgrounds and give them an academic
curriculum that is appropriate to them. Not give them more resources, just give them an academic
curriculum that includes stuff that's more appropriate to them, and give a different curriculum
to people who are not as academically inclined. So that was the original intention.
Subsequent to that, there came lots of people
who did in fact believe in eugenic theories
and a lot of the stuff that we now hear
IQ tests associated with.
But the original IQ tests were made for noble reasons.
The very, very, very first one, by Alfred Binet in the 1910s,
was invented to help kids who were struggling in school.
Kids who maybe had what we would now call
special educational needs — it was invented
to have an objective way of diagnosing them
and giving them extra attention and help.
So the idea that if you believe
this stuff you must therefore believe in, like, predestination and the immutability of traits,
and you must have these right-wing views on things — it's just
completely ahistorical, and it doesn't follow from the science.
Am I right in saying that the best evidence
we have at the moment is that IQ correlates 0.8 by your 60s
or something like that, with your parents —
80%, is that right?
Well, the heritability — so there's this weird result
where the heritability goes up over time.
So, like, environment seems to have more of an influence
at the start of your life,
and then it gets less important in, you know, its impact on intelligence. And then, yeah,
the heritability — that is, the percentage of the variance in the trait that is
associated with genetic differences — goes up. It's 60, 70, 80% by the time you do it in older folks. You know,
there's studies like the VETS, the Vietnam Experience Twin Study, where you've got all these
twins who happened to be in Vietnam — and that's not the most important part of it,
but you've got all these twins who were there and they did an IQ test and so on. So they've got a younger
IQ test and an older IQ test: one that they did when they went to Vietnam,
when they were like 20 years old,
and one that they did later in life.
And you can see that the heritability increases over time.
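For reference, the twin-study arithmetic behind numbers like these is Falconer's formula, which splits trait variance using the gap between identical-twin and fraternal-twin correlations. A minimal sketch, with illustrative correlations rather than the actual VETS estimates:

```python
# Falconer's decomposition, assuming equal environments for both twin types.
r_mz = 0.80  # IQ correlation, identical twins (~100% of segregating DNA shared)
r_dz = 0.45  # IQ correlation, fraternal twins (~50% shared on average)

h2 = 2 * (r_mz - r_dz)  # heritability: variance tied to genetic differences
c2 = 2 * r_dz - r_mz    # shared (family) environment
e2 = 1 - r_mz           # non-shared environment plus measurement error

print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")  # 0.70, 0.10, 0.20
```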
Yeah, well, I mean, that's fascinating.
The fact that we have so much of ourselves
that is a part of our parents is kind of something
that's beautiful, but yeah, I think the fact that
Plomin said it does not predetermine, but it does
predispose — that was how he put it to me.
Yeah, Robert has some of these phrases like that — it makes a difference,
but it doesn't determine, or something like that.
Yeah, well, the best way to express that...
I'm not sure, but there was a tweet that I put out that I got a lot of pushback for when I first had
Plomin on, and it was a quote from him from the episode.
And he said something along the lines of,
the single most important thing that you can do
for your child's future happiness,
educational outcomes and income is the partner
that you choose to have them with.
Yeah, I can see why people would be upset by that.
But that's one of the things that we know will make a difference.
This other stuff in the environment, like interventions, we are much less certain about.
You know, for instance, things like
breastfeeding — does that improve people's intelligence? Like, that's a big, big thing.
Diet and nutrition.
Yeah, exactly.
All sorts of things like that.
We just don't know, the studies are not that good on that.
Whereas the studies are pretty reliable
in knowing that genetics has an effect on people's outcomes.
That's a good question.
There has been this big replication crisis everywhere
at the moment, and it feels like studies
that were often used either
bro scientifically or really scientifically are being thrown out. How much has behavioral
genetics been axed by this recent replication crisis?
Well, I think behavioral genetics — I wrote about this in my most recent book, Science
Fictions, which is about the replication crisis, exactly. And I think behavioral genetics
was one of the very first fields to be completely smashed by the replication crisis
in one of its respects.
So in behavior genetics, you've got lots of different types
of research.
So you've got research with twins,
looking at the difference between identical twins
and fraternal twins and inferring things from that
about how genetic particular traits are.
So that's one thing.
But then you've got the molecular stuff. And when I was a PhD student, so like 10 years ago,
all the way through that we had candidate gene research. So it was like, this particular gene
that we have a theory about is related to depression, for instance. Or something more complicated,
like: this gene, if you get abused as a child, is going to relate to depression.
This particular gene is related to memory skills, and so it makes you smarter if you have
this particular variant of this gene.
And endless research on that, loads and loads and loads of papers published across all the
top journals and many of the non-top journals, millions of dollars of research funding, people basing their entire careers on it, writing their
PhD dissertations, you know, doing job talks, getting employed at top universities
on the basis of this candidate gene research.
And it was all — as it turns out, not all, but like 99 percent —
Nonsense.
It was all unreplicable, almost all unreplicable. There's like one
example I can think of where it didn't fail. When people thought, wait a second, should we try
replicating these candidate gene results in big studies — so not just like 100 people,
but like several thousand people — to see if this particular genetic variant is linked to,
you know, variation in memory and so on? And these were big effects. Like, there were studies published
in some of the top journals that were like: 20% of the variation in people's memory abilities is
explained by this one genetic variant. Looking back it seems ridiculous, but at the time that
was very much you know the kind of standard study that you would do.
It all failed.
People tried to replicate it.
They couldn't find that those genes were in fact related to the disorders and the traits
that they were supposed to be related to.
And there was a massive replication crisis.
And now we've moved on, instead of doing these candidate gene studies, we've moved on
to genome-wide association studies, which instead of looking at like one genetic variant, you look at literally hundreds of thousands of genetic
variants. And it turns out that for things that are complicated, even something like height,
but also things like educational attainment and so on, it's not that there's one gene that
has a big effect, it's that there's tens of thousands of genes that each have this little
infinitesimal effect and all builds up.
So, you know, you might have a difference here and I might have a difference here.
And there's so many, many, many, many differences that we have that make me slightly taller or slightly shorter than you or whatever it is.
And that's now how we think about things.
And in those genome-wide association studies, we are starting to find replicable evidence now. But, like,
behavioural genetics was completely, you know, trashed by the replication crisis.
You know, in the early 2000s, kind of before we started even talking about the replication
crisis in psychology, this was sort of happening. And, you know, the one that I can think of that
hasn't, you know, failed entirely is the APOE gene, the apolipoprotein E allele,
which is: if you've got one allele of this, I think you've got substantially higher, like
twice, the risk of getting Alzheimer's disease; if you've got two alleles, something
like 10 times the risk.
And that comes out every time for Alzheimer's disease — that's like the gene that we know
about that's related to Alzheimer's disease.
Yeah, I've got one of them — I did my 23andMe — which is,
you know, enjoy me while you can.
Exactly.
I don't like to.
Oh, it's cool.
But, you know, behavioral genetics has made substantial reforms and we don't do that
sort of research anymore — or mostly; I mean, you see the odd paper coming up. Like, I'm asked to
review the odd candidate gene study and, like, the review can just be: this will not replicate.
It's unreplicable. It's just not valid.
And so, yeah, as I say, the stuff we've moved on to, which is the kind of world of genome-wide
association studies, polygenic scores, all that kind of stuff, is much more reliable in
its basic associations.
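A sketch of what a polygenic score actually is, on the genome-wide logic just described: tens of thousands of tiny weights, summed per person. The genotypes and effect sizes here are simulated, not from any real GWAS:

```python
# Simulated polygenic score: many variants, each with an infinitesimal effect.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_variants = 1_000, 10_000

# Each person carries 0, 1 or 2 copies of the effect allele at each variant.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))
# Per-variant GWAS weights: tiny, as Fisher's 1918 polygenic model expects.
betas = rng.normal(scale=0.01, size=n_variants)

pgs = genotypes @ betas  # weighted allele count = one score per person
print(pgs[:5])
```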
However, there's a whole bunch of other things —
other, like, concerns that are around that sort of research. And I don't mean ethical concerns
and political concerns, I mean scientific concerns. So, like, how much does assortative
mating — I mean, people having kids with people who are like them — bias our estimates?
How much does not having diverse samples mean that we're not learning
enough about actual humanity in general. So like the vast majority of this research has
been done on people from a European background. White American people or white people from
Europe, basically. And so when you try and use those polygenic scores, they don't predict as well when you look at
people from other backgrounds. So we need to do a lot more
research on that to really understand, you know, how these
traits are made up. And, yeah, there's all sorts
of other, you know, statistical and technical
concerns with that. But we are getting more basic, you
know, replications done. Whereas in the candidate
gene research, we just went way ahead of where the science actually was and started saying,
we found the gene for intelligence, the gene for memory, the gene for depression, all
the sort of stuff. And it was completely wrong. And I think we have to constantly remember
and have it at the back of our minds, or maybe in the front of our minds at all times, like
for decades, maybe a decade, we were going
completely on the wrong track and everyone was, you know, maybe not everyone, there
were some people who were saying, for instance:
didn't Ronald Fisher, the geneticist and statistician, write a paper in 1918 that said
we should expect really tiny effects of genes and not really big effects? Isn't this
research kind of going against that?
Some people did point that out at the time, but nobody listened to them.
And we had a whole field spending endless money, taxpayers' money in many cases, just
wasting it on statistically invalid studies.
So we should always remember that.
And I think even areas that are now a bit more rigorous, like behaviour genetics, need
to bear that in mind.
It's like the — I don't know if this is ahistorical,
but you know when a Roman general came back
and did a triumph in Rome,
there was always, like, a slave standing on the back
of the chariot saying: remember, you will die.
I mean, I think that might not be—
I know, I brought that up about Marcus Aurelius
the other day, as he walked through the streets of Rome
and everyone was hailing him as this philosopher God King
and he was this benevolent leader,
he would have someone behind him the whole time
that would say, you are only a man.
Right, precisely.
So I think we need to constantly have that.
Like there was a replication crisis in your field
just a few years ago.
You know, so.
Okay, so that's the genetic side of things,
the looking at the individual genes
themselves. In terms of the behavioral genetics, environment-versus-genes side, the heritability side,
how robust has that been in terms of replicating? I think that if you look back at the twin
stuff and the family stuff, not actually trying to go in and measure DNA
differences, but looking at family differences. That stuff is much more robust, much more
rigorous. Those studies are relatively straightforward to do in the sense that they were doing
them early in the 20th century and so on. There's an interesting potential fraud case with one of the very famous studies on intelligence and
heritability —
Cyril Burt, if anyone wants to look him up; an interesting case there.
I'm actually halfway through writing an article about this and whether, you know,
Cyril Burt was a fraud or not, because there are people that have questioned that.
I think probably he was a fraud. But anyway, his results have been confirmed by subsequent research, which is that,
yeah, if you ask, you know, to what extent are traits like intelligence related to
genetic differences between people, you'll robustly get the same answer.
And then if you take a step back and just, you know, forget the genetics and ask
things like: is there a general factor of intelligence? Are people who are good at one type of intelligence test — do
they tend to be good at every type of intelligence test? That's one of the most replicable results
in psychology. You get that every single time: tests correlate positively with each other.
And yet a lot of people would be very upset to hear that.
Funny, isn't it? It's quite ironic that people are upset by one of the most rigorous and
replicable findings, but it's because of this thing I mentioned right at the start.
People assume that when you say that, you must mean all this other stuff. It's like, no,
when I say that, all I mean is that people who are good at one type of test tend to
be good at all other ones. Are there multiple types of intelligences?
Mm, well, it depends on what you mean.
So EQ — is EQ a thing?
Well, there's a few things to say.
If you do an IQ test, you will notice
that there are lots of different types of tests.
So you'll do a memory test, a vocabulary test, a speed test,
all sorts of spatial rotation tests — shape rotators, as everyone was talking about on Twitter.
Wordcels.
Yeah, wordcels versus shape rotators. Are you a wordcel or a shape rotator?
I think I might be a... Well, I don't know. If I look at people in the physics department,
I definitely feel like a wordcel. But those types of tests could all be described as, like, specific
skills. If you look back at Charles Spearman in the early 1910s, who was the first person
to talk about general intelligence, we remember him very much for talking about general intelligence,
but he also talked about specific skills too. The general factor of intelligence explains, like, 40% to 50% of the variation across all
different tests.
That's the rigorous finding.
That's what comes out.
But there are specific skills too.
And what that means is, they can vary independently of the general intelligence.
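The "general factor explains 40% to 50%" claim is a factor-analytic one. A toy numerical sketch — the inter-test correlations are invented — of how a positive manifold produces one dominant factor:

```python
# Toy positive manifold: five tests that all correlate ~0.35 with each other.
import numpy as np

R = np.full((5, 5), 0.35)  # hypothetical inter-test correlation matrix
np.fill_diagonal(R, 1.0)

eigenvalues = np.linalg.eigvalsh(R)
g_share = eigenvalues.max() / eigenvalues.sum()
print(f"first factor explains ~{g_share:.0%} of the variance")  # ~48%
```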
One silly extreme example, just to illustrate it, would be: if you sat down and memorized the dictionary every single
day, then your vocabulary — your specific skill of vocabulary — would improve, whereas that's not going to help your
shape rotation, you know, or whatever. And in fact, there's a whole literature on, like, working memory training — you know, those brain training games that were quite a thing for a while. People did research on that for a long time because there was this initial finding,
published in a big journal, Proceedings of the National Academy of Sciences, that said that if you do
that stuff, it improves your general ability. Everyone was like,
oh my god, you know, we've got to try and replicate this. And then it didn't replicate.
It failed when people tried to do studies that had better controls and all that sort of stuff.
But that was the idea, you know, that training up one task can transfer to other ones. At least for the case of working memory, that seems not to be the case.
But yeah, so that's one thing to say, is that there are all these different specific skills.
They tend to correlate together, but they don't correlate together at, like, one,
so they are different to some degree. So if you mean multiple intelligences in that respect,
there are specific skills, that's the most obvious one. Then there's, is there this thing called
multiple intelligences theory that was made up by Howard Gardner at Harvard University?
And when I say made up, I use that word advisedly, because he didn't have any data,
he just decided that there were
these different types of intelligence. It's not the same as verbal, auditory, kinesthetic learning,
which is another thing which schools love, but it's that kind of thing that there are,
there are these multiple different types of intelligence — arithmetic intelligence,
and so on. And over the years, he's added new intelligences, so he's decided that there are new ones.
There's existential intelligence now, interpersonal and intrapersonal intelligence.
Naturalistic intelligence, which is about like how much people know about plants in the garden
and stuff, and he's decided that that's an intelligence. Now, those are skills. Those are
real things that we care about. It's cool to meet someone who knows the difference between plants, you know,
when you're out in the woods or whatever.
That's great.
But is that what we would call an intelligence in the same way that we would talk
about like, you know, verbal and spatial abilities?
Probably not.
And does it mean that there's no such thing as the general factor of intelligence?
No, because there's actually no empirical content to the multiple intelligences
theory whatsoever. It's just one guy's opinion, which is remarkable.
Like, a Harvard psychologist had a massive impact on the world and the way people think about
how the human mind works, having done no research.
You're saying that there's still potential for my academic career to take that path?
Yeah, you don't need to do any research. Just come up with an idea that people like and they will
spread it around every school. I'm really into doing that.
Well, exactly. Exactly. So that's a good thing. Then you mentioned emotional intelligence.
EQ eats IQ for breakfast. That's the...
Which is, like, another thing. And my understanding of that is,
a few years ago, there was a meta-analysis done on all the research predicting job performance from EQ, IQ and personality.
And what they found was, if you just look at EQ on its own, it predicts job performance.
People who have higher emotional intelligence do better at work.
So there you go, that correlation is there. If you just look at the bivariate correlation between those two things, totally robust there.
However, if you put intelligence and personality,
like the big five personality factors,
you had Christian Jarrett on the show recently
talking about that sort of thing,
into the same equation, then EQ no longer has an effect.
And that's in predicting job outcomes.
And that's because EQ is just a redescription of stuff that we already knew about.
So it's just a redescription of caring about what people's feelings and emotions
are.
That's one part of it and some of that comes across in some personality factors.
And being smart enough to work out in your mind: okay, that person must be thinking this way;
you know, if I say this to them, it's going to have this effect. Like, it takes quite a smart person to be able to —
I mean,
a slightly derogatory way of putting it would be — manipulate people in that way.
So in predicting job performance, it's just a redescription.
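The statistical move Stuart describes — a raw correlation that survives on its own but vanishes once IQ and the Big Five enter the same regression — can be sketched on simulated data. All coefficients below are invented; EQ is deliberately constructed as nothing but a blend of the other two predictors:

```python
# Incremental-validity sketch: EQ built purely from IQ + conscientiousness.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
iq = rng.normal(size=n)
consc = rng.normal(size=n)
eq = 0.6 * iq + 0.6 * consc + rng.normal(scale=0.5, size=n)
perf = 0.5 * iq + 0.5 * consc + rng.normal(size=n)  # job performance

print("bivariate r(EQ, performance):", round(np.corrcoef(eq, perf)[0, 1], 2))

# Regress performance on IQ, conscientiousness and EQ together.
X = np.column_stack([np.ones(n), iq, consc, eq])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print("EQ coefficient after controls:", round(beta[3], 2))  # ~0
```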
So — I can't remember which way round it is — people talk about the jingle and jangle fallacies, right?
And one of them, the jingle fallacy, is describing the same thing with a different name
and therefore thinking it's something different.
And then the jangle fallacy is calling different things the same name and thinking that they must be the same.
Maybe it's the opposite way round; I can't remember which is jingle and which is jangle.
But psychologists do an
awful lot of that — like, re-describing things with a new name and thinking it's something different.
One example is grit, right? So there was this big book in like 20...
Angela Duckworth?
Yes, Angela Duckworth, with the book.
About 2016, something like that, it came out — Grit. It was mad: in every single school in
the country, everyone loves it, like the power of persistence
and passion and all that sort of stuff.
If you look at the studies,
it's literally correlated at 0.9.
on a scale of minus one to one,
it's correlated at 0.9.
Conscientiousness.
With conscientiousness — it's just the same thing.
I didn't even know that, and I'm pretty good at that.
Yeah, well, there you go.
And there's meta-analysis that have come out
saying that it really is just a re-description.
So it's not wrong — it's not that grit is a load of nonsense,
and it's also not that emotional intelligence
is a load of nonsense.
It's just like, it's a jangle.
It's just a new way of talking about something
that we already knew about.
And so it's not clear
what the contribution there is, apart from
maybe popularizing the old stuff.
That's what I would say.
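As a quick bit of arithmetic on that jangle: squaring the correlation gives the shared variance, so two scales correlating at 0.9 overlap about 81% before any correction for measurement error — which is why "it's just conscientiousness" is the natural reading:

```python
# Shared variance between grit and conscientiousness at r = 0.9.
r = 0.9
print(f"shared variance: {r ** 2:.0%}")  # 81%
```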
I would say that, given the way that pop psych works at the moment,
creating a catchy brand
is 90% of the battle.
I know that growth and fixed mindsets
came under the ax.
Actually, why don't we do this?
Give me some of your favorite
eviscerated psychological concepts from the replication crisis.
What's dying, bleeding out on the cutting room floor?
Well, I mean, the classic one, I'm very happy to talk about growth mindset.
So we can come to that.
But the classic one, the one that kicked off the replication crisis in
many ways, was priming — social priming — which is the idea that there are all these things in the environment, words, phrases, ideas, and that when you see them, it changes the whole
way that you act.
So the classic study, which I remember reading as an undergrad, and I was taught it in
lectures, and I remember reading the study.
I was like, wow, this is amazing.
It was in the textbook.
Incredible.
People, you sit people down in a lab with a computer, they're doing like some task where
they have to like tell whether a sentence is a real sentence or a word is a real word.
Like, there's words and non-words, you know, and some of the words that come up in one
of the conditions, you know, for half the participants are to do with old age.
So they say things like Zimmer frame or fragile.
Yeah, exactly fragile.
One of the words that I remember that was used in study was Florida because
apparently people in the US associate that with old people.
Fantastic.
It seems a bit of a stretch, but that's what they say.
And then what they found was that, you know, for the other half of the
participants, it was just random words that didn't have any particular theme.
And for, you know, the half that saw the words that are
to do with old people, they walked more slowly out of the lab, right? So they, they,
when they were leaving the lab, there were like, people measuring with stopwatches and they
walked more slowly. That was apparently the finding. I remember reading that in the social psychology
literature and going, wow, that's incredible, it's amazing. And there
was a whole range of studies like this showing people the American flag makes them
much more likely to vote Republican at the next election.
Even years later, you show them the American flag once and they'll vote Republican much
later.
People, you know, serious scientists published that in scientific journals.
Serious science journalists wrote about that in, you know, whatever
magazines they were writing about science in, and didn't think: is that a bit implausible? Another classic example, which I don't think
anyone's ever attempted to replicate, but: you go into a
room and there's a box in the middle of the room, like a big cardboard box, and half
the participants sit in the box and they do a creativity task, like how
many uses for a brick can you think of — that sort of creativity task — and half of the participants
sit just next to the box, right? And what they found was that the participants who were
sitting next to the box had a higher creativity score, and they thought it was because they
had the idea and their head activated of thinking outside the box.
Right, I mean, literally, this is how absurd, you know, some of the social psychology was.
I know, I know. That was published in Psychological Science, one of the top
journals of the field in 2012, I think. And so that idea of priming or social priming,
behavioral priming has really fallen by the wayside. Linguistic priming, I think is pretty
solid. So like, if I say an active sentence,
then you're more likely to say an active sentence back rather than a passive sentence.
You know, that sort of thing, like that slightly more boring stuff, is
totally real. But the kind of Derren Brown thing, where you, like, just show
someone one phrase and it totally changes their behavior — and I think Derren Brown
actually, like, picked up on a lot of this stuff and put it into his magic act.
I don't think he was doing priming either. I think he was doing magic tricks, which is totally fine, but he claimed that he was doing this kind of putting an idea in your head sort of
thing. That stuff has completely fallen by the wayside. You don't see new studies on that kind of
thing very much anymore, because there were some very prominent attempts to replicate it.
Including the one with the people walking slowly out of the lab. Instead of
people with stopwatches — who, by the way, knew what the hypothesis of the experiment
was — they put lasers, like infrared things, along the corridor.
Like the 100 meters.
Yeah, exactly, so that they would break them as they walked through, and there was no
difference. When you do it objectively like that, there's no difference.
In the people who'd seen the old words versus not.
Growth and fixed mindset.
Yes, well, that's another one. I think the story of growth and fixed mindset is a bit different,
in that it's not complete bollocks in the sense that the priming stuff was, but it was massively overplayed.
If you look at the original
discussions of growth mindset — late 90s, early 2000s — it was:
this is going to change your life. Change the way you talk to your partner. Change the way
your kids do at school. Change the way you think about your life. It's going to be massive.
It's going to have an enormous effect. They published a paper in Science — so, like, what's meant to be the
second best journal in the world, saying that growth mindset could solve the Israel-Palestine
peace process. Like I'm not kidding, that's a paper that exists in published in science.
And it's all, like, very borderline statistical results — the school results are so-so. Actually,
I'd forgotten about that; I feel like I should go back and do a little debunking of it, because I've not seen anyone
talk about it. But, you know, over time, what you now find is that the people who
are doing research on growth mindset claim things like growth mindset is a cheap intervention
that can have a small to modest effect on kids' educational attainment. That is, if you go
into classrooms and you teach kids: working hard is good, you can change your level of ability, you can change your skills, you know — then that does seem
to have a small effect on kids' behaviour. As I think you would predict it would; that makes
sense. If you tell kids, you're completely stuck,
there's nothing you can do, then, you know,
I can totally imagine that many of them will
take that to heart, and it will at least have a small effect. There seem to be slightly bigger effects on kids
who are from very low income backgrounds, for instance,
so that makes sense too.
So, like, the claims used to be these
absolutely dramatic claims that it would change your life.
Now it's like, this should be part of the toolkit
in education.
And that's good.
And that's because people have come along
and done meta-analyses of all the research
and found that the effects are really, really small on average.
Like, the effects are not earth-shattering effects.
They're just about there, sort of thing.
And that's much better, but I haven't seen anyone say,
oh sorry, we misled the entire world for decades about,
about how much of an effect these things will have.
I feel like this kind of thing might only be being used
in a few schools now if they had talked about it
on the level all the way through.
But it's in every school in the country
because the claims being made for it were revolutionary.
When in fact the data just didn't back that up at all.
I had David Robson on to talk about the expectation effect. Have you seen it?
I mean, I'm aware of the sort of effect, but I don't know, is there a book or?
Yes, yeah, yeah. So he wrote a book — he's a science writer, a very good one, really, really good.
I highly recommend you go and check it out.
And so he's talking about basically the placebo effect
but across multiple different sorts of domains, right?
The expectation that you have about your outcomes
can very heavily influence the outcomes.
My two favorite studies from that,
one of them was talking about how
gluten intolerance has 10x'd, from 3% to 30%, over the last 20 years.
So they brought people into a food hall
and put them in a study setting.
Some people did and some did not have self-reported gluten
intolerances.
They told everyone that they were going to give them a meal
which had gluten in it — when it had no gluten in it.
Right.
People were breaking out in hives,
running to the toilet with diarrhea,
they had inflammation, all of this stuff.
The other one was a study about VO2 max tests.
It turns out that a particular type of genetic variation allows you to blow off CO2 and upregulate
oxygen more efficiently.
People were brought in.
They were split randomly into two groups, each a mix of people who do and do not have the genetic
variant.
One group was told you do, one group was told you don't, you should do really well, you
should do really badly.
Surprise, surprise, the group that was told that they should do badly did worse overall.
However, what they
found was that people in the group that were told they would do worse, but did have the
correct genetic variant to blow off CO2, didn't do as well as the people that didn't have
the genetic variation, but were told that they should. And David's synopsis for this was:
your expectations are more powerful than your genes. Now, he's being a little bit tongue-in-cheek with that.
My point being, you have the opportunity with the expectation effect to create a placebo
sense of what the outcome may be — is it justified to jazz up some sort of effect?
And is there maybe an argument to be made that the
more sexy you make it, the more outlandish you make it sound, the more it ends up becoming
a self-fulfilling prophecy? You do end up getting people who believe in ego depletion. I'm
sure that you've seen this. People that believe that willpower is a limited resource, have limited
willpower. People that don't believe that that's the case seem not to. I'm not sure how replicable that is.
Well, I think there's been some controversy
over that replication.
But yeah, I get the idea.
Is there a justification for people jazzing up
any kind of psychological tool that you may be able to use
because by doing that, you increase the expectation effect,
which does genuinely increase the effect.
The only issue is that it's not happening via the mechanism that's claimed.
Yeah, I mean, well, it's not just psychological stuff. I mean, if you look at like the whole
term of bedside manner that doctors use, you know, doctors can have a big effect on how their
patients do by just being nice to them or acting in a way that acting in a way that's not like,
you know, really harsh or accusatory or whatever. And I think any doctor will tell way that's not like really harsh or accusatory or whatever. I think any doctor
will tell you that that's the case. However, I think we really, the most important thing
as scientists that we have, when you're doing an intervention in a school and you're a teacher,
I think jazzing it up, whatever, carry on. If it helps, that's completely fine. But as scientists, our number one thing is
to try and get to the truth. And we need to completely control as rigorously as possible for
these kind of effects. And that's really hard in a behavioral context. It's really hard to
make the control group feel like they're doing something similar to the kind of active treatment
group. It's really difficult. They're maybe not having exactly the same kind of intervention.
If it's one of those video game studies that I mentioned before, there may be, you know, there were loads of studies published
where the control group was just people who didn't do anything versus people who were playing the video game.
And what you want is someone to be playing like a different video game, which you don't think has the active ingredient of like the brain training or whatever it is. But that's hard. And it's so hard that when you look across studies, you find that tons and tons and
tons of studies, even you know randomized medical trials, don't really have very good controls,
whether it's like a placebo pill or whether it's just, you know, treatment as usual or whatever.
People don't put as much effort as they should
into these controls.
And I think you're absolutely right.
There are these big expectation effects.
You've got to try your best to rule them out
because it's a big source of bias in these kind of studies.
I'm just reading a meta-analysis on homeopathy right now
that claims homeopathy helps people's ADHD.
I'm just writing a blog about it. It's a very
interesting case of a meta-analysis, by the way, which does everything right. It was pre-registered.
It had a bias check. It had a publication bias check. It was all done by the book. And yet,
obviously, it's a load of nonsense, because it's homeopathy. So, like, you can follow all the rules
and still not have a real result. But those are rules that people need to
follow a lot more. And when people look at the quality of trials, medical trials, over the years,
they do find that the quality has been increasing somewhat, you know, since the
latter half of the 20th century:
people are getting better at doing controls,
people are getting better at blinding,
people are getting better at doing randomized studies now.
So like if you look at a randomized controlled trial now,
on average, it will be better than a randomized controlled trial
from like, you know, the 1980s or even 1990s,
a little bit on average.
That's what the best data shows, as far as I'm aware. But yeah, these expectation effects still come up in loads of different trials.
So I've just realized, my question was: should we not be telling the public something which can benefit them, by using the expectation effect to fill in gaps in the magnitude of something that's maybe quite small? And what you're saying is that science needs to protect itself from this by using unbelievably well-quarantined controls, making sure that this doesn't have an impact, because the expectation effect can be such a big deal. In that case, it works in both directions: you can influence what people think publicly.
All right, the other one that I saw, which I haven't spoken to anybody about yet, and you're the guy: what happened with the recent SSRI and depression stuff?
Because that demonstrates, I think, quite nicely,
how important it is to get this right
because you're potentially going to commit tens of millions,
hundreds of millions of people, perhaps, over several
decades to a particular course of treatment that may be based on something that isn't legit.
Have you looked at this?
Yeah, it's funny because it relates back to the homeopathy thing that I just mentioned
because in that homeopathy meta-analysis, they say, well, look, we know that there can't
be any actual molecules of the substance left in the homeopathic remedy,
because it's been diluted so many times that there aren't any molecules of whatever active ingredient left.
But the randomized controlled trials show that there is overall an effect, and therefore we don't need to know the mechanism,
we just know it has an effect. And I think that they are wrong about
that. But it was interesting that they said that, because under most circumstances that argument actually sounds quite good to me, just not in the case of homeopathy, where, given the laws of physics as we know them, it's literally impossible for it to be having an effect. But the SSRIs, I think, are interesting, because they're making a similar argument. They're saying... so, this paper that came out recently
from, I think it was University College London,
said, when we look at it... so, the theory underlying SSRI antidepressants is that depression relates to differences in the amount of serotonin that people have in their brains, and the drugs mean that there's more serotonin just going around in there. And it was always a bit of a vague theory. I think most researchers would say that the kind of chemical imbalance theory is a very, very high-level thing, something that maybe doctors will say to their patients, but it's not actually that justified by the science.
And indeed, this study kind of confirmed that, which is that there's no obvious differences
in the amount of serotonin in the brain of people who are depressed versus people who aren't.
Now, if you think back to the homeopathy thing, then it's like, well, okay, maybe the mechanism that we think has an effect isn't there.
But if you look at SSRIs, they're not like homeopathy, in that they have big side effects.
People who have SSRIs have a huge range of different side effects.
So there is something active going on in there.
And it could be that they're having an effect, even if not via serotonin.
So if you look at the studies on antidepressants: my friend Saloni Dattani did a great series on, you know, the website Our World in Data. You've probably, like everyone, seen the graphs from that website, but they publish a whole lot of other interesting stuff. She did a whole series on antidepressants and the research.
Her overall conclusion, and this is broadly my conclusion too from when I looked at the literature, is: yes, the effects are exaggerated in the literature.
So if you look at the way that antidepressant studies
are published, there was actually a really terrifying study
done on antidepressants.
I think this applies to almost all areas of science,
which is they took registered trials.
So when you do a medical trial,
you've got to register with the government, right?
That's just kind of, you know,
legally you have to do it.
Otherwise, there's no way you'll ever get it published.
So there's registries and they have like all the trials
that have ever been done on them.
And in this particular selection of trials, they found that it was about 50-50. So in about half of the trials the antidepressant didn't work, and in about half it did. But by the time it got to the studies that actually got published in papers, almost all of the negative studies, no one ever sent them for publication, whereas almost all of the positive studies, people did send for publication.
Then, if you look at the few negative studies that actually got published, they kind of shifted the outcome a little bit. They said, well, we were going to measure things on this depression measure, but actually we didn't find a result on that, so we'll talk about a different depression measure, which we did find a result on. So there's this, you know, dredging through the data to find any old thing.
Then, if you look even closer,
you find that even some of the negative ones
were kind of written up in a slightly spun sort of way, a positive sort of way, saying, well, this is very promising, and all this sort of stuff, when actually it was just a null trial. And so there's the literature that we see at the end of the process, and it's almost like a laundering process for the literature.
It doesn't bear much relation to the studies that actually were done, which is really scary.
And I think that happens across all areas.
They actually found in that same paper that it applies to therapy trials as well, not just
antidepressant trials.
So, I mean, that is like a really, really terrifying discovery.
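To see how that filtering distorts things, here's a toy simulation, not the actual study's method; every number in it (the true effect, the sample size, the crude significance rule) is an assumption for illustration:

```python
import random, statistics

random.seed(1)

def one_trial(true_d=0.2, n=50):
    """One two-arm trial with a small true effect.

    Returns the observed standardised effect plus a crude significance
    flag (|d| beyond ~1.96 standard errors approximates p < .05)."""
    drug    = [random.gauss(true_d, 1.0) for _ in range(n)]
    placebo = [random.gauss(0.0,    1.0) for _ in range(n)]
    d = statistics.mean(drug) - statistics.mean(placebo)  # SDs are 1 by construction
    return d, abs(d) > 2 * (2 / n) ** 0.5

trials      = [one_trial() for _ in range(1000)]           # the 'registry'
all_d       = [d for d, _ in trials]
published_d = [d for d, sig in trials if sig]              # only significant trials get written up

print(f"registry:  mean d = {statistics.mean(all_d):.2f} over {len(all_d)} trials")
print(f"published: mean d = {statistics.mean(published_d):.2f} over {len(published_d)} trials")
# The published mean comes out at roughly double the true effect, or more.
```

The registry average sits near the true 0.2, but the published subset, the only part most readers ever see, is inflated well above it.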
Having said that, I still think that if you adjust for that kind of thing,
the publication bias and so on, it is the case that antidepressants do seem to have a small effect on people's levels of depression. I don't think the mechanism matters in that case. I think even if it's not to do with serotonin, they seem to be doing something, and the randomised controlled trials do seem to show that. But then there's massive interpretational differences
in how you look at the numbers. So Irving Kirsch, the famous critic of anti-depressants,
published a book called The Emperor's New Drugs, I think it's called, about anti-depressants.
And he said: an effect size of point two is the average that you get out of studies of antidepressants. So, a point two standard deviation difference in the depression score. What a tiny, pathetic effect; you've got to give up on these drugs, they don't have an effect, that's not going to work at all. Then you look at the most recent meta-analyses that are published, the ones that account for things like the publication bias I was talking about, and they say: effect size of point two, effective medical treatment.
This is really good. Even though it's not a massive effect, this is still going to have
a big impact across the population and all sorts of stuff. So people can look at exactly
the same data and draw massively different interpretations. My interpretation is, we've
got to do a bit better on this. We've got to do proper research and really understand this and get better at treating depression
both from the therapy angle and the drug angle.
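To make the interpretive fight concrete, here's a back-of-envelope reading of d = 0.2 using nothing but the standard normal distribution (my arithmetic, not Kirsch's or the meta-analysts'):

```python
from statistics import NormalDist

d = 0.2          # the standardised mean difference both camps are arguing over
norm = NormalDist()

# 'Common language' effect size: probability that a randomly chosen treated
# patient improves more than a randomly chosen placebo patient.
superiority = norm.cdf(d / 2 ** 0.5)

# Share of treated patients who end up above the placebo-group average.
above_placebo_mean = norm.cdf(d)

print(f"P(treated beats placebo patient): {superiority:.1%}")        # ~55.6%
print(f"treated above placebo average:    {above_placebo_mean:.1%}") # ~57.9%
```

Both camps are reading the same number: "barely better than a coin flip" versus "a real shift that adds up across millions of patients".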
But at the moment, it does look like on average, for each person who uses them, one type of
anti-depressant will have some kind of mild effect, at least on their depression. Now, I think one of the big things that hampers depression research is that we're not very
good at defining what depression actually is.
So there's loads of research on this. You know, we earlier talked about this thing called intelligence, right? And intelligence doesn't exist in the sense that there's not, like, a thing in the brain that we can measure with a ruler that is intelligence.
We infer the existence of intelligence from the fact that there are all these different
tests that you give people and they correlate positively together and there's this thing
that comes out, this latent variable called intelligence, that explains half the variation in the tests and all that sort of stuff.
And so the question is: is there something called depression that's the same kind of thing, which we can infer from all the symptoms?
Insomnia, low moods, just crying sometimes for no reason.
All the different things that come with depression,
is there this thing called depression?
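Here's a toy sketch of that inference, with made-up correlations standing in for real test data; the point is just that one dominant factor falls out of a positive manifold:

```python
# Hypothetical correlation matrix for four cognitive tests: every test
# correlates positively with every other (the 'positive manifold').
R = [
    [1.0, 0.5, 0.4, 0.5],
    [0.5, 1.0, 0.5, 0.4],
    [0.4, 0.5, 1.0, 0.5],
    [0.5, 0.4, 0.5, 1.0],
]

def largest_eigenvalue(matrix, iters=200):
    """Power iteration: the top eigenvalue of a symmetric matrix."""
    v = [1.0] * len(matrix)
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Rv = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
    return sum(Rv[i] * v[i] for i in range(len(v)))  # Rayleigh quotient

share = largest_eigenvalue(R) / len(R)
print(f"first factor explains {share:.0%} of the test variance")  # 60% here
```

With these invented correlations, the first factor carries about 60% of the variance, which is the statistical sense in which a latent variable like "intelligence" comes out of the data.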
And the argument from quite a lot of people, Eiko Fried is one, someone I've known for a long time, a researcher who's done really good research on this, is that there isn't necessarily this thing called depression, and what we should be focusing on is the symptoms. We should be focusing on this kind of network of symptoms, which sometimes bump into each other and cause each other. So, like, the insomnia causes low mood, and the low mood causes you to be angry at your spouse, and so on. There's all this kind of network, and with various statistical approaches you can try to understand this network of effects.
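A minimal sketch of that network idea, with entirely hypothetical symptoms and influence weights:

```python
# Directed 'X feeds into Y' strengths: invented numbers, not a real dataset.
influence = {
    ("insomnia", "low_mood"):         0.6,
    ("low_mood", "irritability"):     0.5,
    ("irritability", "rows_at_home"): 0.7,
    ("rows_at_home", "insomnia"):     0.4,   # the loop closes
}

def step(levels):
    """One time-step: each symptom is pushed up by its upstream symptoms,
    then everything decays slightly (natural recovery)."""
    new = dict(levels)
    for (src, dst), w in influence.items():
        new[dst] = min(1.0, new[dst] + 0.25 * w * levels[src])
    return {s: max(0.0, v - 0.02) for s, v in new.items()}

levels = {"insomnia": 0.8, "low_mood": 0.1, "irritability": 0.1, "rows_at_home": 0.0}
for _ in range(10):
    levels = step(levels)
print({s: round(v, 2) for s, v in levels.items()})
# Insomnia has pulled low mood up, which is starting to drag the downstream
# symptoms up in turn; no single underlying 'depression' variable appears
# anywhere in the model.
```

With these invented weights, ten steps is enough for one upstream symptom to start dragging the rest of the cluster up.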
And that's a different way of looking at things than saying that there is this thing called
depression that causes all the symptoms: this latent variable, which we'll never be able to measure directly. I'm kind of undecided on this
debate. I think it seems quite a strong thing to say that there's no such thing as depression
because a lot of people seem to have very similar symptoms over time. And the general medical approach is to try and treat the underlying cause rather than
symptoms.
And so this would be quite a departure from that.
But I think there's a lot of mileage in this kind of research of examining the symptoms
rather than saying, there's this one thing which we have to try and treat. So, that might be one of the reasons why the trials on antidepressants are all over the
place: they all measure depression slightly differently, people experience depression slightly differently, people are at different phases of depression, and the symptoms might be different, with one symptom causing another in different ways in different people. So it becomes like a moving target. Compare that to doing a test of, say, whether you have the COVID virus in your system: yes, there are difficulties in some ways of measuring that, but it's a much more objective thing than, you know, "are you depressed?". I mean, there's many different ways you can answer that
question. That seems very much like a behavioral geneticist's answer.
I spoke to Plomin about a year ago, and he was telling me he has a predisposition for being fat.
And it was such a great example; it really helped to illustrate how a collection of genes contributes to the outcomes that we get in life. So he said that whenever he walks past a bakery,
if he smells fresh bread, he will be going in
and he will be buying it and he will be eating most of it.
But there are many ways to get fat.
You may be fat because you have an aversion to exercise.
You may be fat because you have down-regulated ghrelin release, ghrelin being the hormone that tells you your stomach is empty, that signals to you to eat. Or maybe that's up-regulated, or something. There's a million different ways for you to be fat, or for you to be fit.
Maybe you don't sleep so much, so you're always up early and because you're up early, it
means that you go and exercise because you've got nothing else to do, right?
There are tons and tons and tons of different things.
The same thing goes for what you're talking about here.
What you talk about when you're referring to depression is a particular milieu of a bunch
of different things that people seem to link together.
Where do they come from?
Is everyone's depression the same?
You're never going to actually get to experience somebody else's brain, so you use these words to describe it, but we all know how culturally influenced a lot of the things that we consider are. Right, we use the language that other people have used. It's precisely why juries aren't allowed, or are told not, to watch the news while they're serving, precisely so as not to influence their opinion. So it does
make a good bit of sense. I also think, what's it called, the Hamilton scale of depression? It's nought to 61 or something like that. There's a lot of them. There's the Beck Depression Inventory and there's a whole...
A litany.
Yes, ones you can use, yeah.
Yes, and I'm pretty sure that no matter what the mechanism is that SSRIs could be working on, they seem to move someone about a point on one of those scales, from what I remember reading.
Yeah.
And, well, a point isn't nothing, but it's a big difference if it's the one between taking your life
and staying alive.
Sure.
So saying that there is no use for them, that seems like the baby being thrown out with the bathwater. But then saying that there is this effect, while also going, well, serotonin does seem to impact people's subjective sense of well-being and how they actually feel... it's a mess, it's a mess.
And a huge amount of that mess comes from low-quality research: low-quality studies, low-quality antidepressant trials, low-quality research on depression more generally. People not thinking about alternative explanations of the data, and kind of getting stuck on particular paths. And I think this is a broad problem across all research.
But yeah, you're completely right that this becomes like a real thicket of complication. However, I feel uncomfortable going too far down that particular line, because for many years the anti-psychiatry movement has made similar arguments. I feel an aversion to saying, well, maybe they were kind of right in some respects, because they are, I mean, I was about to say they are insane; maybe that's not a good insult to use on them, because they don't think there's such a thing. So, I work at the Maudsley Hospital in London, which is a psychiatric hospital.
Our campus is behind the psychiatric hospital.
And every so often, there's Scientologists out there protesting because they have this
big anti-psychiatry thing.
So they're really worried about...
What's the problem with psychiatry?
Well, L. Ron Hubbard didn't like it. I think possibly because he saw a psychiatrist and the psychiatrist said, you need help. And now, downstream from that, the echoes of L. Ron Hubbard's bad interaction with a psychiatrist are still being felt. That's my understanding, anyway. And I think, you know, they are trained from day one that psychiatry is the evil thing, and psychiatrists are out to manipulate us, control us, and shock our brains.
Imagine that.
Yeah, to manipulate and control us.
Yeah, particularly psychiatrists, it's very strange.
But they were particularly campaigning about electroshock treatment, which is another interesting thing. My understanding is, if you talk to psychiatrists, that for very, very severe cases of depression, where people are literally catatonic and can't move from depression, electroshock, electroconvulsive therapy I think we call it, does help; it can kind of jolt people out of these catatonic states. However, we don't really know that much about it, and certainly its long-term effects are unclear. I saw a meta-analysis that looked a bit shaky in terms of its statistics; I'm going to write something on this soon. There's a Scientologist that inspired me to look into it. But what I don't think there is, is a terrifying cabal of psychiatrists who are trying to control the world. I think it's been observed, anecdotally at least, in many cases that this helps, and people use it. But yeah, there was a whole
movement through the 1970s and 80s of saying, you know, there's no such thing as schizophrenia, it's just a bad adaptation to the societal conditions that people live in. No, no.
There's this thing called schizophrenia, it's really bad.
Psychosis, at least: there's this thing, and it's really, really bad, and people completely lose their grip on reality.
And that's a really serious thing.
And it's not to do with society.
It's something that's gone very, very wrong in their brains.
And that's another thing that we don't know how to treat
that well.
I mean, we've got drugs, antipsychotics, that can kind of hold it
back, but it's difficult to know how to sort of prevent
or predict it.
Kind of like the broken clock is right twice a day scenario.
It seems like if you have a replication crisis and a lot of studies
and previously held models are up for the chopping block,
there will be some people in the past who decided to point at that. And that ends up making them seem like Cassandras, right?
Very effective ones. So given the fact that you've got this thicket,
as you put it, which is a complete mess even for people who can read scientific papers, who understand how effect sizes and p-hacking and all of this stuff can be done. And coming out of the back of two years where everybody had to be a closet epidemiologist and virologist; and you've got the media's perverse incentives, clickbait; you've got academic issues whereby the only stuff that gets published is the stuff that shows a positive result; all of this put together. First off, it's not a surprise that people are losing faith in the powers that be, in the ones that they should have been able to trust. The bigger question, I think, is: how can people be effectively skeptical without losing faith in everything and becoming confused and nihilistic and easy to manipulate?
I've just written the start of a book proposal on exactly that question.
I was like, how do you question the consensus without losing your mind and becoming a conspiracy
theorist?
And I think this is a really difficult task because the incentives are so strong.
Once you see that the scientific establishment has really screwed up on something like
the replication crisis or, you know,
you mentioned during the pandemic, there were all those cases of studies that came out. There were papers that had to be retracted from the Lancet because they were based on entirely fraudulent data; there was a study on hydroxychloroquine that had to be retracted. And the Lancet were like, well, from now on, and literally this is what happened, they said: from now on, we're going to make sure that at least one of the reviewers of
the paper, one of the peer reviewers, has expertise in the topic that they're reviewing.
It's like, well, wait, you didn't do that before.
So, you know, I understand why some people are like, holy shit.
But I think part of it is that we need to raise our standards in general.
Like, first of all, we need to have some standards, rather than just accepting stuff that someone on our side says, which is a massive temptation for all of us. There's someone we trust, they said something, so we're like, oh yeah, I'm sure that's true. And that's kind of my argument in the previous book, Science Fictions: if you raise your standards, then you won't become a homeopath or a conspiracy theorist or whatever, because with those standards all that stuff will get chucked out. It's just that you'll also chuck out lots of really crap mainstream science too. So I think there's probably a good checklist that can be written for how you should read papers. I kind of sketched one out in the previous book, but I think I'm going to do a book-length treatment of that: what to look at when you're reading the scientific evidence.
But even then, you can't be an expert in every single area, and it takes expertise in each area. So I think what we can do is encourage a free and open and, to be honest,
as aggressive as possible, debate on scientific research. So there's like, there's organizations
out there like the Science Media Center, for instance, and I would encourage anyone if they see
like a controversial paper in the news, take a look at the Science Media Center website because
what they do is they ask a whole bunch of scientists who are unaffiliated with the study itself, what
they think about it.
And so you get six or seven responses, and sometimes they say this study seems perfectly
good, but regularly they say this has been way overhyped.
There's a major problem with this study.
You know what would be an amazing website?
It would be Rotten Tomatoes, but for scientific studies.
Yeah, well, there's already a website called PubPeer, so, like, publication peer, which tries to do that exact thing.
So they've got a little bot that goes through Twitter
and finds threads that people write about papers
and posts them under a link to that paper on the website.
So again, another thing which people might be interested in doing
is if you find a paper that looks dodgy in some respects, put it into PubPeer and see if there's any discussion. And that's pretty much what you're talking about. The only problem is there's so many papers out there, and nobody has the time to look into them all; there's probably not discussion on every single paper that you want there to be discussion on. But another thing they do on PubPeer is look for fraud. So they'll say, do you think
that the microscope picture in figure three looks like it's been photoshopped? And it turns out that in many cases it has. There's this incredible, she calls herself a scientific integrity consultant, called Elisabeth Bik, who goes through thousands of biology papers and finds that scientists, you know, don't just manipulate data when they do fraud, but manipulate the pictures as well, whether it's blots or microscope images, all that sort of stuff.
So many of them have been like duplicated, retouched, recolored, cropped, all that sort
of stuff to show the results in a much better light.
And that's scientific fraud, but it happens in thousands of papers every year.
And peer reviewers look at it and go, no, looks fine to me and it gets passed on.
And so this whole community has to come out
and re-review the papers.
And I think we should encourage that sort of thing, exactly as you say: websites that do this kind of thing, that collect together reviews of papers, that collect together people's opinions.
Another thing scientific journals can do, and some of them are doing this now, including some of the top ones like Nature, is publish the peer reviews alongside the paper,
so you can see what people thought of it when they first saw it. And in some cases, including a case,
I saw recently of a big genetics paper that claimed to, like, revolutionize genetics when in fact, I would say, it didn't at all, you can see that the reviewers said, no, no, no, there's a big problem here.
The control condition doesn't work.
And the editor just was like, well, better publish this anyway.
And just completely ignored what the reviewers said.
But we can see that.
That's obvious.
We wouldn't have been able to see that back before they published reviews, when science was just done in a complete black box.
So there's this movement towards open science, that is: publishing the peer reviews online; publishing the dataset online, so anyone can go in and dig into it and have a look; making sure that the paper is open access, so anyone, even if they don't work at a university and don't want to pay for the paper, can click it and read it; and making sure that the materials are online, so other scientists can come along and replicate it in exactly the same way. A big problem is that
scientists read each other's papers and they don't even know where to begin to start replicating
the paper because they're like, oh, there's actually no description here of what they did in this
particular condition, so I'm going to have to spend months emailing back and forth with the
scientist. What is the point of scientific papers
if they don't actually describe what was done in the experiment?
Well, it turns out in many, many cases they don't.
So yeah, there's this movement of open science.
I think one sort of rule people could have in their heads
is: does the paper look open and transparent? Have they published their data online? Is everything open and clear? Have they registered their plan before they touched the data, the plan of what they're going to do with it, so they don't fuck about with it when they get the data?
Like, oh, well, I'm sure that participant wasn't paying attention; or, let's cut out everyone above 50, even though that wasn't the plan. Let's cut out everyone who's above 50, because I think probably older people will have a different reaction to this drug.
This sort of thing happens all the time,
and you can kind of justify it to yourself.
Yeah, yeah, it is the case that older people have
a different reaction to this drug.
So let's just cut them out altogether.
Oh, lo and behold, we found a significant result. Let's send that off for publication. And that wasn't the case before they did this.
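Here's a toy demonstration of how that kind of flexible analysis manufactures false positives; the null drug, the crude z-test, and the single post-hoc age cut are all assumptions for illustration:

```python
import random

random.seed(7)

def null_trial(n=100):
    """A drug with NO effect: outcomes are pure noise, each tagged with an age."""
    return [(random.gauss(0, 1), random.randint(18, 80)) for _ in range(n)]

def significant(a, b):
    """Crude two-sample z-test at ~p < .05 (true SD is 1 by construction)."""
    se = (1 / len(a) + 1 / len(b)) ** 0.5
    return abs(sum(a) / len(a) - sum(b) / len(b)) / se > 1.96

pre_registered = flexible = 0
for _ in range(2000):
    drug, placebo = null_trial(), null_trial()
    full  = lambda g: [x for x, _ in g]
    young = lambda g: [x for x, age in g if age <= 50]   # the post-hoc cut
    planned = significant(full(drug), full(placebo))
    pre_registered += planned
    # 'Flexible' analyst: if the planned test fails, also try the age cut.
    flexible += planned or significant(young(drug), young(placebo))

print(f"false positives, pre-registered: {pre_registered / 2000:.1%}")  # ~5%
print(f"false positives, flexible:       {flexible / 2000:.1%}")        # noticeably higher
```

One extra "justifiable" analysis already pushes the false-positive rate past the advertised 5%, and in real papers there are usually many more forks than one.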
So, having a clear plan and sticking to it; and posting your data online, because most frauds are probably not going to post their data online, since they're scared that there'll be some obvious thing in the data, like the number seven coming up too often or something, that makes it look like it isn't a real dataset. Those are the kinds of things you can look for. But I also think seeking out other scientists' opinions is valuable, and something like the Science Media Center is really useful for that.
But also just looking on Twitter and seeing what people have said about a paper. Okay, you'll get lots of uninformed takes on the paper as well, and you don't necessarily have to agree with them or take them as gospel. But, you know, seeing that, okay, maybe tweets one to five of the thread are kind of garbage, but tweet seven actually does illustrate that there's a problem in table three of this paper, so I'd better just adjust my certainty about it somewhat. So
like, that's the thing. I mean, that's what I'm doing when I'm writing about low quality papers.
I read the paper and have my own critique of it, but I'll also see what other people have said
about it, because the whole point of having a scientific community is that we all like criticize each other all the time.
And one of the big problems is that we've got to the point where it can be very socially awkward to criticize each other.
In some fields this is much better than others. In psychology, as much as I enjoy going to psychology lectures and so on, everyone is generally quite nice to each other, and they're like, oh, that's a great paper, well done. My friend Chris Chabris pointed this out recently: often the person who's chairing a seminar will start the question-and-answer session by saying, that was a brilliant talk, thank you so much for that. That sets the wrong tone, because the tone you actually want is,
okay, thanks for giving us the talk. Does anyone want to critique that? You know, and in economics,
I've only done this once, but I went to an economics seminar thing where I was told I was doing a talk, but I wasn't talking about my own stuff. I was given someone else's dataset, which had been presented earlier that day, and I was told to just check it, critique it, and do my own mini talk at the end of his talk that critiqued it.
That's the sort of thing we want.
Now, economics seminars can get a bit, like, macho-aggressive, that sort of thing, and that, you know, can distract from the actual content, which is what we want.
But that sort of level of like constant skepticism,
constant argument is what science is all about.
And I think we lose that if people are basically too nice
and too trusting of each other.
And being nice and being trusting are good things that we want to encourage.
But maybe not so much in science.
Stuart Ritchie, ladies and gentlemen. If people want to find out about the stuff that you do online, where should they go?
I'm @StuartJRitchie on Twitter. I'm also at stuartritchie.substack.com for all my longer writings, and my book is Science Fictions, which came out a couple of years ago.
Amazing, Stuart. I appreciate you. Thank you.
Thanks so much.