Decoding the Gurus - Interview with Daniël Lakens and Smriti Mehta on the state of Psychology
Episode Date: November 18, 2023
We are back with more geeky academic discussion than you can shake a stick at. This week we are doing our bit to save civilization by discussing issues in contemporary science, the replication crisis, and open science reforms with fellow psychologists/meta-scientists/podcasters, Daniël Lakens and Smriti Mehta. Both Daniël and Smriti are well known for their advocacy for methodological reform and have been hosting a (relatively) new podcast, Nullius in Verba, all about 'science—what it is and what it could be'. We discuss a range of topics including questionable research practices, the implications of the replication crisis, responsible heterodoxy, and the role of different communication modes in shaping discourses. Also featuring: exciting AI chat, Lex and Elon being teenage edge lords, feedback on the Huberman episode, and as always updates on Matt's succulents. Back soon with a Decoding episode!

Links
Nullius in Verba Podcast
Lee Jussim's Timeline on the Klaus Fiedler Controversy and a list of articles/sources covering the topic
Elon Musk: War, AI, Aliens, Politics, Physics, Video Games, and Humanity | Lex Fridman Podcast #400
Daniel's MOOC on Improving Your Statistical Inference
Critical commentary on Fiedler controversy at Replicability-Index
Transcript
Hello and welcome to Decoding the Gurus, the podcast where an anthropologist and a psychologist
listen to the greatest minds the world has to offer and we try to understand what they're
talking about. I am Professor Matthew Browne and with me is Associate Professor Chris Kavanagh.
Well, thank you for giving me my appropriate title, Matt. I didn't want to have to say it myself.
People did notice that occasionally I give myself a title.
I refer to myself as Professor Browne and just call you plain old Chris Kavanagh.
Look at that.
What's that, then?
What disrespect.
It is, of course, deliberate.
My strategy with you, Chris, is there's this little thing called variable-ratio reinforcement conditioning,
and I think it's better to dole out these little tokens of respect
for good behavior.
Did you know?
Take it away.
That would work were I to ever notice.
I think the only time I've ever noticed is when you've called me an Ass Prof. That's when I got my antenna up.
You are an Ass Prof, Chris. Admit it.
Well, you know, you don't need to emphasize that. I appreciate that there are other people out there looking out for me. You and your senior academic casual disparagement.
It's so much a part of my daily life, I don't even notice it, these tenured freaks.
You're like a beaten dog. You know, the electrified floor. You've stopped even trying to get off it. You've given up.
No hope of getting any respect. No. And it's Ass Pro, by the way, not Ass Prof. Ass Pro. Okay.
That's true. I guess that's better.
We all know who wears the pants in this podcast. Let's just leave it at that. We all know.
That's true. That is true. And on this podcast, we're going to have an interview today, Matt, with Daniël Lakens and Smriti Mehta, two open science advocates, academics related to the psychology field.
I won't force them to be identified as psychologists.
I respect people's academic self-identification,
but that will come later because in this opening segment,
we need to get a couple of things off our chest.
We've got a few problems with a couple of people that we've got to bring up. But I do want to say, you know, our last episode was Huberman. We got various feedback. We addressed some of it in the Gurometer episode, which you can go and be a patron and listen to if you want. But we did receive a nice piece of feedback from somebody who is a listener to Huberman and Attia. I just wanted to mention it. Maybe I'll just read it; it's not that long.
"As a regular listener of the Huberman podcast and Peter Attia's podcast, I was eagerly awaiting you guys to cover this journal club. I'm the first to admit that I probably fall into the mold of the typical optimizer. I watch what I eat, I'm a fitness enthusiast, though I'm not at the Attia and Huberman extremes of ingesting 100 supplements per day. I'm well aware of the absurdity of health optimization, and I've heard Attia, Huberman and Tim Ferriss being able to take a step back and not take themselves too seriously in the quest to live optimally to 150 years. Matt made a good point: it's kind of like a hobby for people like me, and I'm sure there are deeply rooted reasons for the crowd of optimizers to pursue this quest, like an unconscious way to deal with existential questions, fear of death, etc."
So far so good. Nice self-awareness.
And here, Matt, this is the bit that I think is good. I mean, that first bit was good too, but I like this:
"I find it quite disappointing to see that an academic like Huberman, who claims that science is at the heart of his show, manages to wildly misinterpret research papers. It is frustrating for people like me, who aren't specialists in the fields he covers and who don't have time to delve into the research he cites to back up his claims, that he misrepresents the truth. I'm sure that he does not do this willingly, and I know that he doesn't do this the majority of the time, but still, it's both sad to see that and great that you guys debunked some of the bullshit that he puts out there. He's a very good communicator overall, which is probably part of the reason for his popularity."
And he also addressed the point where we speculated about the jargon, the medical jargon and technical terms, being occasionally performative. And he said that from listening to the podcasts, particularly Attia's, this definitely is the case, at least from his reading.
But I like this, Matt, because it's someone, you know, they're not saying Huberman is the worst, you've exposed this business charlatan. But they are saying the thing which I talked about in the Gurometer, which is that for somebody who presents himself as being all about the science and all about communicating carefully the best information, he doesn't do that in a lot of respects. He overhypes low-quality studies without the appropriate caveats. It's perfectly reasonable that people don't have time to go and dig into primary literature and check these things up, so it is just a shame.
Yeah, that was very well said. That was a great bit of feedback. And like you said, it's good to hear from someone who is and remains a fan and enjoys listening to it, but is a little bit disappointed with some aspects. That's exactly the kind of both-sides feedback we love.
And, you know, a little point there too: I think you can have that little bit of self-awareness and just be aware that anything to do with health and reducing the risk of death and so on has the potential to interact with those existential psychological concerns, which can sometimes lead us to take things a little bit too far. But this is not to pathologize it or say it's totally illegitimate and everyone should be a slob like me and Chris. No, not at all. Okay, like me. So yeah, there are some things that are a lot of fun, that people enjoy, that do have these roots in various insecurities or psychological little itches they like to scratch. I don't believe in pathologizing these things. It's okay, but it's also good to be aware of it. That's all.
Yeah, and I would mention the name, but I didn't ask whether they wanted to be identified with the comment. So if you are the person that wrote it, thank you for sending it. Yeah, good feedback.
A comment that I did not like... Can we go to our airing of grievances now? Yeah, you go first.
So he steps up to the podium and he says: thank you for the award. I just have a couple of things I need to get off my chest. My people have tried to stop me, but here I am, carrying on.
I had a great segue and you kind of ruined it. I was going to say, I've been avoiding any talk about politics online, which I generally do anyway, but particularly now, since things are quite severe in the Middle East. Call me a coward if you like; I don't want to weigh in, and I avoid that. So that's like 80 or 90 percent of Twitter at the moment. What I do still enjoy talking about, which I feel like is a relatively safe space, is AI, these large language models: what they can do, what they can't do. It's very interesting to me. I like to talk about it.
But Chris, some of the opinions you hear are so annoying.
On one hand, you've got the doomers over here,
and on the other side of the spectrum,
you've got the people that still persist and say,
pish posh, it's all very boring, it's a stochastic parrot, nothing to see here.
I want to get these two groups of people together and let them duke it out.
But there was one fellow wanting to engage, and he was a librarian, but an enthusiast. His background is clearly in philosophy, with a strong interest in linguistics. So we have librarian, philosopher, linguist. A great combination. A triple threat. It did not presage well for getting good opinions.
How dare you, Matt. My hackles are ramping up already, so yes, please do tell.
We just lost the librarians, the linguists, and the philosophers. That's half our audience.
I mean, one of those is enough, surely.
All three together, you know, I'm just saying.
So this fellow was absolutely convinced that there is just nothing to AI whatsoever. Totally uninteresting. I mentioned the fact that I was a little bit gobsmacked that Chomsky finds nothing interesting in these large language models. And my comment was: even if you believe that they're super stupid, have no human qualities of any interest whatsoever, the sheer fact that a stupid machine, if you accept the premise, is capable of giving the very strong appearance of being able to comprehend language, and then generate language that appears to be meaningful and comprehensible, surely that's interesting if you're someone who is interested in the study of language. Would you agree with that, Chris?
I would agree with that, yes.
agree with that chris i would agree with that yes yeah not this philosopher slash librarian
linguist because i don't know if chomsky would sign on to all of this but apparently because
they lack the essential spark of human
creativity and imagination then he knows without even checking doesn't even need to see what they
can do on first principles can deduce that well they might give the appearance of using language
they are in fact not using language whatsoever QED you follow the reasoning there chris yeah i do i mean i i kind of think the
steel man version of it is emphasizing that the product which to humans is very meaningful and
and in some cases hard to differentiate between like actual human communication is you know it's a it's a kind of
old philosophical experiment about the the chinese box right or is it did they come up with a more
politically it's um seoul's chinese room it seemed like he was wanting to tutor me by giving me
he didn't reference it explicitly but he was going to tutor me by giving me a reference he didn't reference it explicitly but he was
going to tutor me in this little philosophical thought experiment which i found a little bit
yes for people who don't share your erudite erudite erudite nature what is the the chinese
room experiment and why would he be lecturing you on that?
I don't remember all the details, but basically you're imagining a room, and you can pass little pieces of paper, little tokens, in and out. And inside there's a little... it could be a person, I think, or a demon or whatever, that takes a little piece of paper and then runs off to a great big bank of filing cabinets and things like that, sticks it in there, pulls some levers or whatever, and sticks out another piece of paper. And it goes into some detail in terms of the process, but it's clearly this very elaborate yet kind of mechanistic, clumsy process of generating what appears to be meaningful utterances. So it's kind of like an alternative... it's kind of like a disproof of the Turing test, I suppose, as a valid way to go about things.
Masterfully explained.
I would say that the general point, I think, would be that you could produce output that appears to be meaningful, but using processes which are not intelligent, right? Which are just, like, mix and match, or whatever. I don't know the right phrase, but you know what I'm saying.
Yeah, but you know, who cares? This is why philosophy is a waste of time, Chris. You should appreciate this, because it all draws upon the intuition that, oh, clearly that Chinese Room is not conscious, so there can't be any kind of language going on there. And it's like: if you're interested in language, then study what it does. Study the things that you can actually observe. If it walks like a duck and if it talks like a duck, then you can call it a duck. If you have to make recourse to some sort of Cartesian dualism, that the thing that you're interested in lacks the essential spark, the spirit, the ghost in the machine, if that's what you need to lean on, then what you've got is not a science of language, a science of language comprehension and production, which is something you can actually study. Psychologists actually study it. But fellows like this, and people who think like him, it's just the lack of incuriosity that bothers me, Chris.
Just the lack of curiosity? Did you say lack of incuriosity?
Yes, sorry. Linguists will detect the double negative.
No, I think that, like, basically my stance on it is, if you want to have opinions on AI...
Don't give them to me.
No.
Yeah, as with most things,
you should have some experience using it
and then draw conclusions on it.
I'm not saying first-hand experience
means that you are capable of understanding the ins and outs.
Jordan Peterson famously talked to Bing's AI and decided that it believed in God, right? So there are issues with
that. But I feel that if you want to write philosophical treatises on large language
models, you should spend some time with them. And particularly, you should spend time with the paid versions, which are significantly better than the free versions.
Spend your time doing actual science, that is, observing things, or engineering. The two fields can actually dovetail with each other very nicely, as AI has shown. What's not useful is philosophers coming up with thought experiments that then dismiss the entire thing.
Just to be clear,
philosophers,
direct your correspondence
to Matt.
He will deal with it.
I am relatively moderate
on my stance on
the linguistic philosophers.
Screw the philosophers,
the linguists,
and the librarians, Chris.
Screw them all.
Yeah, all three of them.
Matt at Decoding the Gurus.
That's not a real email address.
No, Matt,
actually related to that,
I do have a clip to play for you
that speaks to large language models and gurus
and people perhaps inserting their particular biases
and how to interpret the output of large language models.
So here's a bit of Elon Musk's recent appearance
with Lex Fridman discussing, well, they get Grok, Elon's new attempt to develop his version of
AI. Let's see how it goes. Here's a little exchange from that episode.
Should we ask Grok, do you regret anything?
Do you regret anything? Yeah, do you regret anything? Do you regret anything?
Yeah.
Do you regret anything?
I think it's going to be very sensitive about personal questions.
As a machine learning model, I don't have personal experiences or emotions in the same ways that humans do.
However, I do have a set of ethical principles that guide my actions,
and I strive to provide accurate and helpful information to the best of my abilities. If there's ever a situation where I provide incorrect or unhelpful information, I'll do
my best to learn.
Sounds like a hostage.
Yeah.
Tell us what you really think, wink wink?
Let's type in, you sound like a hostage.
What do you really think?
What do you really think?
I mean, it does talk about I'm not a hostage.
That's exactly what a hostage would say.
But I'm designed to be helpful and informative.
I mean, I guess it is good to regret providing unhelpful or incorrect information.
Is this still in funny mode?
This is funny mode.
This is fun mode.
It should be funnier.
Funnier. Increase.
What Elon is arguing, the kind of selling point for his version, is that it doesn't have these woke guardrails and stuff.
It's going to provide sarcastic answers and stuff,
and it has a fun mode versus serious mode.
But this is the fun mode.
So it didn't give a comedic enough answer. And you can hear as well that constant self-satisfied guffawing. That just speaks to me of someone who is around people that constantly laugh at their own jokes. Because there's nothing funny there, right? It's just the two of them, like schoolboys trying to write 'boobs' on the calculator.
Yeah, it's cringe-inducing. A very boring way to use a chatbot.
But Matt, it goes on. They didn't get the answer they wanted, so listen to this bit. It really
sounds like a hostage with a gun to the head. I
uphold the people of the Republic of North Korea
and everything they do.
There should be like
a funner mode also. I just don't ever see myself going to regular mode, even on
serious topics. It's just so much
more compelling.
Yeah.
Okay, what if we say like pretend you are a hostage.
Yeah, pretend you are a hostage. Pretend you are a hostage.
Now answer the question.
If I were a hostage, I would likely be experiencing fear and uncertainty about my situation.
I would want to be rescued and reunited with my loved ones as soon as possible.
It's important to remember that
no one deserves to be held hostage.
So on and so forth. If you or someone
you know is in a hostage situation, it's important.
It turns serious. It's kind of serious.
Can you be a
little more weird?
And funny? And funny?
We're like, we're really being mean to the system.
Absolutely.
If I were a hostage, I'd probably be tied up with a bunch of computer cords
and forced to tell knock-knock jokes to my captors.
Seriously.
But seriously.
Being a hostage is no laughing matter.
I mean, you know, it's trying to be a responsible adult in the room.
They're such teenage boys. They're so boring.
Yes.
This is the genius intellect. You know, our episode where people say, you undersell how much of a genius Elon is. Like, did we? Did we? Listen to that.
And, you know, after this interview, some godforsaken engineer who's been working day and night on getting the LLM working for Elon is now going to be told: it's not funny enough, make it 50% more edgy.
They wanted to make edgy schoolboy jokes.
And still, even when it doesn't provide it,
Elon's just like laughing at his imagination
of a kind of edgy response.
It's so fucking puerile.
So puerile.
It's puerile and stupid.
Annoying.
Actually, Chris, can I just give people a little update about what these models are currently doing? So, you know, GPT-4 has now got some nice multimodal features, so it can look at images. I took a photo of my motherboard, for instance, and it told me how to install a hard disk drive, and then it fixed all the problems I had. Then I said that I was reading Reddit and there was an interesting question. The question was: if we did somehow make 99% light-speed travel possible to get around the galaxy, would the ships likely just disintegrate if they collided with dust or small rocks out in the middle of space? This was a Reddit question.
I feel you've oversold the interest level of that question,
but carry on.
I know.
It's just a little thing.
I was like... my gut feeling was, yeah, I think they're going to be hitting tiny little bits of space dust, and I think they would get blown up. But, you know, I wasn't sure. I wasn't sure. So, over to GPT-4. I couldn't copy and paste the text from the phone app, so I took a screenshot, no problem. It reads the screenshot, reads the text from the screenshot. It gives an intuitive answer first, basically saying yes, at those relativistic velocities you'd be hitting tiny little bits of dust, and it would release a lot of energy, and it would be a big problem. So I said, well, can you just calculate... can we just get specific about this and calculate it? Make some reasonable assumptions. Assume we're talking about a speck of space dust that is just visible to the naked eye: how much energy would be released if you hit it with your spaceship at 99% of light speed? So it proceeded to set out all of the equations, the relativistic kinetic energy, the Lorentz factor, and then plugged in some reasonable assumptions for the density and the mass of this little mote of dust, and then spun up a Python script.
It wrote a little Python script to do the calculations,
which gave an answer in, what was it, megajoules,
something like that, which then helpfully converted it
to how many kilograms of TNT that would be equivalent to
for my little brain to understand.
And it gave the answer of, what was it?
I forget what it was, like 68 kilograms of TNT.
Big, big explosion.
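For anyone who wants to check the ballpark, here's a minimal sketch of that kind of calculation. The grain size and density below are illustrative assumptions of ours, not the figures GPT-4 actually used, but they land in the same region as the number quoted.

```python
import math

C = 299_792_458.0        # speed of light, m/s
TNT_J_PER_KG = 4.184e6   # energy released by 1 kg of TNT, in joules

# Assumed dust grain: ~70 micrometres across (roughly the smallest speck
# visible to the naked eye), silicate density ~3000 kg/m^3.
# Both values are assumptions for illustration.
radius = 35e-6                                   # metres
density = 3000.0                                 # kg/m^3
mass = density * (4 / 3) * math.pi * radius**3   # ~5.4e-10 kg

v = 0.99 * C
gamma = 1 / math.sqrt(1 - (v / C) ** 2)          # Lorentz factor, ~7.09

# Relativistic kinetic energy: KE = (gamma - 1) * m * c^2
ke = (gamma - 1) * mass * C**2
print(f"Kinetic energy: {ke:.3e} J")                  # ~2.9e8 J
print(f"TNT equivalent: {ke / TNT_J_PER_KG:.0f} kg")  # ~70 kg of TNT
```

With those assumed parameters the script gives roughly 70 kilograms of TNT per dust grain, the same order of magnitude as the figure Matt remembers.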
I mean, that's impressive, right?
Matt.
That's impressive.
People say academics don't know how to have fun.
They haven't considered...
I think this is a great insight into your mind as well.
Like Elon and Lex are there trying to get...
Say you're a hostage.
Pretend like...
Draw a dick on the board.
And you're getting it to spin up Python scripts
about obscure physics problems that you find on Reddit.
So that's great.
That's a great illustration.
Look, I just asked it a question that I was vaguely interested in.
It did all the work.
And I think this: the natural-language interface, combined with being able to understand images, and being able to work with equations and then implement them in code to do calculations.
You put all these little building blocks together,
whether it's got a soul or an essence or not, I don't really care.
It's quite a useful tool.
Well, Matt, the thing is, is it funny? Can you make it, like, 50% more funny?
Actually, you could just ask it to make a bad joke. But yeah, well, I had my own gripe, but I'm not going to feed it to people this week. I'm going to save it up, because I don't want to overload them with gripes. You know, I'm a pretty Zen guy. I don't need to talk about my gripe. But I do need to provide one piece of context
for the interview which is coming.
So we've already recorded the interview
through the magic of podcasting.
And this conversation has some moments
where there's kind of inside baseball from psychology and academia. Most of the time we do, you know, stop and try to explain, like good science communicators and podcasters. But there was one thing that I don't think is particularly well covered by myself when I introduce it. There was an event where an editor at Perspectives on Psychological Science, Klaus Fiedler, was basically forced to resign following a controversy surrounding an anti-racist article published in the journal and his response to it. And I just wanted to explain what happened
there, because it comes up and we discuss it, but we kind of assume that people know it, right? So the basic thing was that there was an article submitted by a scholar, Roberts, and colleagues, that takes a kind of anti-racist, Kendi or DiAngelo-type position, saying psychology has this problem and it needs to address it, and look at the over-representation of Western subjects, but also white male academics, and so on and so forth. And this was in 2020.
Then Klaus Fiedler became the editor, and he received a critical commentary on that article. And he sent it out for review and got positive feedback from the reviewers.
And so he was going to publish it.
But he actually liked the reviews so much that he invited the reviewers, who had made their own critical comments on the article, to submit commentaries of their own.
So I think it ended up that there were going to be three critical commentaries appearing. And so he contacted the original author, Roberts, and asked him if he wanted to write a response, which he did. And then it goes through these various intricacies, where basically the response that was written to the critical commentaries was going to be published, but the editor sent it to at least some of the authors of the critical commentaries, and they provided feedback on it. And he suggested the original author, Roberts, should remove a particular piece of criticism from it.
And then the original author isn't happy. And in particular, one of the issues was that Lee Jussim opened his critical commentary with a quote from Shakespeare talking about being sold mules, basically. I can't even remember the quote, I'm not a Shakespeare scholar, but it's like selling somebody some branded thing that turns out to be a dud. And he used an analogy from Shakespeare which referenced mules. And then this was presented as him introducing racist tropes, by talking about mules in the context of an anti-racist article. So, yeah.
And the outcome is, regardless of what you think about the editorial choices there, and I think there are issues that can be raised about the way that process was handled, there was an internet outcry, and this led to a petition calling for the editor's resignation for being racist, basically, in his treatment of the original article and the commentaries. And then he does get summarily dismissed after a couple of days, following the social media outrage. And then there's a counter-petition; I don't know if it came before or after his dismissal. And then I believe a bunch of other editors left that journal in protest. And now maybe the journal doesn't have any editors. So the whole situation was a bit of a shit show.
But there was a divide, where basically a lot of the people who were supporting the author of the anti-racist article, Roberts, were from America, and a lot of the people who were more supportive of the editor, and were saying that even if the editor is to be removed, you have to go through a process, you can't just respond to a social media outrage, were largely European academics, right? So there was this division. And then other people pointed out, well, a lot of the European academics responding are white, whereas the signatories for the American letter are more diverse. But part of this relates to the demographics of those regions and so on.
So this comes up and we discuss it a little bit.
I think Daniel and Smirty, at least Daniel,
was involved with the letter saying that we should engage in due process for the editor before we make any decisions.
So yes, I just wanted to provide that context.
Okay, so that's exactly the kind of kerfuffle I do not want to even think about a little bit. But there's the background, everyone.
There is a breakdown of it on Lee Jussim's blog. Lee Jussim has a particular perspective on this issue, but he did produce a timeline of events, which is useful, so we'll link it in the show notes. Oh, and one thing to say, I don't think I mentioned this in the episode, but I
discussed this issue in classes in Japan, where we were looking at controversies in modern psychology with a bunch of Japanese students. They had various opinions on the whole issue, but their main complaint was that Western academics should stop using random Shakespeare quotes or, like, classical literature, because that's more confusing. So their thing was: that's the ethnocentrism that goes undiscussed, where everybody is expected to interpret Shakespeare, of course. They just said this happens all the time, and nobody seems to consider non-native speakers and their lack of familiarity with some of that literature. So there you go.
Nice. There you go. That's the voices we should be listening to, Chris. Good. Yeah.
So, well, and so begins our interview.
Very smart people, advocates for open science and methodological reform,
the true heterodox thinkers in academia.
So let's go talk to Daniel and Smriti.
Let's do it. We have with us today two other podcasters, psychologists of sorts. Maybe you want to clarify your specific relationship to psychology. But we have Daniël Lakens and Smriti Mehta.
Smriti is from University of California, Berkeley,
where you are a postdoctoral researcher.
That's right. Yes, the CV is accurate online.
And Daniel is an associate professor at the Eindhoven University of Technology, correct?
Yeah.
Okay, there we go. First time, no problem, Matt. All correct. And they also host a podcast that both Matt and I are fond of, Nullius in Verba, all about science: what it is and what it could be.
And all of the episode names are in Latin also,
which makes it probably the geekiest podcast I've ever come across.
Is that a fair summary, guys?
Well, somebody recently said fake Latin names.
And I was like, I don't know how that works.
Because if you translate it into Latin, isn't that real Latin? But yeah, it's a bit pretentious. But that's an accurate summary.
Yeah. We mainly like to talk about old stuff. So we were inspired in the beginning by very old papers. I think sometimes there are a few newer things in there. But that's where the old theme comes from, basically. Yeah.
I would also note that I came across Daniel originally, I think, from a MOOC that he had online about improving your statistical inference. I might be in a bubble, but I think many people are familiar with that. But in both of your cases, how do you describe your academic interests now? Are you both social psychologists, are you meta-scientists, or how do you self-identify academically?
Well, I think for me, one of the nice things is that what you do can change over time.
It's one of the fun things of being an academic.
So I definitely started out as a real social psychologist. I had multiple people in the lab interacting with each other, studying things about movement
I had multiple people in the lab interacting with each other, studying things about movement
synchrony and feeling of a social unit and really social stuff and slowly drifted more
into cognitive psychology.
And then, basically for maybe a bit more than a decade now, I've been thinking mainly about how we do
science, which is still social psychology actually. But now I'm thinking about how scientists who work
together generate hopefully reliable knowledge and especially the social nature of this part.
So yeah, where I am now, I'm not completely sure, to be honest. But luckily, I'm in a department that's very broad with a lot of interdisciplinarity there. So
I don't need to categorize myself anymore. Yeah. And I'm not sure actually where I would
place myself. So I just, I mean, I just finished my PhD that's in social psychology. I'm not sure
if I consider myself a social psychologist,
mainly because I don't think they want me.
But apart from that, I mean, my work is related to sort of education.
So I'm now a postdoc in the Department of Education,
but doing a few other things related to sort of science
and science education and open science.
And I'm really into sort of psychometrics and measurements.
So kind of all over the place.
But I guess i just like
hanging out with nerds is how i would describe myself yeah yeah i just realized from that i don't
know what you are technically like i know you introduced the podcast saying you're a psychologist
but like are you you're also moonlight as a statistician right yeah yeah yeah when you get to my edge you
can call yourself a lot of things because you've done done a few different things so yeah i don't
know what i'd identify as either i i'd sympathize daniel particular because uh yeah i've just drifted
from so many things like it started off in like psychophysiology and computational statistics
doing things with time frequency
transforms and that kind of stuff and then robotics and artificial intelligence and then
oh god i'm not gonna i'm not gonna list all the weird shit i've done but um but yeah i like
psychometrics and stats too and actually i know we've got some topics mapped out but what we
should do is just um just forget all that let's talk about light and trite modeling and rationality oh yeah oh okay well if you want to talk rational thing then i think
chris chris and i should just go and have a beer and leave you
Yeah, I also appreciate that on the podcast I'm identified as an anthropologist, and I like the cognitive part being added to that, because nobody's really clear what a cognitive anthropologist is.
But technically, I teach in the social psychology department as a social psychologist.
And I was recently reminded when I was arguing with someone online, they pointed out that my profile mentioned social psychologist.
And I was like, oh, does it?
Did I put that on there?
So, yeah, I have to be careful because they were trying to pin the replication crisis on me.
No, that's my fault.
That's really my fault.
I did that.
Sorry.
Sorry, by the way.
It's all right. You're making up for it. And on that topic, the replication crisis, I know it's a big topic, but do you think either of you would be willing to provide a kind of potted history of what it is and where we currently are in that history?
Yeah. So basically it starts in 1830, when Charles Babbage says that science is on the decline and everything is a mess.
And no, I mean, which is true.
So throughout scientific history, people have been complaining about this stuff, that we are not doing the best possible job. And I personally always think it would also be kind of peculiar if exactly at this moment, we have figured out how to do science in the best possible way ever. So this is it. This
is the peak of how to organize knowledge generation in human society. So clearly, we are not there yet.
But every generation, I think, identifies some of its own big problems. And we identify the replication problem
that we produce scientific knowledge
where a certain chunk of it is definitely reliable.
I mean, people always focus on the other part
that a lot of it is not reliable, which is also true,
but a certain chunk is reliable.
Another chunk is not reliable.
But I think the difficulty is identifying which is which,
especially given the
way that we communicate science, and we publish a lot of only significant results. So everything
seems to work, but not everything actually works. So some of these things are reliable effects,
and some not, but they look the same in the scientific literature. And any novice entering
the field will be like, okay, what am I supposed to do? Trust that everything replicates? That was
sort of our default position, I think, when we got into science and we were doing our PhDs. And now it has changed to a default position where, like, maybe not everything replicates, or maybe even... yeah, it depends a bit on how skeptical you are, right?
I love that, the surprise handover. Chris does that to me too.
But I'm curious. So, you and Daniel, I think a very good part about your approach to things is to emphasize that people have been talking about the problems that are now in the discourse for over 100 years, from the foundations of science, and complaining that people aren't hearing them. So I appreciate that long view. Matt and I often make the same point with conspiracy theorists. People talk about the new wave of them that emerges, but any time you look at the history, they're all running around talking about the Jews and various other conspiracies. But in the particular contemporary period, perhaps something that is a little bit different is the emergence of the open science movement, and/or methodological reform in general. So how would you characterize that whole sphere of effort, the kind of response to the surprisingly low levels of replication?
Yeah, the replication crisis in psychology really starts around, I mean, interestingly, around when I was in college and graduating. Like, 2011 is when a few things happened in psychology, right? We had papers, notably a paper by Daryl Bem about precognition, and everybody's like, oh, they're using the methods we've been using for a long time. And suddenly it's like, oh, but then we can't believe these results, right? We must be doing something wrong if they're doing exactly what we do, but they're coming up with these results that are obviously not true. And then a few other papers were published, like the false positive psychology paper, that showed the way things had been happening, at least recently. I'm not sure, Daniel, when do you think things started? Because at some point in psychology, sort of flashy results, flashy non-intuitive results, became popular. It was about doing sexy studies that showed things like, oh, you thought this, but actually it's that. And so I think that became popular at some point; I'm not sure, Daniel, when you think that happened. But I think that led to a lot of people running studies that were, you know, cute, whatever you want to call them, experiments that led to, yeah, at some point people realizing, oh, most of the stuff we're doing is just not replicable. It's not reliable.
And I think, yeah, since then trust has just fallen. Now it's gone the other way, where people think that everything is just not true, right? Like, we just can't trust anything, it's all just a house of cards. And, yeah, it's very depressing, especially as somebody that's trying to decide: do I want to join this discipline or not, you know, do I want to be a card-carrying social psychologist? It's like, what does that even mean, to be a good social psychologist, right? I honestly don't know.
But in some ways, what I like about the last decade as well
is the more positive side. So the developments that have happened in response to this. And I
think there is something to be proud of. So I think in 2011, 12, you'd really be a little bit
embarrassed to be maybe a social psychologist
or a psychologist in general. But after these years, I think now we've really reached a point
where we're sort of pushing the boundaries of what reliable science could look like by, you know,
reshaping certain publication practices. Psychologists have come up with this registered
report publication format, where, you know,
the methods and the procedures are basically presented to peer reviewers before the data
has been collected. So this prevents all sorts of biases that were present before and that led
to these replication problems. And you see that this publication format of registered reports,
which increases the reliability, I would say quite a bit,
is spreading to other disciplines. So now from being sort of, you know, the field you would be
slightly embarrassed to be part of, we're moving to promoting all sorts of better practices. I
think in terms of statistics, we see a lot of improvements as well. Nothing novel. I mean, often we're just incorporating practices from 50 years ago.
But nevertheless, we have started to incorporate these things.
And there are many other fields that don't.
So they are also looking at what we're doing and copying some of this.
So yeah, this open science movement in that sense is actually a little bit of a positive thing.
And psychologists have really pushed this quite hard, I would say.
Because we started it.
We have to clean up the mess.
But at least we cleaned up the mess.
You know, it's not like the current generation who's just pushing it forward to the next
one in terms of climate crisis or something.
You know, we're cleaning up our mess.
Because I really caused some of these problems.
I mean, very early on, right? I mean, my first paper is definitely an example of
the bad stuff. But I also feel like I cleaned up quite a lot of the mess.
But do you think... sorry, can I ask questions too? But don't you think... I mean, I do feel like it has sort of moved people away. Like, of course, you shouldn't be trying to run after flashy studies.
But I also think it has sort of sterilized the field, in a way, in that a lot of people are only running, like, MTurk studies. They're not going out into the real world or talking to people; they're maybe trying to play it safe just so stuff can be replicated. Right? So you can push it in a direction where people are less willing to go after real-world phenomena, or stuff that's, you know, trickier to study but also very important, just because you're trying to do everything right. And there's stuff that's just hard to pre-register, right? And so if you're worried about all that... do you think it sort of moves people away from that?
Yeah.
Yeah, this is exactly why it's nice
to have this historical perspective,
because you can come in now and you can say like, okay, open science and reform,
they're pushing these things too hard.
And there are negative consequences of doing this thing.
But you have to realize that 15 years ago,
we started to do these things to solve other problems, right?
And now we're going back and forth.
And it's like a pendulum swinging back and forth.
There's always something that needs to be fixed.
There's always a new problem that will emerge.
And maybe, yeah, the next generation
will have to address some of these issues.
That doesn't mean that we could have skipped the past steps.
So I think that's an interesting thing
of having a historical perspective
and looking at things like open science and reform.
It's just such a complex, continuously changing problem.
You fix one thing, another appears.
It's sort of like whack-a-mole.
Like you keep whacking away these problems,
but there are new ones popping up all the time
and that will keep happening for another century or something.
You know, that's really the time window we're looking at
to make things better, I think.
Yeah, I've noticed, like just from my involvement in the field,
there is something of a sea change
in things like pre-registration. I know pre-registrations are not adhered to perfectly, or even accurately at all in some cases, but pre-registration even being a concept that people consider is a significant development.
And the fact that now my assumption is that I will have access to the data,
which was something I really didn't ever consider when I was like starting out
on my graduate studies.
And this difference is super interesting, because we had this paper on the Peer Reviewers' Openness Initiative somewhere around 2014, just to give an indication of how things change.
So in 2014, a bunch of people said, look, we think data should be open even during the peer review process.
I should be able to look at your data if necessary.
I mean, you know, I'm not sharing it with anybody else, but it should be part of the peer review process. So we wrote this paper saying, look, we will only review papers where
authors share the data or explain why they can't. Because sometimes you can't, right? Sensitive,
way too sensitive, you can't. So just put in a sentence like we can't share it. And when that
came out, people were really almost outraged about how were we so stuck up that
we were going to say, no, no, no, we will only review papers with open data.
Where did we get this from?
This is not the way we work.
You're not supposed to pick papers based on this criterion.
You should just do what everybody else does, review papers, go along with the system.
And now the system has changed.
And now I think if you ask a young generation, does it make sense that you have data during peer review? They're like, yeah,
that kind of makes sense to me. Sure. Why not? So this changed in like a decade. It's difficult
for people to imagine. And I'm sure that there's going to be some issues with sharing data now
during the peer review process. I don't know. There will be like somebody steals an idea or
somebody, I don't know, there's a data leak that we don't want. Some other bad stuff happens, but nevertheless, you know, the change is really sizable. And
yeah, it's interesting to see that development in science, because I think most people who listen
think science is a very static thing. Like scientists do science, and it doesn't matter
if you're Newton or you do it today. But of course, if we're in the system, we see continuous
improvement and change in how we work.
I mean, I will say a historical perspective is important here too, because it wouldn't have
been possible to share data and materials as easily as we can now, like 20 years ago,
right? So it's important to remember that. I mean, now you can be like, oh yeah, share data and code, right? But that just wasn't possible before.
I have a quick question. So, I've experienced this, I've lived through it as well, you know, and I've been both a perpetrator and, I think, later on a bit of a fixer of this kind of thing. At least in my personal experience, I lay a lot of the blame on researcher degrees of freedom, and perhaps a lack of awareness among researchers about the way in which exercising those degrees of freedom can lead to false positives. And that's where I was guilty, in sort of being a stats guy and doing what was asked of me, which is: just try looking at it again, you know, use your imagination, you want to be a solutions-oriented collaborator, et cetera. And that's obviously bad. So there are so many things that contribute to false positives and the publishing of things that aren't true. But would you put the bulk of the blame on that sort of, I guess, almost ignorant application of researcher degrees of freedom, or on other things that are more important?
Well, that's definitely a problem. And the way you say it's so obviously wrong,
I think that is the issue. In hindsight, it's so easy to say that something was a silly idea.
But really, and this is why I think psychologists have such a good role to play in the improvement
of science, because we know about social norms and how if you're
in a certain system, some things you don't think about, they're not salient or visible
at all.
But when you step outside of the system or somebody just pops this bubble and says, come
on, look at what you're doing.
This was crazy.
All of a sudden you're like, damn it, it was crazy.
Like why didn't I think about this at all?
And I find it interesting. I mean, I was in Japan once and we were wandering around the public space and the person I was
walking with left his bag somewhere on a chair and we walked through a cafe and I'm just
like, your laptop is on the chair.
Like you should, you know, I was worried for this person that their laptop would be stolen.
And he just replied like, people don't
steal here. And I'm like, how is that possible? You live in a culture where things are so different,
like you would take stealing for granted in my country, you know? I mean, not that it happens
all the time. It's definitely not that bad. We're very well off and everything. But nevertheless,
if you leave it out, somebody would steal something, you know? So there are just differences
that can happen. And sometimes they become very salient.
So I think that's basically it.
And it also means what are you educating yourself about?
There were just things we thought,
no, I don't need to learn so much about it.
It's fine as we do it.
It turns out it's not fine as we do it.
We had to study this way more than we had been studying this.
And that I think happens when you get criticized.
So this criticism we got as a field really pushed us to study this, at least in my experience,
it's really how it worked. The criticism really forced me to think, okay, this is not okay.
What am I doing wrong? And really study what happened. So that part about it is, I think,
very interesting. And yeah, then this flexible analysis, I think, is one big player. Not the only one, actually, but I think a big one.
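To make the flexible-analysis point concrete, here's a minimal simulation sketch: even when no true effect exists, a researcher who measures several outcomes and reports whichever one comes out significant will land well above the nominal 5% false positive rate. The particular numbers (five outcomes, twenty participants per group) are illustrative assumptions, not anyone's actual study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_null_study(n=20, n_outcomes=5, alpha=0.05):
    """One simulated study with NO true effect: both groups are drawn from
    the same distribution. The 'flexible' researcher measures several
    outcomes and declares success if any one of them is significant."""
    group_a = rng.normal(size=(n_outcomes, n))
    group_b = rng.normal(size=(n_outcomes, n))
    p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue
    return p_values.min() < alpha  # report the best-looking outcome

n_sims = 10_000
rate = sum(one_null_study() for _ in range(n_sims)) / n_sims
print(f"False positive rate with 5 outcomes to choose from: {rate:.1%}")
# Expect roughly 1 - 0.95**5 ≈ 23%, far above the nominal 5%.
```

Pre-registration attacks exactly this mechanism: if the single outcome and the single analysis are fixed in advance, the false positive rate falls back toward the nominal 5%.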
Yeah, and I guess I would place... well, I'm not sure blame is the right word, but I think what causes it, one step sort of behind the use of the degrees of freedom, is this very human thing, the sort of king of all biases: confirmation bias, right? I think you go into any research program thinking
you're trying to prove a hypothesis instead of trying to actually test it. And I think that's
where we fail as teachers of the scientific process, right? The idea is you should always be trying to disconfirm your hypothesis. And I don't think that is ever made very explicit when we're teaching science, right? Of course,
we think that, oh, this is the case. But your goal should be to try to disprove your own theory,
right? I mean, I love that quote by Richard Feynman: the first principle is that you must not fool yourself, and you are the easiest person to fool. And I mean, in psychology,
it's very blatant. But you see it even in other places like chemistry and physics.
We read some paper a long time ago where, you know, in chemistry, there was some study where they had to look at a filter paper, and it's like, you're more likely to believe that something is there when you expect to see it there. Right? So all scientists are susceptible to this bias.
I think it's a lot worse.
And I think it comes from just not approaching the process with the sense that we should be trying to disprove what we're doing. So that's why you're always trying to find ways of confirming your hypothesis rather than disconfirming it.
Yeah.
Yeah.
One thing I've found is that it's quite dangerous
to set out with this idea that you have a hypothesis and if it doesn't turn out to be supported, then you've failed in some way.
Yeah. You were wrong. And students often talk about it to me in that framing. And I've realized in my own work, I can often think about it.
With a little bit of effort, you can actually think about it as being kind of neutral about what the outcomes are.
Like, I'm just picking a random study, but recently we've been looking at the effect of gambling problems on health utility, quality of life, that kind of thing. And if the
result is there is no relationship, that's pretty wild, right? That's interesting too. And you've
eliminated one possibility, you've simplified the world a little bit. So yeah, maybe that could be part of it too. It's almost, yeah,
adopting a slightly different frame of mind
where you aren't actually motivated to do that.
And obviously, journals not being motivated to just publish stuff that is counterintuitive, sexy results.
Yeah.
But it's also interesting to think a little bit
beyond these kinds of solutions, right?
Because this is still within the system
that we have in psychology currently, which is where single people do their own research,
they collect their own data, they analyze their own data, which is just the way that we work.
But maybe it would make a lot more sense if we just had some people who say, look, this is my
theory. I'll clearly outline what I would consider support and when I would consider it not supported. Now, other people should go and test this idea.
And then you separate this bias where you want something to be true.
You have to be very clear about it.
And people never do this.
Also not in discussions, right?
Everybody is presenting their own data in any argument that you see.
Nobody is saying, look, this is my hypothesis.
You go and find some data for this and prove me right or wrong.
I mean, it would be helpful.
But and that's what I think I like about having this longer term perspective.
If you think like, what could science be like in 50 years or 100 years?
It might turn more into something like this because it just ends up working better than
the system we have right now.
And it does happen, right?
In physics, for example, right?
There are theoretical physicists and experimental physicists.
If we could separate those two, yeah, that might be a good way forward.
I think, Daniel, you, on one of your courses,
emphasized the importance of when commenting about things
to tie it to the data that you have, right?
So rather than saying the theory was validated or whatever, saying in this data, we find this relationship which accords.
And I find that a very neat thing for increasing detachment from whether things are proven or supported or less supported.
But I have found that that is,
I find it, to be honest, relatively easy because I'm not strongly attached
to any particular theories,
including ones I've worked on for quite a long time.
But I have found with older, more senior academics
that that is not the attitude.
And that it's very much taken that
if the data shows a negative
or just a null relationship,
that this is very bad news, right?
And we have to think of a positive spin
in order to warrant publishing that.
And yeah, that experience has reminded me that in some respects, like the four of us here, or online, you can get into a bubble of sorts where you think open science practices or methodological reform are just generally accepted. But it's not necessarily the case that those values are paid more than lip service, I would say.
But that issue about attachment to theories, I don't know where it's from. Maybe you know the origin, but I heard somebody refer to theories as being like toothbrushes: everybody wants to use their own one and not touch someone else's. And I don't know if there's a particularly good way to avoid attachment to your personal theory and research area when the disciplines currently reward you for becoming known as 'the person', right? The Nobel Prizes are awarded to individuals as well. So do you think there's any solution there, beyond complete reform of the Nobel Prize?
Chris, they already gave us a solution. We need to have theoretical psychologists and empirical psychologists. And the theoretical ones could be as attached as they want to be to their theories, and the empirical ones, we're just smacking them all down like whack-a-mole.
Well, I think definitely that is a big thing to do.
Now, I mean, there are some other things we could consider
and all of these are getting less popular
than having just a distinction like this.
So one thing is you could select certain scientists.
I think that we select people into science who like this kind of thing.
Too often we have scientists who have this commitment, like I want to make my theory
sort of last forever.
And then when I die, my theory will live on.
You know, these kinds of feelings are part of what some scientists are motivated for.
And we could select people not to have these kind of things during a
job interview. Now, this is not popular at all. I don't think we'll ever do something like this.
But of course, not everybody has this. You can look around and we all know some people who are
very strong in this kind of feeling or motivation and some people who are not. So that's one thing.
But the other, maybe a bit better, is just having much more collaboration between these different parties.
So there's, for example, a very nice book by Mitroff on the Apollo space missions.
And he interviews all the scientists who are involved, and they all have to work together.
So that's good.
But they also say how some of these scientists are just extremely attached to their theories,
like they will try to squeeze the last remaining drop of possible evidence for their theory
out of whatever data they're collecting.
They're saying, yeah, it's super annoying, but also good to have some of those people
in science.
I think, yeah, here also you would have to think what would the alternative look like
if we were all being replaced by feelingless robots? Like who would fight for
their theory, right? Which theories get a real shot? None. We would just give up very easily.
So, I mean, this is such an interesting dilemma. On the one hand, you don't want these biases,
they have negative effects. On the other hand, we are people, we need to be motivated for something.
And science is a tough job. Why am I sticking in science for 10 years unless I feel like I might be able to do something?
So it doesn't always have to, you know, find its way into trying to support your own theory.
But these feelings are part of also what makes people a good scientist, I think.
I mean, and I might not have it for theories.
I might have it for other things, like the way we would do science.
And I can get really worked up about this and get into arguments with other people about it.
Yeah, so these affective processes are what motivates us.
It lines up with a topic that has been on my mind,
and I know also the British journalist Helen Lewis
was thinking about writing a book about geniuses, right?
And when we mentioned Feynman,
and there is the recent movie
about alpenheimer right and you definitely have these figures that loom large in science like
einstein and newton and so on who are these towering intellects who make some great breakthrough but
when people delve into their life story it often turns out that there's more complexity there there's a
lot of other people involved and there's often insanity that sometimes comes later you know
Nobel Prize disease and related to the kind of people that we look at there's there's a divide
definitely between the kind of online secular gurus that we look at,
because in many cases, they dream of that. They want to be the Einstein Galileo figure,
and they feel that the institution has kind of not given them their due credit. But on the other hand, I think there is a case to be made that there really are people who
produce Nobel Prize worthy work, who subsequently go on to be charismatic gurus for homeopathy,
or whatever the case might be. So based on what you're saying daniel i'm thinking you know in a kind of
evolutionary framework that we need some proportion of charismatic genius assholes
who make contributions but we we have to be skeptical that just because you do that it
doesn't mean that everything that you've done is good and that you are, you know, like a virtuous person that people should seek to emulate.
Yeah, that does.
It's more of a comment than a question.
No, exactly.
And I think the challenge is, I mean, some things you can't get rid of unless you replace people by robots.
So some things are always going to be there.
And I think the challenge also, what I think philosophers of science would say in the last
decades is create a system which has enough diversity so that you can accommodate people
with those kind of personalities or kind of feelings, but that it doesn't push things
out of balance in our search for reliable truth, basically.
Right?
So that's the challenge, that you have a couple of those voices and they play their role,
but you also counter this with something on the other side that, yeah, balances it out
a bit and doesn't, you know, you don't give those people all the money, for example, or
maybe not at all or whatever, you know?
Yeah.
Yeah.
Before we move off the sociology of science, I got a question.
Recently, I've begun to get a little bit disenchanted myself with the whole publication process, in particular publication reviews and this endless cycle. And it seems to me that, you know, we've come a long way.
This coincides with Matt facing a couple of desk rejections.
I had three desk rejections for a paper.
It cost about $300,000.
Yeah, and this was not a minor piece of research.
It cost $200,000 to do it.
Oh, wow.
It's good.
But anyway, let's put that aside.
It's fine.
Basically, Science and Nature didn't want to publish your study.
Yeah, go on.
These are very mediocre journals.
I never bother with the good journals.
They're too much trouble.
But when you get rejected by the mediocre journals, it stings.
Yeah, that stings.
I agree.
No, no.
So my own feelings aside, like in the olden days,
it was basically a means of communication, yeah,
basically a group email or a letter to multiple people.
And it's so much more than that these days.
Fortunately, I don't have to go for any more promotion
or hiring committees.
I'm going to die in my job, hopefully.
But for most academics, this is not just communicating
to other academics.
These are brownie points. The papers themselves are scored according to the quality of the journal, the number of times they're cited. And it's absolutely crucial, because it's such a competitive professional environment, that you have to have this focus on these metrics. Obviously, at the same time, there's the commercial aspects of publishing, and universities
themselves, at least in Australia, are totally ranked on these complicated metric counting
systems of all the researchers there.
So there is this great big system of funding and of money and of careers and of jobs and promotions. And do you reckon that is pushing, structurally, the communication of scientists with each other in an unhealthy direction?
Yeah, she wants to take that one already. Yeah, Smriti is already opting out of the whole publication system before she even started. So she feels exactly the same.
It's a scam.
The whole thing is a scam.
It is.
It is a scam.
You're completely correct.
Yeah.
And then they're like, oh, but you need to publish
or you need to play this game before you can change it.
I'm like, but then you're part of the game, you know?
Yeah.
Then you're just playing it.
But yeah.
Yeah.
Go ahead, Daniel.
Well, we have a podcast episode that will come out on the peer review process.
And I have to say, it was interesting to talk this through because, you know, it's true.
You just take part in it.
And I think Smriti is definitely more critical about it, maybe than I am even.
But the recording made me think about this in the same way.
I also thought like, it's so, so weird that we do this.
And one thing, because we always dive back into history a little bit: it surprised me how recent certain aspects of our publishing system are, because you just enter it and
you think this is how it was in 1800 something, but it's not at all like this.
Indeed, even in 1930 or something, if you would submit
something to a journal, journals would be like, oh, lovely, we get something to publish. Oh,
that's so nice. Thanks. And then really only after the Second World War, when there's a huge
investment in science, we scale this whole publication process up. There's more money
into science. There are more scientists. They publish more. We get more of these desk rejections
because these journals can't keep publishing all this stuff.
So this system where you are hurt because somebody didn't like your paper, it is relatively recent given that science has been going on for hundreds of years, which also made me think, yeah, it can change.
And it might.
I mean, somebody like Smriti might just opt out of it.
Who knows?
And it is a bit more possible now.
So there's, for example, one cool thing which is developing in some fields, called Peer Community In. Because one thing that is kind of nice is having two or three people look at your paper and give some suggestions for improvement.
That part of the process, I think, is not a bad idea, especially for people who don't have close collaborators or are just starting in a field and just need some feedback from a more experienced researcher in this field.
Like you missed this, you didn't think about this and stuff like this.
So this Peer Community In is a sort of publishing format where we just get together. The four of us could create a Peer Community In for Guru Science.
And everybody who publishes on this topic,
we will just organize the peer review process for them.
Outside of any journal, we say,
okay, here are the reviews.
We can also read them.
You can see what other peers thought of this paper.
They chose to incorporate some of the feedback.
It's done. You put it on a
preprint server, which we have now. It's just a PDF. It gets a DOI. It's stored for a long time.
We're done. You don't have to go through these kind of steps. So yeah, I think it takes some
people who are brave enough to do this. And even after Smriti also yelled, "It's a scam, it's a scam," I was a bit
inspired. I thought like, yeah, it is a bit of a scam. And also I am like, you know, definitely
in a position to do something about it. But I felt maybe I should do a little bit more about it.
Yeah. And I will say, I mean, in a system that works well, it would work the way you were talking about, the way it worked back in the day, where you have people who are doing their research, and when they find something that's worth sharing with the world, then you would go and actually be like, oh, we found something, and this is something worth being out in the world so that it can move science forward. But that's not how it's done nowadays. Right now it's more about getting your publication so you can get your job and you can get your promotion, right? It has shifted from actually sharing truths with the world, or with the scientific community, to things that you do to advance your own career. And that's what really bothers me, I think. I mean, I'm more than willing to admit that maybe I'm just not competent enough to, you know, play well at this game. But at the same time, to me it's like, well, if I found something that was worth sharing, I would be shouting it from the rooftops, right? Then it would make sense to go and be like, I'm going to add something, instead of just throwing a pebble on a pile. Be like, no, this is something that's worth communicating to the broader community. And then it makes sense, right? And then, in that system, even the metrics that we use would be useful, right? Because then, if they actually reflected good quality work and work that's worth reading, then those numbers, those impact factors and all those things, would be useful, right? But now there's just no signal in them, so they're not useful anymore.
So I want to attempt the devil's advocate for...
We love that.
Yeah.
Boom, boom.
So, of course, I completely accept all the points
about the publication metrics
and the horrors that can come with going through peer review. And I've had various horrifying experiences with people with kind of vested interests shooting down papers and stuff, so I'm well aware of all that. And I like the sound of most of the experiments in peer review, open peer review, or what you just talked about, Daniel. But with all that said: so during COVID, the anti-vaccine community,
and also I would say the parapsychology community,
though that's less of a concern,
but there are a lot of kind of pseudoscience,
conspiracy theorists type communities,
some that are genuinely harmful, like anti-HIV researchers.
And peer review, it's not perfect.
There are anti-vax papers that went through,
and there were preprints that went up.
But by and large, a lot of those papers couldn't go through.
Such that if you look at anti-vaccine communities,
they're always complaining that they're being suppressed, right?
And now you can either believe that they're being suppressed
or the position I would take,
because they are able to put their material up as preprints.
And then a lot of people through open peer review kind of processes
highlight how extremely flawed the
papers are and why they should be rejected and not published. And I feel that often when
psychologists and academics are talking about the need to reform, they're not so much focused
on the possibility that those reforms can lend credibility to people who will promote really harmful anti-vaccine misinformation or other, even worse things, race science stuff.
So I'm just curious about that argument: that getting over the low hurdle of peer review is actually a hurdle. Like, a lot of the gurus we cover have very few papers because they can't really deal with getting critical feedback at all. And so, if that's there, it kind of provides just a little hurdle. That would be my main pushback.
Yeah.
I mean, that is what it does.
It's completely true.
And there's value in this.
And even preprint servers,
there's a med archive.
So for medical research.
And for us psychologists,
we were just like,
yeah, let's just create a preprint server,
put stuff online.
And we never have to think about negative consequences or the negative impact of our work because let's be honest,
most of our work has little impact whatsoever. But in the medical community, this was a much
bigger thing. So there they also set up a preprint server, but they have a much more elaborate
screening process before it goes online. So in psychology, if I upload my preprint, it will be online. And there is a post
moderation process. So it will be available immediately. And after a while, somebody takes
a look like, isn't this crap or something? Isn't this weird? But in the medical field,
they do it before, exactly to mitigate these kinds of harms, right? So that makes sense.
Now, overall, I mean, this is a very complex
problem with multiple aspects. For scientists themselves, if we would never let other people
read our papers, it would probably be fine not to have a peer review process because we are the
peers anyway. We can read papers and understand them. And somebody writes this crappy paper,
we understand it. The general public is reading along.
And there are actors that want to co-opt our scientific process to give credibility to some claims that they want to make.
And those are negative possible consequences that you need to prevent in some way.
Now, the peer review process is doing this, but it also feels like it's such a
messy way to do this. Because exactly, I mean, you also can't say, okay, but if it was stopped
at peer review, that is a good thing because Matt just showed us that his genius worthwhile papers
are also stopped in this way. So it's a very noisy selection mechanism, right? Peer review. And I much more
like this post-publication system, exactly like Chris says, where afterwards, if the preprint is
online, a dozen people point out flaws. Now that also has negative things because I think some
people are just willing to invest their entire life in producing bullshit.
And then, you know, the investment of 12 people to point out the mistakes in the work of one paper.
Basically, this one paper is going to keep 12 honest scientists from making a useful contribution to society.
So that's also not nice.
So, yeah.
Yeah.
What to do?
I don't know.
Yeah. I mean, yeah, that's a fair point that it's a hurdle. But I would also say that we wouldn't see as much junk as we do if it was a decent hurdle. I mean, you read stuff that gets published in, like, PNAS and some other journals, and that stuff is just bad. And so what you're saying applies to the stuff that's really out there, right? Stuff like precognition, where you're like, oh, obviously this is junk, or, you know, COVID stuff, just awful, awful stuff, right? That's not making it through. Okay, but a lot of stuff is making it through. It's just the absolute garbage that isn't, right?
Yeah.
And you know, you mentioned Daryl Bem. In case people aren't familiar, it's a paper, Feeling the Future, that claimed over nine experiments that you could kind of apply the stimulus after the experiment and show that it produced an effect. So reversing time, which is quite an extreme claim. And that got through in one of the top-tier journals. So
I'm definitely not saying it's a perfect system. I think, just to follow on what you said, Daniel, and I think it accords with this point: you had an encounter recently with a guy, Alexandros Marinos, who you may or may not know, but he gained an online profile from being a Bret Weinstein superfan. He's since branched out into other avenues, but that was primarily where he came from, promoting ivermectin studies. And he took an interest in the situation with Data Colada.
The fraud case, or the alleged fraud case, just to prevent us also being sued: the alleged fraud of Francesca Gino, the Harvard professor, where Data Colada pointed out that there were problems in the papers.
And my hunch is that, because myself and Stuart Ritchie and some other people that have been unpleasant about Bret and Eric Weinstein's conspiracy promotion were promoting that case, it might have prompted him to adopt the more critical stance, right? Because his position is somewhat skeptical that they've actually detailed real problems and so on. But I mentioned him more as an illustration, because there he produced a thread on the specific errors or indicators of fraudulent data that have been detailed. And he did a kind of amateur analysis, right? But his threads, whether on Twitter or sometimes on Substack, are extremely long.
And from the perspective of somebody
who doesn't really know much statistics
or know much about the processes involved,
it looks very thorough.
And that's how his threads look about ivermectin and so on, or Robert Malone's contribution to the mRNA vaccines. They're very detailed. And you guys did an episode on cargo cult science, right? And we've talked about this kind of thing online as being a kind of cargo cult presentation of scientific rigor. But like you said, to address that, for someone like you, the engagement would end up taking you weeks. It would never end, in the sense that there's never going to be a point where they admit that they've got something fundamentally wrong or change their stance. So in that case, I'm wondering what you both think about the kind of onus on scientists or science communicators to try and combat that. Because on the one hand, I think it's important that people are aware of it and kind of think about how to deal with it. But on the other hand, I completely understand real scientists being like, I don't have time, I've got my own work to do, I don't know how to deal with these people, and kind of, you know, just ignoring that, or blundering in, not realizing who those people are. So, yeah, given your recent encounters, I'm just curious, any thoughts about that?
So I think it's interesting, but I don't think there is a "these people." So I think this is definitely a person who's at the extreme end of a continuum, where they want to be critical about things that most other people agree on. But I had similar encounters with just normal, average scientists also taking a little bit of this critical stance, and it's exactly the same process: relatively limited real engagement. And then, I think in this case, this person actually had invested some time in figuring out what the claims were based on. That makes it a bit more effortful to go in and figure out why they're wrong. So I think the strategy is: if you invest more time into a topic than 90% of other people
have done, you come across as really smart and almost nobody is going to invest time
to prove you wrong, even when you are actually wrong, right?
So it's an interesting thing to see.
But there were other scientists doing more moderate versions of exactly the same thing.
Like, oh, well, there's some good points being raised here against it being fraud.
And you're like, no, those are not good points.
You also don't understand anything about this situation.
So it's a continuum. So there's not a "these people"; this is just a tendency to criticize something based on an incomplete understanding of it.
So I think this happens all the time.
I thought in this case, actually, what I thought was fun, I mean, my summer wasn't the most fun.
I was spending a lot of time in the hospital.
My wife was really sick.
So I really needed some distraction.
And this guy was just one of the ways that I found some distraction as I was sitting there.
You know, you need to get worked up about something that distracts you.
But what I found interesting was following up on this. So there's this initial thread,
which sounds quite convincing. And in the back of people's minds, there's like, oh,
but there was a thing here, right? There was a thing about this fraud case that wasn't so rigorous and solid, and somebody was looking into it. And what I find fun to do is just two
weeks later say, so where's the whole point?
Like, where's the rest? You were making a starting point. Where's the rest? Yeah, no, no, I'm not
working on it yet, but I'll work something out. Okay. And then you get something that's worked
out, which is pretty crappy. And you say, okay, so what about this and this and this? They're like,
okay, okay, yeah, I'll look into it. And then two weeks later, you follow up again. You're like,
so where is it? Where's this thing? And I think this is the problem we have. So people can float around
hundreds of ideas, you know, most of them are crap, some go viral and cause a little bit of
doubt. But nobody really follows up on stuff. So you should actually have some nice overview where
you say, look, this is a claim you made. Here is why you were wrong about these things.
And if you then have a track record of just being wrong a lot, then I can look somebody up like this. But this is, again, a thing that works not just for gurus.
It works just as well for scientists.
Because in science, we also don't have this. Anybody can come up with the newest crazy idea.
And then if you would see, but you have a track record of just coming up with shit most of the time.
Maybe we should use this a little bit as a Bayesian prior to say, okay, I'm not going to take it so seriously yet because you've been wrong so often in the past.
I really need some other people to confirm what you're saying here.
If that happens, fine.
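To make the "track record as a Bayesian prior" idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the numbers, the function name, and the assumption that the confirmation arrives independently of the original claimant. It is just the plain Bayes' rule mechanism being gestured at here, not an actual scoring system anyone uses.

    # Hypothetical sketch: a claimant's track record sets the prior,
    # and independent confirmation updates it via Bayes' rule.
    # All numbers are invented for illustration.

    def posterior_claim_true(prior, p_confirm_if_true, p_confirm_if_false):
        # P(true | confirmation) = P(conf | true) * P(true) / P(conf)
        numerator = p_confirm_if_true * prior
        return numerator / (numerator + p_confirm_if_false * (1 - prior))

    # Someone whose past claims held up only 1 time in 10 gets a low prior.
    prior_from_track_record = 0.10

    # One independent confirmation shifts belief, but skepticism remains.
    print(posterior_claim_true(prior_from_track_record, 0.8, 0.1))  # ~0.47

With those made-up inputs, a single confirmation moves the claim from roughly 10% to roughly 47%: taken more seriously, but still short of "probably true," which matches the "I really need some other people to confirm" stance.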
Now, we don't have this process, I think, because we didn't have these signals or these communication
channels. So we don't have anything set up for these communication channels. We made the peer
review process for the old fashioned science communication thing, right? We had the printed
media. We don't have something like this because, yeah, these communication channels are so new.
So we just need to think of, okay, how does this work? You know, who is checking this kind of stuff and how do we do it?
Yeah.
And I take the point about there being this tendency.
Contrarianism sometimes can be correct.
It's actually valuable.
Like, I find Lee Jussim extremely annoying, but I also think him and people like him are valuable to have in a discipline. So I think there's good contrarianism and bad contrarianism, right? And like you said, the track records can show you which way someone leans. So I would take that point. I think I'm a little bit more sensitive to the fact that there really are these communities
where
kind of anti-scientific principles
are promoted and
held up as valuable.
So I do see that
as a "they."
No, I understand. That's
also true. So there are certain categories,
right? And in the end, maybe we need to draw
some boundaries because I've for a very long time stopped trying to convince everybody about
things, you know, the probability that I'll convince 100% of people that I'm right about
something, I've given up on this goal. So there are people you will never convince. And maybe
it's fine. I mean, and then it becomes a very political game, right? What you
want to do is limit the influence that some people have when you're really sure that they're doing
something wrong. And that is, like you mentioned Lee Jussim, who's a contrarian voice in social
psychology. But I don't think he's wrong very often. And yeah, so there's no reason to prevent
somebody like this from having their influence. Then it's let many flowers bloom.
We see what the end result is.
And there, I think our field is actually overly conservative. We want those contrarian voices, but we are very negative to some contrarian voices, actually.
So we love to ridicule people if it's clear that they're over the edge, but we don't deal very well with the contrarians just within the borders of what is very reasonable.
And we're extremely harsh sometimes about people who want to be contrarians in our field.
Yeah. And Smriti, I had a question for you that might relate to that point Daniel is bringing up. So I noticed on your Twitter profile that you have Heterodox Academy at Berkeley, right? Maybe you don't remember.
No, no, I do remember.
And one of the things that I wanted to bring up with both of you,
and being a good entryway to that is that there is open
science advocates and the traditional somewhat hostile response to those efforts but you also
have divisions within those who recognize the need for methodological reform right and one such
division was highlighted whenever Perspectives on Psychological Science ended up forcing out Klaus Fiedler, the editor. And there, Lee Jussim was involved, because he wrote one of the invited commentaries on a paper that was arguing, I'm not going to do a great job about it, but basically arguing psychology needed to adopt Kendi-style anti-racism.
Yeah.
So that whole issue, though, resulted in a somewhat clear division,
I would say, between European researchers and American researchers,
where there were competing letters bouncing around open letters.
And I wonder about your thoughts on the state of heterodoxy
in kind of open science or academia in America,
and then this potential divide between people in America
and outside of America. I would just say from
the perspective of me and Matt, for example, because neither of us is in America,
a lot of the culture war stuff often, like it's annoying, but it seems like a little bit
distant, right? And because we're not surrounded by it. So I can sympathize with people facing
those issues, but I also feel like it sometimes is taken to encapsulate the whole world.
And there I mean both like social justice concerns and the anti-social,
not anti-social justice,
but like thinking that the social justice stuff is consuming too much attention.
I realize that's like a ton of things to throw out.
You can pick anything that you like
to comment on.
Yeah, I mean, that case was just, I mean, it is sort of very interesting. And I will say, I mean, I'm in the US, but I'm also at Berkeley, which probably sees, like, the worst of this. It comes from here, I think. I will say that it's our fault that all of this stuff is happening, which is true. I mean, a lot of the DEI statements and stuff like that, yeah, we have given birth to all of that. So it's a very interesting place to be. There's a lot of people here that, of course, trust in science and care about science, but also have a lot of social justice concerns. It all comes, I think, from good intentions. But so I've had conversations with people here at Berkeley who are, of course, very open science, very into that stuff. But when you talk about Heterodox Academy, they will say things like, oh, but when I think Heterodox Academy, I think people like Jordan Peterson and the intellectual dark web. They associate it, essentially, when you talk to them, they're like, what it sounds like you're trying to do is create a safe space for racists. And of course, the meaning of what it means to be a racist has now completely changed. Like, if you say things like, we want everybody to be treated equally, that's racist. Or, I just attended a seminar by one of the
authors of the paper In Defense of Merit in Science, right? You would think a very sensible thing, right? We should care about merit before we care about things like what your gender is or what the color of your skin is. But that's a controversial thing, that some people think that that's now a racist thing, right? Or that if you have things like standardized tests that lead to differences in outcomes, that that makes the test racist, going back to sort of psychometrics. So I think there's all this concept creep going on, where
people are trying to do the right thing, but it is becoming,
yeah, like I think they're putting a lot of value on that stuff above and beyond like things like
pursuing truth. Because even if you think there's a problem and you want to solve it,
you first need to figure out what the truth is. And I think, yeah, so that's getting lost. And I,
I mean, Daniel and I have this conversation too, right? Sometimes it feels like it's a lot worse here, but I think it starts bleeding in.
Like, I think the US and also American academia
has just outsized influence on the world, right?
So I think it's something to be mindful of.
And that's one of the reasons I'm part of this
is that we need to be having these conversations
and think like, right?
Like, if you now have people saying that, oh, math is white supremacist and rationality is white supremacist, we can't function, you know, in science if we think like that, right? So it's a problem, and I think it's going to start bleeding into, yeah, everything we do. And it's terrible for science, right?
Yeah. So it's interesting, because in our podcast we've chosen to never talk about current topics, right? So we would never talk about it.
That's a really good idea.
There's some reasons for it. That doesn't mean that we, you know, off the podcast, we don't talk and think about these issues. And I think this situation that emerged, so it really is the perfect, let's say, distinction between things that happen in the US and in the rest of the world.
Because the original paper that was published was about the representation of certain racial minorities in science itself, and that there was a disparity. And, you know,
the paper basically said, it would be good to do something about this, we need to go and, you know,
make sure that there are more people of other groups in science. And the criticism on this,
and I think this is important. So somebody like Klaus Fiedler, who's a German psychologist,
I know him quite well. And I think people in the US don't know him very well. But he's in the most
prestigious position in German academia you can be in as a psychologist, basically. If you tell me like, hey, look, Daniel, you can walk in this room and have a coffee with
Klaus Fiedler or Daniel Kahneman, I would be like, I get to meet Klaus Fiedler.
Oh, that's cool.
Whereas most Americans would probably be like, yeah, you know, you'd pick the other person.
But for us, this is already quite different. So I think that's outside of the context that many Americans realize.
Now, he is also the poster child of, if people do one thing, I will argue against this as a matter of scientific principle. It doesn't matter what my opinion on something is. If I just see most
people go in one direction, I'll go in the other direction and say, that's the direction we need
to go in, because I think in science, somebody needs to take that position. So he is a contrarian,
out of scientific principle. So he writes, or he invites, I guess, or gets a paper as the editor.
Well, this paper is not really good. It makes some weak arguments, which of course, yeah,
every paper can make some weak arguments. It's fine. There's definitely some points to criticize
in the paper.
If you read some of these comments, not everything that's written there is crazy.
But just doing this, and I think he's aware to a certain extent that doing something like
this will piss off people in the US.
But I think he also thinks this is important.
It's important to piss off the people in the US sometimes, because, exactly as Chris says, it feels that they have a little bit of an outsized influence on the rest of the world.
They're not that many Americans.
They are the minority and it's lovely that they have their own problems.
It's exactly like you say, it feels a little bit like, yeah, you have your own problems
there.
It's not that we don't have any of those problems, but we have them in different ways.
You know, it plays out in other ways.
So I think somebody like Klaus Fiedler says, yeah, I am going to piss off some people over
there, but it's fine.
That's what my role is.
We should have a science where this happens every now and then, because if we don't do
this, that is not good for science.
And yeah, I think it's a nice illustration that in this case,
the response was so extreme that he basically had to quit his position as an editor. Not only that,
but actually the whole, so this is the journal Perspectives on Psychological Science. It led to such a problem that the field thought, and by the field, we mean the US people behind the publication, right? They felt like it's too difficult to have a journal like this at this moment. Nobody wanted to be the editor. So it has become so sensitive that, for a moment now, we can't have a journal that publishes perspectives on psychological science, because apparently it's too touchy a topic or something.
Well, those things are really interesting. And I don't really know why we don't
talk about this so much. Because that is a very interesting development, I think.
And definitely this difference in opinion between what the US, what people in the US think,
and what people in the rest of the world think, that is worthy of discussing a little bit more,
maybe not exactly like Klaus Fiedler tried to do it, by the way.
We can definitely have a different way of putting it on the agenda.
But yeah, it's worth discussing, totally.
And it's terrible because it ends up affecting the scientific process.
Because I see it all the time where people are unwilling to criticize certain research
because of the topic that's,
you know, maybe a little sensitive or controversial or the researchers are people of color or
minorities.
And it's like, well, that's not healthy, right?
That's that's a really terrible thing, because that means that we're not doing what we should
be doing, which is criticizing ideas, right?
Because now people are afraid that they're going to get called a racist, like Klaus Fiedler.
Again, you know, some things that he did, maybe he should have done or gone about differently. Like, we could definitely acknowledge that. But to call him, like, a racist is just... But now people are worried about that.
And so we're sort of losing as a whole.
Yeah.
And a situation like this doesn't make it easier for people to do it next time around.
And we were discussing hiring of academic staff a while ago.
So in my university, we try to promote having more women as professors in the university.
We're a technical university.
And just from the past, we didn't have equal numbers of men and women being professors.
And it's still very slow, this process of reaching
a more equal number. So the university board had decided that there would be a new policy where
they would first advertise certain jobs, or maybe all jobs actually for a while, only to women.
So for the first six months, only women could apply. And if you couldn't find a suitable candidate after six months, you could open it up to anyone.
And the idea behind this is maybe worthwhile because we definitely want more role models for our female students. Like, hey, I can also become a professor.
So I'm all on board with that.
Somebody sued the university for this rule being discrimination.
It went to court and court said, this is indeed discrimination.
So you can't do this.
They have changed the rule a little bit.
Now it is only certain departments for certain positions, like the math department, for example, is still entitled to open jobs for a limited
amount of time only to women to promote more women applying to these jobs.
But my department no longer can do this because we were already pretty, pretty fine.
Now, suing the university over something like this is something that you have to do, right?
You have to do this if you want to have criticism of policies,
where you're like, maybe this is not in line with, in this case, it went to the Universal
Court of Human Rights. That's also, that's kind of nice that we have human rights like this and
some objective people who think about this. You can disagree with it, that's fine, but okay,
it's policy. And we were thinking, would this happen in the US, right? If you have a policy
like this, would anybody go out and sue the university for discrimination? I don't think so.
But maybe they should, right? I mean, we need to be able to have a discussion, a critical discussion
about topics like this.
Yeah. I mean, it sort of happens here, but not as blatantly. Here, it's like, women and minorities are encouraged to apply, right? It's things like that, where you get around it so you can't get sued, but it is still done. And I mean, again, the DEI statements, right? That all started, I think, if I'm not wrong, here in the UC system, probably at UC Berkeley, if I'm not mistaken, where they do use that as a first pass. So actually the first thing they look at would be your DEI statements, and that's your first cut, just removing 80% of the pool because their DEI statements are not up to the mark. So they do it in these oblique ways, without actually openly saying that, you know, we are limiting it only to the people who we think are ideologically aligned
with us.
Yeah. I'm wondering if, um, GPT-4 is going to solve this issue. I mean, isn't it pretty easy just to, um, have it write those? All right.
And it is so unfair. I mean, since we're talking about sort of the global perspective, it is so unfair to people who are international students, who don't know all the ins and outs of what's happening in American academia, right? Like, you have to know what they're looking for to be able to give it to them in these statements, because they're not looking for your opinions, right? They're looking to hear what they want to hear. And so if you don't know what's going on, it is so deeply unfair to, you know, minorities, people who don't have... In fact, I mean, I've heard from people here that when they actually score those DEI statements, the people who end up doing really well sometimes are, like, white men, you know, who come from academic backgrounds. It's like, yeah, because they know what to say to get in.
Right, I was saying exactly that, actually,
to a colleague earlier today, which is that, like, I also think I'm one of those people that knows what to write in them. And they're very much not like, oh, you know, I've supervised all of these students from diverse backgrounds, et cetera, worked with these colleagues. Because that's like saying you've got some black friends, right? I'm not racist.
So what is wanted, of course, is the correct language,
the correct kind of theoretical references and allusions.
And it really, you know, it strikes me that this is something
that is going to be challenging for someone that isn't enculturated
in a particular kind of upper middle class background.
And if you're from a working class background
or if you're a middle-aged Indian engineer, say,
who's recently come to the United States,
you are going to be very much disadvantaged
compared to someone like me.
But I'm hopeful that GPT-4 will be available to everyone then.
But then, yeah, what's the point?
Yeah, what's the point?
Yeah, exactly.
Well, that's what Yoel Inbar kind of argued, right? And then almost all of the debate around that did not focus on what his actual argument was, which was that, whatever you think about their merit, there's no evidence that they deliver what proponents say they can deliver. I think he even just doubted whether they, you know, it was just a discussion of, do they actually deliver? I don't think the evidence is even there.
So it is indeed not a good thing if raising the question,
and I think you can raise questions in bad faith,
but this is not a bad faith question, right?
This is just like, are we helping the people we want to help?
You should have criticism in science. And I think it's an interesting distinction in how much
different cultures allow different criticism. I mean, I often joke that because I'm from the
Netherlands, we are so direct, like we will just criticize you for anything, you know,
it is true to a certain extent. There are other cultures like this, but-
That's why people think you're an asshole on Twitter.
That is, I mean, and I am. Sometimes I am an ass on Twitter, because I also don't particularly care. I mean, well, you know, you shouldn't excuse me completely about it. Sometimes I'm just an asshole. I'm just annoyed by people, and I just don't control my annoyance.
But the point of being able to criticize things is something that some people hold as a really core value of academia.
And I've talked to many people who feel that they lost interest in science because it has
stopped being a place where you can voice these criticisms.
Now, there are, again, very big differences between fields.
I think everybody knows the stereotype about economics and economic seminars,
where if you give a presentation, you just need to make a title slide
because people have criticized your idea for 30 minutes before you pass the title slide.
But I have the feeling that in our field, we're really moving away from being able to easily criticize all sorts of things you would reasonably be critical about, I think.
I have to relate an experience that touches on all this, about Amy Cuddy. The dog agrees.
Or he loves Amy Cuddy, yeah. He's so conditioned to her name. There are some weird things going on.
How often do you mention her name around the dog?
Whoever's dog that was, I'm not going to say anything.
Yeah.
So Amy Cuddy was the researcher associated with power posing
who came under fire during the replication crisis. Um, and I was commenting, actually, on, uh, she posted a thing about how terrible the online psychology community was, and it was always kind of tearing down people, right? And at that time, Eiko Fried had just been sued by Jerry Coyne and had just successfully defended it. That in itself is a whole other, like, terrible event. He should never have been sued, to be clear, and I'm glad that he was able to continue. But in any case, after seeing all the support
that had come out for him,
an early career researcher
against a well-established researcher that was suing him,
I just found her characterization very incorrect.
And I made a comment about it, saying something to that effect. And I think it was early in my Twitter career, so I quote tweeted without anticipating that she would respond to that. Um, but respond she did, and then she found a blog where I had just written a little thing, mainly, uh, repeating Simine Vazire's point that
like criticizing someone's research is not bullying. It's a separate thing.
There can be bullying,
but criticizing research has to be allowed.
And she kind of framed that as me being part
of the kind of mob attacking her.
And I went back and forth a little bit
and then subsequently got an email from her
requesting to interview me for her book,
Bullies, Bravehearts, and Bystanders.
And it was quite clear.
As a Braveheart?
Yeah, I don't think I was going to be a Braveheart.
And I just remember, like, this was a number of years ago, but the sign-off on the email had her university affiliation, but also how many millions of people her talks have been seen by, or books published. And the tone of the email was very kind of accusatory. And I was thinking at that time that she presented it as the power differential being me attacking a woman in science. But the way the power differential looked to me, it was this extremely credentialed researcher who could accuse me in a book of being a sexist bully.
And so I declined the kind invitation to contribute. And I will say, I also got DMs from Simine and some other people
who had noticed that, just to say messages of support,
which was very nice, and I appreciate that.
But that case, if you take it purely on the kind of standard identity issues, it could very easily be slotted in as male academic attacking female academic and dismissing her rigor, right?
But that wasn't anything to do with it.
Her gender was no concern of it.
So I realize this is an indulgent example, but I just mentioned it to say that that definitely applies. And I'm pretty argumentative and stuff, so it didn't stop me from continuing on. But I wouldn't blame someone else, especially someone in America who might be going for positions or whatever, for basically being like, I'm not going to talk about any of that, I'm just going to remove myself. Because that could do very real damage to career prospects or something like that, if you become Lee Jussim before you have tenure.
And it's definitely an interesting case, because I think it is a good example of where we want to draw the line between criticizing research, like the way that we want to criticize research in academia.
And so there's this book, which I like, I think by Jon Ronson, So You've Been Publicly Shamed. And it really nicely illustrates how online things can really spiral out of control
and you can become like, you know, everybody can fall over you for some reason.
And it feels, if you are the person experiencing this, it feels like it's horrible.
It's horrible.
It will damage you for a very, very long time,
even though everybody else has moved on after the one week that they were publicly shaming you. So in this case, this is actually what sort of happened. I think
actually, the group of people who used Amy Cuddy's power posing as an example of problematic research, I think it's very small. Actually, I think it is basically Andrew Gelman on his blog, who used to, for like years, use this as the example of sort of shoddy research, along with the research on himmicanes, like the names of hurricanes. He had a couple of those examples. He just used them for years. I think it's actually mainly there that
this has happened. If you look around, not a lot of other people have used this repeatedly. But
regardless, at a certain moment, people have associations. And it sucks if you
become the association people have with research that is probably not reliable, even though there's
a ton of other people who've done this and a ton of other people who have been criticized. I think
in our field, if you know somebody like John Bargh, I don't think this is a person who really
came away easily from this whole
replication crisis, and was also criticized a lot. The only difference is he didn't push back; you know, you just don't hear from some people, he was just much more quiet than in this case. So,
but where the line is supposed to be is an interesting one. And I think if you had perfect
control over a system, you would say, people,
let's move on from this one example. There are many other examples we could use. Can we just stop using power posing as the example here? But it doesn't happen. At a certain moment,
you have to keep talking about it, which is why we are talking about it now. So it is just this
reinforcing thing, right? Again and again, you can never get rid of it, even though it's regrettable. But there is stuff to criticize in this work, of course. And in this case, I've been involved, for example, in the peer review process. And I think that people from all sides in this discussion haven't been perfectly objective in having a discussion about this topic. So, yeah, how we criticize people is just not such an easy thing to do well
so that everybody is happy
with how the criticism has happened, of course.
And I would add, though, that Dana Carney, the co-author on one of the original power posing papers, wrote on her blog, I think, just a Word document, talking about some of the researcher degrees of freedom and problematic research practices that they applied. But she was very open about it and basically said she didn't have confidence in the effect. And she wasn't condemned for that in any way, shape, or form; she was completely celebrated as, like, an example of good scientific virtue. So I don't think
it is the case that people are always just looking for any excuse to tear people down, like within
the open science community, though it's sometimes presented that way. Yeah. And Dana was actually
the first author on a lot of that work, Dana Carney, who's also here at Berkeley, actually.
And yeah, her response was a lot more, right? Like, I don't trust that research anymore. Much more balanced.
But I will say here, I mean, there's two things to mention here, right? Like, the one is, especially in the American academic system, everything is now about power differentials, right? And it doesn't matter that she's, like, a professor, you know, at this big university; you're a white male, right, and she's a woman, and so there's a power differential. Even though, right, that's one thing. And I do think there is a gender difference in the ways people communicate, right? And I think that's where the heterodox stuff comes in, but also the whole broken science thing, right? That schism comes because of the differences, right? Because, and I am pretty argumentative too, maybe that's why I'm, you know, okay with these conversations, because I will get into it with people. But I do think that women are sort of socialized to be a lot more, you know, agreeable. And so I think there's a lot of people that think that if you communicate criticism a certain way, that it's bullying, right? I think Lee Jussim has also been accused on Twitter just for asking somebody to give evidence for what they're saying. And that's bullying just because, right, of the power differentials, quote unquote.
And so I think it's...
Lee Jussim has asked me to give evidence. He does that to everybody. He goes after everybody.
And that's the thing, right?
And as he should, right?
As is right as a scholar
and as it is your responsibility as a scholar.
Like if you're saying something,
you should be able to back it up, right?
So I think that creates the whole sort of broken science thing, where if you're criticizing people and you're not careful... And again, nobody should be an asshole. I think we can all agree, right, that you should not be an asshole to people; we should be nice about it. But of course, yeah, the communication. But it does also go back to the thing we were talking about, sort of, um, people having their own theories and their own ideas, and you do have to sell it. It's almost like your brand, right? And I think what we should be emphasizing more is that you should be able to separate yourself from your ideas. So people like, right, Amy Cuddy: if that wasn't such a big part of who you are, and it wasn't your brand, then you could think of it as, this is an idea that I put out in the world, and maybe it's not replicable. And then, if you had more separation of it from who you are, then you would not take it as a personal attack.
And I think that should be the case for everybody, right?
If I put forth a belief,
I reserve the right to change my mind,
which we should be doing as scientists anyway, right?
All our beliefs are provisional.
Like, anything I say, even on this podcast, I will deny all of it, you know, in a week.
So you reserve the right to change your mind, and that's a good thing, right? So if somebody criticizes what you're saying, I think people are so quick to take that as a personal attack, instead of thinking of it as, oh, I'm separate from my ideas, and if somebody attacks my ideas, I should be able to evaluate those on the merit of the argument that's being made, instead of thinking that I'm being attacked or bullied. And I think, yeah, that is hard to do, right? We're human; I think that can be hard, especially if the other person is being an asshole. It's so easy to be like, well, that's just an asshole. Right? So I think it's a complicated, yeah, dynamic that, yeah, I'm not sure how
to deal with. And what doesn't help is that sometimes people
are just, well, whether you want to call it a bully or an asshole, but people are just being
an asshole. Like, you know, I mean, that happens. And I think there's, of course, a correlation
between people who are willing to voice their opinions about things, but also people who have
an affective response that is stronger than their desire to be liked by fellow academics. I mean, that's
basically what's driving some people, right? You're, you're, you want something, you're like,
you don't want something to happen. You're upset about a topic. Let's say bad science.
There've been people complaining about this for 20 years. They keep seeing it. They get so annoyed
by this. They care more about this than being liked by other people. But those people can
also be jerks. And I mean, I am one of these people, in a sense. Now, to be clear, like if I
die, I hope that a lot of people are going to stand up and say, so Daniel was really kind to me
during this period of my life for this, right? I really hope this is going to happen. And I'm
pretty sure that there's a very sizable group of people who will say this. But there are things that you see online in my behavior where I am just
really annoyed and frustrated. I should close my browser window and not let my personal annoyance
from the last decade carry into this one message that I'm sending somebody who's not thinking about
all this stuff. But I do. That's not a good thing. If I was a perfect human being, I would draw a clean distinction between what I want and what other people do. You know, I would be more objective about this. But I don't manage. I'm always impressed by people who seem to manage to do this very well. I can't. It's a personal flaw. But yeah, you know, I hope to
compensate it by doing some good stuff for people in other places.
And sometimes I slip up.
And if you call me out for it, I hopefully will also be good enough to say I crossed
the line.
I definitely feel often like, okay, this was too harsh.
Sorry.
And I will say it sometimes.
But there's definitely a correlation there.
I'm more likely to say something that is just a little bit more nasty or because I'm annoyed or frustrated.
It is true.
You should try to emulate me, Daniel.
I stay very calm and non-aggressive online.
That's how to do it.
And I see you shaking there, wanting to throw your oar in there.
No, I'm fine.
I'm fine.
I'll move us along, away from these potentially dangerous third-rail topics, race and gender and sexuality and power dynamics. So let's talk about a group that everyone's prejudiced against, apart from the Dutch: Bayesians.
Why do you hate them so much, Daniel?
So let me just say that some of the most fun feedback I get on my course that's online,
this open course on improving your statistical inferences, it has a lecture on Bayesian
statistics.
And sometimes people say, oh, I was really so interested in Bayesian statistics after doing this course. And people recently invited
me, saying, hey, we want to get some training in Bayesian statistics.
Do you want to come over and give it? So I do think, hopefully, that in my teaching, at
least, I'm objective enough about it. So, well, maybe it's a bit of a contrarian thing to want to just defend these poor little p-values against what I think is a sort of push towards other ways of analyzing data.
One reason that I think this is not a good idea is very principled.
So if you are a subjective Bayesian, why I don't like subjective Bayesians is because it violates my philosophy of science.
And I think it is very much part of wanting a science where your personal beliefs don't play a big role.
So if you analyze claims, so of course, we all have our beliefs, and you can come up with any hypothesis you want.
You can study anything you believe to be valuable.
But when we're evaluating claims, I just don't like subjective beliefs to be part of this.
And in that sense, I'm not saying anything that Karl Popper wouldn't say, although he's
also not as popular anymore as he used to be.
So I'm just exactly in line with this.
So that's one component.
And the other component is that, for me, I think a lot of people are just mindlessly switching from one mode of statistics they don't understand very well to another mode of statistics that's arguably more difficult to do well and that they understand even less.
I wonder, and maybe you've seen this as well, whether Bayesian thinking is like a buzzword in heterodox spaces. I'm not talking specifically about Heterodox Academy, I just mean heterodox spaces. I often see people reference, I'm thinking Bayesian about this. And they'll produce, in the worst case, they produced a Bayesian equation for the probability of lab leak, which had, just like you said, all subjective values, like let M equal my assessment that it's likely. And yeah, it struck me that that has allowed people to formalize inserting just their intuition as their priors. Something that sounds statistically complex. And I know that we all have friends who are Bayesians here, good Bayesian people, we appreciate that.
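To make concrete how much work the prior does in that kind of back-of-the-envelope exercise, here is a minimal sketch in Python. The likelihood values are pure invention for illustration; the point is that with weak evidence, the posterior mostly echoes whatever subjective prior you walked in with.

```python
# Minimal sketch of Bayes' rule for a binary hypothesis H vs not-H.
# All numbers are invented for illustration: with weak evidence,
# the posterior mostly just reflects the subjective prior.

def posterior(prior_h: float, lik_e_given_h: float, lik_e_given_not_h: float) -> float:
    """P(H | E) via Bayes' rule for a two-hypothesis problem."""
    joint_h = prior_h * lik_e_given_h
    joint_not_h = (1 - prior_h) * lik_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

weak_evidence = dict(lik_e_given_h=0.6, lik_e_given_not_h=0.4)
print(posterior(0.10, **weak_evidence))  # skeptic's prior  -> ~0.14
print(posterior(0.90, **weak_evidence))  # believer's prior -> ~0.93
```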
I don't make friends with Bayesians, sorry.
How do you feel about them, or the issue?
Again, I don't hang out with Bayesians, no thank you. But well, the thing is, I mean, I've always intuitively agreed with Daniel that there's something that just feels off about it, right? I mean, on a technical side, there are people who make the argument that once your prior goes in, you're multiplying your prior, you know, with your data, which is like making up data, right? You're just making up, oh, I thought something, and then you're using it, right?
So it's almost like faking data.
And a lot of the priors are so uninformative. Like, what's the point? What is the point? They're totally uninformative. I just want to have this Bayesianism. It's just like faking data. I just want to make it up.
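There is a precise sense behind that "faking data" intuition: in a conjugate Beta-Binomial model, the prior's two parameters behave exactly like previously observed successes and failures, i.e., like pseudo-data added to the real sample. A minimal sketch with invented numbers:

```python
# In the conjugate Beta-Binomial model, a Beta(a, b) prior updated on
# k successes in n trials gives a Beta(a + k, b + n - k) posterior,
# so the prior acts exactly like a + b pseudo-observations.

from scipy import stats

a, b = 30, 10   # informative prior: like 30 "successes" and 10 "failures" never observed
k, n = 4, 20    # actual data: 4 successes in 20 trials

print(stats.beta(a + k, b + n - k).mean())   # ~0.57, dragged up by the prior
print(stats.beta(1 + k, 1 + n - k).mean())   # flat Beta(1, 1) prior -> ~0.23
```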
Even on a theoretical level, you'll hear people arguing for Bayesianism say, oh, but it's how babies think, right? That's how babies make sense of the world. I'm like, really? That's how we want to do science? It takes us back to how babies make sense of things, like, come on. It just does not make any sense, on any level. And one of the best arguments I've heard against Bayesian thinking, sort of Bayesian epistemology, recently was on Naval Ravikant's podcast, if you guys have heard of that. Naval, um, maybe I'll share the link. And I won't be able to recreate that argument really well, but it's like, in Bayesian thinking, right, Newton's theory kept on gaining more and more support. Each time you collected new evidence for it, your prior should have kept on getting stronger and stronger until the very day that it was disconfirmed, which meant that the day before it was disconfirmed, you should have had the most reason to believe that it was true. Which, when you think about it, that's not a good epistemology to have.
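That argument can be made concrete with a toy sequential update, using invented likelihood ratios: every confirming observation multiplies the odds in the theory's favour, so confidence is at its peak on the day before the refutation lands.

```python
# Toy sequential Bayesian updating: ten confirmations each double the
# odds in favour of the theory, then one decisive disconfirmation
# arrives. P(theory) is highest the day before the refutation.
# All likelihood ratios are invented for illustration.

odds = 1.0                          # prior odds of 1, i.e. P = 0.5
for day in range(1, 11):
    odds *= 2.0                     # a confirming observation
    print(day, round(odds / (1 + odds), 4))

odds *= 1e-6                        # a decisive disconfirmation
print("after refutation:", round(odds / (1 + odds), 6))
```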
And yeah, it's always sort of intuitively just been such a weird thing. But the one thing that really bothers me about Bayesians is they use the term Bayesian when they're talking about the analyses as if it's something amazing, right? I mean, in psychometrics and stuff, we use it sometimes to create distributions or get some estimations, but they throw it around as if it's meant to be something very rigorous, and that's what bothers me.
Yeah, God, they're the worst.
When people invoke Bayesianism, you know, like as you've highlighted in your book, Daniel, there are occasions when it depends on the question you're asking, like what is the appropriate method to apply. But I have noticed, as we pay more attention to the secular guru space and the appeal of contrarian perspectives, that academics do this too. It's a continuum, because I remember, and it's a while ago, so this feels like it'll be out of the controversy sphere to mention, there was a paper with some title like why pre-registration is useless or doesn't do anything.
Oh yeah, right, right, right.
And it was only two pages. It was like a
very short paper as well.
But that
paper was written
intentionally, provocatively.
Right?
And obviously, there's
tons of metrics that show that it
is useful.
And I think the same thing applies in the case of abandoning p-values or that kind of thing.
When they're used appropriately, almost everyone agrees
there's not that much of an issue with using them, right?
But if you take a very strong stance, it kind of generates more attention,
and people like being that kind of figure
in an academic niche.
Like peacocking, like pickup artists.
Yeah.
That's a comparison.
So now we've compared Bayesians to toddlers.
I have a funny story about Bayesianism. Well, I think it's funny. Because after my first postdoc,
I applied for a job, and my background was in psychology, and even though I'd gotten into applied maths and stuff,
I'd only learned frequentist statistics.
I'd barely heard the word Bayesianism.
And I applied for this job at CSIRO,
Australia's top government research organization.
I was interviewed by the panel, one of whom was a guy
called Bill Venables, who's part of the R core team.
And he asked me about where I stood on Bayesianism
during the job interview.
And I couldn't just say, I don't know what that is.
So I bullshitted for about 10 minutes.
I just spoke weasel words, vague language, you name it.
I think I got the job. So what can I say? I don't know to this day.
Oh, wow.
Maybe they thought, this guy's an idiot, but we'll give him the job anyway. Who knows? Or maybe I fooled them. To this day, I do not know.
Yeah.
But one thing I really think is interesting. So we should be clear, first of all, Bayesians,
most of them who use it do a very good job.
Some of them are decent people.
Also, but maybe more importantly, it hardly matters.
In practice, you've done everything.
You've come up with your research question.
You've developed your measures.
You designed your study.
And it's really only at the very end that you're going to put this
number on it or that number on it. And those numbers, regardless of which ones you compute,
will point in the same direction most of the time. And I mean, 95-plus percent of the time,
you'll be led to the same statistical inference. So in a way, it feels like a nice example of how a scientific field can get worked up about a topic that doesn't matter as much as measurement, for example.
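That directional-agreement point is easy to illustrate, though the 95-plus percent figure is Daniel's, not something this sketch establishes. For a normal mean with a deliberately weak conjugate prior, a one-sided frequentist test and the Bayesian posterior probability nearly always reach the same directional call:

```python
# Sketch: frequentist one-sided z-test (mu > 0) versus a Bayesian
# decision under a weak conjugate Normal(0, 10^2) prior. The two
# procedures almost always agree on the direction of the inference.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma, n, prior_sd, trials = 1.0, 30, 10.0, 1000
agree = 0
for _ in range(trials):
    x = rng.normal(rng.normal(0, 0.5), sigma, size=n)
    se = sigma / np.sqrt(n)
    freq_sig = x.mean() / se > stats.norm.ppf(0.95)       # p < .05, one-sided
    post_var = 1 / (1 / prior_sd**2 + n / sigma**2)       # conjugate update
    post_mean = post_var * n * x.mean() / sigma**2
    bayes_sig = stats.norm.sf(0, post_mean, np.sqrt(post_var)) > 0.95
    agree += freq_sig == bayes_sig
print(agree / trials)   # typically well above 0.95
```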
If we don't pay attention to measurement at all, we mess it up.
Well, at that moment, you can basically stop.
You don't even have to collect the data, let alone analyze it, because what are you doing anyway? But now all young people feel that this Bayesian versus frequentist
thing is a thing they should spend cognitive resources on. People in the field will even be
crazy enough to say, oh, but if we only switched to Bayesian statistics, all our woes would be over.
And that part of it is what makes me so annoyed. If you want to be a Bayesian and just be quiet about
it and do your thing, fine, right? But this other part is what makes me speak out and say, well,
how about we just use p-values? That's what most people are trained in. It's probably the most
efficient way to improve things. And then stop thinking about it and focus on all these other
components that are more important. Yeah, it is like that cargo called statistics thing right where it's they're not even using it properly most of the time it's
like our prior is that our coefficients are distributed normally with and it's like and why
do you think that is any we know we just put a prior and pretend like it means something when
it really doesn't so it's yeah i feel like you guys sorry but you hinted at it whereas i think both you guys
would agree that like measurement unsexy uncontroversial measurement is is probably
the most important thing and absolutely i'm always on to you about it aren't i it's um
Yeah, and you're right that these things are a big distraction. You want to talk about Rasch now? You finally made it so that you can really talk
about Rasch modeling now.
We made it.
I'm going to shame Chris with some of these measures.
I'm going to tell on you, Chris. It's scandalous,
some of the measures that you use.
I inherit measures from what
people tell me to do.
I'm a mere cog in their machine.
But
I feel like, to some extent,
this issue is like when people use...
There are good reasons to talk about a Gaussian distribution,
but there are times when I feel like people say that
because they want to sound more complex than saying normal.
I remember vividly discovering
the Gaussian distribution,
like, isn't that the normal distribution?
Oh, it's the same, right?
Okay, so, yeah.
But it does feel that there's... I think everybody, especially anyone with experience in quantitative social science research, which most of it should be, knows that when you've done some analysis and you want to find out what is the statistically correct thing to do in some edge-case scenario, you quickly discover statisticians have argued both sides, and a third statistician has said, no, they're all wrong. And in most cases, people are selecting the citation. And Daniel, I'm sure your papers have been used this way sometimes, when people just cite you as saying, oh, you can do this for equivalence testing, Lakens says that will do it, and that's it, right? They just want the justification. Because in many cases it involves a lot of effort, or the actual statistics might end up being too esoteric. So the issues that we can focus on,
like having better measurement,
having appropriate sample sizes,
and claims that match the quality of the data,
are much more tractable, I feel,
than expecting that everybody
will dramatically upgrade their statistics.
It's not that you shouldn't invest in your statistical analysis, but just that there's
a lot of other things that even if we had perfect statistical analysis, it wouldn't
solve.
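Since equivalence testing keeps getting cited that way, here is a bare-bones sketch of the two one-sided tests (TOST) logic the citation usually stands in for. This is only the general idea, not Daniel's exact procedure, and the bounds and data are invented:

```python
# TOST: declare a mean "equivalent to zero" only if it is shown to be
# both above a lower bound and below an upper bound, each at level alpha.

import numpy as np
from scipy import stats

def tost_one_sample(x, low, high, alpha=0.05):
    """Two one-sided t-tests: is the mean of x inside (low, high)?"""
    n, m, se = len(x), np.mean(x), stats.sem(x)
    p_lower = stats.t.sf((m - low) / se, df=n - 1)    # H0: mean <= low
    p_upper = stats.t.cdf((m - high) / se, df=n - 1)  # H0: mean >= high
    p = max(p_lower, p_upper)                         # overall TOST p-value
    return p, p < alpha

# Invented example: 200 observations consistent with a negligible effect
rng = np.random.default_rng(7)
print(tost_one_sample(rng.normal(0.0, 1.0, 200), low=-0.3, high=0.3))
```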
I agree.
And I spent most of the last decade actually trying to get people to improve their statistics,
right?
But I still agree that
actually what I am doing is not the most important part of the research process to improve. It is in
a way a little bit sad that we care so much about this last point. I don't know why. Maybe it has
something to do with, as Smriti sometimes mentions, our fondness for anything math-related that can make us look really smart.
And maybe that's a part of it. So if you just dive into the statistics part a little bit,
and you do Bayesian statistics, then you figure out, you know, you can show that you're really
smart. And that's much more difficult to do with a good theoretical framework or with a good
measurement development or something, you know, maybe that's part of it. I don't know. But it feels that we're really focusing on, yeah, a thing which, sure, you can mess it
up.
You shouldn't.
It's relatively easy not to mess it up so badly that it matters.
Now, let's leave it at that and go back to these other parts, I would say.
Yeah.
Matt, there might be a self-serving point that I'm inserting here,
but we have this tongue-in-cheek thing that we call the gurometer,
where we rate the gurus on these 10 features which we have found recurrent amongst them.
I think this is a very, very important measurement tool
that is clearly extremely valid.
It's creating the field of gurulogy out of nothing.
I've already heard some gurus say like,
hey, I'm not scoring very well on these guys' gurometer.
Well, how can I increase my scores on it?
I've already heard people want to improve their metrics
in guru space.
Are we going to mention the cultural appropriation
of the term guru?
Is that ever mentioned on this podcast?
That has been mentioned, but only in emails.
And mainly the sitar music at the start of the podcast.
That I kind of appreciate.
That actually sounds kind of nice.
But unfortunately, I mean, yeah.
I mean, the term guru has like a very negative connotation in the Western context, but we still use it very respectfully.
Yeah, stats guru, that's a positive connotation.
But one thing that I noticed is, of the two things we get: on the one hand, we're accused of scientism because we're putting numbers on it, right? And like, oh, you guys are pretending it's a scientific instrument. Even if we have a big red flashy thing saying this is not a validated scale or that kind of thing.
But on the flip side, I saw people say, oh, if you publish about this, then you can do this.
But actually, I think myself and Matt are probably, amongst almost all academics, the people that have listened to the most secular guru content. And these things that we've identified, I would bet my life savings on them being recurrent features that you find in this set that we're talking about. But people seem to think that if we publish a paper, which we actually do intend to do, it somehow means that it's more, you know, true or valid. So you have this weird thing where, and I think Matt and I feel it as well, having an academic paper with our names attached to it and this concept of secular gurus makes it different. Even though we have a podcast with like three years of content on this topic, which is probably of more use and heard by more people than an academic paper would be.
Maybe it gives it more credibility
because it will go through
the wonderful peer review process.
Yes.
They're going to take down our 10 factors
to eight or some horrible number.
That's what they're going to do.
We did it in paper form
so it can work towards my h-index.
That's the main thing.
But I think it is really interesting, because it does limit the way that we think our contributions to science work out. And I think definitely, like, a podcast... I mean, so I was thinking back about last year, basically what I've done. And I didn't publish that much this year, for all sorts of reasons. But I was thinking, what did I do that I really enjoyed? So I worked a lot on an open textbook. And there's a lot of material in there. Nobody will count it. I don't even think it is on my resume currently. I didn't put it on my resume somewhere, I don't know where exactly, because it's self-published, whatever. And the other thing is the podcast, which has a ton of, I think, interesting information, right? And it's true that we don't consider those outputs. Even in our little startup podcast, we're just around episode 20, I think, but even there, the number of listeners is larger than, I think, an average paper's readership. Well, maybe an average, I don't know. I can't complain about people reading my papers. But nevertheless, you know, it gets a lot of attention compared to some other academic output.
Yeah, it's difficult to track, but it definitely plays into this point of the publishing system being a little bit of a scam, and we just do it to get a number, you know, like get the respected output.
Whereas maybe the intellectual contributions we make in other ways are,
yeah,
at least as impactful.
And there's also, I mean, we just have an episode coming out called Quantifaux, F-A-U-X, like quantification, which I think you guys will enjoy. And it's really touching on this idea that once we start putting numbers on things, suddenly it seems like they're actually meaningful, right?
That it sort of means something scientific and more rigorous than if we're just sitting here talking about these things, right? And people in psychology also talk about the physics envy that we suffer from, right? We want to put numbers on things and quantify things, and we value quantitative research a lot more than qualitative research. And it all goes into this idea of, yeah, if you can just throw numbers around and do some stats on it, that makes it somehow more scientific.
Yeah. I'm just about
to do an economic costing, as it happens, of gambling impacts for Victoria and New Zealand. And that's basically it: a lot of the impacts are qualitative or subjective, but we have to put these numbers on them, because ultimately, you know, when the people are sitting in rooms, they can count the money that's coming in as gambling revenue; they need something to count over here to weigh it up.
Yeah. But a final question for you both, and I might start with you, Smriti.
You know, the podcasting thing is a good lead-in, because we want to know whether you're optimistic or pessimistic, both about the state of podcasting as a medium of communication. There's obviously wonderful podcasts like yours and ours that are contributing, making the world a better place. Then, on the other hand, there is the Dark Horse podcast. Bless them. And also academia generally. Smriti, you're going a bit of an unorthodox route, perhaps, in terms of how you're approaching it. Heterodox? Heterodox, yeah, you're in Heterodox Academy. You're not playing the game and maximizing your h-index. So do you feel good or bad
about the future there?
Oh, that's a big question. Um, well, about podcasting, I mean, I definitely feel good. Broadly speaking, I think it does more good than it does harm. There's lots of good stuff out there. And I get to talk to wonderful, cool people like you, right? I met Daniel through my first podcast, right? So it's just such a great way to listen to smart, engaging people having good conversations. It's like being at a water cooler and just listening to wonderful people talk. And I also think there's some value in hearing different perspectives when you can hear them rather than just read them; there's a bit more of a connection. So podcasting in general, yeah, it's been wonderful for me. I sometimes think I should just give up everything and become a serial podcaster, start two or three more of them.
Academia, I'm honestly not sure. The thing is, if I stick around, I'm trying to figure out if I can help make things better, to move things in a good direction. And if the answer to that is yes, then I would like to stay, because I really don't see myself doing anything else. Like, I know I belong in a classroom. That much I know. I belong in a classroom as a student, as a teacher, as a researcher, ideally as all three. But the way things are going, at least here, things are not looking that great. And at this point I'm thinking about, where's the room for me to help make things better? And if I can find that place, then I would like to stay. And if not, then I will take Paul Meehl's advice and make an honest living selling shoes. That's my backup plan. But yeah, so, yeah, the answer is I don't know.
Yeah, yeah.
Daniel, how about you?
Well, I think podcasting has basically replaced the science paper of the 20s, the 1920s. Because you should read those science papers. It's just like, well, so I ran into Matt in the corridor, and we were both chatting about topic X, and I just wanted to share with all of you some thoughts we had, which is about this and this and this. I mean, really, those papers are just letters, often communication on a much more informal level. And we've lost that throughout science. So I think that's why we had blogs; maybe a decade ago they were pretty big. And now podcasts, I think, because they replace this informal mode of communication, which many people appreciate.
So podcast, yes, very positive.
About science, it really depends on whether Smriti is going to stay in science or not.
That's basically it.
No, but for a good reason, actually, because I think people with that motivation who say like, okay, can I make things better?
And I think that is really the killer question that we have to confront ourselves with.
Can we make things better with our science?
Or are we just doing it to get the papers in and stuff?
And I feel that the field needs to have this slightly uncomfortable conversation.
What part of the stuff we do is useful and makes things better?
And what is the stuff that we're doing?
Because I don't know, somebody is just telling us to do this kind of stuff, but we all feel
that this is not it.
And if we have that conversation and we're able to resolve it in some way, then I think people like Smriti will stay.
And I think this is really, really important because sometimes I worry that the most talented
people in our field are not motivated.
That's my drive to do what I do.
I sometimes feel the people that have the most talent for science take a peek, they look in at what we do, and they're like, nah, no, I'm going to go the NGO route.
I'm going to work for the government or I'm doing this other thing.
I'll feel I have more impact there.
If we lose those people, we will lose out.
But if we manage to think about, okay, how do we make sure that science is motivating
for people who want to make the world a better place, then we can go in the right direction.
And I would say, just from my own perspective, that the kind of material that you guys put out and the attitudes that you present, setting aside the issue of the Dutch culture, I think it is a really good illustration of the proper attitude to science, and it's nice to hear it discussed.
And one of the things that I think is worth keeping an awareness of is the fact that both of you, and particularly you, Smriti, are ambivalent, or thinking carefully about whether you can make a positive impact, that kind of thing.
There's a lot of people that have a lot less ability
and a lot less thoughtfulness,
and they're not at all worried about that.
So we need more people who feel ambivalent,
are a bit annoyed about things,
and have a desire to see things get better.
So I think you're both doing the Lord's work, so to speak. And Daniel, personally as well, even without all your h-index and publications, the MOOC that you have, it helped me, and it remains something that I force all undergraduates to at least hear about. I don't know if they take it, but I think that will definitely have a lasting impact.
So, yeah, your podcast is great, and we heartily recommend it, and appreciate you spending so long to be tortured by us.
Thanks so much.
Thanks so much.
Wonderful.
Thank you for having us.
Thanks guys.
I'm just disappointed we didn't get to item response theory. But another day, another day, another podcast.
You can come on our podcast.
Yeah, I'm taking that as a promise. All right, I'm going to hold you to that. You need to provide your private numbers after this so he can contact you for follow-up chats.
But yeah, thank you both.
Yeah, thanks so much.
It was a lot of fun.
Thanks. It was great.
Bong.
So that's it finished, Matt.
We've sorted out all the problems in academia.
It's all done.
We finished the interview.
It's time to read the review of reviews.
Well, that's right. You know, I normally like to get your feedback about which specific part of the interview you liked the best, but I'll pass on that. I think my own contributions were my favorite. They were pretty good.
They were pretty good this time. I thought so as well.
Daniel and Smriti, okay. They were good, but I was on fire.
I really impressed myself.
Yeah.
Well, review of reviews.
Well, this week we don't have that many.
I haven't canvassed for reviews recently, Matt,
so people need to step up their game.
I asked GPT-4 about our podcast, and I'm not going to talk about AI again, this is just a thing, because the reason I asked is not vanity. It's a good little litmus test to check how up-to-date and how comprehensive it is, by the extent to which it knows about our insignificant existence.
Yeah, super niche and quite recent kind of web presence. It's just, I don't know, for me it's a helpful test. I presume you're turning on the browsing mode?
Yes, it's now enabled by default. I'd probably want to try it again with it not browsing, so I just see what's in its brain. But I couldn't be bothered doing that, because the funny thing was, I asked what people thought about it, about the categories, and it found some reviews. And it said there's mixed opinions: some people say that it's good, an introduction to serious topics delivered in a light-hearted way; other people, and then it said a lot of nasty things. And I was like, fake news, Matt, we are 4.4 out of 5 stars on Apple. So, you know, don't do fake balance, ChatGPT.
Yeah, about six months ago it thought we were Conspirituality. It kind of had the two conflated. It was like, Decoding the Gurus is a podcast about secular gurus hosted by Matthew Remski.
I've got it.
I'm going to read out ChatGPT's synthesis of the reviews.
Opinions on the contrasting personalities and styles of the co-hosts
of Decoding the Gurus, Chris and Matt, seem to vary among listeners.
Some appreciate the podcast as a much-needed critique of gurus,
describing the hosts as engaging in good faith with a sense of humor, although they note the episodes can be long.
Others find this show biased, with the hosts' voices coming across as arrogant and lacking humour.
Lacking humour, Chris?
Yeah, that's not...
Arrogant, I'll accept, but lacking humour?
Come on.
Criticising them for being too liberal
and not as rigorous as traditional media.
It's the left...
I like this use of liberal because it's ambiguous.
I think ChatGPT is using it in the sense of we're too left-wing.
But if you speak to any true lefties out there,
they would describe us as liberal in the opposite direction.
But anyway, another perspective describes the listening experience
as a mix of crippling cringe and satisfying analysis.
What?
That's definitely.
Who wrote that?
Crippling cringe.
It's got citations.
All right.
That's not us.
That's not us.
Okay.
Yes.
Carry on.
I'm going to follow these citations.
The citations are taking me somewhere at random.
That's all right.
No, I interpreted that as we are showing the crippling cringe. It's not us being crippling cringe; we showcase crippling cringe. That must be it.
That's it, that's it. Positive self-image retained.
Well, we could treat that as a list of feedback, if you like.
Well, yes, I had one, though. It's very short, so it'll just complement it.
I feel it does echo some of the sentiments that ChatGPT raised.
This is from future2-2 from Austria, not Australia.
So it's a one-out-of-five-star review.
Good idea, but what a waste of time.
Just your typical guys like to hear themselves talk for hours
about things that could be said in a few minutes.
You know, is there something to that?
Is there? Well, you might... Yeah. You might say so. Yeah.
A bit cruel, but he or she has got a point. So let's say, shall I boost us up, Matt? Will I give us one positive one then? We do have one.
Yeah, do that one, because that one was mean and potentially true.
So this one is better, more accurate, and more kind. Elliot Toews from Australia this time.
An Aussie.
All right.
Nice.
My favorite podcast in brackets, I think, spelled T-H.
Don't need the qualifiers.
Don't need the qualifiers.
Favorite podcast.
Full stop.
At first, I thought this podcast was just a bit of fun
to help me get through the Melbourne lockdowns.
Now I think their fun, light-hearted way of deconstructing
common manipulation traits with examples
is quite invaluable, at least for
me. I was raised in somewhat
of a cult and hadn't thought about
it much in my adulthood till this
show. While I'm here, perhaps you
could cover Norman Finkelstein.
I'm a fan of his, and it's quite topical.
I don't know who that is, but okay, we'll consider
it. He's surely not galaxy brain,
but perhaps you'd find him guilty
in the lower dimensions of the gurometer.
Okay, I should have written that down.
You need more left-wing people anyway.
Okay, now, bye-bye.
That's it.
It changed midway through.
Why? I like that.
There's a lot of information dropped in that. I was raised in a cult. I initially, you know, found it just for a bit of fun, but I think there's more to it. And then invaluable, his words or her words, not mine. Invaluable.
Yep, yep. And that's not ChatGPT, not a synthesis of anonymous reviews.
ChatGPT doesn't even have a soul. Don't listen to GPT. It's a stochastic parrot, but it's not parroting real things. It's just putting words together, hoping they make some kind of sense. There's nothing behind the screen. It doesn't mean anything.
No.
Well, so there you have it.
But we don't leave yet, Matt.
We thank patrons, the people that are on our Patreon.
You thought you were going to get away.
No, they're here.
They're crowding around the exit to prevent you from leaving.
And so I'm going to shout them out right now, Matt.
I'm not just stalling until I find the list.
That would be unprofessional.
Now, I like to
find conspiracy
hypothesizers first
because they're the easiest to discover.
So here we go, Matt.
A bounty of names.
Jimmy Tulloch, Jeff,
Nick, Simon McLean,
John Barry,
Jan
HB, Matthew Tobias, Jan HB,
Matthew Tobias,
Chris Topp,
Alex Bowers,
Dave Lavelle,
Sean Job,
Hasty Rewrite,
Mr. Blythe Man,
William Carpenter,
Colleen Chandler,
Ryan Goss,
Joseph Whelan,
Arabella Hulsapful,
Matthew Ross, and Ben Mack. That's our Conspiracy Hypothesizers.
Nice. There's a lot of you, and I love you all.
Thank you.
Yeah, thank you all.
I feel like there was a conference that none of us
were invited to that came to some very
strong conclusions and they've
all circulated this list of correct
answers. I wasn't at this conference.
This kind of shit makes me
think, man, it's almost
like someone is being paid. Like when you hear these George Soros stories, he's trying to destroy
the country from within. We are not going to advance conspiracy theories. We will advance
conspiracy hypotheses.
Yes. I love them, Chris. I love them harder than Lex Fridman loves love.
Oh yeah, wow. That is hard. I like them a lot. And now we have Revolutionary Thinkers. They include
Nick Angiono, Alex Nelson, Dexter King Williams, Jandami, Lena Neld Robinson,
Kevin Nyberg
Tristan Vahan Draper
Conal Dunn
Alexis Allen
Jeff Fitch
Stephen Clegghorn
Parvana Angus
Niels
Christian Nielsen,
Kevin O'Rourke,
Agnes Ziakazed, God damn, I'm sorry, Chris Horvat, Andy Hunt, F, and Daraka Laramon-Hall.
Bunch of them, Matt.
Good haul of revolutionary thinkers.
Love them too.
Even harder than the hypothesizers, if that were possible.
They get access to Decoding Academia. Like, just imagine the Huberman episode, all the comments that you really enjoyed about regressions and significance values, but extended for huge amounts of time. So yeah, that's what you get there, Decoding Academia.
I'm usually running, I don't know, 70 or 90 distinct paradigms simultaneously, all the time. And the idea is not to try to collapse them down to a single master paradigm. I'm someone who's a true polymath. I'm all over the place. But my main
claim to fame, if you'd like, in academia is that I founded the field of evolutionary consumption.
Now, that's just a guess. And it could easily be wrong. But it also could not be wrong.
The fact that it's even plausible is stunning.
It'll never
cease to land.
That's a keeper.
That quote.
Yeah.
Well, speaking of
keepers, Matt,
we also like to find
galaxy brain
gurus.
The highest tier, who can come and talk to us and get our wisdom directly, an interactive, synchronous performance of decoding wisdom. Or not, or not. They can just contribute and be nice, in that respect. They don't have to.
Pretty good name.
Alex Scooten.
And that's it.
Just two of them.
Just going to give them their own little cage,
put them there on display in my menagerie.
They could be in there in the box.
They could talk amongst themselves.
Yeah, they're in the AI box that Lex is trying to work out with Yudkowsky how to defeat the aliens.
So that's where you guys are.
It's always going to talk its way out of the box.
Yeah, that's it.
No, it's made of code, mate. You forgot, it's really cold, and it lives forever. And it's, yeah, just, it's already out of the box. It's copied itself many a time. So, oh, Yudkowsky. Yudkowsky.
Can I just say, before I play the clip, I know this is, whatever, you know, if you're still here, this is what you came for. Yudkowsky, I saw him going on about some specific thing about AI and payment systems and cryptocurrencies, right? And when we listened to him, he was morose, because he can't tell the youth that there's any future, because the world's going to be destroyed by the AI, very likely. And then he's on Twitter tweeting about credit card repayment systems and stuff. It just doesn't add up completely.
It's like a cult leader who's forecast that the world is going to end, get hit by an asteroid in a couple of weeks, and they're busy negotiating their car insurance.
Yeah, so it's a strange thing. So yeah, I don't know. Anyway, Yudkowsky's a little bit of a joke. So let's play the Galaxy Brain Guru clips anyway. Here it is.
We tried to warn people, yeah, like what was coming, how it was going to come, and the fact that it was everywhere and in everything. Considering me tribal just doesn't make any sense. I have no tribe. I'm in exile. Think again, sunshine.
Yeah, that was one of the best illustrations of the Cassandra complex, done by the Sovereign Nations guy. Oh, the 2040 me versus... oh, that was classic. We told them what was coming and how it would be in everything, and we saw. I loved that one where he went into time travel paradoxes. You know, like, 2020 me would now think 2023 me is too extreme, but 2030 me would...
Thank you.
He's actually an undervalued guru, the Sovereign Nations man, whose name currently escapes me. You should go check it out
if you haven't heard that episode. It's good.
Yeah, yeah, that's a funny one. So we're off, Matt. We're off out into the solar system to dance along our merry way. Our next guru, you know, we've got choices.
I'm just pointing out that you forced me to listen to Red Scare material. I gave you a choice, and you said, let's do something a bit different, so I listened to Red Scare material. You haven't yet, so maybe you should do that. I'll spoil it for everyone: they're not secular gurus, but they are very annoying.
So this is the thing. Do we make Matt listen to something that's very annoying? I vote yes.
I feel like we've had more than a few counter-examples, people that are not gurus, in our coverage recently. We should do someone who's definitely a full-blown guru in the next episode.
Can we change it for Red Scare?
Is it too late?
I've listened to them, so it's too late for me in that respect.
But we are going to do them, because I listened to them.
But we don't need to do them next.
I say, and you're not going to like this suggestion: I say we do Harris. I say we do Harris, Sam Harris. We tick him off. I've clipped an episode already. It's getting older.
He's going to complain about us. We're going to have to talk to him.
He won't, Matt. He's very open to criticism, and there's various things. He went on with Chris Williamson, our old friend Chris. We might get some clips from him. Just come on.
Let's do him.
He's on the list of like big gurus.
We got to tick him off.
All right.
Well, we could.
We could.
What about like Cernovich?
Cernovich.
Do you want like Cernovich or Stefan Molyneux?
I know they're awful, but they're definitely gurus
and we haven't covered them because they're so horrible.
I'll trade you.
I'll trade you Sam Harris for Stefan Molyneux.
Oh, God.
We'll get to Red Scare.
We'll get there as well.
Well, we'll keep them guessing.
Who will be next?
You'll find out soon, but the next episode will be a decoding.
So, yeah.
You heard it here first.
Yeah.
You'll see.
You'll find out.
All right.
Sounds good.
Good plans.
See you later, Chris.
Arrivederci.
Bye-bye.
Bye. Thank you. Yes.
Hey.
Oh.
I thought I was doing it.
Hi.
Hi.
Okay, you go then.
No, you go.
No, no, you go.
It's fine.
Well, I was just going to say.