That Neuroscience Guy - How Research Really Works
Episode Date: January 16, 2022
Scientific research is a detailed process we use to answer important questions, and it's our best method for understanding the world around us. In today's episode of That Neuroscience Guy, we discuss how scientific research is done, why we trust it, and how you can know the difference between good and bad research.
Transcript
Hi, my name is Olav Krigolson, and I'm a neuroscientist at the University of Victoria.
And in my spare time, I'm that neuroscience guy. Welcome to the podcast.
Research has a massive impact on our daily lives. Say you're going to take up a new exercise program
or a new diet. Well, how do you
know it really works? Like, are you just going off someone's opinion or is there research that
shows that it actually works? In the case of the COVID pandemic, as we know, there's been a big debate about whether vaccines work or not, or whether they're effective. Well, they are, because there's research.
And throughout this podcast, for all of the topics I've covered, I've always framed it as "research says," because that's what I do. I go and read what research is there.
And there's a difference between doing research and reading research. Doing research just means you're looking into something, while reading research means actually reading the scientific research that's there. Now, why does that matter? Why do we care about scientific research? Well, in the case of, say, vaccines, you want to know whether they work or not. You want to know the truth, and that's the whole point of science and research. So on today's podcast, I'm going to walk you through how research is actually done and why it's done the way it is.
So today, the neuroscience of research.
Research begins with reading, and I mean a lot of reading.
In the old days, we'd say that research began at the library, but in this day and age,
of course, everyone accesses things online. So you find research articles. So before you even
begin a study, before you even want to attempt to look at something, you need to know what's
been done before. So you read, and you read, and you read, and you read. I wouldn't want to estimate how many research
papers I've read over the course of my life, but let's just say it's a lot.
And why do you do all of this reading? Well, you need to know what's been done before. Like, say you want to ask a research question, you want to look into something, you know: does eating a low-sugar diet reduce weight?
Well, there's no point running that study if that study's already been done. So you need to understand what's been done before. And concurrent with that, you need to know what has not been done.
So what are the gaps in the research? Maybe people have looked at low sugar diets and their
impact on weight loss, but maybe they have looked at only certain types of sugars or in certain
situations. So by reading, you can understand what's been done and what hasn't been done.
And there's nothing wrong with replicating something that's already been done. But in terms
of the ethics of research, in general, we want to do research
that's original. We don't want to waste people's time and we don't want to waste the taxpayer's
money. And by understanding what's been done and what hasn't been done, that puts you in a place
where you know what needs to be done. So what is the missing study? What is the question that
needs to be asked? So what do you read? Well, I can tell
you, you don't read Wikipedia or Facebook or these other kinds of things. And you don't read CNN or
Fox News. What you do is you read primary research articles. These are articles published by
scientists in scientific journals. Now, I'll apologize for the scientific community in general.
Most of these articles are horribly written. And they're written that way because it's a
bunch of people that are writing for a very specific audience, and they haven't really
been trained to communicate effectively. In fact, one of my favorite books that I'd highly recommend
is a book called Don't Be Such a Scientist. And it's about a former scientist who's taking a stand about the way that scientists communicate.
And I agree with him 100%. It's a great read, even if you're not a scientist by training.
Now, why primary research articles? Well, we mentioned this in a previous podcast,
and I'll get to this at the end, but here's the preview.
Basically, a primary research article works like this: let's say I run my study on low-sugar diets. Well, I submit it to an academic journal, not to Sports Illustrated or Time Magazine.
Well, it goes through what's called a peer review process.
And what that means is other scientists read your science and say, yes,
this has been done appropriately, or no, there are problems with it. So without that peer review
process, it's very, very, very hard to decide whether science is good or not. You could imagine
you ran that low sugar study just on your own, but you might make a few mistakes. And that's what
I'll talk about coming up. So you've done all your reading and you've read primary research articles. All right, these
are, like I said, articles that are actually scientific studies. They're not reviews of a
bunch of scientific studies. They're not textbooks. They're the actual scientific articles that are
published. And as I've said, that allows you to know what has been done and what hasn't been done. That leads
you to the ability to formulate a research question. So what is it that you're interested in?
Well, I want to know whether a low-sugar diet is going to lead to weight loss. So that's my
research question. Now that leads to the next stage of the planning phase. So the reading is
the first stage. The second stage is study design. How will you answer the research question? How will I decide if low
sugar diets lead to weight loss? Will I use surveys? Will I measure some data like people's
weight on a scale? Will I get people to keep a journal of what they ate? You know, in my case, a lot of our research questions use neuroimaging, though that probably wouldn't help with a low-sugar diet study. So we use EEG and fMRI and other
tools to peer inside the brain and see what's going on. So you have to decide on a method,
like how are you going to ask the question? What is the thing you're going to measure? Now, there's a whole other type of research called qualitative research. Most of
what I'm talking about here is quantitative research, which is what I specialize in. Qualitative
research is more like polls, where you ask open-ended questions: well, how do you feel about the COVID pandemic? That's as opposed to a quantitative study where you're measuring someone's change in, say, depression using some form of numerical scale. Now, again, this is where the
reading pays off because you need to understand the methods that others have used, right? If you
ask a question and you want the scientific community to accept it, generally you want to
use a method that's approved of and that people will sign off on. And this also comes from training,
which I talked about in the podcast on being a neuroscientist.
So you need a scientifically validated methodology.
Why?
Because you need to understand the strengths and limitations of your technique,
what it can do and what it cannot.
For example, take a simple scale.
A simple scale does return your weight,
but it doesn't tell you anything about body composition. For example, the amount of fat that you have, your percent body fat. So there's a
limitation to scales because someone might gain weight, but is it because they're gaining fat or
because they're gaining muscle? But there's a strength to the scale. It's simple. You can get data quickly and relatively accurately. So again, reading allows you to determine these methods. So when you're designing
your study, you pick the appropriate tool. And this is really important because it helps guard against biasing results, and you're doing things that are approved. You're not using some technique that
no one agrees on. I'll give you another example of this. Imagine you want to
measure heart rate as a part of your research. Well, there's a whole bunch of different ways to
measure heart rate. You know, most of us wear some sort of wrist device these days that attempts to
guess our heart rate. Well, it's actually not very accurate. The gold standard is to still put some
electrodes over the heart and actually see the contraction of the heart muscle. But of course,
there's a problem with that. It's hard to run with electrodes on your chest. Now, with that being
said, Garmin and some other companies do make chest straps, and that's why runners prefer chest
straps. They're more accurate than wrist-mounted devices. So you decide on a technique, how you're
going to measure what you want to measure. So the second thing you have to do is decide who you're going to test. Are you going to compare two groups, group A versus group B?
Or do you just want to look at one group and see how that group changes over time?
So you might measure them in January, February, and March and see if there's a change.
Now, this is another part of the study design process, and it's tied to your research question.
You know, if your research question is, I want to know if depression is different during COVID between the United States and Canada, that's what we call a between design, and there are two groups.
But you also could imagine saying, well, I want to look at how depression due to COVID changes
over time across the next six months, and that's what we call a within design. And that's a
different type of design. It all comes down to the research question that you're trying to answer.
And there's a right way to do this and a wrong way to do this. And this is why, you know, training is
a part of this. So you've got your research question, you decide on a methodology, you decide
on a design, whether you're going to measure multiple groups or one group over time.
And then the last step of planning, and there's a lot more to it than this, I'm giving you the crash course, is how are you going to analyze your data? Okay, you know, what data are you getting?
And then how are you going to analyze it? So let's say I want to compare depression between Canada
and the US. Well, I've decided to use a survey. Well, how am I going to analyze that survey data?
What statistics will I use to show that there are differences? And you do this in advance to reduce
bias. You know, if you do this after the fact, you might start looking through different statistics
until you find a statistic that works for you. So what we do as scientists is we call our shot
before we actually run the study.
Now, once you've designed the study, you can finally formulate specific hypotheses.
Science isn't really grip it and rip it.
You don't just go out there and say, okay, let's see if depression differs.
You'd come up with a very specific hypothesis. Now, if I stick with that depression example, I might say,
I believe people are less
depressed in Canada than the United States because there are fewer people and they have more space between them, so they're less likely to fear COVID. And that hypothesis becomes very specific when you add: this will be seen on Survey X, where Canadians will have lower scores than Americans. Now, once you've got that
hypothesis, you're ready to go, and that's where you get into data collection. You just go out and
collect your data. So, using the design you've decided upon, using the methods you've decided
upon, you go get the data you need to do the statistics to test your hypothesis.
Now, I'm going to spend a fair bit of time on results,
and I'm not going to talk much about data collection, because data collection really
depends on what you're doing. If you're doing surveys, for instance, you might mail them out
or send them out over the internet, or you might show up at people's doors. If you're measuring
brainwaves, you typically bring people into a lab unless you use mobile technology, but you go collect your data.
Now, results. This is the key part, I guess. When scientists say something is true,
like what's the basis for that claim? Well, obviously it comes from the data.
And what we do is we always start with the notion that this has happened by chance.
All right. Or that perhaps the result was due to other factors.
Let's come up with a different example.
Let's say that we assume that people in the United States are taller than people in Canada,
and we only look at one person.
Well, what if that one person's Shaquille O'Neal,
and the one person you pick in Canada is me?
Well, you would clearly say that people are taller in the United States
because Shaq's a lot taller than I am.
So you have to ask yourself, is this true or did it happen by chance?
So one of the ways we protect against this is by sampling.
And what we do is we sample randomly, typically.
We want to pick random people from the population.
We don't want to be biased. We don't want to only pick people from Los Angeles or Toronto.
We want to pick people from all across the country, all genders, all races, all age groups,
unless, of course, our research question is focused on a very specific group.
And we want to use large samples. This is
probably the biggest thing to appreciate, that for scientific research to be valid, you want to use
large samples. If you've only got a sample size of five, and I'm going to make fun of a lot of diets
because a lot of diets are based on sample sizes of four or five, you can't draw conclusions because
it could have happened by chance.
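To make that concrete, here's a minimal sketch in Python (the population and numbers are entirely made up for illustration). It draws two groups from the exact same population, so any difference between their averages is pure chance, and it shows how that chance difference shrinks as the sample grows.

```python
import numpy as np

# Hypothetical illustration: both groups are drawn from the SAME population,
# so any difference in their averages is due to chance alone.
rng = np.random.default_rng(seed=42)

def chance_difference(sample_size):
    group_a = rng.normal(loc=70, scale=15, size=sample_size)  # e.g., weight in kg
    group_b = rng.normal(loc=70, scale=15, size=sample_size)  # identical population
    return abs(group_a.mean() - group_b.mean())

# With n = 5, two "identical" groups often differ by several kilograms;
# with n = 500, the chance difference shrinks toward zero.
for n in (5, 500):
    diffs = [chance_difference(n) for _ in range(1000)]
    print(f"n = {n}: average chance difference = {np.mean(diffs):.2f} kg")
```

That's exactly why a diet tested on four or five people can look like it works when nothing is actually going on.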
Now, that's important, and this is probably the most important part of understanding research.
Remember, what we want to know is truth about a population.
If we go back to that depression example,
we want to know, are Canadians less depressed than Americans?
But we can't measure every Canadian, and we can't measure every American.
So we're going to pull out a sample and from that sample, we're going to draw conclusions about the population.
Now, for that to be fair, like I said, we sample randomly and we get large sample sizes.
All right. And that's a crucial thing. If you're going to evaluate science, look at the sample size. Depending on the type of data, if you're doing a brain imaging study and there are fewer than 30 people, I raise my eyebrows.
If you're doing these large-scale survey studies and there are fewer than 1,000 people, I kind of
worry. It really depends on how you're collecting your data. So then how do you actually draw the
conclusion? Let's say you've tested a thousand Americans on
a depression scale and a thousand Canadians. Well, that's where the statistics come in.
And like I said, I pointed this out at the outset. All right. Now, one thing you can do is just compute the average depression score. You could compute the average depression score for your 1,000 Americans, and you could compute the average depression score for your 1,000 Canadians. And let's say they're different. Let's say the average for Canadians is 5.2 and
the average for Americans is 6. Well, are those numbers truly different? Now, they are, of course, literally different. But the question is, are they statistically different? Now, one way that we do this
is what's called a confidence interval. We compute this statistic, and it's basically an error region around that average.
So we believe the average is 5.2 plus or minus 2.
All right?
You might have heard of this when people do surveys, where they say: we believe this is true, but the margin of error is this. So 60% of people like candidate X, with a margin of error of plus or minus five. That means it could be anywhere between 55% and 65%. Well, that margin of error is a confidence interval.
And if our depression confidence interval, say, was 5.2 average with a range from 1.2 to 9.2,
then it would be hard to argue that's different than a depression score with an average of six.
So confidence intervals are one way we do this, and that's what they're all about.
The confidence interval really reflects that we believe the true value is somewhere within that range,
but we only tested a certain number of people, so we don't know exactly.
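Here's a minimal sketch of that in Python (the scores are simulated, not real survey data). It computes a 95% confidence interval around a sample average using the t-distribution.

```python
import numpy as np
from scipy import stats

# Simulated depression scores for a sample of 1,000 people on a 0-10 scale;
# these are made-up numbers, purely for illustration.
rng = np.random.default_rng(seed=1)
scores = rng.normal(loc=5.2, scale=1.8, size=1000).clip(0, 10)

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean

# 95% confidence interval around the sample mean, using the t-distribution.
low, high = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```

The more people you test, the smaller the standard error, and the narrower that interval gets.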
Now remember, if you test everybody, the confidence interval is zero.
You know the truth, but you can't do that. The other way that scientists do this is through
what are called inferential statistics. Some of you might have heard of the infamous p-value.
Basically, inferential statistics could be a podcast in themselves, but the idea is you
compute a statistic that allows you to compare two or more groups or two or more time points.
And if that statistic is below a certain value, you say, yes, these groups are different or yes, these time points are different.
And if it isn't, you say they're the same.
And again, the reasoning behind this is because all you have is a sample.
You don't have the population, but you're trying to draw conclusions about the population.
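As a sketch of what that looks like in practice, here's a Python example using an independent-samples t-test, one common inferential statistic, on two simulated groups (the group averages are invented; a real study would use actual survey data).

```python
import numpy as np
from scipy import stats

# Simulated survey data: depression scores for 1,000 Canadians and
# 1,000 Americans. The averages are invented for illustration.
rng = np.random.default_rng(seed=7)
canada = rng.normal(loc=5.2, scale=1.8, size=1000)
usa = rng.normal(loc=6.0, scale=1.8, size=1000)

# Independent-samples t-test for a "between" design; Welch's variant
# (equal_var=False) doesn't assume the two groups have equal variances.
t_stat, p_value = stats.ttest_ind(canada, usa, equal_var=False)

# Common convention: if p is below 0.05, conclude the groups differ;
# otherwise, you can't rule out that the difference arose by chance.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A within design, where the same people are measured at two time points, would use a paired test (scipy's ttest_rel) instead.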
So confidence intervals, inferential statistics,
and there's a whole bunch of other statistics coming out.
Right now, I'm into Bayesian statistics,
which is a new way to do this
because people have been sort of knocking p-values around.
Now, there's quite a bit in there, and I know we've gone a bit long this week, but it's an important topic. For you as a consumer of science, I'll give you some key points.
First of all, always look at the sample size. That's probably the most important thing.
Have they tested enough people that you believe the results might generalize to the population? Have they used random sampling?
All right. You know, attitudes about COVID, for instance, differ depending on where you are in a given country. So have they truly randomly sampled the population or is the sample a bit
biased? Do you believe their methods? Have they grounded it in scientific research or is it an
opinion? These are things
that you have to evaluate. And probably the one I'll leave you with last is replication.
All right. Replication, replication, replication. Has someone else run the same study and found the
same results? This is why I find the whole anti-vaccine argument so hard to understand, because the research on vaccines, their positive impact, and the fact that they work has been replicated countless times.
And there's no science that shows that vaccines don't work.
And that, I guess, is the true last step.
Peer-reviewed publication.
Never forget that.
I know it's hard to read, but don't read Facebook or CNN or Fox News or Wikipedia.
Go to a library and find actual scientific journals and look at the science; that's the truth about the issue you're interested in. Thanks for listening. I hope you found that
interesting. I just thought I had to share that because right now, all I hear is, oh, there's research showing this
and there's research showing that.
And most of the time when I look at it,
the research isn't what I would call real research.
Now, hey, please subscribe to the podcast.
It really helps us out.
You can follow me on Twitter if you want updates
about the Krigolson Lab and neuroscience in general.
It's @thatneurosciguy.
We do have our YouTube channel.
There's actually quite a bit of content there
about today's topic.
That's thatneuroscienceguy.
Please email us your ideas for season three.
We've only got a couple of episodes left
and then we're going to stop at episode 21 of season two.
We'll take a break and we'll gear up for season three.
thatneuroscienceguy@gmail.com.
We really want to know what you want to know about in terms
of the neuroscience of daily life. And finally, thatneuroscienceguy.com. Right now, it's just a
link to my personal website, but for season three, we're going to build out our website and add some
really cool surprises. Thanks again for listening. My name is Olav Krigolson, and I'm that neuroscience
guy. See you on the next podcast.