The Taproot Podcast - 💔Evidence Based Practice is Broken, How to Fix Research - GetTherapyBirmingham.com
Episode Date: March 26, 2022Address: 2025 Shady Crest Dr Suite 203 Hoover, AL 35216 Email: Admin@GetTherapyBirmingham.com Maps: https://goo.gl/maps/cnverPNUPuxiPkbc8 Podcast: https://gettherapybirmingham.podbean.com/ Phone: (205...) 598-6471 Fax: 205-634-3647 Embrace the power of self-discovery and healing: 💫💚 #SelfGrowth #phd #insurance #therapy #academia #healthcare #trauma #Brainspotting #EMDR #psychotherapy #self #growth #change #research #academicjournal #impactfactor
Transcript
Evidence-based practice is broken. Let's fix it.
The following is a quotation from Wikipedia.
The McNamara fallacy, named for Robert McNamara, the U.S. Secretary of Defense from 1961 to 1968,
involves making a decision based solely on quantitative observation and ignoring all others.
The reason often given for this perspective is that other observations
cannot be proven. The fallacy refers to McNamara's belief as to what led the United States to defeat
in the Vietnam War, specifically his quantification of success in the war, e.g. in terms of enemy body
count, while ignoring all other variables. End quotation. I remember going into my first day of research class in my master's
program at the University of Alabama when I was getting my social work degree, and we sat and we
learned the evidence-based practice system that psychology runs on as a profession. Put simply,
evidence-based practice is the system by which clinicians make sure that the techniques that
they are using are backed by science.
Evidence-based practice means that psychotherapists only use interventions that research has proven are effective.
Evidence is determined by research studies that test for measurable changes in a population
given a certain intervention in therapy.
What a brilliant system, I thought.
I then became enamored with research journals.
I memorized every methodology by which research was conducted.
I would peruse academic libraries at night for every clinical topic that I encountered.
I would select studies that used only the best methodologies
before I would believe that their findings had merit.
I loved research and the evidence-based practice system.
I was so proud to be a part of a profession that took science so seriously and used it to improve the quality of care that I
gave patients. There was just one problem. The more that I learned about psychotherapy, the less
helpful I found research. Namely, the more that I did psychotherapy, the less evidence-based practice I found myself using, simply because many so-called evidence-based practices just didn't work.
None of the experts I encountered in the profession used the methods that I kept reading about in research.
In fact, there were actually psychological journals from the 1970s that I found more helpful than modern evidence-based practice-obsessed publications.
They would come up in digital libraries when I searched for more information about the intervention that my patients liked.
Moreover, I found that all of the most popular and effective private practice clinicians
were not using the techniques that I was reading about in the scientific literature.
So what gives?
Psychological trauma and the symptoms and conditions that psychological trauma causes,
PTSD, dissociative disorders, and panic disorders, for example, are some of the most difficult
symptoms to treat in psychotherapy and one of the most common reasons that patients present
to receive care. It therefore follows that patients with disorders caused by psychological
trauma would be one of the most studied populations in research. So what are the two most commonly researched interventions for trauma?
Prescribing medication and CBT, or cognitive behavioral therapy. One of the things that some
of the leading experts in trauma across the world agree on is that CBT and medication don't actually
process trauma at all. Instead, they assist patients in managing the
symptoms that trauma causes. As a trauma therapist, it is my goal to help patients actually process
and eliminate psychological trauma. Teaching patients to medicate or manage symptoms might be necessary periodically, but it shouldn't be the goal of treatment. I'm mixing metaphors,
but this image might help clarify these treatment modalities for those unfamiliar.
Imagine that psychological trauma is like an allergy to a cat.
Once you have an allergic reaction to the cat, a psychiatrist would give you an allergy medication like Benadryl.
A CBT therapist would teach you how to change your behavior based on your allergy.
They might tell you to avoid cats or wash your hands after touching a cat. A therapist practicing brain-based medicine, or somatic-focused trauma treatment,
would give you an allergy shot to help you learn to be immune to cats.
The CBT patient never gets to know a cat's love.
They never get to be around cats.
They're taught how they can avoid cats and manage the symptoms that cats cause.
I don't have time to explore here why scripted ego-management strategies like CBT took over the profession in the 1980s. If you're interested, there's an article on our blog at gettherapybirmingham.com that describes why corporate health care and corporate academia liked CBT and pushed it on the profession in the '80s and '90s.
Suffice it to say here that the insurance and American healthcare companies pay for much of the research that is conducted,
and they like to make money. So CBT and prescribing drugs are two of the easiest
ways for those institutions to accomplish those goals. Many of the most effective ways to treat
trauma use the body and the deep emotional brain or fight or flight system in the subcortical brain
to assist patients in processing and permanently releasing psychological trauma.
You will rarely find these modalities used in a larger hospital setting. Unlike CBT,
these modalities accomplish things in a way that is not manualizable. A clinician is using their own brain.
They're not following a formula.
They cannot be reduced to,
if they say this, then you say that, type of script.
Instead, somatic therapies often use a therapist's intuition
and make room for the patient to participate in the therapeutic process
with their own insight and their own intuition.
CBT, on the other hand, is a formula that a therapist is performing correctly or incorrectly
based on their adherence to a manual.
Right now, hospitals are rushing to program computers to do CBT so that they can reduce overhead.
As a therapist, that thought is scary to me.
Think of a therapy experience that's like a self-checkout machine
at a Walmart. If most of the leading voices of the profession and I agree that the newer
brain-based and body-based therapy modalities are the future of trauma treatment, then why hasn't
research caught up to that? To stop this article from becoming a book, I will break down the failure
of modern research to back the techniques that actually work in psychotherapy in a couple of points. Point number one, it's
expensive. Research studies cost tons of money and they take tons of time. Researchers have to plan
studies and get the studies cleared with funders, ethics boards, university staff, etc. They then
have to screen participants and train and pay staff. The average study costs
around $45,000, but they can cost a lot more than that. I would love to do a study myself on some of
the therapy modalities that we use at Taproot Therapy Collective, but unfortunately I have to
pay my mortgage. Studies get expensive. When you are studying things that have a lot of moving parts
and a lot of variables, those studies get more expensive as
each variable is something that you have to control for. Things like therapy modalities that actually
work to treat trauma are unscripted, and as a clinician is using their intuition, conventional
wisdom, and also a relationship with a patient to heal, then these variables multiply far beyond those of a scripted or formulaic modality of therapy. Someone has to pay for those studies, and some of those
someones usually aren't giving you money without an agenda. So giant institutions that are among the most likely to benefit from researching things like prescribing drugs and CBT
fund a lot of these studies.
They are also the ones that are more likely to control who gets to research what.
Sedative drugs that are prescribed to treat trauma work essentially like alcohol.
They dull and numb a person's ability to feel.
Antidepressants reduce hopelessness and obsession.
And while this might help manage symptoms, it doesn't help
patients process trauma or have insight into their psychology. Antidepressants and sedatives
also block the healthy and normal anxieties that poor choices should cause us to feel. Despite this,
drugs are often prescribed to patients that have never been referred to therapy.
A doctor spends five minutes with someone. They say, okay, you have anxiety. They
give them the medication. If the person had gone to therapy, maybe they would have noticed that
their relationships were not healthy or that there's domestic violence in their house. There's
some reason for that anxiety to be there. When you don't take time to figure out what's going on in
someone's environment and you just use the DSM-5 to indicate which pill to prescribe, all within a five-to-fifteen-minute appointment (the average psychiatry appointment in America is less than five minutes), then you're not figuring out why the anxiety is there and whether it actually is a healthy thing that should be listened to.
For all the rigorous and ethical standards that modern research mandates,
it doesn't specify who pays the bills for studies. So even though there's an ethics board that reviews whether a study is ethical, the funding system itself is never examined. Drug companies conduct the vast majority of research studies
in the United States, and those same drug companies also like to make money. Funnily enough,
most of the research drug companies perform tends to validate the effectiveness of their own products.
Does anyone remember all the 90s cigarette company research that failed to prove that cigarettes were dangerous?
They had all of these studies funded.
They still passed an ethics board review.
And somehow, the research that cigarette companies were funding just failed to find any link between cancer and
cigarettes. Maybe we should distribute research money to the professionals who are actually
working clinically with patients instead of career academics who do research for a living.
At the very least, keep it out of the hands of people who have a conflict of interest with the
results. And this leads me to my next point. We only research to prove things that we want to know. The thing that
got left out of research 101 class was that research usually has an agenda. Even if the
science is solid in the study, there are ideological reasons that someone might want to do the study.
For example, the DARE program was something that was funded in the 80s to keep kids
off drugs. Well, they did scientific studies to see how well the program was working and if it could be improved, and the studies found that kids who went through the DARE program were actually more likely to try drugs.
Even though that research was done, the DARE program still stuck around for ten more years, after ten more studies said the same thing. Giant institutions don't like to be told that their programs need to change. They wield an enormous amount of power over what gets researched, and they tend to research things that would validate the decisions that they make, even if those decisions are bad. Even if the ethics in the individual research studies validating those decisions is good,
it doesn't make the decision a better decision.
If you want to research an effective guide for clinicians to use evidence-based interventions,
then you have to research all of the modalities of psychotherapy in equal measure.
When the vast majority of research is funneled into the same areas,
then those areas of medicine become better known clinically regardless of their validity. When very few
models of therapy are researched, then those few models appear falsely to be superior. Easier and
cheaper research studies are going to be designed and completed much more often than research studies that are more complicated.
Even when an institution or monetary control of research is not the issue, the very nature of
research designs means that it is trickier to research things like patient insight than it is
to research something like hours of sleep. One of those is a subjective target and the other one is
an objective number. This leads me to my next point.
Objective is not better, because people are not robots.
CBT was designed by Aaron Beck to be a faster and data-driven alternative
to the subjective and lengthy process of Freudian psychoanalysis.
Beck did this by saying that patients had to agree on a goal
that was measurable with a number, like hours of sleep or the number of times a patient drank alcohol,
and then complete assessments to see if the goal was being accomplished.
Because of this, CBT is inherently objective and research-based.
CBT is therefore extremely easy to research.
This approach works when it works, but a person's humanity is not always reducible to a number. I once heard a story from a colleague who was seeing a patient who had just completed a course of CBT with another clinician to reduce the patient's marijuana use. The patient, who appeared to be
very high in the session, explained that his CBT clinician had discharged him
after he cut back from six joints to only one joint per day. The patient explained proudly
that he had simply begun to roll joints
that were six times larger than the ones he normally smoked. The amount of marijuana was
the same, even though the number in CBT was changing. The story is funny, but it also shows
you the irony of a numbers-based system invading a very human type of medicine. Squeezing people
and behavior into tiny boxes means that you miss the whole person. Patients with complex
symptoms and presentations of PTSD and trauma are often excluded from research studies because they
don't fit the criteria of having only one measurable symptom that needs to be reduced for
the study. Discarding the most severe and treatment-resistant cases means that researchers
are often left with only the easiest cases of PTSD to treat. This in turn falsely
inflates the perceived efficacy of the model that you are researching. Additionally, these studies
usually exclude people who drop out of therapy early. If a patient quits receiving therapy,
that isn't counted as a failure on the part of the therapist. In my experience, people who leave
therapy have been failed by their clinicians. The clinician has failed to engage the patient, and that is not the patient's fault.
So this falsely inflates the efficacy of the models that discount patients that don't continue to come to a treatment that they feel is not helping them.
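The inflation described above is easy to demonstrate with a toy simulation. The following is a minimal sketch, not real trial data: the severity scores, dropout odds, and response odds are all invented for illustration, under the assumption that more severe patients are both more likely to drop out and less likely to respond.

```python
import random

random.seed(0)

# Hypothetical trial: 200 trauma patients, each with a "severity" score in [0, 1).
patients = [random.random() for _ in range(200)]

# Assumption (illustrative only): higher severity -> more likely to drop out,
# and, among completers, less likely to respond to treatment.
completed = [s for s in patients if random.random() > s * 0.6]
responded = [s for s in completed if random.random() > s]

# Completers-only analysis: dropouts silently vanish from the denominator.
completer_rate = len(responded) / len(completed)

# Intention-to-treat analysis: every enrolled patient stays in the
# denominator, and dropouts count as non-responders.
itt_rate = len(responded) / len(patients)

print(f"completers-only response rate: {completer_rate:.0%}")
print(f"intention-to-treat response rate: {itt_rate:.0%}")
```

Because the completers are a subset of the enrolled patients, the completers-only rate can never be lower than the intention-to-treat rate; it is the analysis choice, not the therapy, that produces the flattering number.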
It is my belief that it is the therapist's job to engage the patient,
and that people typically continue to come to therapy when they realize that it is helpful.
And when they are discovering that it is not helpful, they do not stick around for the length of a study, sometimes many months.
Trauma patients often quickly know whether or not a treatment is something that is going to help them or whether or not that information that the therapist is giving them is something that they've already heard and that they've already tried.
Trauma affects the subcortical regions of the brain underneath the prefrontal and mid-cortex, the same regions that
newer brain-based medicine is targeting. CBT is a cognitive-based intervention that measures and
seeks to modify cognition in the prefrontal cortex. Clinical research stays away from measuring
subcortical activation and patients' subjective feelings in favor of
measuring cognition and behavior. Newer models of therapy like brain spotting and sensory motor
therapy are able to deliver results to a patient in a few sessions instead of a few months.
Brain spotting changed my life, but after completing the brain spotting therapy, I didn't
intellectually know anything different. Brain spotting did not impart intellectual or cognitive knowledge. I was able to notice instead how my
body responded to my emotions. I was able to feel my emotions differently. I was able to recognize
them and regulate them. I was also able to release stored emotional energy that had previously caused
me distress in certain situations. Brain spotting did not significantly change my behavior,
and it would be difficult to quantify how my life changed with an objective number.
These kinds of subjective and patient-centered results are difficult for our modern evidence-based
practice system to quantify. Research hesitates to measure things like insight, body energy,
happiness, self-actualization. However, these messy and human concepts are the ones that
are the most important in therapy. We need to learn to research concepts about humanity and
connection if we're going to improve a profession that is based on these ideas. Next point. Once a
research study is completed, the way that it is delivered to the professional community is through a research journal. Publish or perish.
Modern research journals focus on cold, data-driven outcomes and ignore things like impressionistic case studies and subjective patient impressions of a modality.
The decision to do this means that modern research is useless to most practicing clinicians.
Remember when I said that I read academic journals from the 70s and the 80s?
I did that because those papers actually discuss therapy techniques,
style, and research that might help me understand a person.
Recent research articles look more like Excel spreadsheets.
The corporatization of healthcare not only changed hospitals,
it changed universities as well.
The people designing and running research studies and publishing those papers have a PhD. Academia is an extremely competitive game. Not
only do you have to hustle to get a PhD, you have to keep hustling once you have it. How do you
compete with other academics once you get your PhD? The answer is that other people have to cite your research in their research in order to raise your status as an academic, or the status of the academic journal that you publish in. The number of times that a publication's articles are cited by other publications is called an impact factor, and the number of times that an author's or professor's articles get cited is measured with something called an h-index or RCR. In my opinion, many of
the journals and academics with very low scores by these metrics have some of the best information.
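For readers unfamiliar with the metric, the h-index has a simple mechanical definition: the largest h such that an author has at least h papers cited at least h times each. A minimal sketch in Python, with made-up citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

# Hypothetical citation records: one heavily cited paper does not
# guarantee a high h-index, and a clinically useful but rarely cited
# body of work scores low by construction.
print(h_index([100, 9, 5, 3, 1]))  # -> 3 (three papers with >= 3 citations)
print(h_index([4, 4, 4, 4]))       # -> 4
```

Notice what the metric cannot see: why a paper was cited, whether any clinician used it, or whether the uncited work was simply ahead of its field.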
Just because their articles aren't being cited in larger studies doesn't mean that the research
isn't good. A modern research system that reduces the success of careers, journals, and universities to these metrics has definitely not resulted in the creation of page-turning academic papers. In fact, this competitive academic
culture has led to modern journals being garbage that creates careers for the people who write them and not change in the
clinical profession. Academics research things that will get cited, not things that will help
anyone, and certainly not anything that anyone wants to read. Often the abstract for a modern
research paper begins like this. In order to challenge the prevailing paradigm, we took the
data from seven studies and extrapolated against our filter in order to refine the data to compare... End quote.
They are papers that are written to get cited, but not to be read by humans.
They are the modern equivalent of those webpages that are supposed to be picked up by Google,
based on SEO terms, but not actually read by a person.
When you start to write research articles with the only
goal for other people to cite them, you're incentivized to cite an enormous amount of
studies, bring together a large amount of information, and then compare it together
so that somebody else will see your article as something that they need to cite.
This doesn't always lead to anything that anyone needs to know, or anything that is going to be helpful to a therapist sitting in an office with a person. Next point: good psychology thrives in complexity. Do you remember in middle school that counselor
that sat there and said, I understand
how you are feeling with a dull blank look in her eyes? And you remember how that didn't work?
Good therapy is about a clinician teaching a patient to use their own intuition and the
clinician using their own intuition. It is not about memorizing phrases and cognitive suggestions.
The best modalities are always a way of understanding and conceptualizing
patients that allow a therapist to apply their own intuition. A modality becomes easier to study,
but less effective when it strips out all of the opportunity for a person to have a personality,
individuality, and unique life experience that a clinician might need to make a genuine connection
with the person. Research studies are deeply uncomfortable with not being able to control for every variable.
When you write a treatment case
without explaining everything as an objective number,
it's called a case study,
which modern research has moved away from.
However, the therapy modalities that strip away a clinician's control and intuition could be done by a computer. That is no reason to refuse to research the more abstract, less definable properties that are still helpful and observable. For example, let's say that this is the research finding.
Clinicians who introduce patients to the idea that emotion is experienced somatically first,
then cognitively secondarily in the first session had fewer patients drop out after the first session.
Or, clinicians that use a parts-based approach to therapy, Jungian, IFS, voice dialogue, brain spotting, etc.,
were able to reduce trauma symptoms faster than cognitive and mindfulness-based
practices alone. If those hypothetical statements are true, why does it matter how the clinician is implementing those conceptualizations? Why does it matter that we don't have a measurable number in the methodology if we have a measurable number showing that the modalities are effective? If we know that certain strategies of conceptualization are effective, then why does
research need to control for how those conceptualizations are applied? If clinicians
who conceptualize cases in a certain way tend to keep patients, then why does it matter if we can't
control for all the other unique variables that the clinician introduces into treatment? With a big enough sample, we can still see what types of training
and what modes of thinking are working better. Modern research has become more interested in
why something works instead of being content to simply find out that it works. If patients with trauma and their clinicians all favor a certain modality, then why does it matter if we can't extrapolate and control for all the variables present in those successful sessions?
The sessions were still a success.
Research has stayed away from modalities that regulate the subcortical brain and instead emphasized more measurable cognitive variables simply because it is harder to measure the variables that make therapy effective. This is a whole other article, but the American medical
community has become fixated on managing symptoms instead of curing or preventing actual illness.
Research has become hostile to variables that contain affective experience or clinical complexity, or that challenge the existing institutional status quo. The concept of evidence needs to be expanded to include scientifically
plausible working theories that have been validated by clinicians in the field using
their intuition and patients who have experienced the modalities and gotten better. This is
especially important regarding diagnoses that are difficult to broadly generalize,
like dissociative and affective disorders. In conclusion, psychotherapy is a
modality that is conducted between humans, and it is best learned about and conveyed in a medium
that considers our shared humanity. The interests of the modern research-conducting institution
and research publishing bodies largely contradict the interests of psychotherapy as a profession.
The trends in modern evidence-based practice make it exceptionally poor at evaluating the techniques and practices that are actually helping patients in the field or that are popular with trauma-focused clinicians.
If you enjoyed this article, please check out more on our website at gettherapybirmingham.com.