Are We Seeing All That's There?

Aug 22, 2021

 

When I’m teaching or consulting in this field, a discussion of published research is almost inevitable.  After all, we so often talk about being “evidence-based,” so it makes sense that people will want to know about that evidence and what it says.  “What does the science say?” and “Follow the evidence!” are common refrains among fitness and health professionals.

There is a common misconception among such professionals and enthusiasts that what we see is all there is.  We often lapse into a sort of intellectual laziness about evidence, assuming that the published literature is an accurate representation of all the research that was actually done.  Unfortunately, that assumption ignores the psychological, financial, and even *political* quirks of academia.

Enter the subject of publication bias.  If you’ve never heard of this, read on.  If you *have* heard of it, read on anyway, because I have some points to make that you still may not have considered.

 

So cutting to the chase:

Perhaps the most common expression of this phenomenon is "positive results bias" -- so much so that "publication bias" is often taken to mean specifically and only that.  For those unfamiliar, this refers to the tendency for journals to preferentially accept and publish positive results.  In a way-oversimplified sense, you can think of this as the tendency for an academic journal to give visibility to a study that noticed a relationship between variables, a difference between groups, or some other statistical effect that the authors were looking for.
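
To make that concrete, here's a rough simulation (my own illustration, not something from any particular paper) of what positive-results bias does to the literature: we run many small two-group trials of a treatment with a modest true effect, "publish" only the ones that reach p < 0.05, and compare the average published effect to the truth.

```python
# Rough sketch (illustrative assumptions only) of positive-results bias:
# many small trials of a treatment with a modest true effect, but only
# the "significant" ones get published.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.2      # assumed true standardized mean difference (Cohen's d)
n_per_group = 20       # small samples, as is common in exercise research
n_trials = 10_000

published, all_effects = [], []

for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    # Cohen's d with a pooled SD (equal group sizes)
    d = (treatment.mean() - control.mean()) / np.sqrt(
        (treatment.var(ddof=1) + control.var(ddof=1)) / 2
    )
    all_effects.append(d)
    if p < 0.05:           # the journal only accepts "positive" results
        published.append(d)

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")
print(f"Mean effect, published only: {np.mean(published):.2f}")
print(f"Share of studies published:  {len(published) / n_trials:.1%}")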

 


Sometimes we get so focused on what is in front of us that we neglect what is NOT in front of us... evidence-wise, I mean.  You see, the microscope is like a metaphor.  I'm clever.

 

By contrast, these journals tend to reject papers that find negative or "null" results (e.g. a lack of relationship between two variables or a lack of a difference between an intervention and a placebo), even if all other aspects of the research are up to snuff.  In other words, the papers are not rejected on their merits, but rather because they fail to produce something exciting or "novel" (I put that word in quotation marks because it really calls for its own discussion of what people actually mean when they say it).  The reasons behind this run deeper than simple “cool factor,” though, deriving at least in part from pressures relating to financial success and prestige (for both journal and author).  A 2012 paper by Joober and colleagues covering some of these factors in more detail can be found here.

 

This is a problem, because it leads to a false sense of what "the research" or "the science" actually tells us.  I think this is particularly harmful when we are trying to look at the sum of the research and form an overarching conclusion through a systematic review or meta-analysis.  In the latter case, we are trying to combine the results of multiple studies (maybe a handful, maybe dozens or more) so we can speak to overall trends. 

The subject of meta-analysis brings me to another kind of publication bias that I find quite harmful.  I'm going to call it "anti-replication bias" (for lack of a better term -- someone else may well have coined one already).  This is basically where -- often for the same "everything must be novel" reasons I touched on earlier -- journals are hesitant to publish work that rehashes previous research.  Once something has already been done, there tends to be an urge to move on to the next thing, even if no other research exists to corroborate the findings of the paper that was just published.

Many have written about the replication crisis -- the shortage of studies that reproduce earlier work -- including the very well-known John Ioannidis (see here for one of his brief but popular pieces on the topic).

While the point of this blog isn’t to get into the finer points of meta-analyses (nor am I necessarily qualified to do that anyway), it is worth mentioning that a major hurdle for meta-analysis is the “heterogeneity” of the included studies -- their dissimilarity, in other words.  The replication issue I just mentioned creates an incentive to submit papers that are methodologically different from one another, which in turn makes heterogeneity harder to avoid in any field where you might wish to compare studies and combine their effects to get an overall picture of what is going on.  In simple terms, the more alike the studies in a meta-analysis are in their methodologies and various constraints, the more defensible it is to combine and quantify their results, since it can be argued that those studies are all looking at the same thing (or something similar enough for our purposes).
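
For a sense of what that combining actually looks like, here is a minimal sketch (with made-up effect sizes, not data from any real studies) of how meta-analysts pool effects using inverse-variance weights and then quantify heterogeneity with Cochran's Q and the I² statistic.

```python
# Minimal sketch (hypothetical numbers) of pooling effects in a meta-analysis
# and quantifying heterogeneity: Cochran's Q and I^2 describe how much the
# studies disagree beyond what sampling error alone would explain.

import numpy as np

# Hypothetical standardized mean differences and their variances from five studies
effects = np.array([0.35, 0.10, 0.60, 0.05, 0.45])
variances = np.array([0.04, 0.05, 0.06, 0.03, 0.05])

weights = 1.0 / variances                          # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

q = np.sum(weights * (effects - pooled) ** 2)      # Cochran's Q
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100           # % of variability beyond chance

print(f"Pooled (fixed-effect) estimate: {pooled:.2f}")
print(f"Cochran's Q: {q:.2f} on {df} df")
print(f"I^2: {i_squared:.0f}%")
```

The higher that I² figure climbs, the harder it is to argue that the studies are all measuring the same underlying thing -- which is exactly the situation a shortage of replication-style studies pushes us toward.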

So consider two exercise studies.  One involved resistance training that was self-selected, so no specific protocol for the exercise itself was provided in the paper.  The other involved resistance training on a similar sample (people drawn from a similar population) that was dosed in a very specific way with lots of detail (e.g. types of exercise, executed with a specific technique, with a specific tempo, with a certain length/quantity of exposure, etc.).  How do we know that the training in Study #1 was similar enough to that in Study #2 for the results to be compared?

A meta-analysis would seek to combine the results of these studies, but since it is not clear whether the actual exercise condition was the same (or very nearly so) in both studies, it would probably not be appropriate to combine the statistical effects of each.  There is too much heterogeneity.

Hopefully the problem is apparent.  A shortage of replication studies in the published literature makes it harder to find papers with methods similar enough that we can combine their effects statistically.  Replicating previous work, while scientifically responsible, isn't sexy from a publication standpoint.  When students as well as seasoned researchers are competing for citations, prestige, and money (even indirectly), they can feel a lot of pressure to put out novel research instead of hanging back to verify that a previous finding is legitimate.

This hurts the field in the long run and is probably to blame for a lot of the false positives and confusing contradictions that pop up in the literature.  If you want to learn more about that, though, I suggest diving into what others have written on the topic (check the links I provided earlier and see what they're citing to get an idea of some of the problems that we're facing, and then see where some additional Google searches might take you).  Also, for those interested in reading a more detailed description of what a meta-analysis is and some major considerations for those doing them, here is one of many papers published on the subject.
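
For the statistically curious, here is a back-of-the-envelope sketch (all the numbers are assumptions I picked for illustration, not figures from Ioannidis or anyone else) of why a lack of replication feeds those false positives: given a prior probability that a tested hypothesis is true, a typical level of statistical power, and the usual alpha of 0.05, how likely is a lone significant finding to be real, compared with one that has also been independently replicated?

```python
# Back-of-the-envelope sketch (assumed numbers, purely illustrative) of how
# much an independent replication changes the odds that a "positive" finding
# reflects a real effect, via Bayes' rule.

prior = 0.10   # assumed share of tested hypotheses that are actually true
power = 0.50   # assumed power of a typical small study
alpha = 0.05   # conventional significance threshold

# One significant result: P(true | significant)
p_sig = prior * power + (1 - prior) * alpha
ppv_single = prior * power / p_sig

# Significant result plus one independent significant replication
p_sig_twice = prior * power**2 + (1 - prior) * alpha**2
ppv_replicated = prior * power**2 / p_sig_twice

print(f"Chance a lone positive finding is real:  {ppv_single:.0%}")
print(f"Chance after an independent replication: {ppv_replicated:.0%}")
```

Under those assumptions, a single "positive" study is barely better than a coin flip, while one successful replication changes the picture considerably -- which is exactly the kind of verification the current incentives discourage.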

 

I'm inserting another picture to break things up a bit.  Books are nice, right?

 

Now to be clear: the threshold for what is or isn’t “similar enough” is a hotly debated topic, and different researchers will draw different lines.  Some would undoubtedly call me a fool for even holding the opinion I just stated, and maybe that’s fair.  I just want people to recognize the *possibility* that we have an additional problem and to think hard about the lack of replication as they try to sift through what *has* been published and form an opinion about the overall state of the research.  If you’re at least aware of a possible issue, you’ll be more likely to avoid, mitigate, or even outright solve it.
 

 

- G


ALSO -- If you enjoyed this topic and want to explore things like it further (or.. you know... stuff that's actually science-y and more directly related to training), be sure to check out our membership options HERE.  We have weekly Q&A roundups, short special topic videos, full-length course lectures, and even a discussion forum where you can talk with other members about this stuff -- or toss your questions directly at Alex and me!
