Posted January 3, 2011

Don't Believe the Research

I was chatting with a friend of mine from Texas who is in the process of setting up a studio, and one of his taglines was “Research-Supported Methods.” I asked if he did anything that wasn’t backed by clinical research, and he said he wanted to avoid it, because the proof wasn’t there and he wanted to maintain as much credibility as possible. I responded that this approach would leave him easily 10 years behind what is happening right now across the fitness industry.

I’m sure this is going to tick a few of my scholastically-minded readers off, but it needs to happen every once in a while. Research sucks. Now don’t get me wrong, I love poring through volumes of scientific literature as much as the next guy (read: I’d rather listen to entire Miley Cyrus albums sung as a duet by Justin Bieber and Celine Dion), and I can tell when to use an ANOVA versus a MANCOVA just like everyone else, but research is horribly behind the times, and often not looking at anything worth looking at.

Case in point: Arthur Jones, the guy who founded the Nautilus company and invented many of the machines that populate commercial gyms across the world, a Forbes 400 member while he was alive, was notorious for his hatred of electromyographic studies. His rationale was a simple one: how the hell could a surface electrode pick up ONLY the signal being produced by a muscle sitting under 7 or 8 layers of tissue? He would then grab a cadaver leg that was hooked up to EMG electrodes and move it around, showing that this passive “movement” produced an increased signal due to the friction between those layers of tissue.

The moral of this story: if the differences are small, are there really any differences due to the tested phenomenon, or is it a matter of extraneous variables not accounted for? For instance, can we assume that the fascial network, which is piezoelectric and can conduct and produce charge, was not affected by a change in position or by compression or distraction of localized tissues that might make up the difference in measurements observed? Can we assume that neural alterations between positions wouldn’t result in a greater or lesser electrical discharge? Can we assume Doc Brown didn’t gun the DeLorean to 88 at the exact moment measurements were taken, thus affecting the outcome? I don’t think we can.

Now, Jones wasn’t just a businessman and a hater of science. He shaped a lot of the training techniques used by many bodybuilders through the ’80s, and a lot of the strength training theories du jour. A lot of the stuff he was working on back in the early ’80s is just beginning to be tested in the research literature, even though it was common knowledge to him and a lot of bodybuilders of the time.

Let’s face it, a lot of research is behind the times, no matter how far ahead it claims to be. Most of the research being done in strength and conditioning, performance, rehabilitation, and related fields is looking at methods that have already been used in the field, in order to validate them scientifically. This means that what people in the field have been doing for years is only now being tested. For crying out loud, the first published study on the stability ball and core function, with Dr. Stuart McGill as a co-author, came out in the journal Physical Therapy in 2000, a full 10 years after the ball was first introduced to the population as a whole. Research is coming out in droves right now about training on labile surfaces, even though the industry has already ended its honeymoon with this piece of technology and is giving it the silent treatment, watering the lawn with a garden hose instead of going back into the house to talk it out. In other words, strength coaches are done with the ball, and many have even said they wouldn’t have anything to do with them. Ever.


Another big stumbling block for a lot of studies is ethics. Stupid concept. They had it right back in the sixties when they would test people on shit and not even tell them they were being tested. Like the one medical study back in the sixties that irradiated newborn babies just to see what the long-term side effects would be. If we tried to do that today, there would be so many panties in a bunch that the world would probably stop working for a few minutes. But how else did we get the microwave, people?? HOW ELSE DID WE GET THE KENMORE MICROWAVE??

As a result, a lot of current research has to rely on inferential conclusions. Take, for instance, a researcher looking at the role of joint laxity during the throwing motion. The only true way to measure this would be to have electrogoniometers strapped to the joint capsule itself, inside the shoulder instead of on the surface. Doing this would require surgical implantation, which would cause a bit of harm to the person, but it would be a hell of a lot more accurate than testing a shoulder’s mobility in a passive setting in just one direction. Do we know whether the shoulder starts in antetorsion and moves into retrotorsion, vice versa, or something really cool that no one has even figured out yet?? We can’t know without direct measurements.

If we look at anything designed for performance enhancement or aesthetics, whenever a new product (supplement or drug) is ready to hit the market, it typically goes through some specific trial pathways, as discussed by Tim Ferriss in his new book The 4-Hour Body:

Racehorses → bodybuilders or chronic-illness patients (muscle-wasting conditions like AIDS, muscular dystrophy, etc.) → elite athletes → the rich and famous → average people.

Research typically follows along this pathway in order to prove or disprove whether a specific concept works, which means that by the time it gets to the researchers, it’s been thought of for at least a decade or more, and proven time and time again, just not in a peer-reviewed setting.

Now, there is definitely a role for research, but I always view new research with caution, especially its conclusion. When we look at the outcome, we always have to ask whether the conclusion is supported by the methods used in the study and the statistical analysis, and most importantly, whether the finding is relevant. Additionally, would this intervention help to progress a thought process, disprove it, produce measurable results for an “Average Joe,” or spark a discussion about future research? In other words, does that study on hip mobility and squat depth mean that I can take a client out on the gym floor, follow the recommendations listed, and see a MEASURABLE increase in their performance of specific tasks, strength, body composition, or other health-related variables? If not, I don’t really care. Michael Boyle came out this year to say that squatting will be dead soon, so how long before the research catches up and looks into the possibility of single-leg training being more impactful on sport performance than 2-leg activities, making an entire generation of trainers shun the squat rack?

Sure, research helps to direct the flow of training and separate what we know works from hearsay and conjecture, but it shouldn’t be viewed as the sole gold standard. If that were the case, we would still have everyone performing as many crunches in a minute as possible to test their abdominal endurance, since that test has a larger body of evidence behind it than the studies showing that crunches will ruin your spine. (Note: they won’t if you do them right.) Research helps to guide what we know versus what we believe, but we have to remember that the field of sports and exercise science is still in its infancy, so a lot of the research out there does not match up with what is known in coaching or training circles. Use research whenever possible, but fill in the gaps with personal knowledge, field work, and collaborations.
