Tuesday, February 03, 2015

The biggest fail of science

The Dilbert cartoonist writes:
What is science's biggest fail of all time?

I nominate everything about diet and fitness.

Maybe science has the diet and fitness stuff mostly right by now. I hope so. But I thought the same thing twenty years ago and I was wrong.

I used to think fatty food made you fat. Now it seems the opposite is true. Eating lots of peanuts, avocados, and cheese, for example, probably decreases your appetite and keeps you thin.

I used to think vitamins had been thoroughly studied for their health trade-offs. They haven’t. The reason you take one multivitamin pill a day is marketing, not science.

I used to think the U.S. food pyramid was good science. In the past it was not, and I assume it is not now.

I used to think drinking one glass of alcohol a day was good for health, but now I think that idea is probably just a correlation found in studies.

I used to think I needed to drink a crazy-large amount of water each day, because smart people said so, but that wasn’t science either.

I could go on for an hour.

I agree with this. No other area of science has failed so badly. For decades, reputable authorities have been telling us what is supposedly scientifically correct about diet and fitness, and they have been wrong about most of it.

It is hard to explain how so much money could produce such poor results. I think the problem is that most of the bad advice has come from physicians. They are not the best source, for three reasons.

1. Diet and fitness are outside their expertise. They take medical school classes in diagnosing disease and cutting up cadavers, but not in diet and fitness.

2. Physicians do not have a scientific mindset. Science is all about doing experiments, and physicians do not believe in experimenting on their patients.

3. Physicians are not independent thinkers. The medical world is extremely hierarchical, and physicians very much believe in following official policy and in having everyone else follow orders as well.

The Dilbert cartoonist relates this to a more general public distrust of science on other issues like climate change.

3 comments:

Anonymous said...

This is all true, but it should be noted that nutrition science failed because of the Frequentist statistical methods that were used in the studies.

Frequentist statistical methods have screwed up a lot of other subjects too, pretty much every subject they have touched, in fact, from economics to psychology, neither of which can be said to have improved its predictive capabilities within living memory.

I nominate Frequentist statistics as the biggest fail.

Roger said...

Good point. I would say that the frequentist methods are fine as long as they are used properly, as the math itself is correct. But you are right that they have screwed up other subjects.

Anonymous said...

Roger,

Most people think that, but it's simply not true. Already by the 1950s and 1960s, statisticians had discovered a mass of (theoretical) problems with p-values and Confidence Intervals (the primary tools used in all those 'scientific' papers which turn out to be wrong far more often than they're right). They only really work in very simple cases where they happen to give answers operationally identical to the Bayesian answer.

These problems aren't merely faulty application. They are problems of principle and are inherent in Frequentist Statistics even if performed correctly.

See for example here:

http://bayes.wustl.edu/etj/articles/confidence.pdf

Note the example on page 196 (the truncated exponential distribution). It's an innocent-looking real problem, yet the correctly calculated Confidence Interval contains only impossible parameter values.

Let me say that again: we can deduce (prove) from the same assumptions used to calculate the Confidence Interval, that the true parameter can't possibly be in the Confidence Interval.
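
To make that concrete, here is a minimal numerical sketch of the calculation in Python (my own code, not from the paper; it assumes the paper's data of three observations 12, 14, 16, builds the shortest 90% confidence interval from the usual unbiased estimator, and compares it with the flat-prior Bayesian interval):

    import numpy as np
    from scipy import stats

    x = np.array([12.0, 14.0, 16.0])   # the three observations used in the paper
    n = len(x)

    # Sampling model: p(x | theta) = exp(-(x - theta)) for x > theta,
    # so every observation exceeds theta and therefore theta < min(x).

    # Frequentist side: the unbiased estimator is theta_hat = mean(x) - 1
    # (since E[x] = theta + 1). The pivot theta_hat - theta = mean(v) - 1
    # with v_i ~ Exp(1), i.e. n*(pivot + 1) ~ Gamma(n, 1). Scan the 90%
    # intervals of the pivot and keep the shortest one.
    theta_hat = x.mean() - 1.0
    pivot_dist = stats.gamma(a=n)
    best = None
    for a1 in np.linspace(0.0, 0.099, 1000):          # lower tail probability
        lo = pivot_dist.ppf(a1) / n - 1.0
        hi = pivot_dist.ppf(a1 + 0.90) / n - 1.0
        if best is None or hi - lo < best[1] - best[0]:
            best = (lo, hi)
    ci = (theta_hat - best[1], theta_hat - best[0])   # shortest 90% confidence interval

    # Bayesian side: with a flat prior the posterior is proportional to
    # exp(n * theta) for theta < min(x), so the 90% interval is
    # (min(x) + log(0.10)/n, min(x)).
    bayes = (x.min() + np.log(0.10) / n, x.min())

    print("shortest 90% confidence interval:", ci)    # comes out near (12.2, 13.8)
    print("theta must be less than min(x) =", x.min())
    print("90% Bayesian interval:", bayes)            # comes out near (11.23, 12.0)

Every value in that confidence interval lies above min(x) = 12, so the 90% interval consists entirely of values we can prove are impossible, while the Bayesian interval sits where the parameter can actually be.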

Then look at the analytical discussion that follows the example in the paper, explaining why and when this phenomenon happens. It's a direct consequence of failing to adhere to the sum and product rules of probability theory (i.e. Bayesian Statistics).
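
(For reference, the two rules in question are just the standard ones:

    $P(A \mid I) + P(\bar{A} \mid I) = 1$            (sum rule)
    $P(A, B \mid I) = P(A \mid B, I)\, P(B \mid I)$   (product rule)

and Bayes' theorem is an immediate consequence of the product rule.)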

Again, these are not errors of application, they are inherent and deep problems of principle.

Frequentist statistics was kept around despite these well-documented theoretical flaws for purely philosophical reasons. Basically, Frequentists couldn't understand philosophically what Bayesians were doing, so they rejected Bayesian methods. Anti-Bayesian sentiment hit its peak sometime around 1940-1970 or so.

The only difference between now and 50 years ago is that back then people could only point out theoretical problems with Frequentist methods. Since then, it has become clear to everyone that Frequentist statistics is a massive practical failure as well.

Most heavily statistics-laden research papers are wrong.

Every branch of science that relies on classical statistics as its main tool has stagnated. Just like economics and psychology, their predictive ability hasn't improved in half a century, despite hundreds of thousands of peer-reviewed research papers and massive research spending that dwarfs everything that came before.