Archive for the ‘Observational Studies’ Category

As the nutrition world implodes, there are a lot of accusations about ulterior motives and personal gain. (It is a little odd that, in this period of unbelievable greed, with CEOs ripping off public companies for hundreds of millions of dollars and Congress trying to give tax breaks to billionaires, book authors are upbraided for trying to make money.) So let me declare that I am not embarrassed to be an author for the money, although the profits from my book do go to research: my own research and the research of my colleagues. So beyond general excellence (not yet reviewed by David Katz), I think “World Turned Upside Down” does give you some scientific information about red meat and cancer that you can’t get from the WHO report on the subject.

The WHO has not yet released the evidence behind its claim that red meat will give you cancer, but it is worth going back to one of the previous attacks. Chapters 18 and 19 of the book discussed a paper by Sinha et al. entitled “Meat Intake and Mortality.” The Abstract says, “Conclusion: Red and processed meat intakes were associated with modest increases in total mortality, cancer mortality, and cardiovascular disease mortality.” I had previously written a blogpost about the study indicating how weak the association was. In that post, I had used the data on men but when I incorporated the information into the book, I went back to Sinha’s paper and analyzed the original data. For some reason, I also checked the data on women. That turned out to be pretty surprising:

[Table 3 from Sinha et al., as analyzed in Chapter 18]

I described it on page 286: “The population was again broken up into five groups or quintiles. The lower numbered quintiles are for the lowest consumption of red meat. Looking at all cause mortality, there were 5,314 deaths [in the lowest quintile] and when you go up to quintile 05, highest red meat consumption, there are 3,752 deaths. What? The more red meat, the lower the death rate? Isn’t that the opposite of the conclusion of the paper? And the next line has [calculated] relative risk which now goes the other way: higher risk with higher meat consumption. What’s going on? As near as one can guess, “correcting” for the confounders changed the direction….” They do not show most of the data or calculations but I take this to be equivalent to a multivariate analysis, that is, red meat plus other things gives you the risk. If they had broken up the population by quintiles of smoking, you would see that smoking was the real contributor. That’s how I interpreted it but, in any case, their conclusion is about meat and it is opposite to what the data say.

So how much do you gain from eating red meat? “A useful way to look at this data is from the standpoint of conditional probability. We ask: what is the probability of dying in this experiment if you are a big meat-eater? The answer is simply the number of people who both died during the experiment and were big meat‑eaters …. = 0.0839 or about 8%. If you are not a big meat‑eater, your risk is …. = 0.109 or about 11%.” The absolute gain is only about 3 percentage points. But that’s good enough for me.
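For concreteness, here is that arithmetic in Python. This is only a sketch: the group sizes are invented placeholders, chosen so the division reproduces the quoted 0.0839 and 0.109, since the actual denominators are not shown here.

    # Conditional probability of dying, given meat consumption, as in the quote.
    # Group sizes are hypothetical placeholders, not Sinha's actual data.

    def p_death_given(deaths: int, group_size: int) -> float:
        """P(died during the study | membership in the group)."""
        return deaths / group_size

    big_meat_eaters = p_death_given(deaths=3752, group_size=44720)  # ~0.084
    everyone_else = p_death_given(deaths=5314, group_size=48750)    # ~0.109

    print(f"big meat-eaters:     {big_meat_eaters:.4f}")
    print(f"not big meat-eaters: {everyone_else:.4f}")
    print(f"absolute difference: {everyone_else - big_meat_eaters:.4f}")
    # ~0.025, the "about 3 %" in the text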

Me, at Jubilat, the Polish butcher in the neighborhood: “The Boczak Wedzony (smoked bacon). I’ll take the whole piece.”


Boczak Wedzony from Jubilat Provisions

Rashmi Sinha is a Senior Investigator and Deputy Branch Chief at the NIH. She is a member of the WHO panel, the one that says red meat will give you cancer (although they don’t add “if you have the right confounders”).

So, buy my book: Amazon, Alibris, or

Direct: personalized, autographed copy, $20.00, free shipping (USA only). Use coupon code SEPT16.

 

[Illustration: Carrot Nation]

I was walking on a very dark street and I assumed that the voice I heard was a guy talking on a cell phone. Apparently about a dinner party, he was saying “Remember, I don’t eat red meat.” Only a few years ago, that would have sounded strange. Of course, a few years ago a man talking to himself on the street would have been strange. He would have been assumed to be deranged, more so if he told you that he was talking on the telephone. But yesterday’s oddity pops up everywhere today. Neo-vegetarianism affects us all. It’s all described very well in Jane Kramer’s excellent review of veggie cookbooks in the April 14 New Yorker:

“…from one chili party to the next, everything changed. Seven formerly enthusiastic carnivores called to say they had stopped eating meat entirely…. Worse, on the night of that final party, four of the remaining carnivores carried their plates to the kitchen table, ignoring the cubes of beef and pancetta, smoky and fragrant in their big red bean pot, and headed for my dwindling supply of pasta. “Stop!” I cried. “That’s for the vegetarians!”

Illustration by Robin Feinman. Reference: http://en.wikipedia.org/wiki/Carrie_Nation.


“…789 deaths were reported in Doll and Hill’s original cohort. Thirty-six of these were attributed to lung cancer. When these lung cancer deaths were counted in smokers versus non-smokers, the correlation virtually sprang out: all thirty-six of the deaths had occurred in smokers. The difference between the two groups was so significant that Doll and Hill did not even need to apply complex statistical metrics to discern it. The trial designed to bring the most rigorous statistical analysis to the cause of lung cancer barely required elementary mathematics to prove his point.”

Siddhartha Mukherjee —The Emperor of All Maladies.

Scientists don’t like philosophy of science. It is not just that pompous phrases like “hypothetico-deductive systems” are such a turn-off; it is that we rarely recognize philosophy of science as a description of what we actually do. In the end, there is no definition of science and it is hard to generalize about actual scientific behavior. It’s a human activity and precisely because it puts a premium on creativity, it defies categorization. As the physicist Steven Weinberg put it, echoing Justice Stewart on pornography:

“There is no logical formula that establishes a sharp dividing line between a beautiful explanatory theory and a mere list of data, but we know the difference when we see it — we demand a simplicity and rigidity in our principles before we are willing to take them seriously [1].”

A frequently stated principle is that “observational studies only generate hypotheses.” The related idea that “association does not imply causality” is also common, usually cited by those authors who want you to believe that the association that they found does imply causality. These ideas are not right or, at least, they insufficiently recognize that scientific experiments are not so easily wedged into categories like “observational studies.” The principles are also invoked by bloggers and critics to discredit the continuing stream of observational studies that make an association between their favorite targets (eggs, red meat, sugar-sweetened soda) and a metabolic disease or cancer. In most cases, the studies are getting what they deserve but the bills of indictment are not quite right. It is usually not simply that they are observational studies but rather that they are bad observational studies and, in any case, the associations are so weak that it is reasonable to say that they are an argument for a lack of causality. On the assumption that good experimental practice and interpretation can be even roughly defined, let me offer principles that I think are a better representation, insofar as we can make any generalization, of what actually goes on in science:

 Observations generate hypotheses. 

Observational studies test hypotheses.

Associations do not necessarily imply causality.

In some sense, all science is associations. 

Only mathematics is axiomatic.

If you notice that kids who eat a lot of candy seem to be fat, or even if you notice that candy makes you yourself fat, that is an observation. From this observation, you might come up with the hypothesis that sugar causes obesity. A test of your hypothesis would be to see if there is an association between sugar consumption and incidence of obesity. There are various ways; the simplest epidemiologic approach is to compare the history of the eating behavior of individuals (insofar as you can get it) with how fat they are. When you do this comparison you are testing your hypothesis. There are an infinite number of things that you could have measured as an independent variable (meat, TV hours, distance from the French bakery) but you have a hypothesis that it was candy. Mike Eades described falling asleep as a child by trying to think of everything in the world. You just can’t test them all. As Einstein put it, “your theory determines the measurement you make.”
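As a minimal sketch of that kind of hypothesis test, with invented numbers standing in for the survey data:

    # Does reported candy intake associate with fatness? A toy version of the
    # epidemiologic comparison described above; all data are invented.
    from statistics import correlation  # Python 3.10+

    candy_servings_per_week = [2, 5, 1, 9, 12, 3, 7, 14, 0, 6]
    bmi = [21, 24, 20, 27, 30, 22, 25, 31, 19, 23]

    r = correlation(candy_servings_per_week, bmi)
    print(f"Pearson r = {r:.2f}")
    # A strong r supports the hypothesis; a weak one argues against it.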

Associations predict causality. Hypotheses generate observational studies, not the other way around.

In fact, association can be strong evidence for causation and frequently provides support for, if not absolute proof of, the idea to be tested. A correct statement is that association does not necessarily imply causation. In some sense, all science is observation and association. Even thermodynamics, that most mathematical and absolute of sciences, rests on observation. As soon as somebody observes two systems in thermal equilibrium with a third but not with each other (zeroth law), the jig is up. When somebody builds a perpetual motion machine, that’s it. It’s all over.

Biological mechanisms, or perhaps any scientific theory, are never proved. By analogy with a court of law, you cannot be found innocent, only not guilty. That is why excluding a theory is stronger than showing consistency. The grand epidemiological study of macronutrient intake vs. diabetes and obesity shows that increasing carbohydrate is associated with increased calories even under conditions where fruits and vegetables also went up and fat, if anything, went down. It is an observational study but it is strong because it supports a lack of causal effect of increased carbohydrate and decreased fat on outcome. The failure of total or saturated fat to have any benefit is the kicker here. It is now clear that prospective experiments have shown, in the past, and will continue to show, the same negative outcome. Of course, in a court of law, if you are found not guilty of child abuse, people may still not let you move into their neighborhood. The point is that saturated fat should never have been indicted in the first place.

An association will tell you about causality 1) if the association is strong, 2) if there is a plausible underlying mechanism and 3) if there is no more plausible explanation. For example, countries with a lot of TV sets have modern lifestyles that may predispose to cardiovascular disease; TV does not cause CVD.

Re-inventing the wheel. Bradford Hill and the history of epidemiology.

Everything written above is true enough or, at least, it seemed that way to me. I thought of it as an obvious description of what everybody knows. The change to saying that “association does not necessarily imply causation” is important but not that big a deal. It is common sense or logic and I had made it into a short list of principles. It was a blogpost of reasonable length. I described it to my colleague Gene Fine. His response was “aren’t you re-inventing the wheel?” Bradford Hill, he explained, pretty much the inventor of modern epidemiology, had already established these and a couple of other principles. Gene cited The Emperor of All Maladies, an outstanding book on the history of cancer. I had read The Emperor of All Maladies on his recommendation and I remembered Bradford Hill and the description of the evolution of the ideas of epidemiology, population studies and random controlled trials. I also had a vague memory of reading the story in James LeFanu’s The Rise and Fall of Modern Medicine, another captivating history of medicine. However, I had not really absorbed these as principles. Perhaps we’re just used to it, but saying that an association implies causality only if it is a strong association is not exactly a scientific breakthrough. It seems an obvious thing that you might say over coffee or in response to somebody’s blog. It all reminded me of learning, in grade school, that the Earl of Sandwich had invented the sandwich and thinking “this is an invention?” Woody Allen thought the same thing and wrote the history of the sandwich and the Earl’s early failures: “In 1741, he places bread on bread with turkey on top. This fails. In 1745, he exhibits bread with turkey on either side. Everyone rejects this except David Hume.”

At any moment in history our background knowledge, and accepted methodology, may be limited. Some problems seem to have simple solutions. But simple ideas are not always accepted. The concept of the random controlled trial (RCT), obvious to us now, was hard won. Proving that any particular environmental factor (diet, smoking, pollution or toxic chemicals) was the cause of a disease, and that by reducing that factor the disease could be prevented, turned out to be a very hard sell, especially to physicians whose view of disease may have been strongly colored by the idea of an infective agent.

The Rise and Fall of Modern Medicine describes Bradford Hill’s two demonstrations, that streptomycin in combination with PAS (para-aminosalicylic acid) could cure tuberculosis and that tobacco causes lung cancer, as one of the Ten Definitive Moments in the history of modern medicine (the others are shown in the textbox). Hill was Professor of Medical Statistics at the London School of Hygiene and Tropical Medicine but was not formally trained in statistics and, like many of us, thought of proper statistics as common sense. An early near-fatal case of tuberculosis had also prevented formal medical education. His first monumental accomplishment was, ironically, to demonstrate how tuberculosis could be cured with the combination of streptomycin and PAS. In 1948, Hill and co-worker Richard Doll undertook a systematic investigation of the risk factors for lung cancer. His eventual success was accompanied by a description of the principles that allow you to say when association can be taken as causation.

Ten Definitive Moments from The Rise and Fall of Modern Medicine:

1941: Penicillin

1949: Cortisone

1950: streptomycin, smoking and Sir Austin Bradford Hill

1952: chlorpromazine and the revolution in psychiatry

1955: open-heart surgery – the last frontier

1963: transplanting kidneys

1964: the triumph of prevention – the case of strokes

1971: curing childhood cancer

1978: the first ‘Test-Tube’ baby

1984: Helicobacter – the cause of peptic ulcer

Wiki says: “in 1965, built upon the work of Hume and Popper, Hill suggested several aspects of causality in medicine and biology…” but his approach was not formal. He never referred to his principles as criteria; he recognized them as common-sense behavior, and his 1965 presentation to the Royal Society of Medicine is a remarkably sober, intelligent document. Although it has been described as an example of an article that, as here, is read more often in quotations and paraphrases, it is worth reading the original even today.

Note: “Austin Bradford Hill’s surname was Hill and he always used the name Hill, AB in publications. However, he is often referred to as Bradford Hill. To add to the confusion, his friends called him Tony.” (This comment is from Wikipedia, not Woody Allen).

The President’s Address

Bradford Hill’s description of the factors that might make you think an association implied causality:

[Hill’s 1965 address: “The Environment and Disease: Association or Causation?”]

1. Strength. “First upon my list I would put the strength of the association.” This, of course, is exactly what is missing in the continued epidemiological scare stories. Hill describes:

“….prospective inquiries into smoking have shown that the death rate from cancer of the lung in cigarette smokers is nine to ten times the rate in non-smokers and the rate in heavy cigarette smokers is twenty to thirty times as great.”

But further:

“On the other hand the death rate from coronary thrombosis in smokers is no more than twice, possibly less, the death rate in nonsmokers. Though there is good evidence to support causation it is surely much easier in this case to think of some features of life that may go hand-in-hand with smoking – features that might conceivably be the real underlying cause or, at the least, an important contributor, whether it be lack of exercise, nature of diet or other factors.”

Doubts about an odds ratio of two or less: that’s where you really have to wonder about causality. The progression of epidemiologic studies that tell you red meat, HFCS, etc. will cause diabetes, prostate cancer, or whatever rarely hits an odds ratio of 2. While the published studies may contain disclaimers of the type in Hill’s paper, the PR department of the university where the work is done, and hence the public media, show no such hesitation and will quickly attribute causality to the study as if the odds ratio were 10 instead of 1.2.
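To make the arithmetic concrete, here is a toy odds-ratio calculation; the 2×2 counts are invented, sized to mimic a smoking-strength effect and a red-meat-strength effect:

    # Odds ratio from a 2x2 table: (cases/controls among the exposed) divided
    # by (cases/controls among the unexposed). All counts are invented.

    def odds_ratio(exposed_cases: int, exposed_controls: int,
                   unexposed_cases: int, unexposed_controls: int) -> float:
        return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

    print(odds_ratio(90, 10, 50, 50))  # 9.0 -- a smoking-sized association
    print(odds_ratio(55, 45, 50, 50))  # ~1.2 -- the size Hill would doubt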

2. Consistency: Hill listed the repetition of the results in other studies under different circumstances as a criterion for considering how much an association implied causality. Not mentioned, but of great importance, is that this test cannot be made independent of the first criterion. Consistently weak associations do not generally add up to a strong association. If there is a single practice in modern medicine that is completely out of whack with respect to careful consideration of causality, it is the meta-analysis, where studies with no strength at all are averaged so as to create a conclusion that is stronger than any of its components.
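A sketch of the practice I am complaining about, with invented study results: standard fixed-effect, inverse-variance pooling happily yields a “significant” summary from four individually weak studies, yet the pooled association is no stronger than its parts.

    # Fixed-effect, inverse-variance pooling of four weak studies.
    # Log odds ratios and standard errors are invented for illustration.
    import math

    log_or = [math.log(1.15), math.log(1.25), math.log(1.10), math.log(1.30)]
    se = [0.15, 0.20, 0.12, 0.18]

    weights = [1 / s ** 2 for s in se]
    pooled = sum(w * x for w, x in zip(weights, log_or)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled OR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(low):.2f}-{math.exp(high):.2f})")
    # ~1.17 (1.01-1.36): now "significant," but still a weak association.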

3. Specificity. Hill was circumspect on this point, recognizing that we should have an open mind on what causes what. On specificity of cancer and cigarettes, Hill noted that the two sites in which he showed a cause and effect relationship were the lungs and the nose.

4. Temporality: Obviously, we expect the cause to precede the effect or, as some wit put it, “which got laid first, the chicken or the egg.” Hill recognized that it was not so clear for diseases that developed slowly. “Does a particular diet lead to disease or do the early stages of the disease lead to those peculiar dietetic habits?” Of current interest are the epidemiologic studies that show a correlation between diet soda and obesity and are quick to see a causal link; naturally, one should ask “Who drinks diet soda?”

5. Biological gradient: The association should show a dose-response curve. In the case of cigarettes, the death rate from cancer of the lung increases linearly with the number of cigarettes smoked. A subset of the first principle, that the association should be strong, is that the dose-response curve should have a meaningful slope and, I would add, the numbers should be big.
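As a minimal illustration, with rates invented to resemble the cigarette example, one can put a least-squares slope through risk vs. dose:

    # Hill's biological gradient: does risk climb with dose, and steeply?
    # Doses and rates are invented to resemble the cigarette example.
    from statistics import linear_regression  # Python 3.10+

    cigarettes_per_day = [0, 5, 10, 20, 30]
    lung_cancer_deaths_per_1000 = [0.07, 0.5, 1.0, 2.0, 3.1]

    fit = linear_regression(cigarettes_per_day, lung_cancer_deaths_per_1000)
    print(f"slope = {fit.slope:.3f} deaths per 1000 per extra cigarette/day")
    # A steep, monotonic gradient; compare a meat/diabetes curve that wobbles
    # up and down across quintiles.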

6. Plausibility: On the one hand, this seems critical (the association of egg consumption with diabetes is obviously foolish) but the hypothesis to be tested may have come from an intuition that is far from evident. Hill said, “What is biologically plausible depends upon the biological knowledge of the day.”

7. Coherence: “data should not seriously conflict with the generally known facts of the natural history and biology of the disease”

8. Experiment: It was another age. It is hard to believe that it was in my lifetime. “Occasionally it is possible to appeal to experimental, or semi-experimental, evidence. For example, because of an observed association some preventive action is taken. Does it in fact prevent?” The inventor of the random controlled trial would be amazed at how many of these are done and how many fail to prevent. And, most of all, he would have been astounded that it doesn’t seem to matter. However, the progression of failures, from Framingham to the Women’s Health Initiative, the lack of association between low fat, low saturated fat and cardiovascular disease, is strong evidence for the absence of causation.

9. Analogy: “In some circumstances it would be fair to judge by analogy. With the effects of thalidomide and rubella before us we would surely be ready to accept slighter but similar evidence with another drug or another viral disease in pregnancy.”

Hill’s final word on what has come to be known as his criteria for deciding about causation:

“Here then are nine different viewpoints from all of which we should study association before we cry causation. What I do not believe — and this has been suggested — is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?” This may be the first critique of the still-to-be-invented Evidence-based Medicine.

Nutritional Epidemiology.

The decision to say that an observational study implies causation is equivalent to an assertion that the results are meaningful, that it is not a random association at all, that it is scientifically sound. Critics of epidemiological studies have relied on their own perceptions and appeals to common sense. When I started this blogpost, I was one of them, and I had not appreciated the importance of Bradford Hill’s principles. The Emperor of All Maladies described Hill’s strategies for dealing with association and causation “which have remained in use by epidemiologists to date.” But have they? The principles are in the texts. Epidemiology, Biostatistics, and Preventive Medicine has a chapter called “The Study of Causation in Epidemiologic Investigation and Research” from which the dose-response curve was adapted. Are these principles being followed? Previous posts in this blog and others have voiced criticisms of epidemiology as it’s currently practiced in nutrition but we were lacking a meaningful reference point. Looking back now, what we see is a large number of research groups doing epidemiology in violation of most of Hill’s criteria.

The red meat scare of 2011 was Pan et al. and, in a previous post, I described the remarkable blog post from Harvard. Their blog explained that the paper was unnecessarily scary because it had described things in terms of “relative risks, comparing death rates in the group eating the least meat with those eating the most. The absolute risks… sometimes help tell the story a bit more clearly. These numbers are somewhat less scary.” I felt it was appropriate to ask, “Why does Dr. Pan not want to tell the story as clearly as possible? Isn’t that what you’re supposed to do in science? Why would you want to make it scary?” It was, of course, a rhetorical question.
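The distinction is easy to state in code. A sketch with invented rates (the real ones are in Pan’s paper):

    # Relative vs. absolute risk: the same result stated two ways.
    # Rates below are invented placeholders.
    baseline_rate = 0.010   # annual death rate in the lowest-meat group
    relative_risk = 1.2     # a typical nutritional-epidemiology effect size

    exposed_rate = baseline_rate * relative_risk
    print(f"relative risk: {relative_risk} ('20% higher risk' -- scary)")
    print(f"absolute difference: {exposed_rate - baseline_rate:.3f}")
    # 0.002 -- two extra deaths per 1,000 person-years: the less scary story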

Looking at Pan et al. in light of Bradford Hill, we can examine some of their data. Figure 2 from their paper shows the risk of diabetes as a function of red meat in the diet. The variable reported is the hazard ratio, which can be considered roughly the same as the odds ratio, that is, the relative odds of getting diabetes. I have indicated, in pink, those values that are not statistically significant and I grayed out the confidence intervals to make it easy to see that these do not even hit the level of 2 that Bradford Hill saw as some kind of cut-off.

[Figure 2 from Pan et al.: hazard ratios for diabetes by quintile of red meat intake, nonsignificant values marked]

The hazard ratios for processed meat are somewhat higher but still less than 2. This is weak data and violates the first and most important of Hill’s criteria. As you go from quintile 2 to 3, there is an increase in risk, but at Q4 the risk goes down and then back up at Q5, in distinction to principle 5, which suggests the importance of dose-response curves. But, stepping back and asking what the whole idea is, asking why you would think that meat has a major, and isolatable, role, separate from everything else, in a disease of carbohydrate intolerance, you see that this is not rational, this is not science. And Pan is not making random observations. This is a test of the hypothesis that red meat causes diabetes. Most of us would say that it didn’t make any sense to test such a hypothesis but, in any case, the results do not support the hypothesis.
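For readers who want the shading rule spelled out, a sketch (the interval endpoints are invented):

    # A hazard ratio is statistically significant at the 5% level only if its
    # 95% confidence interval excludes 1.0. Example endpoints are invented.
    def significant(ci_low: float, ci_high: float) -> bool:
        return ci_low > 1.0 or ci_high < 1.0

    print(significant(0.95, 1.40))  # False: the CI straddles 1.0 (pink above)
    print(significant(1.05, 1.45))  # True, yet still far below Hill's cut-off of ~2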

What is science?

Science is a human activity, and what we don’t like about philosophy of science is that it is about the structure and formalism of science rather than what scientists really do; that is why there aren’t even any real definitions. One description that I like comes from a colleague at the NIH: “What you do in science is you make a hypothesis and then you try to shoot yourself down.” One of the more interesting sidelights on the work of Hill and Doll, as described in Emperor, was that during breaks from the taxing work of analyzing the questionnaires that provided the background on smoking, Doll himself would step out for a smoke. Doll believed that cigarettes were unlikely to be a cause — he favored tar from paved highways as the causative agent — but as the data came in, “in the middle of the survey, sufficiently alarmed, he gave up smoking.” In science, you try to shoot yourself down and, in the end, you go with the data.

First published in October of 2011, this post announced an online Q&A with Harvard’s Eric Rimm to answer questions about the School of Public Health’s new “Healthy Eating Plate,” its own version of nutritional recommendations to compete with the USDA’s MyPlate. A rather limited window of one hour was allotted for the entire country to phone in our questions. Unfortunately, HSPH was not as good at telecommunications as it is at epidemiology and the connection did not start working for a while. The questions that I wanted to ask, however, still stand and this post is a duplicate of the original with the notice about the Q&A removed. Harvard has been invited to participate in a panel discussion at the Ancestral Health Symposium, and we will see how these questions can be answered.

— adapted from Pops (at Louder and Smarter), the anonymous brilliant artist and admitted ne’er-do-well.

One of the questions surrounding the USDA Nutrition Guidelines for Americans was whether so-called “sunshine laws,” like the Freedom of Information Act, were adhered to. Whereas hearings were recorded and input from the public was solicited, there is the sense that if the letter of the law was followed, the spirit was weak. When my colleagues and I testified at the USDA hearings, there was little evidence that their representatives were listening; there was no discussion. We said our piece and then were heard no more. In fact, at the break, when I tried to speak to one of the panel, somebody came out from backstage, I believe unarmed, to tell me that I could not discuss anything with the committee.

Harvard School of Public Health, home of “odds ratio = 1.22,” last month published its own implementation of the one-size-fits-all approach to public nutrition, the “Healthy Eating Plate.” Their advice is full of “healthy,” “packed with” and other self-praise that makes this mostly an infomercial for HSPH’s point of view. Supposedly a correction of the errors in MyPlate from the USDA, it seems to be more similar than different. The major similarity is the disdain for the intelligence of the American public. Comparing the two plates (below), they have exchanged the positions of fruits and vegetables. “Grains” on MyPlate is now called “Whole Grains,” and “Protein” has been brilliantly changed to “Healthy Proteins.” How many NIH grants were required to think of this is unknown. Harvard will, of course, tell you what “healthy” is: no red meat and, of course, watch out for the Seventh Egg.

[Images: the USDA’s MyPlate and Harvard’s Healthy Eating Plate, side by side]

So here are the questions that I wanted to ask:

  1. Dr. Rimm, you are recommending a diet for all Americans but even within the pattern of general recommendations, I don’t know of any experimental trial that has tested it.  Aren’t you just recommending another grand experiment like the original USDA recommendations which you are supposedly improving on?
  2. Dr. Rimm, given that half the population is overweight or obese shouldn’t there be at least two plates?
  3. Dr. Rimm, I think the American public expects a scientific document. Don’t you think continued use of the words “healthy,” “packed with nutrients,” makes the Plate more of an infomercial for your point of view?
  4. Dr. Rimm, the Plate site says “The contents of this Web site are not intended to offer personal medical advice,” but it seems that is exactly what it is doing. If you say that you are recommending a diet that will “Lower blood pressure; reduced risk of heart disease, stroke, and probably some cancers; lower risk of eye and digestive problems,” how is that not medical advice?  Are you disowning responsibility for the outcome in advance?
  5. Dr. Rimm, more generally, how will you judge whether these recommendations are successful? Is there a null hypothesis? The USDA recommendations continue from year to year without any consideration of past successes or failures.
  6. Dr. Rimm, “healthy” implies general consensus but there are many scientists and physicians with good credentials and experience who hold to different opinions. Have you considered these opinions in formulating the plate? Is there any room for dissent or alternatives?
  7. Dr. Rimm, the major alternative point of view is that low-carbohydrate diets offer benefits for weight loss and maintenance and, obviously, for diabetes and metabolic syndrome. Although your recommendations continually refer to regulation of blood sugar, it is not incorporated in the Plate. Why not?
  8. Dr. Rimm, nutritionally, fruits have more sugar, more calories, less potassium, fewer antioxidants than vegetables.  Why are they lumped together? And how can you equate beans, nuts and meat as a source of protein?
  9. Dr. Rimm, looking at the comparison of MyPlate and your Plate, it seems that all that is changed is that “healthy” has been added to proteins and “whole” has been added to grains.  If people know what “healthy” is, why is there an obesity epidemic? Or are you blaming the patient?
  10. Dr. Rimm, you are famous for disagreeing on lipids with the DGAC committee yet your name is on their report as well as on this document that is supposed to be an alternative. Do we know where you stand?
  11. Dr. Rimm, the Healthy Plate “differences” page says “The Healthy Eating Plate is based exclusively on the best available science and was not subjected to political and commercial pressures from food industry lobbyists.” This implies that the USDA recommendations are subject to such pressures. What is the evidence for this? You were a member of the USDA panel. What pressures were brought to bear on you and how did you deal with them?
  12. Dr. Rimm, the Healthy Plate still limits saturated fat even though a study from your department showed that there was, in fact, no effect of dietary saturated fat on cardiovascular disease.  That study, moreover, was an analysis of numerous previous trials, the great majority of which individually showed no risk from saturated fat. What was wrong with that study that allows you to ignore it?

*Medicineball (colloq.): a game that derives from Moneyball, in which an “unscientific culture responds, or fails to respond, to the scientific method” in order to stay funded.

“These results suggest that there is no superior long-term metabolic benefit of a high-protein diet over a high-carbohydrate in the management of type 2 diabetes.” The conclusion is from a paper by Larsen, et al. [1] which, based on that statement in the Abstract, I would not normally bother to read. It is good that you have to register trials and report failures but, from a broader perspective, finding nothing is not great news, and just because Larsen couldn’t do it doesn’t mean it can’t be done. However, in this case, I received an email from International Diabetes, published bilingually in Beijing: “Each month we run a monthly column where [we] choose a hot-topic article… and invite expert commentary opinion about that article,” so I agreed to write an opinion. The following is my commentary:

“…no superior long-term metabolic benefit of a high-protein diet over a high-carbohydrate ….” A slightly more positive conclusion might have been that “a high-protein diet is as good as a high carbohydrate diet.”  After all, equal is equal. The article is, according to the authors, about “high-protein, low-carbohydrate” so rather than describing a comparison of apples and pears, the conclusion should emphasize low carbohydrate vs high carbohydrate.   It is carbohydrate, not protein, that is the key question in diabetes but clarity was probably not the idea. The paper by Larsen, et al. [1] represents a kind of classic example of the numerous studies in the literature whose goal is to discourage people with diabetes from trying a diet based on carbohydrate restriction, despite its intuitive sense (diabetes is a disease of carbohydrate intolerance) and despite its established efficacy and foundations in basic biochemistry.  The paper is characterized by blatant bias, poor experimental design and mind-numbing statistics rather than clear graphic presentation of the data. I usually try to take a collegial approach in these things but this article does have a unique and surprising feature, a “smoking gun” that suggests that the authors were actually aware of the correct way to perform the experiment or at least to report the data.

Right off, the title tells you that we are in trouble. “The effect of high-protein, low-carbohydrate diets in the treatment…” implies that all such diets are the same even though there are several different versions, some of which (by virtue of better design) will turn out to have had much better performance than the diet studied here, and almost all of which are not “high protein.” Protein is one of the more stable features of most diets — the controls in this experiment, for example, did not substantially lower their protein even though advised to do so — and most low-carbohydrate diets advise only carbohydrate restriction. While low-carbohydrate diets do not counsel against increased protein, they do not necessarily recommend it. In practice, most carbohydrate-restricted diets are hypocaloric and the actual behavior of dieters shows that they generally do not add back either protein or fat, an observation first made by LaRosa in 1980.

Atkins-bashing is not as easy as it used to be when there was less data and one could run on “concerns.” As low-fat diets continue to fail at both long-term and short-term trials — think Women’s Health Initiative [2] — and carbohydrate restriction continues to show success and continues to bear out the predictions from the basic biochemistry of the insulin-glucose axis  [3], it becomes harder to find fault.  One strategy is to take advantage of the lack of formal definitions of low-carbohydrate diets to set up a straw man.  The trick is to test a moderately high carbohydrate diet and show that, on average, as here, there is no difference in hemoglobin A1c, triglycerides and total cholesterol, etc. when compared to a higher carbohydrate diet as control —  the implication is that in a draw, the higher carbohydrate diet wins.  So, Larsen’s low carbohydrate diet contains 40 % of energy as carbohydrate.  Now, none of the researchers who have demonstrated the potential of carbohydrate restriction would consider 40 % carbohydrate, as used in this study, to be a low-carbohydrate diet. In fact, 40 % is close to what the American population consumed before the epidemic of obesity and diabetes. Were we all on a low carbohydrate diet before Ancel Keys?
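The point is easy to check with arithmetic. A sketch, assuming a 2,000 kcal/day intake (the 50 g figure is a common benchmark for strict carbohydrate restriction, not anything from Larsen’s paper):

    # Converting "percent of energy as carbohydrate" to grams per day.
    # Assumes a 2,000 kcal/day intake; carbohydrate provides ~4 kcal/g.
    KCAL_PER_G_CARB = 4

    def carb_grams(total_kcal: float, fraction_of_energy: float) -> float:
        return total_kcal * fraction_of_energy / KCAL_PER_G_CARB

    print(carb_grams(2000, 0.40))  # 200.0 g/day -- Larsen's "low-carbohydrate" arm
    print(carb_grams(2000, 0.10))  # 50.0 g/day -- strict carbohydrate restriction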

What happened? As you might guess, there weren’t notable differences on most outcomes but, like other such studies in the literature, the authors report only group statistics, so you don’t really know who ate what, and they use an intention-to-treat (ITT) analysis. According to ITT, a research report should include data from those subjects that dropped out of the study (here, about 19 % of each group). You read that correctly. The idea is based on the assumption (insofar as it has any justification at all) that compliance is an inherent feature of the diet (“without carbs, I get very dizzy”) rather than a consequence of bias transmitted from the experimenter, or distance from the hospital, or any of a thousand other things. While ITT has been defended vehemently, the practice is totally counter-intuitive and has been strongly attacked on any number of grounds, the most important of which is that, in diet experiments, it makes the better diet look worse. Whatever case can be made, however, there is no justification for reporting only intention-to-treat data, especially since, in this paper, the authors consider as one of the “strengths of the study … the measurement of dietary compliance.”

The reason that this is all more than technical statistical detail is that the actual reported data show great variability (technically, the 95 % confidence intervals are large). To most people, a diet experiment is supposed to give a prospective dieter information about outcome. Most patients would like to know: if I stay on this diet, how will I do? It is not hard to understand that if you don’t stay on the diet, you can’t expect good results. Nobody knows what 81 % staying on the diet could mean. In the same way, nobody loses an average amount of weight. If you look at the spread in performance and in what was consumed by individuals on this diet, you can see that there is big individual variation. Also, being “on a diet” or being “assigned to a diet” is very different from actually carrying out dieting behavior, that is, eating a particular collection of food. When there is wide variation, a person in the low-carb group may be eating more carbs than some person in the high-carb group. It may be worth testing the effect of having the doctor tell you to eat fewer carbs, but if you are trying to lose weight, you want them to test the effect of actually eating fewer carbs.
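A toy example of the difference, with invented HbA1c changes, shows how carrying dropouts into the average dilutes the effect of actually eating the diet:

    # Intention-to-treat vs. per-protocol: both are associations, but with
    # different independent variables (assignment vs. actual behavior).
    # All numbers are invented.
    from statistics import mean

    completers = [-1.1, -0.9, -1.3, -0.8, -1.0]  # change in HbA1c, stayed on diet
    dropouts = [0.0, 0.1, -0.1, 0.2]             # left the study, little change

    itt = mean(completers + dropouts)  # effect of being told to eat fewer carbs
    per_protocol = mean(completers)    # effect of actually eating fewer carbs

    print(f"intention-to-treat: {itt:+.2f}")           # -0.54, diluted toward zero
    print(f"per-protocol:       {per_protocol:+.2f}")  # -1.02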

When I review papers like this for a journal, I insist that the authors present individual data in graphic form. The question in low-carbohydrate diets is the effect of the amount of carbohydrate consumed on the outcomes. Making a good case to the reader involves showing individual data. As a reviewer, I would have had the authors plot each individual’s consumption of carbohydrate vs., for example, individual changes in triglycerides and, especially, HbA1c. Both of these are expected to be dependent on carbohydrate consumption. In fact, this is the single most common criticism I make as a reviewer, or that I made when I was co-editor-in-chief at Nutrition & Metabolism.
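In code, the plot I ask for is nothing more than this (a sketch with invented subjects):

    # Each point is one subject: actual carbohydrate intake vs. that
    # subject's change in HbA1c. Data are invented for illustration.
    import matplotlib.pyplot as plt

    carb_g_per_day = [85, 120, 150, 180, 210, 240, 260, 300]
    delta_hba1c = [-1.4, -1.1, -0.8, -0.6, -0.3, -0.1, 0.1, 0.3]

    plt.scatter(carb_g_per_day, delta_hba1c)
    plt.axhline(0, linewidth=0.5)
    plt.xlabel("reported carbohydrate intake (g/day)")
    plt.ylabel("change in HbA1c (percentage points)")
    plt.title("Individual intake vs. individual outcome (invented data)")
    plt.show()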

So what is the big deal?  This is not the best presentation of the data and it is really hard to tell what the real effect of carbohydrate restriction is. Everybody makes mistakes and few of my own papers are without some fault or other. But there’s something else here.  In reading a paper like this, unless you suspect that something wasn’t done correctly, you don’t tend to read the Statistical analysis section of the Methods very carefully (computers have usually done most of the work).  In this paper, however, the following remarkable paragraph jumps out at you.  A real smoking gun:

  • “As this study involved changes to a number of dietary variables (i.e. intakes of calories, protein and carbohydrate), subsidiary correlation analyses were performed to identify whether study endpoints were a function of the change in specific dietary variables. The regression analysis was performed for the per protocol population after pooling data from both groups. “

What?  This is exactly what I would have told them to do.  (I’m trying to think back. I don’t think I reviewed this paper).  The authors actually must have plotted the true independent variable, dietary intake — carbohydrate, calories, etc. — against the outcomes, leaving out the people who dropped out of the study.  So what’s the answer?

  • “These tests were interpreted marginally as there was no formal adjustment of the overall type 1 error rate and the p values serve principally to generate hypotheses for validation in future studies.”

Huh? They’re not going to tell us? “Interpreted marginally?” What the hell does that mean? A type 1 error refers to a false positive; that is, they must have found a correlation between diet and outcome, in distinction to the conclusion of the paper. They made “no formal adjustment” for the main conclusion? And “p values serve principally to generate hypotheses?” This is the catch-phrase that physicians are taught so that they can dismiss experimental results that they don’t like. Whether it means anything or not, in this case there was a hypothesis, stated right at the beginning of the paper in the Abstract: “…to determine whether high-protein diets are superior to high-carbohydrate diets for improving glycaemic control in individuals with type 2 diabetes.”

So somebody — presumably a reviewer — told them what to do but they buried the results. My experience as an editor was, in fact, that there are people in nutrition who think that they are beyond peer review and I had had many fights with authors. In this case, it looks like the actual outcome of the experiment may have been the opposite of what they say in the paper. How can we find out? Like most countries, Australia has what are called “sunshine laws” that require government agencies to explain their actions. There is an Australian federal Freedom of Information Act (1992) and one for the state of Victoria (1982). One of the authors is supported by an NHMRC (National Health and Medical Research Council) Fellowship, so it may be that they are obligated to share this marginal information with us. Somebody should drop the government a line.

Bibliography

1. Larsen RN, Mann NJ, Maclean E, Shaw JE: The effect of high-protein, low-carbohydrate diets in the treatment of type 2 diabetes: a 12 month randomised controlled trial. Diabetologia 2011, 54(4):731-740.

2. Tinker LF, Bonds DE, Margolis KL, Manson JE, Howard BV, Larson J, Perri MG, Beresford SA, Robinson JG, Rodriguez B et al: Low-fat dietary pattern and risk of treated diabetes mellitus in postmenopausal women: the Women’s Health Initiative randomized controlled dietary modification trial. Arch Intern Med 2008, 168(14):1500-1511.

3. Volek JS, Phinney SD, Forsythe CE, Quann EE, Wood RJ, Puglisi MJ, Kraemer WJ, Bibus DM, Fernandez ML, Feinman RD: Carbohydrate Restriction has a More Favorable Impact on the Metabolic Syndrome than a Low Fat Diet. Lipids 2009, 44(4):297-309.

…the association has to be strong and the causality has to be plausible and consistent. And you have to have some reason to make the observation; you can’t look at everything. And, experimentally, observation may be all that you have — almost all of astronomy is observational. Of course, the great deconstructions of crazy nutritional science — several from Mike Eades’s blog and Tom Naughton’s hysterically funny-but-true course in how to be a scientist — are still right on but, strictly speaking, it is the faulty logic of the studies and the whacko observations that are the problem, not simply that they are observational. It is the strength and reliability of the association that tells you whether causality is implied. Reducing carbohydrates lowers triglycerides. There is a causal link. You have to be capable of the state of mind of the low-fat politburo not to see this (for example, Circulation, May 24, 2011; 123(20): 2292–2333).

It is frequently said that observational studies are only good for generating hypotheses but it is really the other way around. All studies are generated by hypotheses. As Einstein put it: your theory determines what you measure. I ran my post on the red meat story past April Smith and her reaction was “why red meat? Why not pancakes?” which is exactly right. Any number of things can be observed. Once you pick, you have a hypothesis.

Where did the first law of thermodynamics come from?

Thermodynamics is an interesting case. The history of the second law involves a complicated interplay of observation and theory. The idea that there is an absolute limit to how efficient you can make a machine, and by extension that all real processes are necessarily inefficient, largely comes from the brain power of Carnot. He saw that you could not extract as work all of the heat you put into a machine. Clausius encapsulated it in the idea of entropy, as in my YouTube video.

©2004 Robin A. Feinman

The origins of the first law, the conservation of energy, are a little stranger. It turns out that it was described more than twenty years after the second law and it has been attributed to several people: for a while, to the German physicist von Helmholtz. These days, credit is given to a somewhat eccentric German physician named Julius Robert Mayer. Although trained as a doctor, Mayer did not like to deal with patients and was instead more interested in physics and religion, which he seemed to think were the same thing. He took a job as a shipboard physician on an expedition to the South Seas since that would give him time to work on his main interests. It was in Jakarta, while treating an epidemic with the then-common practice of bloodletting, that he noticed that the venous blood of the sailors was much brighter than in colder climates, as if “I had struck an artery.” He attributed this to a reduced need for the sailors to use oxygen for heat and, from this observation, he somehow leapt to the grand principle of conservation of energy: the total amount of heat and work and any other forms of energy does not change but can only be interconverted. It is still unknown what kind of connections in his brain led him to this conclusion. The period (1848) corresponds to the point at which science separated from philosophy. Mayer seems to have had one foot in each world and described things in the following incomprehensible way:

  • If two bodies find themselves in a given difference, then they could remain in a state of rest after the annihilation of [that] difference if the forces that were communicated to them as a result of the leveling of the difference could cease to exist; but if they are assumed to be indestructible, then the still persisting forces, as causes of changes in relationship, will again reestablish the original present difference.

(I have not looked for it but one can only imagine what the original German was like.) Warmth Disperses and Time Passes: The History of Heat, von Baeyer’s popular book on thermodynamics, describes the ups and downs of Mayer’s life, including the death of three of his children which, in combination with the rejection of his ideas, led to hospitalization, but also his ultimate recognition and knighthood. Surely this was a great observational study although, as von Baeyer put it, it did require “the jumbled flashes of insight in that sweltering ship’s cabin on the other side of the world.”

It is also true that association can imply causation but, again, the association has to have some impact and the proposed causality has to make sense. In some sense, purely observational experiments are rare. As Pasteur pointed out, even serendipity is favored by preparation. Most observational experiments must be a reflection of some hypothesis. Otherwise you’re wasting taxpayers’ money; a kiss of death on a grant application is to imply that “it would be good to look at.…” You always have to have something in mind. The great observational studies like the Framingham Study are bad because they have no null hypothesis. When the Framingham Study first showed that there was no association between dietary total and saturated fat, or dietary cholesterol, and heart disease, the hypothesis was quickly defended. The investigators were so tied to a preconceived hypothesis that there was hardly any point in making the observations.

In fact, a negative result is always stronger than one showing consistency; consistent sunrises will go by the wayside if the sun fails to come up once. It is the lack of an association between the decrease in fat consumption and the epidemic of obesity and diabetes that is so striking. The figure above shows that the increase in carbohydrate consumption is consistent with the causal role of dietary carbohydrate in creating anabolic hormonal effects and with the poor satiating effects of carbohydrates — almost all of the increase in calories during the epidemic of obesity and diabetes has been due to carbohydrates. However, this observation is not as strong as the lack of an identifiable association of obesity and diabetes with fat consumption. It is the 14 % decrease in the absolute amount of saturated fat for men that is the problem. If the decrease in fat were associated with decreases in obesity, diabetes and cardiovascular disease, there is little doubt that the USDA would be quick to identify causality. In fact, whereas you can find the occasional low-fat trial that succeeds, if the diet-heart hypothesis were as described, they should not fail. There should not be a single Women’s Health Initiative, there should not be a single Framingham study, not one.

Sometimes more association would be better. Take intention-to-treat. Please. In this strange statistical idea, if you assign a person to a particular intervention, diet or drug, then you must include the outcome data (weight loss, change in blood pressure) for that person even if they do not comply with the protocol (go off the diet, stop taking the pills). Why would anybody propose such a thing, never mind actually insist on it as some medical journals or granting agencies do? When you actually ask people who support ITT, you don’t get coherent answers. They say that if you just look at per-protocol data (only from people who stayed in the experiment), then by excluding the drop-outs you would introduce bias, but when you ask them to explain that, you get something along the lines of Darwin and the peas growing on the wrong side of the pod. The basic idea, if there is one, is that the reason that people gave up on their diet or stopped taking the pills was an inherent feature of the intervention: it made them sick, drowsy or something like that. While this is one possible hypothesis and should be tested, there are millions of others — the doctor was subtly discouraging about the diet, or the participants were like some of my relatives who can’t remember where they put their pills, or the diet book was written in Russian, or the diet book was not written in Russian, etc. I will discuss ITT in a future post but, for the issue at hand: if you do a per-protocol analysis, you will observe what happens to people when they stay on their diet and you will have an association between the content of the diet and performance. With an ITT analysis, you will observe what happens when people are told to follow a diet and you will have an association between assignment to a diet and performance. Both are observational experiments with an association between variables but they differ in the likelihood of providing a sense of causality.