“These results suggest that there is no superior long-term metabolic benefit of a high-protein diet over a high-carbohydrate in the management of type 2 diabetes.”  The conclusion is from a paper by Larsen et al. [1] which, based on that statement in the Abstract, I would not normally bother to read. It is good that you have to register trials and report failures, but from a broader perspective, finding nothing is not great news, and just because Larsen couldn’t do it doesn’t mean it can’t be done.  In this case, however, I received an email from International Diabetes, published bilingually in Beijing: “Each month we run a monthly column where [we] choose a hot-topic article… and invite expert commentary opinion about that article,” so I agreed to write an opinion. The following is my commentary:

“…no superior long-term metabolic benefit of a high-protein diet over a high-carbohydrate….” A slightly more positive conclusion might have been that “a high-protein diet is as good as a high-carbohydrate diet.”  After all, equal is equal. The article is, according to the authors, about a “high-protein, low-carbohydrate” diet, so rather than describing a comparison of apples and pears, the conclusion should emphasize low carbohydrate vs. high carbohydrate.  It is carbohydrate, not protein, that is the key question in diabetes, but clarity was probably not the idea. The paper by Larsen et al. [1] is a classic example of the numerous studies in the literature whose goal is to discourage people with diabetes from trying a diet based on carbohydrate restriction, despite its intuitive sense (diabetes is a disease of carbohydrate intolerance) and despite its established efficacy and foundations in basic biochemistry.  The paper is characterized by blatant bias, poor experimental design and mind-numbing statistics rather than clear graphic presentation of the data. I usually try to take a collegial approach in these things, but this article has a unique and surprising feature, a “smoking gun” suggesting that the authors were actually aware of the correct way to perform the experiment, or at least to report the data.

Right off, the title tells you that we are in trouble. “The effect of high-protein, low-carbohydrate diets in the treatment…” implies that all such diets are the same, even though there are several different versions, some of which (by virtue of better design) will turn out to have had much better performance than the diet studied here, and almost all of which are not “high protein.” Protein is one of the more stable features of most diets: the controls in this experiment, for example, did not substantially lower their protein even though advised to do so, and most low-carbohydrate diets advise only carbohydrate restriction.  While low-carbohydrate diets do not counsel against increased protein, they do not necessarily recommend it.  In practice, most carbohydrate-restricted diets are hypocaloric, and the actual behavior of dieters shows that they generally do not add back either protein or fat, an observation first made by LaRosa in 1980.

Atkins-bashing is not as easy as it used to be when there was less data and one could run on “concerns.” As low-fat diets continue to fail in both long-term and short-term trials (think Women’s Health Initiative [2]), and carbohydrate restriction continues to show success and to bear out the predictions from the basic biochemistry of the insulin-glucose axis [3], it becomes harder to find fault.  One strategy is to take advantage of the lack of formal definitions of low-carbohydrate diets to set up a straw man.  The trick is to test a moderately high-carbohydrate diet and show that, on average, as here, there is no difference in hemoglobin A1c, triglycerides, total cholesterol, etc. when compared to a higher-carbohydrate control; the implication is that in a draw, the higher-carbohydrate diet wins.  So, Larsen’s low-carbohydrate diet contains 40% of energy as carbohydrate.  Now, none of the researchers who have demonstrated the potential of carbohydrate restriction would consider 40% carbohydrate, as used in this study, to be a low-carbohydrate diet. In fact, 40% is close to what the American population consumed before the epidemic of obesity and diabetes. Were we all on a low-carbohydrate diet before Ancel Keys?

What happened?  As you might guess, there were no notable differences in most outcomes, but like other such studies in the literature, the authors report only group statistics, so you don’t really know who ate what, and they use an intention-to-treat (ITT) analysis. According to ITT, a research report should include data from those subjects who dropped out of the study (here, about 19% of each group). You read that correctly.  The idea rests on the assumption (insofar as it has any justification at all) that compliance is an inherent feature of the diet (“without carbs, I get very dizzy”) rather than a consequence of bias transmitted from the experimenter, or distance from the hospital, or any of a thousand other things.  While ITT has been defended vehemently, the practice is totally counter-intuitive and has been strongly attacked on any number of grounds, the most important of which is that, in diet experiments, it makes the better diet look worse.  Whatever case can be made for it, however, there is no justification for reporting only intention-to-treat data, especially since, in this paper, the authors consider as one of the “strengths of the study … the measurement of dietary compliance.”
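To see why ITT makes the better diet look worse, here is a toy numerical sketch. The numbers are entirely hypothetical (they are not Larsen’s data); the point is only the arithmetic: once dropouts are counted as showing no change, the apparent benefit shrinks toward zero.

```python
# Hypothetical change in HbA1c (%) for 10 subjects assigned to a diet;
# None marks a subject who dropped out. Illustration only, not real data.
changes = [-1.2, -0.9, -1.1, -0.8, -1.0, -1.3, -0.7, -1.1, None, None]

# Per-protocol analysis: average only the completers
completers = [c for c in changes if c is not None]
per_protocol = sum(completers) / len(completers)

# ITT-style analysis: dropouts stay in the denominator, counted as 0 change
# (one common imputation; carrying the baseline forward has the same effect)
itt = sum(c if c is not None else 0.0 for c in changes) / len(changes)

print(f"per-protocol mean change: {per_protocol:.2f}")  # -1.01
print(f"ITT mean change:          {itt:.2f}")           # -0.81
```

With 20% dropout, the measured effect is diluted by exactly that 20%, regardless of how well the diet worked for those who actually followed it.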

The reason this is all more than technical statistical detail is that the actual reported data show great variability (technically, the 95% confidence intervals are large).  To most people, a diet experiment is supposed to give a prospective dieter information about outcome.  Most patients would like to know: if I stay on this diet, how will I do?  It is not hard to understand that if you don’t stay on the diet, you can’t expect good results.  Nobody knows what 81% staying on the diet could mean.  In the same way, nobody loses an average amount of weight. If you look at the spread in performance and in what was consumed by individuals on this diet, you can see that there is big individual variation.  Also, being “on a diet,” or being “assigned to a diet,” is very different from actually carrying out dieting behavior, that is, eating a particular collection of food.  When there is wide variation, a person in the low-carb group may be eating more carbs than some person in the high-carb group.  It may be worth testing the effect of having the doctor tell you to eat fewer carbs, but if you are trying to lose weight, you want them to test the effect of actually eating fewer carbs.

When I review papers like this for a journal, I insist that the authors present individual data in graphic form.  The question in low-carbohydrate diets is the effect of the amount of carbohydrate consumed on the outcomes.  Making a good case to the reader involves showing individual data.  As a reviewer, I would have had the authors plot each individual’s consumption of carbohydrate against, for example, that individual’s change in triglycerides and, especially, HbA1c.  Both of these are expected to depend on carbohydrate consumption.  In fact, this is the single most common criticism I make as a reviewer, or that I made when I was co-editor-in-chief at Nutrition and Metabolism.
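The analysis being asked for here — pool both groups and correlate what each completer actually ate with that individual’s change in outcome — can be sketched in a few lines. All of the numbers below are invented for illustration; the method, not the data, is the point.

```python
# A sketch of the pooled per-protocol correlation analysis: actual
# carbohydrate intake vs. individual change in an endpoint.
# All values are hypothetical, for illustration only.

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Reported carbohydrate intake (% of energy) per completer, both groups pooled
carb = [30, 35, 38, 40, 42, 45, 48, 50, 55, 60]
# Change in HbA1c (%) for the same individuals
d_hba1c = [-1.1, -0.9, -0.8, -0.7, -0.5, -0.4, -0.2, -0.1, 0.1, 0.3]

r = pearson_r(carb, d_hba1c)
print(f"r = {r:.2f}")  # strongly positive in this invented data set
```

A scatter plot of exactly these pairs, one point per subject, is what individual data in graphic form means: the reader can see the spread, not just a group mean and a confidence interval.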

So what is the big deal?  This is not the best presentation of the data, and it is really hard to tell what the real effect of carbohydrate restriction is. Everybody makes mistakes, and few of my own papers are without some fault or other. But there’s something else here.  In reading a paper like this, unless you suspect that something wasn’t done correctly, you don’t tend to read the Statistical analysis section of the Methods very carefully (computers have usually done most of the work).  In this paper, however, the following remarkable paragraph jumps out at you.  A real smoking gun:

  • “As this study involved changes to a number of dietary variables (i.e. intakes of calories, protein and carbohydrate), subsidiary correlation analyses were performed to identify whether study endpoints were a function of the change in specific dietary variables. The regression analysis was performed for the per protocol population after pooling data from both groups.”

What?  This is exactly what I would have told them to do.  (I’m trying to think back. I don’t think I reviewed this paper).  The authors actually must have plotted the true independent variable, dietary intake — carbohydrate, calories, etc. — against the outcomes, leaving out the people who dropped out of the study.  So what’s the answer?

  • “These tests were interpreted marginally as there was no formal adjustment of the overall type 1 error rate and the p values serve principally to generate hypotheses for validation in future studies.”

Huh?  They’re not going to tell us?  “Interpreted marginally?”  What the hell does that mean?  A type 1 error is a false positive; that is, they must have found a correlation between diet and outcome, in contradiction to the stated conclusion of the paper.  They “did not formally adjust for” the main conclusion?  And “p values serve principally to generate hypotheses?”  This is the catch-phrase that physicians are taught to use to dismiss experimental results they don’t like.  Whether it means anything or not, in this case there was a hypothesis, stated right at the beginning of the paper, in the Abstract: “…to determine whether high-protein diets are superior to high-carbohydrate diets for improving glycaemic control in individuals with type 2 diabetes.”
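For readers unfamiliar with the jargon: “formal adjustment of the overall type 1 error rate” ordinarily means a multiple-comparisons correction. The simplest such correction is Bonferroni, sketched below with made-up p values; the paper does not say which correction, if any, the authors had in mind.

```python
# Bonferroni correction, the simplest control of the overall type 1
# (false positive) error rate across m tests: each individual p value
# must beat alpha/m rather than alpha.

def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each test, whether it survives Bonferroni correction."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Hypothetical p values from, say, five subsidiary correlation analyses
ps = [0.004, 0.030, 0.011, 0.200, 0.049]
print(bonferroni_significant(ps))  # [True, False, False, False, False]
```

Note the asymmetry: declining to adjust makes positive findings easier to report, yet here the unadjusted results were set aside as merely hypothesis-generating rather than reported.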

So somebody — presumably a reviewer — told them what to do, but they buried the results.  My experience as an editor was, in fact, that there are people in nutrition who think they are beyond peer review, and I had many fights with authors.  In this case, it looks like the actual outcome of the experiment may have been the opposite of what they say in the paper.  How can we find out?  Like most countries, Australia has what are called “sunshine laws” that require government agencies to explain their actions.  There is an Australian federal Freedom of Information Act (1982) and one for the state of Victoria (1982). One of the authors is supported by an NHMRC (National Health and Medical Research Council) Fellowship, so they may be obligated to share this marginal information with us.  Somebody should drop the government a line.

Bibliography

1. Larsen RN, Mann NJ, Maclean E, Shaw JE: The effect of high-protein, low-carbohydrate diets in the treatment of type 2 diabetes: a 12 month randomised controlled trial. Diabetologia 2011, 54(4):731-740.

2. Tinker LF, Bonds DE, Margolis KL, Manson JE, Howard BV, Larson J, Perri MG, Beresford SA, Robinson JG, Rodriguez B et al: Low-fat dietary pattern and risk of treated diabetes mellitus in postmenopausal women: the Women’s Health Initiative randomized controlled dietary modification trial. Arch Intern Med 2008, 168(14):1500-1511.

3. Volek JS, Phinney SD, Forsythe CE, Quann EE, Wood RJ, Puglisi MJ, Kraemer WJ, Bibus DM, Fernandez ML, Feinman RD: Carbohydrate Restriction has a More Favorable Impact on the Metabolic Syndrome than a Low Fat Diet. Lipids 2009, 44(4):297-309.

Comments
  1. David M says:

In the current context of dietary carbohydrate studies, the terms ‘high carbohydrate’ and ‘low carbohydrate’ are essentially meaningless. In setting out a recommended daily carbohydrate intake of 300 grams, government food guides have arbitrarily established this level as ‘normal’ while implying that carbohydrate is essential to human nutrition, or at least important. Thus studies employing any carbohydrate level below 300 grams per day can be described as ‘low carbohydrate’. From the perspective of science, this makes a mockery of scientific protocols.

    I strongly suspect that this strategy is intentional and that it is driven by a political agenda. As much as the Western world runs on a petroleum based economy it also runs on a carbohydrate based economy. Carbohydrate provides the raw material for edible manufactured products. The production of wheat and corn are heavily subsidized by our governments.

Without carbohydrate-based products, ninety or more percent of supermarket shelves would sit empty. So it is important to discourage people, even diabetics, from straying into the realm of diets with a daily carbohydrate intake of 50 grams or less. One only has to follow the money.

    • rdfeinman says:

      I agree that following the money is key but I am not sure that it is corporate money as much as NIH money and grant money from other agencies. Good Calories, Bad Calories brilliantly described how these agencies were taken over by the lipophobes. It is a rare researcher who has the nerve to go against the bias of a granting agency. The low-fat fiasco has not only had a terrible effect on public health but has also repressed serious science.

  2. gretchen says:

    This study is an example of the often-unconscious bias that affects every aspect of our lives.

    When I was a reporter/editor at a daily newspaper, I insisted that the stories that came across my desk use the word “said” when quoting someone. Too many reporters used “claimed” when they didn’t agree with someone and “noted that” when they agreed.

    Big difference between “Ann claimed that the world is not flat” and “Ann noted that the world is not flat.” But sometimes I think authors are unaware of their biases. That’s why the world has editors, but when the bottom line rules, many publications don’t budget for careful editing.

    Irl Hirsch (I think) once did a study on the effectiveness of using continuous glucose monitors in teenagers. Overall, they found no benefit. But when they looked into how many of the teens had actually used the things, they found a big benefit. IOW, putting a CGM in your sock drawer doesn’t really improve your glucose control. But because of ITT rules, they had to include those subjects, and he wasn’t allowed to mention the subgroup analysis in the journal article because it was done after the results were in.

    Stupid!

  3. Gerard ONeil says:

    Nice article, Doctor. Low carbing has been my passion for a long time. I started serious low carbing
    in 1998 after determining that over the years I had been adding a couple of pounds a year, each and every year. I was 48 at the time and my lipid levels were TERRIBLE. I read Atkins and I thought he could not deliver on his promises. One year later, 55 pounds lighter, lipid levels greatly improved………..well, I knew this way of eating is the way to go. Kept the weight off for 13 years.
    My Doctor could not believe my HDL cholesterol is consistently over 130. Keep up the good fight.

  4. David M says:

    Richard, I agree that granting agencies wield enormous influence and create what I refer to as ‘academic research inertia’.

    He who controls the grants controls the direction of the funded research. The central problem is that once an organization buys into a proposition and it becomes ‘mainstream’, all activities become focused on propping it up. Anyone who dares challenge mainstream thinking is castigated. Any researcher who wants a grant quickly learns to design protocols and formulate conclusions that support the granting agency’s views no matter what the data show.

  5. Peter says:

    Nice analysis. Should anyone in the land of Sunshine Down Under formally request a little light to be shed on the raw data it would be great to be kept updated through your blog…

    Peter

  6. I always head straight for the statistical analysis section. There, after a description of the computer software methods used, you will find many clues on how the researchers messed with the actual data. The ITT method? It’s just a license to make up stuff.

  7. Zooko says:

    So did any Australians volunteer to request the data yet?

    I could post on twitter and solicit an Australian to help out.

    By the way, I don’t agree that intention-to-treat analysis is dumb and wrong. I do agree that publishing *both* intention-to-treat and what-actually-happened analyses is far superior to publishing only one or the other. Let’s get the other from this experiment published. 🙂

    • rdfeinman says:

      Nobody has come forward. Feel free to ask. My post, of course, was in the nature of a reaction to perceived Atkins-bashing, and probably not the friendliest, but he may be willing to part with the data, in which case you don’t have to speak Strine to ask.

      On intention-to-treat (ITT), it is not so much that it is dumb or wrong, but that, as stated, it sounds dumb, and it is wrong to act as if statistics are our masters rather than our servants; you pick the statistics that fit the problem at hand. Blindly applying ITT is a mistake because it inherently reduces information. If you don’t have the information, then you must do ITT. All we know about the population, for example, is that they were assigned to low fat since the 1970s. All we know about the outcome is the average increase in calories, on average in the form of carbohydrate. So we must do ITT. Of course, that is what we always did, and we don’t really need a new name for it. Doing ITT never gives you information and almost always reduces it. Doing both saves you from having to argue with “the third reviewer” (), some of whom reviewed my ITT paper ( ) when I first tried to get it published; one is quoted in my paper.

Leave a comment