“Doctors prefer large studies that are bad to small studies that are good.”
The paper by Foster and coworkers entitled Weight and Metabolic Outcomes After 2 Years on a Low-Carbohydrate Versus Low-Fat Diet, published in 2010, had a surprisingly limited impact, especially given the effect of their first paper in 2003 on a one-year study. I have described the first low-carbohydrate revolution as taking place around that time and, if Gary Taubes’s article in the New York Times Magazine was the analog of Thomas Paine’s Common Sense, Foster’s 2003 paper was the shot heard ’round the world.
The paper showed that the widely accepted idea that the Atkins diet, admittedly good for weight loss, posed a risk for cardiovascular disease was not true. The 2003 Abstract said, “The low-carbohydrate diet was associated with a greater improvement in some risk factors for coronary heart disease.” The publication generated an explosion in the popularity of the Atkins diet, ironic in that Foster had said publicly that he undertook the study in order to get rid of the Atkins diet “once and for all.” The 2010 paper, by extending the study to two years, would seem to be very newsworthy. So what was wrong? Why is the new paper more or less forgotten? Two things. First, the paper was highly biased and its methods were so obviously flawed, obvious even to the popular press, that it may have been a bit much even for the media. It remains to be seen whether it will really be cited, but I will suggest here that it is a classic in misleading research and in the foolishness of intention-to-treat (ITT).
Second, the zeitgeist had changed from eight years before (not for the better). Science no longer entered into it. The USDA, the NIH and the private health agencies now simply write anything they want and nobody objects. Low-fat is still the name of the game and plays a big part in the USDA guidelines and hence in the school lunch program of the Healthy Hunger-Free Kids Act, endorsed by Michelle Obama and a progression of suits with maudlin expressions of optimism. There are numerous disclaimers about good fats and bad fats, so you can’t really hold them to anything, but the Guidelines for Americans made it clear that there is a State Diet and that you can talk about other stuff all you want as long as you don’t expect to be funded by any government agency or anticipate any recognition. Anyway, the Foster study.
There are a lot of odd twists and turns in this study but, getting right to the results, Figure 2 is labeled “Predicted absolute mean change in body weight for participants ….” Predicted? That sounds strange. What about the data? The figure shows no difference in change in body weight between the low-carbohydrate diet and the low-fat diet. Well, it happens. Usually the low-carbohydrate diet does better, but there is no guarantee of that. Figure 3, however, shown below, indicates changes in triglycerides at the 3, 6, 12 and 24 month time points. Now, reduction in triglycerides is virtually the hallmark of low-carbohydrate diets, and the big difference in reductions on the two diets seen after 3 or 6 months (shown by the arrow) is the usual result when comparing low-carbohydrate and low-fat diets, but in this case they actually come together after 24 months. How is that possible?
This seemed strange, so I realized I had to find out where “predicted” came from, and that meant reading the Methods section, particularly the Statistical Analysis section on how the data had been handled. You rarely read these sections unless you think there is a problem. Large studies like this usually have a statistician, and they use standard methods whose details may or may not be understood by a non-statistician (or the authors, for that matter). As I kept reading the statistical section, I found it increasingly tedious and hard to read until I hit this passage (I’ve highlighted the key words):
“The previously mentioned longitudinal models preclude the use of less robust approaches, such as fixed imputation methods (for example, last observation carried forward or the analysis of participants with complete data [that is, complete case analyses]). These alternative approaches assume that missing data are unrelated to previously observed outcomes or baseline covariates, including treatment (that is, missing completely at random).”
What’s going on here? In a nutshell, they used “data” from people who dropped out of the experiment. To do this, all they had to do was “assume that all participants who withdraw would follow first the maximum and then minimum patient trajectory of weight.” Whatever this means, if anything, the key words are “withdraw” and “assume.” In other words, this is a step beyond intention-to-treat, where you would include, for example, the weight of people who showed up to be weighed but had not actually followed one or another diet. Here there is no data. A pattern of behavior is assumed and data is, let’s face it, made up. Insofar as this is appropriate, the results could, in theory, be fit to a model for a three-year study, or a ten-year study, or whatever, since the experiments wouldn’t actually have to be performed. These experiments are expensive; think of the money that could be saved if we could work only with “predicted” data. Which makes one wonder: who funded this kind of research?
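To make the distinction concrete, here is a minimal Python sketch of the two “fixed imputation” approaches the authors contrast with their model-based method. All numbers are invented, purely for illustration:

```python
# Two common ways to handle dropouts in a longitudinal study.
# Weights (kg) at 0, 3, 6, 12, 24 months; None = participant withdrew.
# Hypothetical values, not data from the study.
subjects = {
    "A": [95, 90, 88, 87, 86],         # completer
    "B": [102, 96, None, None, None],  # dropped out after 3 months
    "C": [88, 85, 83, None, None],     # dropped out after 6 months
}

def locf(series):
    """Last observation carried forward: fill each gap with the
    most recent observed value."""
    filled, last = [], None
    for x in series:
        if x is not None:
            last = x
        filled.append(last)
    return filled

def complete_cases(data):
    """Complete-case analysis: keep only subjects with no missing values."""
    return {k: v for k, v in data.items() if None not in v}

print({k: locf(v) for k, v in subjects.items()})
# B's last measured weight (96 kg) stands in for every later time point
print(complete_cases(subjects))
# only subject "A" survives the filter
```

The paper’s approach goes a step further still: instead of carrying an observed value forward or dropping the subject, values for the missing time points are generated from an assumed trajectory.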
So this is an extreme kind of intention-to-treat, one that limits the kinds of conclusions you can draw.
It is odd that ITT is even controversial, it is so blatantly foolish, but a reasonable way to deal with potential disagreement is simply to publish both the ITT data and the data that include only the compliers, the so-called “per protocol” group. This is what was done in the Vitamin E study described in the last post on this subject. This data is missing from Foster’s paper. So, where is the data? One thing that caught my eye was the statement at the end of the article: “Data set: Available from Dr. Foster … subject to study group approval and National Institutes of Health policy.” When I tried to get the data set, Dr. Foster assured me that they were still planning future publications. We’ll see.
So, was the decline in performance due to including the made-up data from the dropouts? One way to get an idea of whether that is true is, for each time point, to plot the number of people who discontinued treatment against triglycerides at that time point.
Since we are suspicious of the idea that triglyceride lowering was the same on the low-fat and low-carbohydrate arms, we plot the difference between the two values (the double-headed arrow in Figure 3, above). The results, shown above, indicate a direct correlation: the more people who dropped out, the more similar the two measurements were. In other words, the finding that, as the authors put it, “Decreases in triglyceride levels were greater in the low-carbohydrate than in the low-fat group at 3 and 6 months but not at 12 or 24 months” was almost surely due to the differences being diluted by people who weren’t on the low-carbohydrate diet; ITT, or whatever this was, always makes the better diet look worse than it is.
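The check described above can be sketched in a few lines of Python. The dropout counts and the between-arm triglyceride differences below are invented placeholders, not values from the study; the point is only the shape of the calculation:

```python
# Correlate dropouts at each time point with the between-arm
# difference in triglyceride reduction. All numbers are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

dropouts = [10, 25, 60, 100]  # hypothetical dropouts at 3, 6, 12, 24 months
tg_diff = [30, 22, 8, 1]      # hypothetical between-arm difference, mg/dL

r = pearson(dropouts, tg_diff)
print(r)  # negative: as dropouts accumulate, the between-arm gap shrinks
```

A strongly negative coefficient on the real numbers would be consistent with the dilution story: the arms converge not because the diets perform alike but because imputed dropouts swamp the signal.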
It gets worse. The Discussion:
“Our study has 2 main findings. First, neither dietary fat nor carbohydrate intake influenced weight loss when combined with a comprehensive lifestyle intervention. Second, because both diet groups achieved nearly identical weight loss, we were able to determine that a low carbohydrate diet has greater beneficial long-term effects on HDL cholesterol concentrations than a low-fat diet.” (my emphasis)
The first sentence gets to the heart of the matter. It is, after all, what we want to know: is one macronutrient better or worse than another? So what were the dietary intakes? Well, it’s not in the paper. The paper is quite long, with a tedious Appendix on the lifestyle intervention, but I read it carefully. I really did. The data weren’t there. I was going to write to the authors when I found out, I think through somebody’s blog, maybe Tom Naughton’s, that the article had been covered in a story in the Los Angeles Times, as reported by Bob Kaplan:
“Of the 307 participants enrolled in the study, not one had their food intake recorded or analyzed by investigators. The authors did not monitor, chronicle or report any of the subjects’ diets. No meals were administered by the authors; no meals were eaten in front of investigators. There were no self-reports, no questionnaires.
The lead authors, Gary Foster and James Hill, explained in separate e-mails that self-reported data are unreliable and therefore they didn’t collect or analyze any.”
I confess to feeling a bit betrayed. I don’t like getting scientific information from the LA Times. (I know. How long have I been in this business?) How can you say “neither dietary fat nor carbohydrate intake influenced weight loss” if you haven’t measured fat or carbohydrate? I guess if you can “impute” data, you can make up the conclusion, but it seems blatantly dishonest. No? Well, we don’t want to accuse anybody, in light of the second factor that I mentioned at the beginning of this post: the state of mind of the nutritional world. The science simply doesn’t count and there is an accepted low-fat dogma. From this perspective, what would happen if the authors reported the facts as they found them or, more important, if they measured the relevant data, if they asked what people eat, as Kaplan put it, “the single most important question … that any reasonably intelligent high school student would ask?” In short, if they tried to give real information about low-carbohydrate diets, is there any chance they would be funded again? We don’t know what the authors were thinking, but given the pervasive state of nutritional policy, is it not asking a lot to fight City Hall?
City Hall, having failed to show any risk of low-carbohydrate diets, indeed having witnessed studies that show only benefit, and having spent hundreds of millions on studies that showed no benefit of low-fat diets, has fallen back on the strategy of saying that, well, none of it counts. All macronutrients are irrelevant; only calories count. So who funded this kind of research?
“Washington University (grant UL1 RR024992); Temple University (grant R01 AT1103); University of Pennsylvania (grant UL1RR024134); University of Colorado (grant UL1 RR000051); and the National Center for Research Resources, a component of the National Institutes of Health (DK 56341), to Washington University Clinical Nutrition Research Unit,”
that is, the NIH. And while I am sure that it is true that “The funding source had no role in the design, conduct, or reporting of the study,” something is wrong here.