Archive for the ‘Red Meat’ Category

As the nutrition world implodes, there are a lot of accusations about ulterior motives and personal gain. (A little odd that, in this period of unbelievable greed — CEOs ripping off public companies for hundreds of millions of dollars, Congress trying to give tax breaks to billionaires — book authors are upbraided for trying to make money.) So let me declare that I am not embarrassed to be an author for the money — although the profits from my book do go to research, it is my own research and the research of my colleagues. So beyond general excellence (not yet reviewed by David Katz), I think “World Turned Upside Down” does give you some scientific information about red meat and cancer that you can’t get from the WHO report on the subject.

The WHO report has not yet released the evidence to support its claim that red meat will give you cancer, but it is worth going back to one of the previous attacks. Chapters 18 and 19 discussed a paper by Sinha et al. entitled “Meat Intake and Mortality.” The abstract says, “Conclusion: Red and processed meat intakes were associated with modest increases in total mortality, cancer mortality, and cardiovascular disease mortality.” I had previously written a blogpost about the study indicating how weak the association was. In that post, I had used the data on men, but when I incorporated the information into the book, I went back to Sinha’s paper and analyzed the original data. For some reason, I also checked the data on women. That turned out to be pretty surprising:


As I described on page 286: “The population was again broken up into five groups or quintiles. The lower numbered quintiles are for the lowest consumption of red meat. Looking at all-cause mortality, there were 5,314 deaths [in the lowest quintile] and when you go up to quintile 05, highest red meat consumption, there are 3,752 deaths. What? The more red meat, the lower the death rate? Isn’t that the opposite of the conclusion of the paper? And the next line has [calculated] relative risk which now goes the other way: higher risk with higher meat consumption. What’s going on? As near as one can guess, ‘correcting’ for the confounders changed the direction….” They do not show most of the data or calculations, but I take this to be equivalent to a multivariate analysis, that is, red meat + other things gives you risk. If they had broken up the population by quintiles of smoking, you would see that smoking was the real contributor. That’s how I interpreted it but, in any case, their conclusion is about meat and it is opposite to what the data say.
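To see how “correcting” for a confounder can flip the direction of a crude association, here is a minimal sketch with invented numbers (nothing below is Sinha’s actual data). The high-meat group has the lower crude death rate, but once the population is stratified by smoking — the smokers having been piled into the low-meat group — high meat looks worse in every stratum:

```python
# Hypothetical illustration of confounding (Simpson's paradox).
# All numbers invented. Each entry: stratum -> (deaths, people).
data = {
    "low_meat":  {"smokers": (108, 900), "nonsmokers": (2, 100)},
    "high_meat": {"smokers": (14, 100),  "nonsmokers": (27, 900)},
}

def crude_rate(group):
    """Overall death rate, ignoring smoking."""
    deaths = sum(d for d, n in data[group].values())
    people = sum(n for d, n in data[group].values())
    return deaths / people

# Crude rates: the high-meat group looks *safer* (about 0.041 vs 0.110)...
print("crude:", crude_rate("low_meat"), crude_rate("high_meat"))

# ...but within each smoking stratum, high meat has the *higher* rate,
# because almost all the smokers are in the low-meat group.
for stratum in ("smokers", "nonsmokers"):
    lo_d, lo_n = data["low_meat"][stratum]
    hi_d, hi_n = data["high_meat"][stratum]
    print(stratum, lo_d / lo_n, "vs", hi_d / hi_n)
```

The point is not that this is what happened in the paper — they do not show enough to tell — but that adjustment can legitimately reverse a crude trend, which is why the unshown calculations matter.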

So how much do you gain from eating red meat? “A useful way to look at this data is from the standpoint of conditional probability. We ask: what is the probability of dying in this experiment if you are a big meat‑eater? The answer is simply the number of people who both died during the experiment and were big meat‑eaters …. = 0.0839 or about 8%. If you are not a big meat‑eater, your risk is …. = 0.109 or about 11%.” The absolute gain is only about 3 percentage points. But that’s good enough for me.
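The conditional-probability arithmetic quoted above can be sketched in a few lines. The counts here are hypothetical placeholders chosen only to reproduce the quoted probabilities (0.0839 and 0.109), not the paper’s real denominators; the difference comes out to about 2.5 percentage points, which the book rounds to 3:

```python
# Conditional probability: P(died | group) = (died AND in group) / (group size).
# Counts are hypothetical placeholders, not the paper's actual totals.
died_big_eaters = 839
big_eaters = 10_000
died_not_big = 1_090
not_big_eaters = 10_000

p_die_given_big = died_big_eaters / big_eaters    # 0.0839, about 8%
p_die_given_not = died_not_big / not_big_eaters   # 0.109, about 11%

# Absolute difference in risk, about 2.5 percentage points
absolute_difference = p_die_given_not - p_die_given_big
print(round(absolute_difference, 4))  # 0.0251
```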

Me, at Jubilat, the Polish butcher in the neighborhood: “The Boczak Wedzony (smoked bacon). I’ll take the whole piece.”


Boczak Wedzony from Jubilat Provisions

Rashmi Sinha is a Senior Investigator and Deputy Branch Chief at the NIH. She is a member of the WHO panel, the one that says red meat will give you cancer (although they don’t say “if you have the right confounders”).

So, buy my book: Amazon, Alibris, or

Direct: personalized, autographed copy, $20.00, free shipping (USA only). Use coupon code: SEPT16



I was walking on a very dark street and I assumed that the voice I heard was a guy talking on a cell phone. Apparently discussing a dinner party, he was saying “Remember, I don’t eat red meat.” Only a few years ago, that would have sounded strange. Of course, a few years ago a man talking to himself on the street would have been strange. He would have been assumed to be deranged, more so if he told you that he was talking on the telephone. But yesterday’s oddity pops up everywhere today. Neo-vegetarianism affects us all. It’s all described very well by Jane Kramer’s excellent review of veggie cookbooks in the April 14 New Yorker:

“…from one chili party to the next, everything changed. Seven formerly enthusiastic carnivores called to say they had stopped eating meat entirely…. Worse, on the night of that final party, four of the remaining carnivores carried their plates to the kitchen table, ignoring the cubes of beef and pancetta, smoky and fragrant in their big red bean pot, and headed for my dwindling supply of pasta. ‘Stop!’ I cried. ‘That’s for the vegetarians!’”

Illustration by Robin Feinman. Reference:


“…789 deaths were reported in Doll and Hill’s original cohort. Thirty-six of these were attributed to lung cancer. When these lung cancer deaths were counted in smokers versus non-smokers, the correlation virtually sprang out: all thirty-six of the deaths had occurred in smokers. The difference between the two groups was so significant that Doll and Hill did not even need to apply complex statistical metrics to discern it. The trial designed to bring the most rigorous statistical analysis to the cause of lung cancer barely required elementary mathematics to prove its point.”

Siddhartha Mukherjee —The Emperor of All Maladies.

Scientists don’t like philosophy of science. It is not just that pompous phrases like “hypothetico-deductive systems” are such a turn-off, but that we rarely recognize philosophy as a description of what we actually do. In the end, there is no definition of science and it is hard to generalize about actual scientific behavior. It’s a human activity and, precisely because it puts a premium on creativity, it defies categorization. As the physicist Steven Weinberg put it, echoing Justice Stewart on pornography:

“There is no logical formula that establishes a sharp dividing line between a beautiful explanatory theory and a mere list of data, but we know the difference when we see it — we demand a simplicity and rigidity in our principles before we are willing to take them seriously [1].”

A frequently stated principle is that “observational studies only generate hypotheses.” The related idea that “association does not imply causality” is also common, usually cited by authors who want you to believe that the association that they found does imply causality. These ideas are not right or, at least, they insufficiently recognize that scientific experiments are not so easily wedged into categories like “observational studies.” The principles are also invoked by bloggers and critics to discredit the continuing stream of observational studies that make an association between their favorite targets (eggs, red meat, sugar-sweetened soda) and a metabolic disease or cancer. In most cases, the studies are getting what they deserve, but the bills of indictment are not quite right. It is usually not simply that they are observational studies but rather that they are bad observational studies and, in any case, the associations are so weak that it is reasonable to say that they are an argument for a lack of causality. On the assumption that good experimental practice and interpretation can be even roughly defined, let me offer principles that I think are a better representation, insofar as we can make any generalization, of what actually goes on in science:

Observations generate hypotheses.

Observational studies test hypotheses.

Associations do not necessarily imply causality.

In some sense, all science is associations.

Only mathematics is axiomatic.

If you notice that kids who eat a lot of candy seem to be fat, or even if you notice that candy makes you yourself fat, that is an observation. From this observation, you might come up with the hypothesis that sugar causes obesity. A test of your hypothesis would be to see if there is an association between sugar consumption and incidence of obesity. There are various ways to do this — the simplest epidemiologic approach is simply to compare the history of the eating behavior of individuals (insofar as you can get it) with how fat they are. When you do this comparison, you are testing your hypothesis. There are an infinite number of things that you could have measured as an independent variable (meat, TV hours, distance from the French bakery), but you have a hypothesis that it was candy. Mike Eades described falling asleep as a child by trying to think of everything in the world. You just can’t test them all. As Einstein put it, “your theory determines the measurement you make.”

Associations predict causality. Hypotheses generate observational studies, not the other way around.

In fact, association can be strong evidence for causation and frequently provides support for, if not absolute proof of, the idea to be tested. A correct statement is that association does not necessarily imply causation. In some sense, all science is observation and association. Even thermodynamics, that most mathematical and absolute of sciences, rests on observation. As soon as somebody observes two systems in thermal equilibrium with a third but not with each other (violating the zeroth law), the jig is up. When somebody builds a perpetual motion machine, that’s it. It’s all over.

Biological mechanisms, or perhaps any scientific theory, are never proved. By analogy with a court of law, you cannot be found innocent, only not guilty. That is why excluding a theory is stronger than showing consistency. The grand epidemiological study of macronutrient intake vs. diabetes and obesity shows that increasing carbohydrate is associated with increased calories even under conditions where fruits and vegetables also went up and fat, if anything, went down. It is an observational study, but it is strong because it supports a lack of causal effect of increased carbohydrate and decreased fat on outcome. The failure of reducing total or saturated fat to show any benefit is the kicker here. Prospective experiments have shown, and will likely continue to show, the same negative outcome. Of course, in a court of law, if you are found not guilty of child abuse, people may still not let you move into their neighborhood. The point is that saturated fat should never have been indicted in the first place.

An association will tell you about causality 1) if the association is strong and 2) if there is a plausible underlying mechanism and 3) if there is no more plausible explanation — for example, countries with a lot of TV sets have modern life styles that may predispose to cardiovascular disease; TV does not cause CVD.

Re-inventing the wheel. Bradford Hill and the history of epidemiology.

Everything written above is true enough or, at least, it seemed that way to me. I thought of it as an obvious description of what everybody knows. The change to saying that “association does not necessarily imply causation” is important but not that big a deal. It is common sense or logic and I had made it into a short list of principles. It was a blogpost of reasonable length. I described it to my colleague Gene Fine. His response was “aren’t you re-inventing the wheel?” Bradford Hill, he explained, pretty much the inventor of modern epidemiology, had already established these and a couple of other principles. Gene cited The Emperor of All Maladies, an outstanding book on the history of cancer. I had read The Emperor of All Maladies on his recommendation and I remembered Bradford Hill and the description of the evolution of the ideas of epidemiology, population studies and random controlled trials. I also had a vague memory of reading the story in James LeFanu’s The Rise and Fall of Modern Medicine, another captivating history of medicine. However, I had not really absorbed these as principles. Perhaps we’re just used to it, but saying that an association implies causality only if it is a strong association is not exactly a scientific breakthrough. It seems an obvious thing that you might say over coffee or in response to somebody’s blog. It all reminded me of learning, in grade school, that the Earl of Sandwich had invented the sandwich and thinking “this is an invention?” Woody Allen thought the same thing and wrote the history of the sandwich and the Earl’s early failures — “In 1741, he places bread on bread with turkey on top. This fails. In 1745, he exhibits bread with turkey on either side. Everyone rejects this except David Hume.”

At any moment in history, our background knowledge — and accepted methodology — may be limited. Some problems seem to have simple solutions. But simple ideas are not always accepted. The concept of the random controlled trial (RCT), obvious to us now, was hard won. Proving that any particular environmental factor — diet, smoking, pollution or toxic chemicals — was the cause of a disease, and that by reducing that factor the disease could be prevented, turned out to be a very hard sell, especially to physicians whose view of disease may have been strongly colored by the idea of an infective agent.

The Rise and Fall of Modern Medicine describes Bradford Hill’s two demonstrations — that streptomycin in combination with PAS (para-aminosalicylic acid) could cure tuberculosis, and that tobacco causes lung cancer — as one of the Ten Definitive Moments in the history of modern medicine (others shown in the textbox). Hill was Professor of Medical Statistics at the London School of Hygiene and Tropical Medicine but was not formally trained in statistics and, like many of us, thought of proper statistics as common sense. An early, near-fatal case of tuberculosis also prevented formal medical education. His first monumental accomplishment was, ironically, to demonstrate how tuberculosis could be cured with the combination of streptomycin and PAS. Hill and co-worker Richard Doll then undertook a systematic investigation of the risk factors for lung cancer. His eventual success was accompanied by a description of the principles that allow you to say when association can be taken as causation.

Ten Definitive Moments from The Rise and Fall of Modern Medicine:

1941: Penicillin

1949: Cortisone

1950: streptomycin, smoking and Sir Austin Bradford Hill

1952: chlorpromazine and the revolution in psychiatry

1955: open-heart surgery – the last frontier

1963: transplanting kidneys

1964: the triumph of prevention – the case of strokes

1971: curing childhood cancer

1978: the first ‘Test-Tube’ baby

1984: Helicobacter – the cause of peptic ulcer

Wiki says: “In 1965, built upon the work of Hume and Popper, Hill suggested several aspects of causality in medicine and biology…” but his approach was not formal — he never referred to his principles as criteria — he recognized them as common-sense behavior, and his 1965 presentation to the Royal Society of Medicine is a remarkably sober, intelligent document. Although described as an example of an article that, as here, has been read more often in quotations and paraphrases, it is worth reading the original even today.

Note: “Austin Bradford Hill’s surname was Hill and he always used the name Hill, AB in publications. However, he is often referred to as Bradford Hill. To add to the confusion, his friends called him Tony.” (This comment is from Wikipedia, not Woody Allen).

The President’s Address

Bradford Hill’s description of the factors that might make you think an association implied causality:


1. Strength. “First upon my list I would put the strength of the association.” This, of course, is exactly what is missing in the continued epidemiological scare stories. Hill describes

“….prospective inquiries into smoking have shown that the death rate from cancer of the lung in cigarette smokers is nine to ten times the rate in non-smokers and the rate in heavy cigarette smokers is twenty to thirty times as great.”

But further:

“On the other hand the death rate from coronary thrombosis in smokers is no more than twice, possibly less, the death rate in nonsmokers. Though there is good evidence to support causation it is surely much easier in this case to think of some features of life that may go hand-in-hand with smoking – features that might conceivably be the real underlying cause or, at the least, an important contributor, whether it be lack of exercise, nature of diet or other factors.”

Doubts arise at an odds ratio of two or less; that’s where you really have to wonder about causality. The epidemiologic studies that tell you red meat, HFCS, etc. will cause diabetes, prostate cancer, or whatever rarely hit an odds ratio of 2. While the published studies may contain disclaimers of the type in Hill’s paper, the PR department of the university where the work is done, and hence the public media, show no such hesitation and will quickly attribute causality to the study as if the odds ratio were 10 instead of 1.2.
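For readers who want the arithmetic: an odds ratio is just a ratio of odds from a 2×2 table. A sketch with invented counts shows the gulf between a Hill-sized effect and a typical nutritional-epidemiology effect:

```python
# Odds ratio from a 2x2 table: (cases/controls among the exposed) divided
# by (cases/controls among the unexposed). Counts invented for illustration.
def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    return (exp_cases / exp_controls) / (unexp_cases / unexp_controls)

# A Hill-sized effect (smoking and lung cancer ran nine- to thirty-fold):
strong = odds_ratio(90, 910, 10, 990)

# A typical nutritional-epidemiology effect:
weak = odds_ratio(12, 988, 10, 990)

print(round(strong, 1), round(weak, 1))  # 9.8 1.2
```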

2. Consistency: Hill listed the repetition of the results in other studies under different circumstances as a criterion for considering how much an association implied causality. Not mentioned, but of great importance, is that this test cannot be made independent of the first criterion. Consistently weak associations do not generally add up to a strong association. If there is a single practice in modern medicine that is completely out of whack with respect to careful consideration of causality, it is the meta-analysis, where studies with no strength at all are averaged so as to create a conclusion that is stronger than any of its components.

3. Specificity. Hill was circumspect on this point, recognizing that we should have an open mind on what causes what. On specificity of cancer and cigarettes, Hill noted that the two sites in which he showed a cause and effect relationship were the lungs and the nose.

4. Temporality: Obviously, we expect the cause to precede the effect or, as some wit put it “which got laid first, the chicken or the egg.”  Hill recognized that it was not so clear for diseases that developed slowly. “Does a particular diet lead to disease or do the early stages of the disease lead to those peculiar dietetic habits?” Of current interest are the epidemiologic studies that show a correlation between diet soda and obesity which are quick to see a causal link but, naturally, one should ask “Who drinks diet soda?”

5. Biological gradient:  the association should show a dose response curve. In the case of cigarettes, the death rate from cancer of the lung increases linearly with the number of cigarettes smoked. A subset of the first principle, that the association should be strong, is that the dose-response curve should have a meaningful slope and, I would add, the numbers should be big.

6. Plausibility: On the one hand, this seems critical — the association of egg consumption with diabetes is obviously foolish — but the hypothesis to be tested may have come from an intuition that is far from evident. Hill said, “What is biologically plausible depends upon the biological knowledge of the day.”

7. Coherence: “data should not seriously conflict with the generally known facts of the natural history and biology of the disease.”

8. Experiment: It was another age. It is hard to believe that it was in my lifetime. “Occasionally it is possible to appeal to experimental, or semi-experimental, evidence. For example, because of an observed association some preventive action is taken. Does it in fact prevent?” The inventor of the random controlled trial would be amazed how many of these are done, how many fail to prevent. And, most of all, he would have been astounded that it doesn’t seem to matter. However, the progression of failures, from Framingham to the Women’s Health Initiative, the lack of association between low fat, low saturated fat and cardiovascular disease, is strong evidence for the absence of causation.

9. Analogy: “In some circumstances it would be fair to judge by analogy. With the effects of thalidomide and rubella before us we would surely be ready to accept slighter but similar evidence with another drug or another viral disease in pregnancy.”

Hill’s final word on what has come to be known as his criteria for deciding about causation:

“Here then are nine different viewpoints from all of which we should study association before we cry causation. What I do not believe — and this has been suggested — is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?” This may be the first critique of the still-to-be-invented Evidence-based Medicine.

Nutritional Epidemiology.

The decision to say that an observational study implies causation is equivalent to an assertion that the results are meaningful, that it is not a random association at all, that it is scientifically sound. Critics of epidemiological studies have relied on their own perceptions and appeals to common sense; when I started this blogpost, I was one of them and had not appreciated the importance of Bradford Hill’s principles. The Emperor of All Maladies described Hill’s strategies for dealing with association and causation, “which have remained in use by epidemiologists to date.” But have they? The principles are in the texts. Epidemiology, Biostatistics, and Preventive Medicine has a chapter called “The Study of Causation in Epidemiologic Investigation and Research,” from which the dose-response curve was modified. Are these principles being followed? Previous posts in this blog and others have voiced criticisms of epidemiology as it is currently practiced in nutrition, but we were lacking a meaningful reference point. Looking back now, what we see is a large number of research groups doing epidemiology in violation of most of Hill’s criteria.

The red meat scare of 2011 was Pan et al., which I described in a previous post along with the remarkable blog from Harvard. Their blog explained that the paper was unnecessarily scary because it had described things in terms of “relative risks, comparing death rates in the group eating the least meat with those eating the most. The absolute risks… sometimes help tell the story a bit more clearly. These numbers are somewhat less scary.” I felt it was appropriate to ask: why does Dr. Pan not want to tell the story as clearly as possible? Isn’t that what you’re supposed to do in science? Why would you want to make it scary? It was, of course, a rhetorical question.

Looking at Pan, et al. in light of Bradford Hill, we can examine some of their data. Figure 2 from their paper shows the risk of diabetes as a function of red meat in the diet. The variable reported is the hazard ratio which can be considered roughly the same as the odds ratio, that is, relative odds of getting diabetes. I have indicated, in pink, those values that are not statistically significant and I grayed out the confidence interval to make it easy to see that these do not even hit the level of 2 that Bradford Hill saw as some kind of cut-off.
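The significance test being applied here is simple to state: a hazard ratio is conventionally called significant at the 5% level only when its 95% confidence interval excludes 1.0. A sketch with invented values, in the range typical of this literature:

```python
# Hazard ratios with 95% confidence intervals (values invented, chosen to
# resemble the red-meat literature). quintile -> (HR, (CI low, CI high)).
quintile_hrs = {
    "Q2": (1.05, (0.96, 1.15)),
    "Q3": (1.12, (0.99, 1.27)),
    "Q4": (1.08, (0.95, 1.23)),
    "Q5": (1.19, (1.04, 1.36)),
}

for q, (hr, (lo, hi)) in quintile_hrs.items():
    # Significant only if the CI lies entirely above or entirely below 1.0
    significant = lo > 1.0 or hi < 1.0
    print(q, hr, "significant" if significant else "not significant")
# Only Q5 clears the bar here, and even it is nowhere near Hill's
# informal threshold of 2.
```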


The hazard ratios for processed meat are somewhat higher but still less than 2. This is weak data and violates the first and most important of Hill’s criteria. As you go from quintile 2 to 3, there is an increase in risk, but at Q4 the risk goes down and then back up at Q5, in distinction to principle 5, which emphasizes the importance of dose-response curves. But, stepping back and asking what the whole idea is — asking why you would think that meat has a major and isolatable role, separate from everything else, in a disease of carbohydrate intolerance — you see that this is not rational; this is not science. And Pan is not making random observations. This is a test of the hypothesis that red meat causes diabetes. Most of us would say that it didn’t make any sense to test such a hypothesis but, in any case, the results do not support the hypothesis.

What is science?

Science is a human activity and what we don’t like about philosophy of science is that it is about the structure and formalism of science rather than what scientists really do and so there aren’t even any real definitions. One description that I like, from a colleague at the NIH: “What you do in science, is you make a hypothesis and then you try to shoot yourself down.” One of the more interesting sidelights on the work of Hill and Doll, as described in Emperor, was that during breaks from the taxing work of analyzing the questionnaires that provided the background on smoking, Doll himself would step out for a smoke. Doll believed that cigarettes were unlikely to be a cause — he favored tar from paved highways as the causative agent — but as the data came in, “in the middle of the survey, sufficiently alarmed, he gave up smoking.” In science, you try to shoot yourself down and, in the end, you go with the data.

TIME: You’re partnering with, among others, Harvard University on this. In an alternate Lady Gaga universe, would you have liked to have gone to Harvard?

Lady Gaga: I don’t know. I am going to Harvard today. So that’ll do.

— Belinda Luscombe, Time Magazine, March 12, 2012

There was a sense of déjà vu about the latest red meat scare, and I thought that my previous post, as well as those of others, had covered the bases, but I just came across a remarkable article from the Harvard Health Blog. It was entitled “Study urges moderation in red meat intake.” It describes how the “study linking red meat and mortality lit up the media…. Headline writers had a field day, with entries like ‘Red meat death study,’ ‘Will red meat kill you?’ and ‘Singing the blues about red meat.’”

What’s odd is that this is all described from a distance, as if the study by Pan et al. (and likely the content of the blog) hadn’t come from Harvard itself but was rather a natural phenomenon, similar to the way every seminar on obesity begins with a slide of the state-by-state development of obesity as if it were some kind of meteorological event.

When the article refers to “headline writers,” we are probably supposed to imagine sleazy tabloid publishers like the ones who are always pushing the limits of First Amendment rights in the old Law & Order episodes. The newsletter article, however, is no less exaggerated itself. (My friends in English departments tell me that self-reference is some kind of hallmark of real art.) And it is not true that the Harvard study was urging moderation. In fact, the blog admits that the original paper “sounded ominous. Every extra daily serving of unprocessed red meat (steak, hamburger, pork, etc.) increased the risk of dying prematurely by 13%. Processed red meat (hot dogs, sausage, bacon, and the like) upped the risk by 20%.” That is what the paper urged. Not moderation. Prohibition. Who wants to buck odds like that? Who wants to die prematurely?

It wasn’t just the media. Critics in the blogosphere were also working overtime deconstructing the study. Among the faults cited — a fault common to much of the medical literature and the popular press — was the reporting of relative risk.

The limitations of reporting relative risk or odds ratios are widely discussed in popular and technical statistics books, and I ran through the analysis in the earlier post. Relative risk destroys information. It obscures what the risks were to begin with. I usually point out that you can double your odds of winning the lottery if you buy two tickets instead of one. So why do people keep reporting relative risk? One reason, of course, is that it makes your work look more significant. But if you don’t report the absolute change in risk, you may be scaring people about risks that aren’t real. The nutritional establishment is not good at facing its critics, but on this one they admit that they don’t wish to contest the issue.
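The lottery point can be made concrete in a few lines (the jackpot odds used here are approximate):

```python
# Relative risk hides the baseline. Buying a second lottery ticket really
# does "double your chances" -- and it still doesn't matter.
one_ticket = 1 / 292_201_338          # approximate jackpot odds
two_tickets = 2 * one_ticket

relative_risk = two_tickets / one_ticket   # 2.0: "doubles your odds!"
absolute_gain = two_tickets - one_ticket   # ~3.4e-9: essentially zero

print(relative_risk, absolute_gain)
```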

Nolo Contendere.

“To err is human, said the duck as it got off the chicken’s back”

 — Curt Jürgens in The Devil’s General

Having turned the media loose to scare the American public, Harvard now admits that the bloggers are correct.  The Health NewsBlog allocutes to having reported “relative risks, comparing death rates in the group eating the least meat with those eating the most. The absolute risks… sometimes help tell the story a bit more clearly. These numbers are somewhat less scary.” Why does Dr. Pan not want to tell the story as clearly as possible?  Isn’t that what you’re supposed to do in science? Why would you want to make it scary?

The figure from the Health News Blog shows deaths per 1,000 people per year, comparing people eating 1 serving or 3 servings of unprocessed meat a week with those eating 2 servings a day.
Unfortunately, the Health Blog doesn’t actually calculate the absolute risk for you. You would think that they would want to make up for Dr. Pan scaring you. Let’s calculate the absolute risk. It’s not hard. Risk is usually taken as probability, that is, the number of cases divided by the total number of participants. Looking at the men, the risk of death with 3 servings per week is 12.3 cases per 1,000 people = 12.3/1000 = 0.0123 = 1.23%. Going to 14 servings a week (the units in the two columns of the figure are different) gives 13/1000 = 1.3%, so, for men, the absolute difference in risk is 1.3% - 1.23% = 0.07 percentage points, less than 0.1%. Definitely less scary. In fact, not scary at all. Put another way, you would have to drastically change the eating habits (from 14 servings to 3) of 1,429 men to save one life. Well, it’s something. Right? After all, for millions of people, it could add up. Or could it? We have to step back and ask what is predictable about a 1% risk. If a couple of guys in one or the other group got hit by cars, wouldn’t that throw the whole thing off? In other words, it means nothing.
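Redoing that arithmetic in code, using the figures for men quoted above:

```python
# Absolute risk difference and number needed to treat (NNT),
# from the men's unprocessed-meat figures in the text.
high_meat = 13.0 / 1000   # 2 servings/day: 13 deaths per 1,000 per year
low_meat = 12.3 / 1000    # 3 servings/week: 12.3 deaths per 1,000 per year

risk_difference = high_meat - low_meat   # 0.0007, i.e. 0.07 percentage points
nnt = 1 / risk_difference                # diets you must change to save one life
print(round(nnt))  # 1429
```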

Observational Studies Test Hypotheses but the Hypotheses Must be Testable.

It is commonly said that observational studies only generate hypotheses and that association does not imply causation. Whatever the philosophical idea behind these statements, it is not exactly what is done in science. There are an infinite number of observations you can make. When you compare two phenomena, you usually have an idea in mind (however much it is unstated). As Einstein put it, “your theory determines the measurement you make.” Pan et al. were testing the hypothesis that red meat increases mortality. If they had done the right analysis, they would have admitted that the test had failed and the hypothesis was not supported. The association was very weak and the underlying mechanism was, in fact, not borne out. In some sense, in science, there is only association. God does not whisper in our ear that the electron is charged. We make an association between an electron source and the response of a detector. Association does not necessarily imply causality, however; the association has to be strong, and the underlying mechanism that made us make the association in the first place must make sense.

What is the mechanism that would make you think that red meat increased mortality? One of the most remarkable statements in the original paper:

“Regarding CVD mortality, we previously reported that red meat intake was associated with an increased risk of coronary heart disease2, 14 and saturated fat and cholesterol from red meat may partially explain this association.  The association between red meat and CVD mortality was moderately attenuated after further adjustment for saturated fat and cholesterol, suggesting a mediating role for these nutrients.” (my italics)

This bizarre statement — that saturated fat played a role in increased risk because it reduced risk — was morphed in the Harvard News Letters plea bargain to “The authors of the Archives paper suggest that the increased risk from red meat may come from the saturated fat, cholesterol, and iron it delivers;” the blogger forgot to add “…although the data show the opposite.” Reference (2) cited above had the conclusion that “Consumption of processed meats, but not red meats, is associated with higher incidence of CHD and diabetes mellitus.” In essence, the hypothesis is not falsifiable — any association at all will be accepted as proof. The conclusion can be accepted only if you do not look at the data.

The Data

In fact, the data are not available. The individual data points for each person’s red meat intake are grouped together in quintiles (broken up into five groups), so that it is not clear what the individual variation is, and therefore what your real expectation of actually living longer on less meat is.  Quintiles are something of an anachronism, presumably from a period when computers were expensive and it was hard to print out all the data (or, at least, a representative sample).  If the data were really shown, it would be possible to see that they have a shotgun quality, that the results are all over the place, and that whatever the statistical correlation, it is unlikely to be meaningful in any real-world sense.  But you can’t even see the quintiles, at least not the raw data. The outcomes are corrected for all kinds of things: smoking, age, etc.  This might actually be a conservative approach — the raw data might show more risk — but only the computer knows for sure.
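The point about quintiles hiding variation can be made concrete. A minimal sketch, using made-up intake numbers (nothing here comes from the actual study), shows that once individuals are binned into quintiles, only the group summaries survive and the scatter inside each group is lost:

```python
import random
import statistics

# Invented data: 1,000 hypothetical intakes in servings/week.
random.seed(0)
intake = [random.gauss(5, 3) for _ in range(1000)]

# Quintiles: sort and split into five equal groups of 200.
intake.sort()
quintiles = [intake[i * 200:(i + 1) * 200] for i in range(5)]

for i, q in enumerate(quintiles, 1):
    # Only a summary like this appears in a published table; the
    # spread (sd) inside each quintile never reaches the reader.
    print(f"Q{i}: mean {statistics.mean(q):5.2f}, sd {statistics.stdev(q):.2f}")
```

Whatever the shape of the raw scatter, the table of five means looks clean and monotonic; the “shotgun quality” is invisible by construction.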


“…mathematically, though, there is no distinction between confounding and explanatory variables.”

  — Walter Willett, Nutritional Epidemiology, 2nd edition.

You make a lot of assumptions when you carry out a “multivariate adjustment for major lifestyle and dietary risk factors.”  Right off, you assume that the parameter that you want to look at — in this case, red meat — is the one that everybody wants to look at, and that the other factors can be subtracted out. However, the process of adjustment is symmetrical: a study of the risk of red meat corrected for smoking might alternatively be described as a study of the risk from smoking corrected for the effect of red meat. Given that smoking is an established risk factor, it is unlikely that the odds ratio for meat is even in the same ballpark as what would be found for smoking. The figure below shows how the risk factors follow the quintiles of meat consumption.  If the quintiles had been derived from the factors themselves, we would expect an even better association with mortality.

The key assumption is that there are many independent risk factors which contribute in a linear way but, in fact, if they interact, the assumption is not appropriate.  You can correct for “current smoker,” but, biologically speaking, you cannot correct for the effect of smoking on an increased response to otherwise harmless components of meat, if there actually were any.  And, as pointed out before, red meat on a sandwich may be different from red meat on a bed of cauliflower puree.
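A small simulation shows what goes wrong. Suppose, purely hypothetically, that meat is harmless on its own but amplifies the harm of smoking. An additive model that merely “corrects for” smoking then misattributes part of the interaction to meat (all numbers below are invented):

```python
import numpy as np

# Invented scenario: mortality depends on smoking and on a
# smoking-by-meat interaction, but NOT on meat alone.
rng = np.random.default_rng(1)
n = 5000
smoking = rng.binomial(1, 0.3, n).astype(float)
meat = rng.normal(5, 2, n)  # servings/week
mortality = 1.0 * smoking + 0.2 * smoking * meat + rng.normal(0, 1, n)

# Additive fit (the usual "corrected for smoking" model).
X_add = np.column_stack([np.ones(n), meat, smoking])
coef_add, *_ = np.linalg.lstsq(X_add, mortality, rcond=None)

# Fit that includes the interaction term.
X_int = np.column_stack([np.ones(n), meat, smoking, meat * smoking])
coef_int, *_ = np.linalg.lstsq(X_int, mortality, rcond=None)

print(f"additive model, meat coefficient:    {coef_add[1]:.3f}")  # spuriously nonzero
print(f"interaction model, meat coefficient: {coef_int[1]:.3f}")  # near zero
```

The additive model blames meat for risk that belongs entirely to the interaction with smoking; only a model that allows the factors to interact recovers the truth — and observational diet studies almost never fit one.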

This is the essence of it.  The underlying philosophy of this type of analysis is “you are what you eat.” The major challenge to this idea is that carbohydrates, in particular, control the response to other nutrients but, in the face of the plea of nolo contendere, it is all moot.

Who paid for this and what should be done.

We paid for it. Pan, et al. was funded in part by 6 NIH grants.  (No wonder there is no money for studies of carbohydrate restriction.)  It is hard to believe, with all the flaws pointed out here and, in the end, admitted by the Harvard Health Blog and others, that this was subject to any meaningful peer review.  A plea of no contest does not imply negligence or intent to do harm, but something is wrong. The clear attempt to influence the dietary habits of the population is not justified by an absolute risk reduction of less than one-tenth of one percent, especially given that others have made the case that part of the population, particularly the elderly, may not get adequate protein. The need for an oversight committee of impartial scientists is the most important conclusion to draw from Pan, et al.  I will suggest it to the NIH.