
 

[Image: Carrot_Nation-3c]

I was walking on a very dark street when I heard a man’s voice and assumed he was talking on a cell phone, apparently about a dinner party. The voice was saying “Remember, I don’t eat red meat.” Just a few years ago, such a statement would have sounded strange. Of course, a few years ago a man talking to himself on the street would have been strange. He would have been assumed to be insane — even more insane if he told you that he was actually talking on the telephone. But yesterday’s oddity pops up everywhere today. Neo-vegetarianism affects us all. The phenomenon is described well in Jane Kramer’s excellent review of veggie cookbooks in the April 14 New Yorker:

“…from one chili party to the next, everything changed. Seven formerly enthusiastic carnivores called to say they had stopped eating meat entirely, and would like to join my vegetarians for the pesto. Worse, on the night of that final party, four of the remaining carnivores carried their plates to the kitchen table, ignoring the cubes of beef and pancetta, smoky and fragrant in their big red bean pot, and headed for my dwindling supply of pasta. ‘Stop!’ I cried. ‘That’s for the vegetarians!’”

The New Yorker review describes well the different forms of vegetarianism and the various arguments for them, some better than others. The treatment of animals, more than their slaughter, is probably most upsetting. Just the review, in this week’s London Review of Books, of “Farmageddon: The True Cost of Cheap Meat” and “Planet Carnivore” is sufficiently scary as to be unreadable. Most of us just live with it. Personally, I think of it by analogy with the announcements on airplanes that, under conditions of low pressure, you should put on your own oxygen mask before helping others. When we start treating people better, we can help the animals. Maybe a rationalization. In any case, one cause that doesn’t sit well is “…the health argument (doctors and nutritionists, alarmed by the rise in illness and obesity in a high-fat Big Mac world)…”

It’s not a high-fat world any more than this is a vegetarian country. A better description might be: doctors and nutritionists, alarmed by the reduction in funding for anything but the party line, and imbued with a missionary zeal, try hard to find something wrong with meat, especially red meat. A lifeline for bloggers, a “meat kills” article appears every couple of months, usually an epidemiologic study with an odds ratio of about 1.4. The odds ratio (OR) is what it sounds like: your odds of getting the disease under the intervention divided by your odds under control conditions. (It is similar to a hazard ratio (HR), which is a ratio of event rates.) ORs or HRs are commonly around 1.5 for the usual “is associated with” paper. For comparison, the odds ratio for getting lung cancer if you smoke is about 20 compared to not smoking. If you are a heavy smoker, it’s about 30.
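To make the arithmetic concrete, here is a minimal sketch of how an odds ratio and its 95% confidence interval are computed from a 2×2 table. All counts are invented for illustration; they come from no study.

```python
# Sketch: odds ratio (OR) and 95% CI from a 2x2 exposure-by-disease table.
# The counts below are made up for illustration, not from any study.
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """OR = (a/b) / (c/d); CI via Woolf's method on the log scale."""
    or_ = (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)
    # standard error of log(OR)
    se = math.sqrt(1/exposed_cases + 1/exposed_noncases
                   + 1/unexposed_cases + 1/unexposed_noncases)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# A typical "is associated with" result: OR around 1.4
print(odds_ratio(70, 930, 50, 950))

# A smoking-and-lung-cancer-sized association: OR around 20
print(odds_ratio(200, 800, 10, 790))
```

The first (hypothetical) table gives an OR near 1.4 with a confidence interval hugging 1; the second gives an OR near 20. That difference in magnitude is the whole point of the comparison in the text.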

Since we don’t really know what causes cancer, or even heart disease, or especially all-cause mortality, most of us let the “meat will kill you” stories go by, like other “breaking” news stories. But when the dictum is “meat causes diabetes,” it is hard to ignore. Far-fetched, and dangerous for its obscuring the elephant in the room: carbohydrate.

One of the worst of the meat scares was Pan et al. (JAMA Intern Med. 2013;173(14):1328-1335) from the Harvard School of Public Health, a major supplier of these studies. This stuff has been deconstructed by several bloggers, but a new technique is to look at the changes in consumption over time, rather than at a single time-point for ingestion. Most of these studies are based on food questionnaires, and measuring differences increases the error: error in a parameter that already has some uncertainty. (Taking a difference is like taking a derivative of noisy data; it makes the signal-to-noise ratio worse.) It is like weighing the captain by weighing the ship before and after they are on board. The data are likely to have great scatter, and you have more room to lump them into quintiles or otherwise find a way to come up with some kind of positive correlation. (This irony is a cover for my emotional reaction to a very serious collapse in scientific standards in the medical literature.) Pan’s paper concludes:

“Increasing red meat consumption over time is associated with an elevated subsequent risk of T2DM [type 2 diabetes mellitus]….”
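As an aside, the earlier point that taking differences makes the signal-to-noise ratio worse can be illustrated with a toy simulation (all numbers made up, nothing from the study): if each questionnaire-based intake estimate carries independent random error, the change between two estimates carries about √2 times as much.

```python
# Sketch: subtracting two noisy measurements increases the noise.
# Toy numbers only; nothing here comes from any real questionnaire data.
import random, statistics

random.seed(0)
true_intake = 100.0   # "true" servings, arbitrary units
noise_sd = 10.0       # measurement error of a single questionnaire

# 10,000 simulated single measurements, and 10,000 simulated changes
single = [true_intake + random.gauss(0, noise_sd) for _ in range(10_000)]
change = [(true_intake + random.gauss(0, noise_sd))
          - (true_intake + random.gauss(0, noise_sd)) for _ in range(10_000)]

print(statistics.stdev(single))  # about 10
print(statistics.stdev(change))  # about 14, i.e. sqrt(2) * 10
```

The true change is zero, yet its measured spread is roughly 40% larger than that of a single measurement, which is exactly the extra room for scatter described above.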

You have to read the original to evaluate papers like this. First off, as in much of the medical literature, there is only one figure but several mind-numbing tables. This is a sufficiently serious problem that a whole book, “Medical Illuminations”  (recommended), has been written about it. The tables give you the raw data (at least as averaged into big groups) and the outcome from “corrected models.” However, when you plot the raw data you see that the reduction in red meat intake, the ultimate recommendation of the paper, leads to an increase in diabetes. What? That’s the opposite of the authors’ conclusion.

[Figure: CarrotNation_RedMeat_DM2_July1]

The table does not list this conclusion. You have to calculate it yourself. The table shows “models” that have been “corrected” for confounders. Most of us think that when you get a positive result, you have to make sure that there weren’t underlying factors (other than the one you are interested in) that account for the outcome. So, for example, if you say that an increase in a particular food is associated with a disease, you are expected to subtract out the effect of any increase in calories. If your primary data don’t show an effect, then you are, more or less, out of luck. You can, however, “correct” with something known to cause the disease, something expected to make things worse. If this makes things better, you may have shown a benefit in your outcome, but the result becomes far-fetched unless only a very small number of variables is involved. Generally, though, if your “confounders” improve the correlation, they are the controlling variables.

I wrote a letter to the editor saying, “The authors measured the effect of reducing meat consumption, which increased the frequency of diabetes in all the cohorts studied, opposite to the expectation of a consistent dose-response curve.” The journal published the letter along with the authors’ answers (they get the last word). The journal has a strict policy on brevity, and you are not allowed to use any figures, so I couldn’t send the picture of what things are really like. The authors’ answer to the dose-response question:

“Figure 1 in our article showed that increasing red meat intake within a 4-year period was positively associated with T2DM in the subsequent 4 years in a dose responsive manner, not ‘the effect of reducing meat consumption, which increased the frequency of diabetes in all the cohorts studied,’ as claimed by Dr Feinman.”

Astounding. Their statement does not contradict mine. My figure shows that increasing red meat or decreasing red meat both increased diabetes. How is this possible? It is possible if the data have too much randomness to be reliable.

So how do they justify their conclusion? Simple: they correct the data for confounders. They correct for initial red meat intake, which makes the effect of an increase in meat stronger, as you would expect. They then correct for age, but they don’t show you what that effect is. In fact, they correct for “race, marital status, family history of T2DM, history of hypertension, history of hypercholesterolemia, smoking status, initial and changes in alcohol intake,” — I’m not making this up — “physical activity, total energy intake, and diet quality, postmenopausal status and menopausal hormone use plus initial body mass index and weight change.” Mirabile dictu, they are able to get the answer to come out the way they want.

It would probably be hard to explain to the authors why this doesn’t make any sense. If you have to do so much work to get the answer, it can’t mean anything. It’s all like the old joke about the woman who calls the police because the guy next door is exposing himself. When the cops come, she shows them the window.  The cop says “Lady, that window is too high to see anything.” She says “Sure. Where you are, but stand on this chair and you will see.”

So, does all this mean anything at all? Well, it means that diabetes is not correlated with red meat unless you include many other factors. Maybe those factors are what we should be warned about. But it is simpler than that: this is not done in a vacuum. There are big epidemiologic studies. The real point is, as in my letter to the editor, “Red meat consumption decreased as T2DM increased during the past 30 years.” The data are compelling:

[Figure: Carrot_Nation_RedMeat_DM2_May5]

Their answer was “this ecological relationship cannot be used to argue against the causal relationship between red meat intake and T2DM because many other factors have changed over time.”

This statement stands as the embodiment of the total lack of common sense and the irrational perspective of the epidemiologist. (Okay, just these epidemiologists.) There are always more factors. If your data don’t come out the way you want, drag in as many factors as you need (age, initial red meat intake, race, marital status, shirt size, etc.) until they do. If somebody else’s data show that you are wrong, point out all the things that they have left out. The end of common sense. The end of science.

But why do they do this? I am not sure why you would think that red meat had much to do with diabetes but the study showed that you were wrong. Research gives you a lot of failures. You just go on to something else. Nobody knows about motivation, nobody knows what was on their mind. Seven possible reasons are NIH grants P01CA087969, R01CA050385, U19CA055075, R01DK058845, P30DK046200, U54CA155626 and K99HL098459. Nonetheless, one has the sense that the authors really believe their conclusion and that there is a general emotional and puritanical reaction to red meat and its agents.

“Components in red meat that may contribute to T2DM…”

“…The time has been

That, when the brains were out, the man would die,

And there an end…”

— William Shakespeare, Macbeth.

A big problem: the underlying mechanism. What might actually be the agent that confers such danger on red meat? Pan, et al say “Components in red meat that may contribute to T2DM risk include heme iron, high saturated fat and cholesterol, added sodium and nitrites and nitrates in processed meat, etc.”

This list is notable for the presence of saturated fat and cholesterol. Isn’t that dead? The latest report of evidence that saturated fat does not pose a risk has occasioned a certain degree of squabbling, but it is only one in a long line of individual studies and meta-analyses that drive a stake through the heart of cholesterol and saturated fat as risk factors. Walter Willett, an author on Pan et al., just couldn’t face the result and wanted the paper withdrawn, but the history of the risk of saturated fat and cholesterol is a demonstration of one failure after another, some from his own lab. The idea never dies. One interesting part of the squabbling was the statement, “A 2009 review concluded that replacing saturated fats with carbohydrates had no benefit, while replacing them with polyunsaturated fats reduced the risk of heart disease. Several scientists say that should have been mentioned in the new paper.” Presumably it is the second part, rather than the first, that they want mentioned.

But underneath it all is the moralistic, puritanical mindset. In trying to face the evidence in the original report, Alice Lichtenstein said, “It would be unfortunate if these results were interpreted to suggest that people can go back to eating butter and cheese with abandon.”  Abandon? I guess we are supposed to think of the gutted pig scene in Fellini’s Satyricon.

[Image: federico-fellini-satyricon-movie-pig]

 

All such moralistic proscriptions run the risk of what psychologists call counter-control. I personally rarely eat meat before 6 PM, but when I found out that Mark Bittman says that that is what we all must do, it made me get out left-over spareribs for lunch. Along these lines, it is heartening to see that, in her review of Deborah Madison’s Vegetable Literacy, Kramer points out that, in the preparation of cardoon risotto, “there is permission to simmer it in a ‘light chicken stock,’ and even an acknowledgement that vegetable stock might ‘overwhelm’ the flavor of that delicately bitter member of the sunflower family.…” And, in the end, “The book is sly. Think of it as a pro-choice cookbook decorously wrapped in carrots and beans and lettuce leaves. Apart from the chicken broth, you won’t find anything ‘animal’ listed but read what she has to say about some of those recipes, and you will detect the beginning of a stealth operation — a call to sit down at the dinner table together and put an end to the testy herbivore-carnivore divide.” This suggests that they might both be in tune with my own philosophy, which I call antidiscarnivorianism.

Illustration by Robin Feinman. Reference: http://en.wikipedia.org/wiki/Carrie_Nation

 

“…789 deaths were reported in Doll and Hill’s original cohort. Thirty-six of these were attributed to lung cancer. When these lung cancer deaths were counted in smokers versus non-smokers, the correlation virtually sprang out: all thirty-six of the deaths had occurred in smokers. The difference between the two groups was so significant that Doll and Hill did not even need to apply complex statistical metrics to discern it. The trial designed to bring the most rigorous statistical analysis to the cause of lung cancer barely required elementary mathematics to prove his point.”

Siddhartha Mukherjee —The Emperor of All Maladies.

Scientists don’t like philosophy of science. It is not just that pompous phrases like “hypothetico-deductive systems” are such a turn-off, but that we rarely recognize philosophy of science as a description of what we actually do. In the end, there is no definition of science, and it is hard to generalize about actual scientific behavior. It’s a human activity and, precisely because it puts a premium on creativity, it defies categorization. As the physicist Steven Weinberg put it, echoing Justice Stewart on pornography:

“There is no logical formula that establishes a sharp dividing line between a beautiful explanatory theory and a mere list of data, but we know the difference when we see it — we demand a simplicity and rigidity in our principles before we are willing to take them seriously [1].”

A frequently stated principle is that “observational studies only generate hypotheses.” The related idea that “association does not imply causality” is also common, usually cited by those authors who want you to believe that the association that they found does imply causality. These ideas are not right or, at least, they insufficiently recognize that scientific experiments are not so easily wedged into categories like “observational studies.” The principles are also invoked by bloggers and critics to discredit the continuing stream of observational studies that make an association between their favorite targets (eggs, red meat, sugar-sweetened soda) and a metabolic disease or cancer. In most cases, the studies are getting what they deserve, but the bills of indictment are not quite right. It is usually not simply that they are observational studies but rather that they are bad observational studies and that, in any case, the associations are so weak that it is reasonable to say that they are an argument for a lack of causality. On the assumption that good experimental practice and interpretation can be even roughly defined, let me offer principles that I think are a better representation, insofar as we can make any generalization, of what actually goes on in science:

 Observations generate hypotheses. 

Observational studies test hypotheses.

Associations do not necessarily imply causality.

In some sense, all science is associations. 

Only mathematics is axiomatic.

If you notice that kids who eat a lot of candy seem to be fat, or even if you notice that candy makes you yourself fat, that is an observation. From this observation, you might come up with the hypothesis that sugar causes obesity. A test of your hypothesis would be to see if there is an association between sugar consumption and the incidence of obesity. There are various ways to do this; the simplest epidemiologic approach is to compare the history of the eating behavior of individuals (insofar as you can get it) with how fat they are. When you do this comparison, you are testing your hypothesis. There are an infinite number of things that you could have measured as the independent variable (meat, TV hours, distance from the French bakery), but you have a hypothesis that it was candy. Mike Eades described falling asleep as a child by trying to think of everything in the world. You just can’t test them all. As Einstein put it, “your theory determines the measurement you make.”

Associations predict causality. Hypotheses generate observational studies, not the other way around.

In fact, association can be strong evidence for causation and frequently provides support for, if not absolute proof of, the idea to be tested. A correct statement is that association does not necessarily imply causation. In some sense, all science is observation and association. Even thermodynamics, that most mathematical and absolute of sciences, rests on observation. As soon as somebody observes two systems in thermal equilibrium with a third but not with each other (violating the zeroth law), the jig is up. When somebody builds a perpetual motion machine, that’s it. It’s all over.

Biological mechanisms, or perhaps any scientific theory, are never proved. By analogy with a court of law, you cannot be found innocent, only not guilty. That is why excluding a theory is stronger than showing consistency. The grand epidemiological study of macronutrient intake vs. diabetes and obesity shows that increasing carbohydrate is associated with increased calories even under conditions where fruits and vegetables also went up and fat, if anything, went down. It is an observational study, but it is strong because it supports a lack of causal effect of increased carbohydrate and decreased fat on outcome. The failure of total or saturated fat to have any benefit is the kicker here. It is now clear that prospective experiments have shown in the past, and will continue to show, the same negative outcome. Of course, in a court of law, if you are found not guilty of child abuse, people may still not let you move into their neighborhood. The difference is that saturated fat should never have been indicted in the first place.

An association will tell you about causality 1) if the association is strong and 2) if there is a plausible underlying mechanism and 3) if there is no more plausible explanation — for example, countries with a lot of TV sets have modern life styles that may predispose to cardiovascular disease; TV does not cause CVD.

Re-inventing the wheel. Bradford Hill and the history of epidemiology.

Everything written above is true enough or, at least, it seemed that way to me. I thought of it as an obvious description of what everybody knows. The change to saying that “association does not necessarily imply causation” is important but not that big a deal. It is common sense or logic, and I had made it into a short list of principles. It was a blogpost of reasonable length. I described it to my colleague Gene Fine. His response was “aren’t you re-inventing the wheel?” Bradford Hill, he explained, pretty much the inventor of modern epidemiology, had already established these and a couple of other principles. Gene cited The Emperor of All Maladies, an outstanding book on the history of cancer. I had read The Emperor of All Maladies on his recommendation, and I remembered Bradford Hill and the description of the evolution of the ideas of epidemiology, population studies and random controlled trials. I also had a vague memory of reading the story in James LeFanu’s The Rise and Fall of Modern Medicine, another captivating history of medicine. However, I had not really absorbed these as principles. Perhaps we’re just used to it, but saying that an association implies causality only if it is a strong association is not exactly a scientific breakthrough. It seems an obvious thing that you might say over coffee or in response to somebody’s blog. It all reminded me of learning, in grade school, that the Earl of Sandwich had invented the sandwich and thinking “this is an invention?” Woody Allen thought the same thing and wrote the history of the sandwich and the Earl’s early failures: “In 1741, he places bread on bread with turkey on top. This fails. In 1745, he exhibits bread with turkey on either side. Everyone rejects this except David Hume.”

At any moment in history, our background knowledge — and accepted methodology — may be limited. Some problems seem to have simple solutions. But simple ideas are not always accepted. The concept of the random controlled trial (RCT), obvious to us now, was hard won, and proving that any particular environmental factor — diet, smoking, pollution or toxic chemicals — was the cause of a disease, and that, by reducing that factor, the disease could be prevented, turned out to be a very hard sell, especially to physicians whose view of disease may have been strongly colored by the idea of an infective agent.

[Image: Hill_Causation]

The Rise and Fall of Modern Medicine describes Bradford Hill’s two demonstrations — that streptomycin in combination with PAS (para-aminosalicylic acid) could cure tuberculosis, and that tobacco causes lung cancer — as one of the Ten Definitive Moments in the history of modern medicine (others shown in the textbox). Hill was Professor of Medical Statistics at the London School of Hygiene and Tropical Medicine but was not formally trained in statistics and, like many of us, thought of proper statistics as common sense. An early near-fatal case of tuberculosis also prevented formal medical education. His first monumental accomplishment was, ironically, to demonstrate how tuberculosis could be cured with the combination of streptomycin and PAS. In 1948, Hill and co-worker Richard Doll undertook a systematic investigation of the risk factors for lung cancer. His eventual success was accompanied by a description of the principles that allow you to say when association can be taken as causation.

 Ten Definitive Moments from Rise and Fall of Modern Medicine.

1941: Penicillin

1949: Cortisone

1950: streptomycin, smoking and Sir Austin Bradford Hill

1952: chlorpromazine and the revolution in psychiatry

1955: open-heart surgery – the last frontier

1963: transplanting kidneys

1964: the triumph of prevention – the case of strokes

1971: curing childhood cancer

1978: the first ‘Test-Tube’ baby

1984: Helicobacter – the cause of peptic ulcer

Wiki says: “in 1965, built upon the work of Hume and Popper, Hill suggested several aspects of causality in medicine and biology…” but his approach was not formal — he never referred to his principles as criteria — and he recognized them as common-sense behavior. His 1965 presentation to the Royal Society of Medicine is a remarkably sober, intelligent document. Although described as an example of an article that, as here, has been read more often in quotations and paraphrases, it is worth reading the original even today.

Note: “Austin Bradford Hill’s surname was Hill and he always used the name Hill, AB in publications. However, he is often referred to as Bradford Hill. To add to the confusion, his friends called him Tony.” (This comment is from Wikipedia, not Woody Allen).

The President’s Address

Bradford Hill’s description of the factors that might make you think an association implied causality:

[Image: Hill_Environment1965]

1. Strength. “First upon my list I would put the strength of the association.” This, of course, is exactly what is missing in the continued epidemiological scare stories. Hill describes:

“….prospective inquiries into smoking have shown that the death rate from cancer of the lung in cigarette smokers is nine to ten times the rate in non-smokers and the rate in heavy cigarette smokers is twenty to thirty times as great.”

But further:

“On the other hand the death rate from coronary thrombosis in smokers is no more than twice, possibly less, the death rate in nonsmokers. Though there is good evidence to support causation it is surely much easier in this case to think of some features of life that may go hand-in-hand with smoking – features that might conceivably be the real underlying cause or, at the least, an important contributor, whether it be lack of exercise, nature of diet or other factors.”

Doubts about an odds ratio of two or less: that’s where you really have to wonder about causality. The epidemiologic studies that tell you red meat, HFCS, etc. will cause diabetes, prostate cancer, or whatever, rarely hit an odds ratio of 2. While the published studies may contain disclaimers of the type in Hill’s paper, the PR department of the university where the work is done, and hence the public media, show no such hesitation and will quickly attribute causality to the study as if the odds ratio were 10 instead of 1.2.

2. Consistency: Hill listed the repetition of the results in other studies under different circumstances as a criterion for considering how much an association implied causality. Not mentioned but of great importance, is that this test cannot be made independent of the first criterion. Consistently weak associations do not generally add up to a strong association. If there is a single practice in modern medicine that is completely out of whack with respect to careful consideration of causality, it is the meta-analysis where studies with no strength at all are averaged so as to create a conclusion that is stronger than any of its components.
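To see why pooling cannot manufacture strength, here is a sketch of a fixed-effect (inverse-variance) meta-analysis over five invented studies, each reporting a weak odds ratio near 1.3. The pooled estimate stays near 1.3; only the confidence interval narrows, which is significance, not strength.

```python
# Sketch: fixed-effect (inverse-variance) pooling of log odds ratios.
# The five "studies" are invented; each is weak (OR ~ 1.3).
import math

studies = [(1.25, 0.15), (1.35, 0.20), (1.30, 0.12), (1.28, 0.18), (1.33, 0.25)]
# each tuple: (odds ratio, standard error of log OR)

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled_log = sum(w * math.log(or_)
                 for (or_, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_or = math.exp(pooled_log)
lo = math.exp(pooled_log - 1.96 * pooled_se)
hi = math.exp(pooled_log + 1.96 * pooled_se)
print(pooled_or, lo, hi)  # OR stays ~1.3; only the interval narrows
```

The pooled standard error is smaller than any individual study’s, so the result looks statistically airtight, yet the effect size is exactly as weak as its components.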

3. Specificity. Hill was circumspect on this point, recognizing that we should have an open mind on what causes what. On specificity of cancer and cigarettes, Hill noted that the two sites in which he showed a cause and effect relationship were the lungs and the nose.

4. Temporality: Obviously, we expect the cause to precede the effect or, as some wit put it “which got laid first, the chicken or the egg.”  Hill recognized that it was not so clear for diseases that developed slowly. “Does a particular diet lead to disease or do the early stages of the disease lead to those peculiar dietetic habits?” Of current interest are the epidemiologic studies that show a correlation between diet soda and obesity which are quick to see a causal link but, naturally, one should ask “Who drinks diet soda?”

5. Biological gradient: the association should show a dose-response curve. In the case of cigarettes, the death rate from cancer of the lung increases linearly with the number of cigarettes smoked. A corollary of the first principle, that the association should be strong, is that the dose-response curve should have a meaningful slope and, I would add, the numbers should be big.

6. Plausibility: On the one hand, this seems critical — the association of egg consumption with diabetes is obviously foolish — but the hypothesis to be tested may have come from an intuition that is far from evident. Hill said, “What is biologically plausible depends upon the biological knowledge of the day.”

7. Coherence: the “data should not seriously conflict with the generally known facts of the natural history and biology of the disease.”

8. Experiment: It was another age. It is hard to believe that it was in my lifetime. “Occasionally it is possible to appeal to experimental, or semi-experimental, evidence. For example, because of an observed association some preventive action is taken. Does it in fact prevent?” The inventor of the random controlled trial would be amazed how many of these are done, how many fail to prevent. And, most of all, he would have been astounded that it doesn’t seem to matter. However, the progression of failures, from Framingham to the Women’s Health Initiative, the lack of association between low fat, low saturated fat and cardiovascular disease, is strong evidence for the absence of causation.

9. Analogy: “In some circumstances it would be fair to judge by analogy. With the effects of thalidomide and rubella before us we would surely be ready to accept slighter but similar evidence with another drug or another viral disease in pregnancy.”

Hill’s final word on what has come to be known as his criteria for deciding about causation:

“Here then are nine different viewpoints from all of which we should study association before we cry causation. What I do not believe — and this has been suggested — is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?” This may be the first critique of the still-to-be-invented Evidence-based Medicine.

Nutritional Epidemiology.

The decision to say that an observational study implies causation is equivalent to an assertion that the results are meaningful, that the association is not random at all, that it is scientifically sound. Critics of epidemiological studies have relied on their own perceptions and appeals to common sense; when I started this blogpost, I was one of them, and I had not appreciated the importance of Bradford Hill’s principles. The Emperor of All Maladies described Hill’s strategies for dealing with association and causation, “which have remained in use by epidemiologists to date.” But have they? The principles are in the texts. Epidemiology, Biostatistics, and Preventive Medicine has a chapter called “The Study of Causation in Epidemiologic Investigation and Research,” from which the dose-response curve was modified. Are these principles being followed? Previous posts in this blog and others have voiced criticisms of epidemiology as it is currently practiced in nutrition, but we were lacking a meaningful reference point. Looking back now, what we see is a large number of research groups doing epidemiology in violation of most of Hill’s criteria.

The red meat scare of 2011 was Pan et al., and I described, in a previous post, the remarkable blog from Harvard. Their blog explained that the paper was unnecessarily scary because it had described things in terms of “relative risks, comparing death rates in the group eating the least meat with those eating the most. The absolute risks… sometimes help tell the story a bit more clearly. These numbers are somewhat less scary.” I felt it was appropriate to ask, “Why does Dr. Pan not want to tell the story as clearly as possible? Isn’t that what you’re supposed to do in science? Why would you want to make it scary?” It was, of course, a rhetorical question.

Looking at Pan et al. in light of Bradford Hill, we can examine some of their data. Figure 2 from their paper shows the risk of diabetes as a function of red meat in the diet. The variable reported is the hazard ratio, which can be considered roughly the same as the odds ratio, that is, the relative odds of getting diabetes. I have indicated, in pink, those values that are not statistically significant, and I have grayed out the confidence intervals to make it easy to see that these do not even hit the level of 2 that Bradford Hill saw as some kind of cut-off.

[Figure: TheBlog_Cause_Pan_Fig2_]

The hazard ratios for processed meat are somewhat higher but still less than 2. These are weak data and violate the first and most important of Hill’s criteria. As you go from quintile 2 to 3, there is an increase in risk, but at Q4, the risk goes down and then back up at Q5, in distinction to principle 5, which emphasizes the importance of dose-response curves. But, stepping back and asking what the whole idea is — asking why you would think that meat has a major, and isolatable, role, separate from everything else, in a disease of carbohydrate intolerance — you see that this is not rational; this is not science. And Pan is not making random observations. This is a test of the hypothesis that red meat causes diabetes. Most of us would say that it didn’t make any sense to test such a hypothesis, but in any case the results do not support the hypothesis.
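A rough way to check claims like this for yourself: the reported 95% confidence interval of a hazard ratio is symmetric on the log scale, so you can back out the standard error and an approximate p-value. The numbers below are hypothetical illustrations, not values from Pan’s Figure 2.

```python
# Sketch: recover the standard error and approximate two-sided p-value
# of a hazard ratio from its reported 95% CI. Hypothetical numbers only.
import math

def hr_p_value(hr, ci_low, ci_high):
    # 95% CI width on the log scale = 2 * 1.96 * SE(log HR)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = math.log(hr) / se
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# A weak association: HR 1.2 with a CI that nearly touches 1
print(hr_p_value(1.2, 1.01, 1.43))   # barely significant

# A strong, smoking-and-lung-cancer-sized association
print(hr_p_value(10.0, 6.0, 16.7))   # overwhelmingly significant
```

The weak hazard ratio scrapes past the 5% threshold; the strong one is unambiguous. The difference between those two situations is Hill’s first criterion in numerical form.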

What is science?

Science is a human activity, and what we don’t like about the philosophy of science is that it is about the structure and formalism of science rather than about what scientists really do, so there aren’t even any real definitions. One description that I like, from a colleague at the NIH: “What you do in science is you make a hypothesis and then you try to shoot yourself down.” One of the more interesting sidelights on the work of Hill and Doll, as described in Emperor, was that during breaks from the taxing work of analyzing the questionnaires that provided the background on smoking, Doll himself would step out for a smoke. Doll believed that cigarettes were unlikely to be a cause — he favored tar from paved highways as the causative agent — but as the data came in, “in the middle of the survey, sufficiently alarmed, he gave up smoking.” In science, you try to shoot yourself down and, in the end, you go with the data.

TIME: You’re partnering with, among others, Harvard University on this. In an alternate Lady Gaga universe, would you have liked to have gone to Harvard?

Lady Gaga: I don’t know. I am going to Harvard today. So that’ll do.

– Belinda Luscombe, Time Magazine, March 12, 2012

There was a sense of déjà vu about the latest red meat scare and I thought that my previous post, as well as those of others, had covered the bases, but I just came across a remarkable article from the Harvard Health Blog. It was entitled “Study urges moderation in red meat intake.” It describes how the “study linking red meat and mortality lit up the media…. Headline writers had a field day, with entries like ‘Red meat death study,’ ‘Will red meat kill you?’ and ‘Singing the blues about red meat.’”

What’s odd is that this is all described from a distance, as if the study by Pan, et al (and likely the content of the blog) hadn’t come from Harvard itself but was rather a natural phenomenon, similar to the way every seminar on obesity begins with a slide of the state-by-state development of obesity as if it were some kind of meteorological event.

When the article refers to “headline writers,” we are probably supposed to imagine sleazy tabloid publishers like the ones who are always pushing the limits of First Amendment rights in the old Law & Order episodes.  The Newsletter article, however, is no less exaggerated itself. (My friends in English Departments tell me that self-reference is some kind of hallmark of real art). And it is not true that the Harvard study was urging moderation. In fact, it is admitted that the original paper “sounded ominous. Every extra daily serving of unprocessed red meat (steak, hamburger, pork, etc.) increased the risk of dying prematurely by 13%. Processed red meat (hot dogs, sausage, bacon, and the like) upped the risk by 20%.” That is what the paper urged. Not moderation. Prohibition. Who wants to buck odds like that? Who wants to die prematurely?

It wasn’t just the media. Critics in the blogosphere were also working overtime deconstructing the study.  Among the faults that were cited, a fault common to much of the medical literature and the popular press, was the reporting of relative risk.

The limitations of reporting relative risk or odds ratio are widely discussed in popular and technical statistical books, and I ran through the analysis in the earlier post. Relative risk destroys information.  It obscures what the risks were to begin with.  I usually point out that you can double your odds of winning the lottery if you buy two tickets instead of one. So why do people keep reporting relative risk?  One reason, of course, is that it makes your work look more significant.  But if you don’t report the absolute change in risk, you may be scaring people about risks that aren’t real. The nutritional establishment is not good at facing its critics but on this one, they admit that they don’t wish to contest the issue.
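The lottery point is easy to put in numbers. A small sketch (the jackpot odds are illustrative) shows how relative risk doubles while the absolute difference stays vanishingly small:

```python
# Two lottery tickets double your relative "risk" of winning while the
# absolute change remains negligible. Jackpot odds are illustrative.

def relative_risk(p_exposed, p_unexposed):
    return p_exposed / p_unexposed

def absolute_risk_difference(p_exposed, p_unexposed):
    return p_exposed - p_unexposed

p_one_ticket = 1 / 292_000_000
p_two_tickets = 2 / 292_000_000

print(relative_risk(p_two_tickets, p_one_ticket))             # 2.0 ("doubled!")
print(absolute_risk_difference(p_two_tickets, p_one_ticket))  # ~3.4e-9
```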

Nolo Contendere.

“To err is human, said the duck as it got off the chicken’s back”

 — Curt Jürgens in The Devil’s General

Having turned the media loose to scare the American public, Harvard now admits that the bloggers are correct.  The Health NewsBlog allocutes to having reported “relative risks, comparing death rates in the group eating the least meat with those eating the most. The absolute risks… sometimes help tell the story a bit more clearly. These numbers are somewhat less scary.” Why does Dr. Pan not want to tell the story as clearly as possible?  Isn’t that what you’re supposed to do in science? Why would you want to make it scary?

The figure from the Health News Blog:

Deaths per 1,000 people per year

Women:   1 serving unprocessed meat a week     7.0
         2 servings unprocessed meat a day     8.5

Men:     3 servings unprocessed meat a week   12.3
         2 servings unprocessed meat a day    13.0

Unfortunately, the Health Blog doesn’t actually calculate the absolute risk for you.  You would think that they would want to make up for Dr. Pan scaring you.  Let’s calculate the absolute risk.  It’s not hard. Risk is usually taken as probability, that is, the number of cases divided by the total number of participants.  Looking at the men, the risk of death with 3 servings per week is 12.3 cases per 1,000 people = 12.3/1000 = 0.0123 = 1.23%. Going to 14 servings a week (the units in the two columns of the table are different) gives 13/1000 = 0.0130 = 1.30%, so, for men, the absolute difference in risk is 1.30% - 1.23% = 0.07%, less than one-tenth of one percent.  Definitely less scary. In fact, not scary at all. Put another way, you would have to drastically change the eating habits (from 14 servings to 3) of about 1,429 men to save one life.  Well, it’s something.  Right? After all, for millions of people, it could add up.  Or could it?  We have to step back and ask what is predictable about a 1% risk. If a couple of guys in one or another of the groups got hit by cars, wouldn’t that throw the whole thing off? In other words, it means nothing.
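For readers who want to check the arithmetic, here is the calculation from the Health Blog numbers, including the number of men whose diets you would have to change to save one life:

```python
# Absolute risk and "number needed to treat" from the Health Blog table:
# men eating 3 servings of unprocessed meat a week vs 2 servings a day
# (i.e., 14 a week), in deaths per 1,000 people per year.

risk_low_meat = 12.3 / 1000    # 0.0123 -> 1.23%
risk_high_meat = 13.0 / 1000   # 0.0130 -> 1.30%

absolute_risk_reduction = risk_high_meat - risk_low_meat  # 0.0007, < 0.1%
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"{absolute_risk_reduction:.4f}")   # 0.0007
print(round(number_needed_to_treat))      # 1429 men per life saved
```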

Observational Studies Test Hypotheses but the Hypotheses Must be Testable.

It is commonly said that observational studies only generate hypotheses and that association does not imply causation.  Whatever the philosophical idea behind these statements, that is not exactly what is done in science.  There are an infinite number of observations you can make.  When you compare two phenomena, you usually have an idea in mind (however much it is unstated). As Einstein put it, “your theory determines the measurement you make.”  Pan, et al. were testing the hypothesis that red meat increases mortality.  If they had done the right analysis, they would have admitted that the test had failed and the hypothesis was not supported.  The association was very weak and the underlying mechanism was, in fact, not borne out.  In some sense, in science, there is only association. God does not whisper in our ear that the electron is charged. We make an association between an electron source and the response of a detector.  Association does not necessarily imply causality, however; the association has to be strong, and the underlying mechanism that made us make the association in the first place must make sense.

What is the mechanism that would make you think that red meat increased mortality? One of the most remarkable statements in the original paper:

“Regarding CVD mortality, we previously reported that red meat intake was associated with an increased risk of coronary heart disease2, 14 and saturated fat and cholesterol from red meat may partially explain this association.  The association between red meat and CVD mortality was moderately attenuated after further adjustment for saturated fat and cholesterol, suggesting a mediating role for these nutrients.” (my italics)

This bizarre statement (that saturated fat played a role in increased risk because it reduced risk) was morphed in the Harvard Newsletter’s plea bargain to “The authors of the Archives paper suggest that the increased risk from red meat may come from the saturated fat, cholesterol, and iron it delivers;” the blogger forgot to add “…although the data show the opposite.” Reference (2) cited above had the conclusion that “Consumption of processed meats, but not red meats, is associated with higher incidence of CHD and diabetes mellitus.” In essence, the hypothesis is not falsifiable: any association at all will be accepted as proof. The conclusion can be accepted only if you do not look at the data.

The Data

In fact, the data are not available. The individual data points for each person’s red meat intake are grouped into quintiles (broken up into five groups), so it is not clear what the individual variation is and therefore what your real expectation of actually living longer with less meat is.  Quintiles are some kind of anachronism, presumably from a period when computers were expensive and it was hard to print out all the data (or, at least, a representative sample).  If the data were really shown, it would be possible to recognize their shotgun quality: the results are all over the place and, whatever the statistical correlation, it is unlikely to be meaningful in any real-world sense.  But you can’t even see the quintiles, at least not the raw data. The outcome is corrected for all kinds of things: smoking, age, etc.  This might actually be a conservative approach (the raw data might show more risk) but only the computer knows for sure.

Confounders

“…mathematically, though, there is no distinction between confounding and explanatory variables.”

  — Walter Willett, Nutritional Epidemiology, 2nd edition.

You make a lot of assumptions when you carry out a “multivariate adjustment for major lifestyle and dietary risk factors.”  Right off, you assume that the parameter you want to look at — in this case, red meat — is the one that everybody wants to look at, and that the other factors can be subtracted out. However, the process of adjustment is symmetrical: a study of the risk of red meat corrected for smoking might alternatively be described as a study of the risk from smoking corrected for the effect of red meat. Given that smoking is an established risk factor, it is unlikely that the odds ratio for meat is even in the same ballpark as what would be found for smoking. The figure below shows how risk factors follow the quintiles of meat consumption.  If the quintiles had been derived from the factors themselves, we would have expected even better association with mortality.

The key assumption is that there are many independent risk factors which contribute in a linear way but, in fact, if they interact, the assumption is not appropriate.  You can correct for “current smoker,” but, biologically speaking, you cannot correct for the effect of smoking on an increased response to otherwise harmless elements in meat, if there actually were any.  And, as pointed out before, red meat on a sandwich may be different from red meat on a bed of cauliflower puree.
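The symmetry of adjustment is easy to demonstrate mechanically. The sketch below uses the Mantel-Haenszel odds ratio, a standard stratified-adjustment method, with invented counts for a single population cross-classified by meat intake and smoking; “meat adjusted for smoking” and “smoking adjusted for meat” are literally the same computation with the roles of exposure and stratifier swapped:

```python
# Mantel-Haenszel odds ratio: a standard way to "adjust" an exposure for
# a stratifying variable. All counts are invented for illustration.

def mantel_haenszel_or(strata):
    """strata: list of 2x2 tables (a, b, c, d) where a = exposed cases,
    b = exposed non-cases, c = unexposed cases, d = unexposed non-cases."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# One invented population, cut two ways.
# Meat as the exposure, stratified by smoking status:
meat_adjusted_for_smoking = [(30, 70, 20, 80),   # smokers
                             (10, 90, 8, 92)]    # non-smokers
# Smoking as the exposure, stratified by meat intake (same people):
smoking_adjusted_for_meat = [(30, 70, 10, 90),   # meat eaters
                             (20, 80, 8, 92)]    # non-meat eaters

print(round(mantel_haenszel_or(meat_adjusted_for_smoking), 2))   # 1.57
print(round(mantel_haenszel_or(smoking_adjusted_for_meat), 2))   # 3.39
```

With these invented counts, the adjusted odds ratio for smoking dwarfs the one for meat, which is the point of the prose: calling it a "red meat study corrected for smoking" rather than the reverse is a choice, not something the arithmetic dictates.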

This is the essence of it.  The underlying philosophy of this type of analysis is “you are what you eat.” The major challenge to this idea is that carbohydrates, in particular, control the response to other nutrients but, in the face of the plea of nolo contendere,  it is all moot.

Who paid for this and what should be done.

We paid for it. Pan, et al. was funded in part by six NIH grants.  (No wonder there is no money for studies of carbohydrate restriction).  It is hard to believe, given all the flaws pointed out here and, in the end, admitted by the Harvard Health Blog and others, that this was subject to any meaningful peer review.  A plea of no contest does not imply negligence or intent to do harm but something is wrong. The clear attempt to influence the dietary habits of the population is not justified by an absolute risk reduction of less than one-tenth of one percent, especially given that others have made the case that some part of the population, particularly the elderly, may not get adequate protein. The need for an oversight committee of impartial scientists is the most important conclusion from Pan, et al.  I will suggest it to the NIH.

April 1, 2012.  Piltdown, East Sussex, UK . Two prominent researchers, Drs. Ferdinand I. Charm and June E. Feigen of the University of Piltdown Center for Applied Nutrition (PCAN), submit the following guest review on a ground-breaking area of nutrition.

Nutrition is frequently accused of being a loose kind of science, not defining its terms and speaking imprecisely.  Complex carbohydrates, for example, still refer, in organic chemistry, to polysaccharides such as starches and for many years, it was absolute dogma in nutrition that complex carbohydrates were more slowly absorbed than simple sugars.  Science advances, however, and when measurements were actually made it was found not to be so simple, giving rise to the concept of the glycemic index.  The term “complex,” had since then been used loosely but has currently evolved to have a more precise meaning derived from mathematics, that is, as in complex numbers, having a real part and an imaginary part although the recent Guidelines from the USDA make it difficult to tell which is which.  In any case, the glycemic index has expanded to the concept of a glycemic load and now there is even more hope on the horizon.

Nutrition has borrowed a page from particle physics in the application of quantum chromodynamics. In the way of background, the discovery of the large number of subatomic particles and the need to classify them meant that designations had to go beyond charge and spin to include strangeness and the three flavors of quarks.  Ultimately, it was decided that quarks have an additional degree of freedom, called color and the strong interaction was identified as a color force.  A large amount of evidence supports this idea with interaction via the gluons.

Nutritional Chromodynamics.

A similar idea has arisen in nutrition and it is now clear that the more color, the better and extensive experimental work at CARN is currently under way (Figure 1). The recent CRAYOLA  study showed the value of spectral nutrient density. Support for the theory was summarized in a recent press release:

Blueberries were up there, the wild type being the best.

 “The wild blueberries are blue inside as well as blue outside. The ones we normally eat are sort of white inside. So there are more of the antioxidants in these all-blue blueberries.”

Along the line of color is good, cranberries were close behind as were blackberries.

 But what about vegetables?

 Dried red beans topped the list overall–red kidney and pinto beans were also in the top 10. But surprisingly, so are artichokes. “This is sort of interesting because they are not deeply colored, the inside, the part that we actually consume is white or very pale green but never the less they contain very large amounts of antioxidants.”

 There are nuts that did not make it into the top twenty but did have high enough content worthy of mention– pecans, hazelnuts and walnuts were the ones with the greatest antioxidant content. But the antioxidants are concentrated, so you need only a handful a day to get the amount you need.

 The problem here may be the bland coloration of the nuts. This has been jarring to some theorists, leading many to question whether the Standard Model of nutrition will last, or whether the highly abstract bean-string theory will ultimately prevail.  The recent identification of chocolate with the dark matter that fills the majority of the universe, however, has established the field of nutritional chromodynamics.  Still, critics point to the problem of red meat, one of the very few foods that actually decreased during the epidemic of obesity.  By applying the USDA Nutritional Guidelines, however, this result can be made to vanish.

Figure 1 Souper-Collider at CARN (Centre Alimentaire de Recherche Nucléaire).

Although this is pretty convincing, there is the uncertainty principle.   Because the outcome of a nutritional experiment and its support for the experimenter’s theory rarely commute, it is impossible to simultaneously measure outcome and whether the results mean anything.  Again borrowing from particle physics, there is the concept of the virtual particle that mediates interaction between other particles.  The evolving principle in the field of nutritional chromodynamics is the existence of the  mayon, the virtual particle that mediates the so-called Dietary Weak Interaction or DWI, as in “phytochemicals may prevent cancer.”

And then there is the matter of Quark. Most physicists know that Quark is the German word for sour cream and many physicists on tour in Germany have their picture taken in front of delicatessens selling Quark (at least those who don’t have their picture taken in front of a jewelry store).  Less widely known outside of the German-speaking countries is that Quark colloquially means nonsense or trash.  In any case, it’s pretty clear at this point that, the Tevatron results notwithstanding, blueberries and sour cream are the real Top Quark.

I thought that, for a change of pace, I would take a Mediterranean perspective.  The Mediterranean Diet is widely considered as an ideal diet since it is not explicitly low-fat (most of the time) while still allowing people to avoid saying low-carbohydrate which is not fashionable in many circles.  At the end of this post, however, I have included a couple of recipes from Judy Barnes Baker’s new cookbook, Nourished; a Cookbook for Health, Weight Loss, and Metabolic Balance.  For general health, Mediterranean diets have the advantage that nobody is really sure what they are and hence there are no long term trials of the type that makes low-fat diets look so bad, as in the Women’s Health Initiative.

Tournedos Rossini

Start with Gioachino Rossini.  It is generally known that his life as a composer included significant time for food. He retired at a relatively early age and devoted the rest of his life to cooking and eating. (William Tell was his last opera). Rossini said that he had only cried twice as an adult. The first time was when he heard Paganini play the violin and the second, when a truffled turkey fell in the water at a boating party.

 

Because his later life was more or less in seclusion, there is some confusion about his gastronomic experiences.  It is not even clear whether Tournedos Rossini was made for him or by him.  In fact it is not even clear where the name Tournedos comes from.  Derived from tourner en dos, turning to the back, it may refer to the method of cooking or possibly that somebody had to turn their back during the preparation so as not to let anyone see the secret of the final sauce.  The recipe, although simple in outline, has expensive ingredients and the final sauce will determine the quality of the chef. It simply involves frying a steak and then putting a slab of pate de foie gras with truffles on top. The sauce is based on a beef reduction. More at Global Gourmet.

  1. Sauté 4 center-cut filets mignons (6 ounces each, chain muscle removed) in 2 tablespoons (30 milliliters) clarified butter or vegetable oil on both sides until rare.
  2. Remove excess fat with a paper towel and place on heated plates.
  3. Place warm pate de foie gras slices on each tournedo.
  4. Cover with Périgueux Sauce:

Bring 1-1/2 cups (375 milliliters) of demi-glace to a slow simmer. Add 5 tablespoons (75 milliliters) of truffle essence and 2 ounces (50 grams) of either chopped or sliced truffles. Off the heat, cover with a tight-fitting lid and allow the truffles to infuse into the sauce for at least 15 minutes. (The sauce using truffles sliced into shapes rather than pieces is called Périgourdine).

  5. Finish with a little truffle butter.

Lardo di Colonnata

Not really a make-at-home item, this traditional creation from Tuscany captures the care in processing  that makes Italian food famous.  The original curing method supposedly goes back to the year 1000, and has been handed down from generation to generation.  The lard, of course, comes from pigs that have not undergone the genetic transformation that American pigs have.  In any case, you will need marble tubs which you should keep in the basement assuming that there are no caves in your neighborhood.  You rub the tubs with garlic and then layer the pork lard and cover with brine, add sea salt and spices and herbs. You continue with additional layers until the tub is full and then cover with a wooden lid. Curing time is about 6 to 10 months.

Greek Barbecue

As described on one of the Greek food sites, “anyone visiting Greece would wonder exactly what is meant by the Mediterranean diet for while those of us outside the Med have been eating more whole grains, extra virgin olive oil and fresh vegetables…. as the Greeks become more affluent they eat more meat.” I haven’t been in Greece for many years but I remember quite a bit of meat then. Of course, affluence is a sometime thing but the trend, as in other countries, is for festive holiday foods to be increasingly available all year round.  The most popular food for Easter is whole lamb roasted on a spit.  The recipe is simple, if not convenient for the small family: “You will need 1 whole lamb, skinned and gutted…”  Seasoning can be simple salt and pepper, or baste with ladolemono, a mixture of lemon juice, olive oil and oregano.

As the site points out, lamb on the spit “is especially popular [at Easter] because it follows 40 days of fasting for lent and people are definitely ready for some meat, though not everyone fasts the entire forty days.” This reminds me of a little-known angle on the Seven Countries study.

Ancel Keys auf Naxos

The idea of a Mediterranean diet derives, in some way, from Ancel Keys’s Seven Countries study. He discovered that the two countries with the highest consumption of fat had the lowest incidence of cardiovascular disease (Crete) and the highest (Finland), and he attributed this to the type of fat: olive oil for Crete and animal fat for Finland.  It was later pointed out that there were large differences in CVD between different areas of Finland that had the same diet.  This information was ignored by Keys, who was also a pioneer in this approach to conflicting data.  Another of the rarely cited responses to the Seven Countries study was a letter written by Katerina Sarri and Anthony Kafatos of the University of Crete and published in the journal Public Health Nutrition: 8(6), 666 (2005):

“In the December 2004 issue of your journal…Geoffrey Cannon referred to … the fact that Keys and his colleagues seemed to have ignored the possibility that Greek Orthodox Christian fasting practices could have influenced the dietary habits of male Cretans in the 1960s. For this reason, we had a personal communication with Professor Christos Aravanis, who was responsible for carrying out and following up the Seven Countries Study in Greece. Professor Aravanis confirmed that, in the 1960s, 60% of the study participants were fasting during the 40 days of Lent, and strictly followed all fasting periods of the church according to the Greek Orthodox Church dietary doctrines. These mainly prescribe the periodic abstention from meat, fish, dairy products, eggs and cheese, as well as abstention from olive oil consumption on certain Wednesdays and Fridays….”

“this was not noted in the study, and no attempt was made to differentiate between fasters and non-fasters. In our view this was a remarkable and troublesome omission.”

Kokoretsi.

Leopold Bloom ate with relish the inner organs of beasts and fowls. He liked thick giblet soup, nutty gizzards, a stuffed roast heart, liver slices fried with crustcrumbs, fried hencod’s roes. Most of all he liked grilled mutton kidneys which gave to his palate a fine tang of faintly scented urine.

– James Joyce, Ulysses.

Along with Greek barbecue, it is traditional at Easter to serve kokoretsi, which is made from the internal organs of the lamb. Liver, spleen, heart, and glands are threaded onto skewers along with the fatty membrane from the lamb intestines. When the skewer is full, the lamb intestines are wrapped around the whole creation. It is then barbecued over low heat for about 3-5 hours.

 

One of the regrettable aspects of the decline in food quality in the United States is the general disappearance of organ meats, although the Paleo movement may help with this.  Organ meats were once very popular; the quotation above is probably the second most widely quoted passage from James Joyce’s Ulysses. Because of various ethnic influences, they were probably more popular in New York than in America (which begins somewhere in New Jersey).  I found Jimmy Moore’s confrontation with beef tongue quite remarkable in that tongue, in its corned form (like corned beef), was once a staple of my diet.  When I was in grade school, there were many weeks where I would bring tongue sandwiches on Silvercup bread for lunch every day.  Silvercup, made in Queens, was the New York version of Wonder Bread. The Silvercup sign is still a fixture of the New York landscape; it is now the site of Silvercup Studios, the major film and television production company that kept the name (and the sign) when the bakery folded and the studio bought the building in 1983. (You name the TV show, it was probably produced at Silvercup).

Of course, everybody draws the line somewhere. Although I used to eat with my friends at Puglia, the Little Italy restaurant that specialized in whole sheep’s head, I passed on this delicacy mostly because of the eyeballs.  Also, although you gotta’ love the euphemism Rocky Mountain Oysters, bull testicles don’t do it for me, at least if I know for sure in advance. (I don’t really mind, in retrospect, if the folk-myths about the tacos outside the bullring in Mexico City were really true).

Etymology of Food Words

Whether it is the steak or the cook whose back is turned in Tournedos, it is generally difficult to find the etymology of food words, although some are obvious. The conversion of Welsh Rabbit to Welsh Rarebit is surely an attempt to be more politically correct and avoid Welsh profiling.  One disagreement that I remember from way back when I was in college is now settled. There used to be many ideas about the origin of the word pumpernickel.  One of my favorites at the time was that Napoleon had said that it was “pain pour Nicole” (his horse). Great, but not true; it is now agreed that it comes from the German pumpern, to fart, and Nickel, meaning goblin, along the lines of Saint Nick for Santa Claus.  So pumpernickel means Devil’s Fart, presumably due to the effect of the unprocessed grain that gives it its earthy quality.  Which reminds me of the ADA’s take on fiber that I quoted in an earlier post: “it is important that you increase your fiber intake gradually, to prevent stomach irritation, and that you increase your intake of water and other liquids, to prevent constipation.”  Foods with fiber “have a wealth of nutrition, containing many important vitamins and minerals.” In fact, fiber “may contain nutrients that haven’t even been discovered yet!” (their exclamation point).

In Brooklyn, the Mediterranean diet means Italian sausage, largely from Southern Italy.  I had always assumed that Soppresata (pronounced, as in Naples, without the final vowel) was so-called because it was super-saturated with fat, but I have been unable to confirm this; since first writing this post, Italian friends have suggested that it comes from Sop-pressata, that is “pressed on,” but this is also unconfirmed.  There are many varieties but supposedly the best is from Calabria.  For something like this, with so many varieties which each cook is sure is the best, there is no exact recipe, but you can get started with this from About.com Italian Food.

 6.6 pounds (3 kg) of pork meat — a combination of loin and other lean cuts

1 pound (500 g) lard (a block of fat)

1 pound (500 g) pork side, the cut used to make bacon

Salt, pepper

Cloves, garlic and herbs (rosemary, lemon peel, parsley, etc.)

1/2 cup grappa (I think you could also use brandy if you want)

The basic idea is to remove all the gristle and chop the meat with the lard and the pork side. About.com recommends a meat grinder but I suspect that the knife blade of a food processor is better.  Then wash the casing well in vinegar, dry it thoroughly, and rub with a mixture of well-ground salt and pepper. “Shake away the excess, fill the casing, pressing down so as to expel all air, close the casing, and tie the salami with string. Hang for 2-3 days in a warm place, and then for a couple of months in a cool, dry, drafty spot and the sopressata is ready.”

At exactly what moment these simple, natural ingredients turn into processed red meat is unknown.

Simple Mediterranean

I’ve included two recipes from Judy Barnes Baker’s new book, Nourished; a Cookbook for Health, Weight Loss, and Metabolic Balance.  The book is currently in press; publication will be announced on her website.  For very simple Mediterranean, she suggested the following from The Silver Spoon. Translated from Il cucchiaio d’argento, published in 1950 by Editoriale Domus, the back cover describes it as “the bible of authentic Italian cooking and Italy’s best-selling cookbook for the last fifty years.”

Eggs En Cocotte with Bacon Fat

Serves 4

4 small slices bacon fat

4 tablespoons heavy cream

4 eggs

2 tablespoons Parmesan cheese, freshly grated

 Preheat the oven to 350ºF if you wish to bake the eggs. Parboil the bacon fat in boiling water for about 1 minute, then drain. Put 1 tablespoon cream and a slice of bacon fat in each of four ramekins, break an egg into each and sprinkle with the Parmesan. Place the ramekins in a roasting pan, add boiling water to come about halfway up the sides and bake for 6-8 minutes or until the egg whites are lightly set. Alternatively, place the roasting pan over low heat for 6-8 minutes. The combination of bacon fat and cream—a strong savory taste and a milder flavor—gives the eggs a very delicate flavor.

Recipes from: Nourished; a Cookbook for Health, Weight Loss, and Metabolic Balance  (Judy Barnes Baker is the author although I and others are mistakenly listed by Amazon as co-authors).

Καλή όρεξη (good appetite!)

Asher Peres was a physicist, an expert in information theory, who died in 2005 and is remembered for his scientific contributions as well as for his iconoclastic wit and ironic aphorisms. One of his witticisms was that “unperformed research has no results.”  Peres had undoubtedly never heard of intention-to-treat (ITT), the strange statistical method that has appeared recently, primarily in the medical literature.  According to ITT, the data from a subject assigned at random to an experimental group must be included in the reported outcome data for that group even if the subject does not follow the protocol, or even if they drop out of the experiment.  At first hearing, the idea is counter-intuitive if not completely idiotic (why would you include people who are not in the experiment in your data?), suggesting that a substantial burden of proof rests with those who want to employ it.  No such obligation is usually met, and particularly in nutrition studies, such as comparisons of isocaloric weight-loss diets, ITT is frequently used with no justification and is sometimes demanded by reviewers.   Not surprisingly, there is a good deal of controversy on this subject.  Physiologists or chemists, hearing this description, usually walk away shaking their heads or immediately come up with one or another obvious reductio ad absurdum, e.g., “You mean, if nobody takes the pill, you report whether or not they got better anyway?” That’s exactly what it means.

On the naive assumption that some people really didn’t understand what was wrong with ITT (I’ve been known to make a few elementary mistakes in my life), I wrote a paper on the subject.  It received negative, actually hostile, reviews from two public health journals; I include an amusing example at the end of this post.  I even got substantial grief from Nutrition & Metabolism, where I was the editor at the time, but where it was finally published.  The current post is based on that paper and I will provide a couple of interesting cases from the medical literature.  In the next post I will discuss a quite remarkable new instance of the abuse of common sense that is the major alternative to ITT: Foster’s two-year study of low-carbohydrate diets.

To put a moderate spin on the problem, there is nothing wrong with ITT if you explicitly say what the method shows — the effect of assigning subjects to an experimental protocol; the title of my paper was “Intention-to-treat. What is the question?” If you are very circumspect about that question, then there is little problem. It is common, however, for the Abstract of a paper to state correctly that patients “were assigned to a diet,” but by the time the Results are presented, the independent variable has become not “assignment to the diet” but “the diet,” which most people would assume means what people ate, rather than what they were told to eat. Caveat lector. My paper was a kind of overkill and I made several different arguments, but the common sense argument gets to the heart of the problem in a practical way. I’ll describe that argument and also give a couple of real examples.

Common sense argument against intention-to-treat

Consider an experimental comparison of two diets in which there is a simple, discrete outcome, e.g. a threshold amount of weight lost or remission of an identifiable symptom. Patients are randomly assigned to two different diets, diet group A or diet group B, and a target of, say, 5 kg weight loss is considered success. As shown in the table below, in group A, half of the subjects are able to stay on the diet but, for whatever reason, half are not. The half of the patients in group A who did stay on the diet, however, were all able to lose the target 5 kg. In group B, on the other hand, everybody is able to stay on the diet but only half are able to lose the required amount of weight. An ITT analysis shows no difference in the two outcomes, while looking only at those people who followed the diet shows 100% success for diet A. This is one of the characteristics of ITT: it always makes the better diet look worse than it is.

                                         Diet A           Diet B
Compliance (of 100 patients)             50               100
Success (reached target)                 50               50
ITT success                              50/100 = 50%     50/100 = 50%
“Per protocol” (followed diet) success   50/50 = 100%     50/100 = 50%

Now, you are the doctor. With such data in hand, should you advise a patient: “well, the diets are pretty much the same; it’s largely up to you which you choose,” or, looking at the raw data (both compliance and success), should the recommendation be: “Diet A is much more effective than diet B, but people have trouble staying on it. If you can stay on diet A, it will be much better for you, so I would encourage you to see if you can find a way to do so.” Which makes more sense? You’re the doctor.
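The two analyses of the hypothetical table can be sketched in a few lines of code (the numbers are the made-up ones from the table, not from any real trial):

```python
# Hypothetical diet trial from the table: 100 patients assigned to each diet.
def itt_success(successes, assigned):
    """Intention-to-treat: denominator is everyone assigned to the group."""
    return successes / assigned

def per_protocol_success(successes, compliers):
    """Per-protocol: denominator is only those who followed the diet."""
    return successes / compliers

# Diet A: 50 of 100 stayed on the diet, and all 50 of them reached the target.
# Diet B: all 100 stayed on the diet, but only 50 reached the target.
print("Diet A:", itt_success(50, 100), per_protocol_success(50, 50))   # 0.5, 1.0
print("Diet B:", itt_success(50, 100), per_protocol_success(50, 100))  # 0.5, 0.5
```

ITT sees the two diets as identical; per-protocol shows diet A succeeding for everyone who actually followed it.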

I made several arguments trying to explain that there are two factors, only one of which (whether the diet works) is clearly due to the diet. The other (whether you follow the diet) is under the control of other factors (whether WebMD tells you that one diet or the other will kill you, whether the evening news makes you lose your appetite, etc.). I even dragged in a geometric argument because Newton had used one in the Principia: “a 2-dimensional outcome space where the length of a vector tells how every subject did…. ITT represents a projection of the vector onto one axis, in other words collapses a two-dimensional vector to a one-dimensional vector, thereby losing part of the information.” Pretentious? Moi?

Why you should care.  Case I. Surgery or Medicine?

Does your doctor actually read these academic studies using ITT? One can only hope not. Consider the analysis by Newell of the Coronary Artery Bypass Surgery (CABS) trial. This paper is astounding for its blanket, tendentious insistence on what is correct, without any logical argument. Newell considers that the method of

 “the CABS research team was impeccable. They refused to do an ‘as treated’ analysis: ‘We have refrained from comparing all patients actually operated on with all not operated on: this does not provide a measure of the value of surgery.’”

Translation: results of surgery do not provide a measure of the value of surgery. So, in the CABS trial, patients were assigned to Medicine or Surgery. The actual method used and the outcomes are shown in the table below. Intention-to-treat analysis was, as described by Newell, “used, correctly.” Looking at the table, you can see that a 7.8% mortality was found in those assigned to receive medical treatment (29 people out of 373 died) and a 5.3% mortality (21 deaths out of 395) for assignment to surgery. If you look at the outcomes of each modality as actually used, it turns out that medical treatment had a 9.5% (33/349) mortality rate compared with 4.1% (17/419) for surgery, an analysis that Newell says “would have wildly exaggerated the apparent value of surgery.”

Survivors and deaths after allocation to surgery or medical treatment

                        Allocated medicine          Allocated surgery
                      Received    Received        Received    Received
                      surgery     medicine        surgery     medicine
Survived 2 years         48          296             354          20
Died                      2           27              15           6
Total                    50          323             369          26

Common sense suggests that appearances are not deceiving. If you were one of the 33 − 17 = 16 people who were still alive, you would think that it was the potential report of your death that had been exaggerated. The thing that is under the control of the patient and the physician, and which is not a feature of the particular modality, is getting the surgery implemented. Common sense dictates that a patient is interested in the effect of surgery, not the effect of being told that surgery is good. The patient has a right to expect that, if they comply, the physician will avoid conditions where, as stated by Hollis, “most types of deviations from protocol would continue to occur in routine practice.” The idea that “Intention to treat analysis is … most suitable for pragmatic trials of effectiveness rather than for explanatory investigations of efficacy” assumes that practical considerations are the same everywhere and that any practitioner is locked into the same abilities, or lack of abilities, as the original experimenter.
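The counts in the table can be pooled either way; a short sketch (using only the numbers from the table above) reproduces both analyses:

```python
# (survived, died) counts from the CABS table, keyed by allocation and by
# treatment actually received.
allocated = {
    "medicine": {"surgery": (48, 2), "medicine": (296, 27)},
    "surgery":  {"surgery": (354, 15), "medicine": (20, 6)},
}

def rate(deaths, total):
    return deaths / total

# Intention-to-treat: pool by allocation, regardless of treatment received.
for arm, received in allocated.items():
    deaths = sum(d for (s, d) in received.values())
    total = sum(s + d for (s, d) in received.values())
    print(f"ITT, allocated {arm}: {rate(deaths, total):.1%}")

# As-treated: pool by treatment actually received, regardless of allocation.
for treatment in ("medicine", "surgery"):
    deaths = sum(allocated[arm][treatment][1] for arm in allocated)
    total = sum(sum(allocated[arm][treatment]) for arm in allocated)
    print(f"As-treated, received {treatment}: {rate(deaths, total):.1%}")
```

Running this gives the ITT figures of 7.8% (medicine) vs 5.3% (surgery) and the as-treated figures of 9.5% vs 4.1% quoted in the text.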

What is the take-home message? One general piece of advice that I would give, based on this discussion in the medical literature: don’t get sick.

Why you should care.  Case II. The effect of vitamin E supplementation

A clear-cut case of how off-the-mark ITT can be is a report on the value of antioxidant supplements. The Abstract of the paper concluded that “there were no overall effects of ascorbic acid, vitamin E, or beta carotene on cardiovascular events among women at high risk for CVD.” The study was based on an ITT analysis but, on the fourth page of the paper, it turns out that removing subjects due to

“noncompliance led to a significant 13% reduction in the combined end point of CVD morbidity and mortality… with a 22% reduction in MI …, a 27% reduction in stroke …, a 23% reduction in the combination of MI, stroke, or CVD death (RR (risk ratio), 0.77; 95% CI, 0.64–0.92 [P = .005]).”

The media universally reported the conclusion from the Abstract, namely that there was no effect of vitamin E. This conclusion is correct if you think that you can measure the effect of vitamin E without taking the pill out of the bottle. Does this mean that vitamin E is really of value? The data would certainly be accepted as valuable if the statistics were applied to a study of the value of replacing barbecued pork with whole grain cereal. Again, “no effect” was the answer to the question “what happens if you are told to take vitamin E,” but it still seems reasonable that the effect of a vitamin means the effect of actually taking the vitamin.

The ITT controversy

Advocates of ITT see its principles as established and may dismiss a common sense approach as naïve. The issue is not easily resolved; statistics is not axiomatic: there is no F=ma, there is no zeroth law.  A good statistics book will tell you in the Introduction that what we do in statistics is to try to find a way to quantify our intuitions. If this is not appreciated, and you do not go back to consideration of exactly what the question is that you are asking, it is easy to develop a dogmatic approach and insist on a particular statistic because it has become standard.

As I mentioned above, I had a good deal of trouble getting my original paper published, and one anonymous reviewer said that “the arguments presented by the author may have applied, maybe, ten or fifteen years ago.” This criticism reminded me of Molière’s Doctor in Spite of Himself:

Sganarelle is disguised as a doctor and spouts medical double-talk with phony Latin, Greek and Hebrew to impress the client, Geronte, who is pretty dumb and mostly falls for it but:

Geronte: …there is only one thing that bothers me: the location of the liver and the heart. It seemed to me that you had them in the wrong place: the heart is on the left side but the liver is on the right side.

Sganarelle: Yes. That used to be true, but we have changed all that, and medicine now uses an entirely new approach.

Geronte: I didn’t know that and I beg your pardon for my ignorance.

 In the end, it is reasonable that scientific knowledge be based on real observations. This has never before been thought to include data that was not actually in the experiment. I doubt that nous avons changé tout cela.

…the association has to be strong and the causality has to be plausible and consistent. And you have to have some reason to make the observation; you can’t look at everything. And experimentally, observation may be all that you have — almost all of astronomy is observational. Of course, the great deconstructions of crazy nutritional science — several from Mike Eades’ blog and Tom Naughton’s hysterically funny-but-true course in how to be a scientist — are still right on but, strictly speaking, it is the faulty logic of the studies and the whacko observations that are the problem, not simply that they are observational. It is the strength and reliability of the association that tells you whether causality is implied. Reducing carbohydrates lowers triglycerides. There is a causal link. You have to be capable of the state of mind of the low-fat politburo not to see this (for example, Circulation, May 24, 2011; 123(20): 2292–2333).

It is frequently said that observational studies are only good for generating hypotheses, but it is really the other way around. All studies are generated by hypotheses. As Einstein put it: your theory determines what you measure. I ran my post on the red meat story past April Smith, and her reaction was “why red meat? Why not pancakes?” which is exactly right. Any number of things can be observed. Once you pick, you have a hypothesis.

Where did the first law of thermodynamics come from?

Thermodynamics is an interesting case. The history of the second law involves a complicated interplay of observation and theory. The idea that there is an absolute limit to how efficient you can make a machine, and by extension that all real processes are necessarily inefficient, largely comes from the brain power of Carnot. He saw that you could not extract as work all of the heat you put into a machine. Clausius encapsulated it into the idea of entropy, as in my YouTube video.

©2004 Robin A. Feinman

The origins of the first law, the conservation of energy, are a little stranger. It turns out that it was described more than twenty years after the second law, and it has been attributed to several people, for a while to the German physicist von Helmholtz. These days, credit is given to a somewhat eccentric German physician named Julius Robert Mayer. Although trained as a doctor, Mayer did not like to deal with patients and was instead more interested in physics and religion, which he seemed to think were the same thing. He took a job as a shipboard physician on an expedition to the South Seas, since that would give him time to work on his main interests. It was in Jakarta, while treating an epidemic with the then-standard practice of bloodletting, that he noticed that the venous blood of the sailors was much brighter than in colder climates, as if “I had struck an artery.” He attributed this to a reduced need for the sailors to use oxygen for heat and, from this observation, somehow leapt to the grand principle of conservation of energy: that the total amount of heat, work, and any other forms of energy does not change but can only be interconverted. It is still unknown what kind of connections in his brain led him to this conclusion. The period (1848) corresponds to the point at which science separated from philosophy. Mayer seems to have had one foot in each world and described things in the following incomprehensible way:

  • If two bodies find themselves in a given difference, then they could remain in a state of rest after the annihilation of [that] difference if the forces that were communicated to them as a result of the leveling of the difference could cease to exist; but if they are assumed to be indestructible, then the still persisting forces, as causes of changes in relationship, will again reestablish the original present difference.

(I have not looked for it, but one can only imagine what the original German was like.) Warmth Disperses and Time Passes: The History of Heat, von Baeyer’s popular book on thermodynamics, describes the ups and downs of Mayer’s life, including the deaths of three of his children which, in combination with the rejection of his ideas, led to hospitalization but ultimately to recognition and knighthood. Surely this was a great observational study although, as von Baeyer put it, it did require “the jumbled flashes of insight in that sweltering ship’s cabin on the other side of the world.”

It is also true that association does imply causation but, again, the association has to have some impact and the proposed causality has to make sense. In some ways, purely observational experiments are rare. As Pasteur pointed out, even serendipity is favored by preparation. Most observational experiments must be a reflection of some hypothesis. Otherwise you’re wasting taxpayers’ money; a kiss of death on a grant application is to imply that “it would be good to look at.…” You always have to have something in mind. The great observational studies like the Framingham Study are bad because they have no null hypothesis. When the Framingham study first showed that there was no association between heart disease and dietary total fat, saturated fat, or cholesterol, the hypothesis was quickly defended. The investigators were so tied to a preconceived hypothesis that there was hardly any point in making the observations.

In fact, a negative result is always stronger than one showing consistency; consistent sunrises will go by the wayside if the sun fails to come up once. It is the lack of an association between the decrease in fat consumption and the epidemic of obesity and diabetes that is so striking. The figure above shows that the increase in carbohydrate consumption is consistent with the causal role of dietary carbohydrate in creating anabolic hormonal effects and with the poor satiating effects of carbohydrates — almost all of the increase in calories during the epidemic of obesity and diabetes has been due to carbohydrates. However, this observation is not as strong as the lack of an identifiable association of obesity and diabetes with fat consumption. It is the 14% decrease in the absolute amount of saturated fat for men that is the problem. If the decrease in fat were associated with decreases in obesity, diabetes, and cardiovascular disease, there is little doubt that the USDA would be quick to identify causality. In fact, whereas you can find the occasional low-fat trial that succeeds, if the diet-heart hypothesis were as described, they should not fail. There should not be a single Women’s Health Initiative, there should not be a single Framingham study, not one.

Sometimes more association would be better. Take intention-to-treat. Please. In this strange statistical idea, if you assign a person to a particular intervention, diet or drug, then you must include the outcome data (weight loss, change in blood pressure) for that person even if they do not comply with the protocol (go off the diet, stop taking the pills). Why would anybody propose such a thing, never mind actually insist on it, as some medical journals and granting agencies do? When you actually ask people who support ITT, you don’t get coherent answers. They say that if you just look at per-protocol data (only from people who stayed in the experiment), then by excluding the drop-outs you would introduce bias, but when you ask them to explain that, you get something along the lines of Darwin and the peas growing on the wrong side of the pod. The basic idea, if there is one, is that the reason people gave up on their diet or stopped taking the pills was an inherent feature of the intervention: it made them sick, drowsy or something like that. While this is one possible hypothesis and should be tested, there are millions of others — the doctor was subtly discouraging about the diet, or the participants were like some of my relatives who can’t remember where they put their pills, or the diet book was written in Russian, or the diet book was not written in Russian, etc. I will discuss ITT in a future post but, for the issue at hand: if you do a per-protocol analysis, you will observe what happens to people when they stay on their diet, and you will have an association between the content of the diet and performance. With an ITT analysis, you will observe what happens when people are told to follow a diet, and you will have an association between assignment to a diet and performance. Both are observational experiments with an association between variables, but they have different likelihoods of providing a sense of causality.

“Dost thou think, because thou art virtuous, there shall be no more cakes and ale?”

– William Shakespeare, Twelfth Night.

Experts on nutrition are like experts on sexuality. No matter how professional they are in general, in some way they are always trying to justify their own lifestyle. They share a tendency to think that their own lifestyle is the one that everybody else should follow, and they are always eager to save us from our own sins, sexual or dietary. The new puritans want to save us from red meat. It is unknown whether Michael Pollan’s In Defense of Food was reporting the news or making the news, but its coupling of not eating too much and not eating meat is common. More magazine’s take on saturated fat was very sympathetic to my own point of view, and I probably shouldn’t complain that tacked on at the end was the conclusion that “most physicians will probably wait for more research before giving you carte blanche to order juicy porterhouse steaks.” I’m not sure that my physician knows about the research that already exists, or that I am waiting for his permission on a zaftig steak.

“Daily Red Meat Raises Chances Of Dying Early” was the headline in the Washington Post last year. This scary story was accompanied by the photo below. The gloved hand slicing roast beef with a scalpel-like instrument was probably intended to evoke CSI autopsy scenes although, to me, the beef still looked pretty good, if slightly over-cooked. I don’t know the reporter, Rob Stein, but I can’t help feeling that we’re not talking Woodward and Bernstein here. For those too young to remember Watergate, the reporters from the Post were encouraged to “follow the money” by Deep Throat, their anonymous whistle-blower. A similar character, claiming to be an insider and identifying himself or herself as “Fat Throat,” has been sending intermittent emails to bloggers, suggesting that they “follow the data.”

The Post story was based on a research report, “Meat Intake and Mortality,” published in the medical journal Archives of Internal Medicine by Sinha and coauthors. It got a lot of press and had some influence, and recently re-surfaced in the Harvard Men’s Health Watch in a two-part article called, incredibly enough, “Meat or beans: What will you have?” (The Health Watch does admit that “red meat is a good source of iron and protein and…beans can trigger intestinal gas” and that they are “very different foods”), but somehow it is assumed that we can substitute one for the other.

Let me focus on Dr. Sinha’s article and try to explain what it really says.  My conclusion will be that there is no reason to think that any danger of red meat has been demonstrated and I will try to point out some general ways in which one can deal with these kinds of reports of scientific information.

A few points to remember first. During the forty years that we describe as the obesity and diabetes epidemic, protein intake has been relatively constant; almost all of the increase in calories has been due to an increase in carbohydrates; fat, if anything, went down. During this period, consumption of almost everything increased. Wheat and corn, of course, went up. So did fruits and vegetables and beans. The two things whose consumption went down were red meat and eggs. In other words, there is some a priori reason to think that red meat is not a health risk and that the burden of proof should be on demonstrating harm. Looking ahead, the paper, like analysis of the population data, will rely entirely on associations.

The conclusion of the study was that “Red and processed meat intakes were associated with modest increases in total mortality, cancer mortality, and cardiovascular disease mortality.” Now, a modest increase in mortality is a fairly big step down from “Dying Early,” and surely a step down from the editorial quoted in the Washington Post. Written by Barry Popkin, professor of global nutrition at the University of North Carolina, it said: “This is a slam-dunk to say that, ‘Yes, indeed, if people want to be healthy and live longer, consume less red and processed meat.’” Now, I thought that the phrase “slam-dunk” was pretty much out after George Tenet, head of the CIA, told President Bush that the case for Weapons of Mass Destruction in Iraq was a slam-dunk. I found an interview after his resignation quite disturbing; when the director of the CIA can’t lie convincingly, we are in big trouble. And quoting Barry Popkin is like getting a second opinion from a member of the “administration.” It’s definitely different from investigative reporting like, you know, reading the article.

So what does the research article really say?  As I mentioned in my blog on eggs, when I read a scientific paper, I look for the pictures. The figures in a scientific paper usually make clear to the reader what is going on — that is the goal of scientific communication.  But there are no figures.  With no figures, Dr. Sinha’s research paper has to be analyzed for what it does have: a lot of statistics.  Many scientists share Mark Twain’s suspicion of statistics, so it is important to understand how it is applied.  A good statistics book will have an introduction that says something like “what we do in statistics, is try to put a number on our intuition.”  In other words, it is not really, by itself, science.  It is, or should be, a tool for the experimenter’s use. The problem is that many authors of papers in the medical literature allow statistics to become their master rather than their servant: numbers are plugged into a statistical program and the results are interpreted in a cut-and-dried fashion with no intervention of insight or common sense. On the other hand, many medical researchers see this as an impartial approach. So let it be with Sinha.

What were the outcomes? The study population of 322,263 men and 223,390 women was broken up into five groups (quintiles) according to meat consumption, the highest taking in about 7 times as much as the lowest group (big differences). The Harvard News Letter says that the men who ate the most red meat had a 31% higher death rate than the men who ate the least meat. This sounds serious, but does it tell you what you want to know? In the media, scientific results are almost universally reported this way, but it is entirely misleading. To be fair, the Abstract of the paper itself reported this as a hazard ratio of 1.31 which, while still misleading, is less prejudicial. Hazard ratio is a little bit complicated but, in the end, it is similar to the odds ratio or risk ratio, which is pretty much what you think: an odds ratio of 2 means you’re twice as likely to win with one strategy as with the other. A moment’s thought tells you that this is not good information, because you can get an odds ratio of 2 (that is, you can double your chances of winning the lottery) by buying two tickets instead of one. You need to know the actual odds of each strategy. Taking the ratio hides information. Do reporters not know this? Some have told me they do, but that their editors are trying to gain market share and don’t care. Let me explain it in detail. If you already understand, you can skip the next paragraph.
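To see why a ratio by itself is uninformative, here is a minimal sketch of the lottery point (the odds are invented purely for illustration):

```python
# Invented lottery odds, purely for illustration.
one_ticket = 1 / 10_000_000    # chance of winning with one ticket
two_tickets = 2 / 10_000_000   # chance with two tickets

odds_ratio = two_tickets / one_ticket      # 2.0: "doubles your chances"
absolute_gain = two_tickets - one_ticket   # one in ten million: still hopeless

print(odds_ratio, absolute_gain)
```

An odds ratio of 2 sounds dramatic; the absolute gain of one chance in ten million does not.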

A trip to Las Vegas

Taking the hazard ratio as more or less the same as the odds ratio or risk ratio, let’s consider applying odds. We are in Las Vegas and it turns out that there are two blackjack tables and, for some reason (different number of decks or something), the odds are different at the two tables. Table 1 pays out on average once every 100 hands. Table 2 pays out once in 76 hands. The odds are 1/100, or one in a hundred, at the first table and 1/76 at the second. The odds ratio is, obviously, the ratio of the two odds, or 1/76 divided by 1/100, or about 1.31. (The odds ratio would be 1 if there were no difference between the two tables.)

Right off, something is wrong: if you were given only the odds ratio, you would have lost some important information. The odds ratio tells you that one gambling table is definitely better than the other, but you need additional information to find out that the odds aren’t particularly good at either table: technically, information about the absolute risk was lost.

So knowing the odds ratio by itself is not much help. But since we know the absolute risk at each table, does that help you decide which table to play? Well, it depends on who you are. For the guy who is at the blackjack table when you go up to your room to sleep and who is still sitting there when you come down for the breakfast buffet, things are going to be much better at the second table. He will play hundreds of hands, and the better odds ratio of 1.31 will pay off in the long run. Suppose, however, that you are somebody who will take the advice of my cousin the statistician, who says to just go and play one hand for the fun of it, just to see if the universe really loves you (that’s what gamblers are really trying to find out). You’re going to play the hand and then, win or lose, you are going to go do something else. Does it matter which table you play at? Obviously it doesn’t. The odds ratio doesn’t tell you anything useful, because you know that your chances of winning are pretty slim either way.
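A quick simulation makes the all-night-gambler vs one-hand distinction concrete. The payout probabilities below are assumptions: 1 in 100 and 1 in 76, chosen so their ratio (100/76 ≈ 1.32) is close to the paper’s hazard ratio of 1.31:

```python
import random

random.seed(0)

# Assumed payout probabilities; the ratio 100/76 is approximately 1.32.
P1, P2 = 1 / 100, 1 / 76

def wins(p, hands):
    """Count payouts over `hands` independent plays at a table paying with probability p."""
    return sum(random.random() < p for _ in range(hands))

# The all-night gambler: over many hands, the better table clearly pays more.
print(wins(P1, 100_000), wins(P2, 100_000))   # roughly 1000 vs 1300

# One hand for fun: at either table you almost certainly walk away with nothing.
print(wins(P1, 1), wins(P2, 1))
```

Over 100,000 hands the difference between the tables is unmistakable; over one hand it is invisible.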

Now going over to the red meat article: the hazard ratio (again, roughly the odds ratio) between high and low red meat intakes for all-cause mortality for men, for example, is 1.31 or, as they like to report in the media, a 31% higher risk of dying, which sounds pretty scary. But what is the absolute risk? To find that, we have to find the actual number of people who died in the high red meat quintile and the low-end quintile. This is easy for the low end: 6,437 people died out of a group of 64,452, so the odds of dying are 6,437/64,452, or just about 0.10, or 10%. It’s a little trickier for the high red meat consumers. There, 13,350 died. Again, dividing that by the number in that group, we find an absolute risk of 0.21, or 21%, which seems pretty high, and the absolute difference in risk is an increase of 10%, which still seems pretty significant. Or is it? In these kinds of studies, you have to ask about confounders, variables that might bias the results. Here, one is not hard to find: Table 1 reveals that the high red meat group had 3 times the number of smokers (not 31% more, but 3 times more). So the authors corrected the data for this and other effects (education, family history of cancer, BMI, etc.), which is how the final value of 1.31 was obtained. Since we know the absolute risk in the lowest red meat group, 0.10, we can calculate the risk in the highest red meat group, which will be 0.131. The absolute increase in risk from eating red meat, a lot more red meat, is then 0.131 − 0.10 = 0.031, or 3.1%, which is quite a bit less than we thought.
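The arithmetic in this paragraph can be checked in a few lines. The counts are those quoted above; applying the adjusted hazard ratio directly to the baseline risk is the same approximation used in the text:

```python
# Numbers quoted in the text from the Sinha study.
deaths_low, n_low = 6_437, 64_452
baseline_risk = deaths_low / n_low        # absolute risk in the lowest quintile, ~0.10

hazard_ratio = 1.31                       # adjusted ratio reported in the paper
risk_high = baseline_risk * hazard_ratio  # implied absolute risk in the highest quintile

absolute_increase = risk_high - baseline_risk
print(f"baseline {baseline_risk:.3f}, high quintile {risk_high:.3f}, "
      f"increase {absolute_increase:.3f}")   # about 0.100, 0.131, 0.031
```

A 31% relative increase collapses to about a 3% absolute increase once the baseline risk is put back in.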

Now, we can see that the odds ratio of 1.31 is not telling us much — and remember this is for big changes, like 6 or 7 times as much meat; doubling red meat intake (quintiles 1 and 2) leads to a hazard ratio of 1.07.  What is a meaningful odds ratio?  For comparison, the odds ratio for smoking vs not smoking for incidence of lung disease is about 22.

Well, 3.1% is not much, but it’s something. Are we sure? Remember that this is a statistical outcome, which means that some people in the high red meat group had lower risk, not higher risk. In other words, this is what is called statistically two-tailed; that is, the statistics reflect changes that go both ways. What is the danger in reducing meat intake? The data don’t really tell you that. Unlike cigarettes, where there is little reason to believe that anybody’s lungs really benefit from cigarette smoke (and any such statistics are due to random variation), we know that there are many benefits to protein, especially if it replaces carbohydrate in the diet; that is, the variation may be telling us something real. With odds ratios around 1.31 — again, a value of 1 means that there is no difference — you are almost as likely to benefit from adding red meat as from reducing it. The odds still favor things getting worse, but it really is a risk in both directions. You are at the gaming tables. You don’t get your chips back. If reducing red meat does not reduce your risk, it may increase it. So much for the slam dunk.

What about public health? Many people would say that for a single person red meat might not make a difference, but if the population reduced meat by half, we would save thousands of lives. The authors clearly want to do this. At this point, before you and your family take part in a big experiment to save the nation’s health statistics, you have to ask how strong the relations are. To understand the quality of the data, you must look for correlations that would not be expected. “There was an increased risk associated with death from injuries and sudden death with higher consumption of red meat in men but not in women.” The authors dismiss this because the numbers were smaller (343 deaths), but the whole study is about small differences, and it sounds like we are dealing with a good deal of randomness. Finally, the authors set out from the start to investigate red meat. To be fair, they also studied white meat, which was slightly beneficial. But what are we to compare the meat results to? Why red meat? What about potatoes? Cupcakes? Breakfast cereal? Are these completely neutral? If we ran these through the same computer, what would we see? And finally there is the elephant in the room: carbohydrate. Basic biochemistry suggests that a roast beef sandwich may have a different effect than roast beef in a lettuce wrap.

So I’ve given you the perspective of a biochemistry professor.  This was a single paper and surely not the worst, but I think it’s not really about science.  It’s about sin.
