
The attack was quite sudden, although it appeared to have been planned for many years. The paper was published last week (Augustin LS, Kendall CW, Jenkins DJ, Willett WC, Astrup A, Barclay AW, Bjorck I, Brand-Miller JC, Brighenti F, Buyken AE et al: Glycemic index, glycemic load and glycemic response: An International Scientific Consensus Summit from the International Carbohydrate Quality Consortium (ICQC). Nutr Metab Cardiovasc Dis 2015, 25(9):795-815).

[Image: Stresa, Italy]

As indicated by the title, responsibility was taken by the self-proclaimed ICQC. It turned out to be a continuation of the long-standing attempt to use the glycemic index to co-opt the obvious benefits of controlling the glucose-insulin axis while simultaneously attacking real low-carbohydrate diets. The authors participated in training in Stresa, Italy.

The operation was largely passive-aggressive. While admitting the importance of dietary carbohydrate in controlling postprandial glycemia, the authors ignored low-carbohydrate diets. Well, not exactly. They actually mounted a strong attack. The Abstract of the paper said (my emphasis):

“Background and aims: The positive and negative health effects of dietary carbohydrates are of interest to both researchers and consumers.”

“Methods: International experts on carbohydrate research held a scientific summit in Stresa, Italy, in June 2013 to discuss controversies surrounding the utility of the glycemic index (GI), glycemic load (GL) and glycemic response (GR).”

So, for the record, the paper is about dietary carbohydrate and about controversies.

The Results in Augustin et al. were simply:

“The outcome was a scientific consensus statement which recognized the importance of postprandial glycemia in overall health, and the GI as a valid and reproducible method of classifying carbohydrate foods for this purpose…. Diets of low GI and GL were considered particularly important in individuals with insulin resistance.”

A definition is always a reproducible way of classifying things, and the conclusion is not controversial: glycemia is important. Low-GI diets are a weak form of low-carbohydrate diet and are frequently described as a politically correct form of carbohydrate restriction. They are at least a subset of carbohydrate restriction, and one of the “controversies” cited in the Abstract is, sensibly, whether low-GI is better or worse than total carbohydrate restriction. Astoundingly, this part of the controversy was ignored by the authors. Our recent review of carbohydrate restriction in diabetes had this comparison:

[Figure: comparison of a low-glycemic-index diet (Jenkins) with a low-carbohydrate diet (Westman) in type 2 diabetes]

A question of research integrity.

It is considered normal scientific protocol, in any scientific field and especially a controversial one, to consider and cite alternative or competing points of view. So how do the authors see low-carbohydrate diets fitting in? If you search the pdf of Augustin et al. for “low-carbohydrate” or “low carbohydrate,” there are only two hits in the text:

“Very low carbohydrate-high protein diets also have beneficial effects on weight control and some cardiovascular risk factors (not LDL-cholesterol) in the short term, but are associated with increased mortality in long term cohort studies [156],”

and

“The lowest level of postprandial glycemia is achieved using very low carbohydrate-high protein diets, but these cannot be recommended for long term use.”

There are no references for the second statement, but very low carbohydrate diets can be, and frequently are, recommended for long-term use, with good results. I am not aware of “increased mortality in long term cohort studies” as in the first statement. In fact, low-carbohydrate diets are frequently criticized for not having been subjected to long-term studies. So it was important to check out the study (or studies) in reference 156:

[156] Pagona L, Sven S, Marie L, Dimitrios T, Hans-Olov A, Elisabete W. Low carbohydrate-high protein diet and incidence of cardiovascular diseases in Swedish women: prospective cohort study. BMJ 2012;344.

Documenting increased mortality.

The paper is not about mortality but rather about cardiovascular disease and, oddly, the authors are listed by their first names. (Actual reference: Lagiou P, Sandin S, Lof M, Trichopoulos D, Adami HO, Weiderpass E. Low carbohydrate-high protein diet and incidence of cardiovascular diseases in Swedish women: prospective cohort study. BMJ 2012;344:e4026.) This minor error probably reflects the close-knit “old boys” circle that functions on a first-name basis, although it may also indicate that the reference was not actually read, which would explain why the authors never discovered what it was really about.

Anyway, even though it is about cardiovascular disease, it is worth checking out. Who wants increased risk of anything? So what does Lagiou et al. say?

The Abstract of Lagiou says (my emphasis) “Main outcome measures: Association of incident cardiovascular diseases … with decreasing carbohydrate intake (in tenths), increasing protein intake (in tenths), and an additive combination of these variables (low carbohydrate-high protein score, from 2 to 20), adjusted for intake of energy, intake of saturated and unsaturated fat, and several non-dietary variables.”

Low-carbohydrate score? There were no low-carbohydrate diets. There were no diets at all. This was an analysis of “43,396 Swedish women, aged 30-49 years at baseline, [who] completed an extensive dietary questionnaire and were followed up for an average of 15.7 years.” The dietary variable, however, was only the “score,” which the authors made up and which, as you might guess, was not seen, and certainly not approved, by anybody with actual experience with low-carbohydrate diets. And it turns out that “Among the women studied, carbohydrate intake at the low extreme of the distribution was higher and protein intake at the high extreme of the distribution was lower than the respective intakes prescribed by many weight control diets.” (In social media, this is called “face-palm.”)
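
For orientation, here is a minimal sketch of the kind of calculation behind such a score; the function name and the decile conventions are my own illustration of what the Abstract describes, not the authors’ code.

```python
# A sketch of how a "low carbohydrate-high protein score" of this kind is
# built (my reading of the abstract, not the authors' code): each woman is
# ranked into tenths (deciles) of energy-adjusted intake, carbohydrate in
# descending order and protein in ascending order, and the ranks are added.

def lchp_score(carb_decile_descending: int, protein_decile_ascending: int) -> int:
    """carb_decile_descending: 1 (highest carbohydrate) .. 10 (lowest carbohydrate);
    protein_decile_ascending: 1 (lowest protein) .. 10 (highest protein).
    Returns 2..20; higher = lower carbohydrate and higher protein."""
    return carb_decile_descending + protein_decile_ascending

print(lchp_score(1, 1))    # 2  -> the high-carbohydrate, low-protein end
print(lchp_score(10, 10))  # 20 -> the "low carbohydrate-high protein" end
```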

Whatever the method, though, I wanted to know how bad it was. The 12 years or so that I have been continuously on a low-carbohydrate diet might be considered pretty long term. What is my risk of CVD?

“Results: A one tenth decrease in carbohydrate intake or increase in protein intake or a 2 unit increase in the low carbohydrate-high protein score were all statistically significantly associated with increasing incidence of cardiovascular disease overall (n=1270)—incidence rate ratio estimates 1.04 (95% confidence interval 1.00 to 1.08), 1.04 (1.02 to 1.06), and 1.05 (1.02 to 1.08).”

Rate ratio 1.04? And that’s an estimate. That’s odds of about 51:49. That’s what I am supposed to be worried about. But that’s the relative risk. What about the absolute risk? There were 43,396 women in the study with 1,270 incident cases, or 2.9% incidence overall. So the absolute difference is about 1.48% - 1.42% = 0.06%, or less than 1/10 of 1%.
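
For what it is worth, here is a minimal sketch of that back-of-envelope arithmetic, assuming the overall incidence is simply split between two equal-sized groups in the reported 1.04 ratio (the even split is my simplifying assumption, not the paper’s analysis):

```python
# Back-of-envelope arithmetic for the absolute difference (a sketch, not a
# re-analysis). Assumes two equal-sized groups whose incidences stand in the
# reported 1.04 ratio; the group labels are mine.

women = 43_396        # cohort size reported by Lagiou et al.
events = 1_270        # incident cardiovascular events
rate_ratio = 1.04     # reported incidence rate ratio

overall = events / women                           # ~0.029, i.e. about 2.9%
lower = overall / (1 + rate_ratio)                 # ~1.4% of the cohort
higher = overall * rate_ratio / (1 + rate_ratio)   # ~1.5% of the cohort

print(f"overall incidence:   {overall:.2%}")
print(f"absolute difference: {higher - lower:.3%}")   # ~0.06%
```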

Can such low numbers be meaningful? The usual answer is that if we scale them up to the whole population, we will save thousands of lives. Can we do that? Well, you can if the data are strong, that is, if we are really sure of the reliability of the independent variable. The relative risk in the Salk vaccine polio trial, for example, was in this ballpark, but scaling up obviously paid off. In the Salk vaccine trial, however, we knew who got the vaccine and who didn’t. In contrast, food questionnaires have a bad reputation. Here is Lagiou’s description (you don’t really have to read this):

“We estimated the energy adjusted intakes of protein and carbohydrates for each woman, using the ‘residual method.’ This method allows evaluation of the “effect” of an energy generating nutrient, controlling for the energy generated by this nutrient, by using a simple regression of that nutrient on energy intake.…” and so on. I am not sure what it means but it certainly sounds like an estimate. So is the data itself any good? Well,

“After controlling for energy intake, however, distinguishing the effects of a specific energy generating nutrient is all but impossible, as a decrease in the intake of one is unavoidably linked to an increase in the intake of one or several of the others. Nevertheless, in this context, a low carbohydrate-high protein score allows the assessment of most low carbohydrate diets, which are generally high protein diets, because it integrates opposite changes of two nutrients with equivalent energy values.”

And “The long interval between exposure and outcome is a source of concern, because certain participants may change their dietary habits during the intervening period.”

Translation: we don’t really know what we did here.

In the end, Lagiou et al. admit “Our results do not answer questions concerning possible beneficial short term effects of low carbohydrate or high protein diets in the control of body weight or insulin resistance. Instead, they draw attention to the potential for considerable adverse effects on cardiovascular health of these diets….” Instead? I thought insulin resistance had an effect on CVD, but if less than 1/10 of 1% constitutes “considerable adverse effects,” what would something “almost zero” be?

Coming back to the original paper by Augustin et al., what about the comparison between low-GI diets and low-carbohydrate diets? The comparison in the figure above comes from Eric Westman’s lab. What do the authors have to say about that?


They missed this paper. Note: a comment I received suggested that I should have searched on “Eric” instead of “Westman.” Ha.

Overall, this is the evidence used by the ICQC to tell you that low-carbohydrate diets will kill you. In the end, Augustin et al. is a hatchet job, citing a meaningless paper at random. It is hard to understand why the journal took it. I will ask the editors to retract it.

As the nutrition world implodes, there are a lot of accusations about ulterior motives and personal gain. (A little odd that, in this period of unbelievable greed, with CEOs ripping off public companies for hundreds of millions of dollars and Congress trying to give tax breaks to billionaires, book authors are upbraided for trying to make money.) So let me declare that I am not embarrassed to be an author for the money, although the profits from my book do go to research: my own research and the research of my colleagues. So, beyond general excellence (not yet reviewed by David Katz), I think “World Turned Upside Down” does give you some scientific information about red meat and cancer that you can’t get from the WHO report on the subject.

The WHO report has not yet released the evidence to support its claim that red meat will give you cancer, but it is worth going back to one of the previous attacks. Chapters 18 and 19 discussed a paper by Sinha et al. entitled “Meat Intake and Mortality.” The Abstract says “Conclusion: Red and processed meat intakes were associated with modest increases in total mortality, cancer mortality, and cardiovascular disease mortality.” I had previously written a blogpost about the study indicating how weak the association was. In that post, I had used the data on men, but when I incorporated the information into the book, I went back to Sinha’s paper and analyzed the original data. For some reason, I also checked the data on women. That turned out to be pretty surprising:

[Table: all-cause mortality by quintile of red meat consumption in women, from Table 3 of Sinha et al.]

I described it on page 286: “The population was again broken up into five groups or quintiles. The lower numbered quintiles are for the lowest consumption of red meat. Looking at all cause mortality, there were 5,314 deaths [in the lowest quintile] and when you go up to quintile 05, highest red meat consumption, there are 3,752 deaths. What? The more red meat, the lower the death rate? Isn’t that the opposite of the conclusion of the paper? And the next line has [calculated] relative risk which now goes the other way: higher risk with higher meat consumption. What’s going on? As near as one can guess, ‘correcting’ for the confounders changed the direction….” They do not show most of the data or the calculations, but I take this to be equivalent to a multivariate analysis, that is, red meat plus other things gives you risk. If they had broken up the population by quintiles of smoking, you would see that smoking was the real contributor. That’s how I interpreted it but, in any case, their conclusion is about meat and it is the opposite of what the data say.

So how much do you gain from eating red meat? “A useful way to look at this data is from the standpoint of conditional probability. We ask: what is the probability of dying in this experiment if you are a big meat-eater? The answer is simply the number of people who both died during the experiment and were big meat-eaters …. = 0.0839 or about 8%. If you are not a big meat-eater, your risk is …. = 0.109 or about 11%.” The absolute gain is only about 3%. But that’s good enough for me.
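
As a quick check on the arithmetic, here is a minimal sketch using only the two probabilities quoted above (the raw counts are elided in the quote, so I have not tried to reconstruct them):

```python
# The two conditional probabilities quoted from the book (the underlying
# counts are elided in the quote, so only the reported values are used).

p_died_given_big_meat_eater = 0.0839       # P(died | big meat-eater), ~8%
p_died_given_not_big_meat_eater = 0.109    # P(died | not a big meat-eater), ~11%

difference = p_died_given_not_big_meat_eater - p_died_given_big_meat_eater
print(f"absolute difference: {difference:.3f}")   # ~0.025, i.e. roughly 3 percentage points
```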

Me, at Jubilat, the Polish butcher in the neighborhood: “The Boczak Wedzony (smoked bacon). I’ll take the whole piece.”

[Image: Boczak Wedzony from Jubilat Provisions]

Rashmi Sinha is a Senior Investigator and Deputy Branch Chief at the NIH. She is a member of the WHO panel, the one that says red meat will give you cancer (although they don’t say “if you have the right confounders”).

So, buy my book: Amazon, Alibris, or

Direct: personalized, autographed copy, $20.00, free shipping (USA only). Use coupon code: SEPT16

“Dost thou think, because thou art virtuous, there shall be no more cakes and ale?”

— William Shakespeare, Twelfth Night.

Experts on nutrition are like experts on sexuality. No matter how professional they are in general, in some way they are always trying to justify their own lifestyle. They share a tendency to think that their own lifestyle is the one that everybody else should follow, and they are always eager to save us from our own sins, sexual or dietary. The new puritans want to save us from red meat. It is unknown whether Michael Pollan’s In Defense of Food was reporting the news or making the news, but its coupling of not eating too much and not eating meat is common. More magazine’s take on saturated fat was very sympathetic to my own point of view, and I probably shouldn’t complain that, tacked on at the end, was the conclusion that “most physicians will probably wait for more research before giving you carte blanche to order juicy porterhouse steaks.” I’m not sure that my physician knows about the research that already exists, or that I am waiting for his permission on a zaftig steak.

“Daily Red Meat Raises Chances Of Dying Early” was the headline in the Washington Post last year. This scary story was accompanied by a photo of a gloved hand slicing roast beef with a scalpel-like instrument, probably intended to evoke CSI autopsy scenes, although, to me, the beef still looked pretty good, if slightly over-cooked. I don’t know the reporter, Rob Stein, but I can’t help feeling that we’re not talking Woodward and Bernstein here. For those too young to remember Watergate, the reporters from the Post were encouraged to “follow the money” by Deep Throat, their anonymous whistle-blower. A similar character, claiming to be an insider and identifying himself or herself as “Fat Throat,” has been sending intermittent emails to bloggers, suggesting that they “follow the data.”

The Post story was based on a research report, “Meat Intake and Mortality,” published in the medical journal Archives of Internal Medicine by Sinha and coauthors. It got a lot of press, had some influence, and recently re-surfaced in the Harvard Men’s Health Watch in a two-part article called, incredibly enough, “Meat or beans: What will you have?” (The Health Watch does admit that “red meat is a good source of iron and protein and…beans can trigger intestinal gas” and that they are “very different foods.”) But somehow it is assumed that we can substitute one for the other.

Let me focus on Dr. Sinha’s article and try to explain what it really says.  My conclusion will be that there is no reason to think that any danger of red meat has been demonstrated and I will try to point out some general ways in which one can deal with these kinds of reports of scientific information.

A few points to remember first. During the forty years that we describe as the obesity and diabetes epidemic, protein intake has been relatively constant; almost all of the increase in calories has been due to an increase in carbohydrates; fat, if anything, went down. During this period, consumption of almost everything increased. Wheat and corn, of course, went up. So did fruits and vegetables and beans. The two things whose consumption went down were red meat and eggs. In other words, there is some a priori reason to think that red meat is not a health risk and that the burden of proof should be on demonstrating harm. Looking ahead, the paper, like the analysis of the population data, will rely entirely on associations.

The conclusion of the study was that “Red and processed meat intakes were associated with modest increases in total mortality, cancer mortality, and cardiovascular disease mortality.” Now, a modest increase in mortality is a fairly big step down from “Dying Early,” and surely a step down from the editorial quoted in the Washington Post. Written by Barry Popkin, professor of global nutrition at the University of North Carolina, it said: “This is a slam-dunk to say that, ‘Yes, indeed, if people want to be healthy and live longer, consume less red and processed meat.’” Now, I thought that the phrase “slam-dunk” was pretty much out after George Tenet, then head of the CIA, told President Bush that the Weapons of Mass Destruction in Iraq were a slam-dunk. (I found an interview with Tenet after his resignation quite disturbing; when the director of the CIA can’t lie convincingly, we are in big trouble.) And quoting Barry Popkin is like getting a second opinion from a member of the “administration.” It’s definitely different from investigative reporting like, you know, reading the article.

So what does the research article really say? As I mentioned in my blog on eggs, when I read a scientific paper, I look for the pictures. The figures in a scientific paper usually make clear to the reader what is going on; that is the goal of scientific communication. But there are no figures. With no figures, Dr. Sinha’s research paper has to be analyzed for what it does have: a lot of statistics. Many scientists share Mark Twain’s suspicion of statistics, so it is important to understand how the statistics are applied. A good statistics book will have an introduction that says something like “what we do in statistics is try to put a number on our intuition.” In other words, it is not really, by itself, science. It is, or should be, a tool for the experimenter’s use. The problem is that many authors of papers in the medical literature allow statistics to become their master rather than their servant: numbers are plugged into a statistical program and the results are interpreted in a cut-and-dried fashion with no intervention of insight or common sense. On the other hand, many medical researchers see this as an impartial approach. So let it be with Sinha.

What were the outcomes? The study population of 322,263 men and 223,390 women was broken up into five groups (quintiles) according to meat consumption, the highest taking in about 7 times as much as the lowest group (big differences). The Harvard newsletter says that the men who ate the most red meat had a 31% higher death rate than the men who ate the least. This sounds serious, but does it tell you what you want to know? In the media, scientific results are almost universally reported this way, but it is entirely misleading. (Bob has 30% more money than Alice, but they may both be on welfare.) To be fair, the Abstract of the paper itself reported this as a hazard ratio of 1.31 which, while still misleading, is less prejudicial. Hazard ratio is a little bit complicated but, in the end, it is similar to an odds ratio or risk ratio, which is pretty much what you think: an odds ratio of 2 means you’re twice as likely to win with one strategy as with the other. A moment’s thought tells you that this is not good information, because you can get an odds ratio of 2, that is, you can double your chances of winning the lottery, by buying two tickets instead of one. You need to know the actual odds of each strategy. Taking the ratio hides information. Do reporters not know this? Some have told me they do, but that their editors are trying to gain market share and don’t care. Let me explain it in detail. If you already understand, you can skip the next paragraph.

A trip to Las Vegas

Taking the hazard ratio as more or less the same as an odds ratio or risk ratio, let’s consider how odds work (in the current case, they are very similar). So, we are in Las Vegas and it turns out that there are two blackjack tables and, for some reason (different number of decks or something), the odds are different at the two tables (odds are ways of winning divided by ways of not winning). Table 1 pays out on average once every 100 hands. Table 2 pays out once in 67 hands. The odds are 1/99, or close to one in a hundred, at the first table and 1/66 at the second. The odds ratio is, obviously, the ratio of the two odds, (1/66)/(1/99), or about 1.5. (The odds ratio would be 1 if there were no difference between the two tables.)

Right off, something is wrong: if you were just given the odds ratio you would have lost some important  information.  The odds ratio tells you that one gambling table is definitely better than the other but you need additional information to find out that the odds aren’t particularly good at either table: technically, information about the absolute risk was lost.

So knowing the odds ratio by itself is not much help. But since we know the absolute risk at each table, does that help you decide which table to play? Well, it depends who you are. For the guy who is at the blackjack table when you go up to your hotel room to go to sleep and who is still sitting there when you come down for the breakfast buffet, things are going to be much better at the second table. He will play hundreds of hands and the better odds ratio of 1.5 will pay off in the long run. Suppose, however, that you are somebody who will take the advice of my cousin the statistician, who says to just go and play one hand for the fun of it, just to see if the universe really loves you (that’s what gamblers are really trying to find out). You’re going to play the hand and then, win or lose, you are going to go do something else. Does it matter which table you play at? Obviously it doesn’t. The odds ratio doesn’t tell you anything useful, because you know that your chances of winning are pretty slim either way.
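
Since the whole point is that the ratio hides the absolute odds, here is a minimal sketch of the two-table example using the numbers above (the helper function is just for illustration):

```python
# The two blackjack tables from the example above.
# Odds = ways of winning divided by ways of not winning.

def odds(ways_to_win: int, ways_to_lose: int) -> float:
    return ways_to_win / ways_to_lose

table_1 = odds(1, 99)   # pays out about once every 100 hands
table_2 = odds(1, 66)   # pays out about once every 67 hands

print(f"table 1 odds: {table_1:.4f} (about 1 in 100)")
print(f"table 2 odds: {table_2:.4f} (about 1 in 67)")
print(f"odds ratio:   {table_2 / table_1:.2f}")   # 1.5, but both tables are long shots
```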

Now, going over to the red meat article, the hazard ratio (again, roughly the odds ratio) between high and low red meat intakes for all-cause mortality in men, for example, is 1.31 or, as they like to report it in the media, a 31% higher risk of dying, which sounds pretty scary. But what is the absolute risk? To find that, we have to find the actual number of people who died in the high red meat quintile and in the low one. This is easy for the low end: 6,437 people died out of a group of 64,452, so the probability (probability is ways of winning divided by total possibilities) of dying is 6,437/64,452, or just about 0.10, that is, 10%. It’s a little trickier for the high red meat consumers. There, 13,350 died. Again, dividing that by the number in that group, we find an absolute risk of 0.21, or 21%, which seems pretty high, and the absolute difference in risk is an increase of about 10%, which still seems pretty significant. Or is it? In these kinds of studies, you have to ask about confounders, variables that might bias the results. Well, here, one is not hard to find. Table 1 reveals that the high red meat group had 3 times the number of smokers. (Not 31% more but 3 times more.) So the authors corrected the data for this and other effects (education, family history of cancer, BMI, etc.), which is how the final value of 1.31 was obtained. Since we know the absolute risk in the lowest red meat group, 0.10, we can calculate the risk in the highest red meat group, which will be 0.131. The absolute increase in risk from eating red meat, a lot more red meat, is then 0.131 - 0.10 = 0.031, or 3.1%, which is quite a bit less than we thought.
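
Here is the same arithmetic as a minimal sketch; the assumption that the highest quintile has about the same number of men as the lowest (roughly 64,452) is mine, consistent with the 0.21 figure above:

```python
# Absolute-risk arithmetic for all-cause mortality, lowest vs. highest
# red-meat quintile (men). Assumes the quintiles are of roughly equal size
# (about 64,452 men each), as implied by the text.

n_per_quintile = 64_452
deaths_low = 6_437       # deaths in the lowest red-meat quintile
deaths_high = 13_350     # deaths in the highest red-meat quintile

risk_low = deaths_low / n_per_quintile         # ~0.10
raw_risk_high = deaths_high / n_per_quintile   # ~0.21, before adjustment

adjusted_hr = 1.31                             # hazard ratio after adjustment (smoking, BMI, etc.)
adjusted_risk_high = risk_low * adjusted_hr    # ~0.131

print(f"unadjusted risks: {risk_low:.2f} vs {raw_risk_high:.2f}")
print(f"adjusted absolute increase: {adjusted_risk_high - risk_low:.3f}")   # ~0.031, i.e. 3.1%
```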

Now, we can see that the odds ratio of 1.31 is not telling us much — and remember this is for big changes, like 6 or 7 times as much meat; doubling red meat intake (quintiles 1 and 2) leads to a hazard ratio of 1.07.  What is a meaningful odds ratio?  For comparison, the odds ratio for smoking vs not smoking for incidence of lung disease is about 22.

Well, 3.1% is not much, but it’s something. Are we sure? Remember that this is a statistical outcome, and that means that some people in the high red meat group had lower risk, not higher risk. In other words, this is what is called statistically two-tailed, that is, the statistics reflect changes that go both ways. What is the danger in reducing meat intake? The data don’t really tell you that. Unlike cigarettes, where there is little reason to believe that anybody’s lungs really benefit from cigarette smoke (and the statistics are due to random variation), we know that there are many benefits to protein, especially if it replaces carbohydrate in the diet; that is, the variation may be telling us something real. With odds ratios around 1.31 (again, a value of 1 means that there is no difference), you are almost as likely to benefit from adding red meat as from reducing it. The odds still favor things getting worse, but it really is a risk in both directions. You are at the gaming tables. You don’t get your chips back. If reducing red meat does not reduce your risk, it may increase it. So much for the slam dunk.

What about public health? Many people would say that for a single person, red meat might not make a difference, but if the population reduced meat by half, we would save thousands of lives. The authors do want to do this. At this point, before you and your family take part in a big experiment to save health statistics in the country, you have to ask how strong the relations are. To understand the quality of the data, you must look for things that would not be expected to have a correlation. “There was an increased risk associated with death from injuries and sudden death with higher consumption of red meat in men but not in women.” The authors dismiss this because the numbers were smaller (343 deaths), but the whole study is about small differences, and it sounds like we are dealing with a good deal of randomness. Finally, the authors set out from the start to investigate red meat. To be fair, they also studied white meat, which was slightly beneficial. But what are we to compare the meat results to? Why red meat? What about potatoes? Cupcakes? Breakfast cereal? Are these completely neutral? If we ran them through the same computer, what would we see? And finally, there is the elephant in the room: carbohydrate. Basic biochemistry suggests that a roast beef sandwich may have a different effect than roast beef in a lettuce wrap.

So I’ve given you the perspective of a biochemistry professor.  This was a single paper and surely not the worst, but I think it’s not really about science.  It’s about sin.

*

Nutrition & Metabolism Society